
Commit 60834b4: Merge pull request #442 from unit8co/develop ("Release 0.10.1")
2 parents: 36ffcdb + e229855

4 files changed (+25 −11 lines)

CHANGELOG.md (+13 −4)
```diff
@@ -4,7 +4,16 @@
 Darts is still in an early development phase and we cannot always guarantee backwards compatibility. Changes that may **break code which uses a previous release of Darts** are marked with a "🔴".
 
 ## [Unreleased](https://github.com/unit8co/darts/tree/develop)
-[Full Changelog](https://github.com/unit8co/darts/compare/0.10.0...develop)
+[Full Changelog](https://github.com/unit8co/darts/compare/0.10.1...develop)
+
+## [0.10.1](https://github.com/unit8co/darts/tree/0.10.1) (2021-08-19)
+### For users of the library:
+
+**Fixed:**
+- A bug with memory pinning that was causing issues with training models on GPUs.
+
+**Changed:**
+- Clarified conda support in the README
 
 ## [0.10.0](https://github.com/unit8co/darts/tree/0.10.0) (2021-08-13)
 ### For users of the library:
```
```diff
@@ -15,7 +24,7 @@ argument, but it wasn't always clear whether this represented "past-observed" or
 We have made this clearer. Now all covariate-aware models support `past_covariates` and/or `future_covariates` argument
 in their `fit()` and `predict()` methods, which makes it clear what series is used as a past or future covariate.
 We recommend [this article](https://medium.com/unit8-machine-learning-publication/time-series-forecasting-using-past-and-future-external-data-with-darts-1f0539585993)
-for more informations and examples.
+for more information and examples.
 
 - 🔴 Significant improvement of `RegressionModel` (incl. `LinearRegressionModel` and `RandomForest`).
 These models now support training on multiple (possibly multivariate) time series. They also support both
```
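The past/future covariate split described in this changelog entry boils down to a coverage requirement: future covariates must be known over the forecast horizon, while past covariates only need to cover the target's own history. The toy check below (plain Python with hypothetical integer "time index" arguments, not the actual darts API) illustrates that requirement:

```python
# Toy illustration (NOT the darts API) of past vs. future covariates:
# when forecasting `n` steps past the end of the target series, a future
# covariate series must also cover those `n` steps, while a past covariate
# series only needs to cover the target's observed range.

def check_covariates(target_end, n, past_cov_end=None, future_cov_end=None):
    """Return a list of problems with the supplied covariate ranges.

    Arguments are integer time indices of each series' last observed step
    (a deliberate simplification of real time indexes).
    """
    problems = []
    if past_cov_end is not None and past_cov_end < target_end:
        problems.append("past_covariates must cover the target's history")
    if future_cov_end is not None and future_cov_end < target_end + n:
        problems.append("future_covariates must extend n steps past the target")
    return problems

# A 3-step forecast: future covariates ending at the target's end are too short.
assert check_covariates(target_end=100, n=3, future_cov_end=100) == [
    "future_covariates must extend n steps past the target"
]
# Past covariates covering the history, future covariates covering the horizon: OK.
assert check_covariates(target_end=100, n=3, past_cov_end=100, future_cov_end=103) == []
```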
```diff
@@ -232,7 +241,7 @@ All implementations of `GlobalForecastingModel`s support multivariate time serie
 - Ensemble models, a new kind of `ForecastingModel` which allows to ensemble multiple models to make predictions:
 - `EnsembleModel` is the abstract base class for ensemble models. Classes deriving from `EnsembleModel` must implement the `ensemble()` method, which takes in a `List[TimeSeries]` of predictions from the constituent models, and returns the ensembled prediction (a single `TimeSeries` object)
 - `RegressionEnsembleModel`, a concrete implementation of `EnsembleModel` which allows to specify any regression model (providing `fit()` and `predict()` methods) to use to ensemble the constituent models' predictions.
-- A new method to `TorchForecastingModel`: `untrained_model()` returns the model as it was initally created, allowing to retrain the exact same model from scratch. Works both when specifying a `random_state` or not.
+- A new method to `TorchForecastingModel`: `untrained_model()` returns the model as it was initially created, allowing to retrain the exact same model from scratch. Works both when specifying a `random_state` or not.
 - New `ForecastingModel.backtest()` and `RegressionModel.backtest()` functions which by default compute a single error score from the historical forecasts the model would have produced.
 - A new `reduction` parameter allows to specify whether to compute the mean/median/… of errors or (when `reduction` is set to `None`) to return a list of historical errors.
 - The previous `backtest()` functionality still exists but has been renamed `historical_forecasts()`
```
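The `EnsembleModel` contract described here (implement `ensemble()`, which takes a list of constituent forecasts and returns one combined forecast) can be sketched with plain lists of floats standing in for `TimeSeries` objects. `MeanEnsembleModel` and its point-wise averaging are illustrative choices, not darts' actual implementation:

```python
# Minimal sketch of the EnsembleModel interface described in the changelog.
# Plain lists of floats stand in for TimeSeries; MeanEnsembleModel is a
# hypothetical subclass, not a darts class.
from abc import ABC, abstractmethod
from typing import List

Series = List[float]  # stand-in for a univariate TimeSeries

class EnsembleModel(ABC):
    @abstractmethod
    def ensemble(self, predictions: List[Series]) -> Series:
        """Combine the constituent models' forecasts into a single forecast."""

class MeanEnsembleModel(EnsembleModel):
    def ensemble(self, predictions: List[Series]) -> Series:
        # Point-wise average across the constituent forecasts.
        return [sum(vals) / len(vals) for vals in zip(*predictions)]

forecasts = [[1.0, 2.0, 3.0], [3.0, 4.0, 5.0]]
assert MeanEnsembleModel().ensemble(forecasts) == [2.0, 3.0, 4.0]
```

`RegressionEnsembleModel` follows the same shape, except the combination rule is itself a fitted regression model rather than a fixed average.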
```diff
@@ -264,7 +273,7 @@ All implementations of `GlobalForecastingModel`s support multivariate time serie
 - Implementing your own data transformers:
 - Data transformers which need to be fitted first should derive from the `FittableDataTransformer` base class and implement a `fit()` method. Fittable transformers also provide a `fit_transform()` method, which fits the transformer and then transforms the data with a single call.
 - Data transformers which perform an invertible transformation should derive from the `InvertibleDataTransformer` base class and implement a `inverse_transform()` method.
-- Data transformers wich are neither fittable nor invertible should derive from the `BaseDataTransformer` base class
+- Data transformers which are neither fittable nor invertible should derive from the `BaseDataTransformer` base class
 - All data transformers must implement a `transform()` method.
 - Concrete `DataTransformer` implementations:
 - `MissingValuesFiller` wraps around `fill_missing_value()` and allows to fill missing values using either a constant value or the `pd.interpolate()` method.
```
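The transformer hierarchy described above can be mirrored in a few lines of plain Python, with lists standing in for `TimeSeries`. The class names follow the changelog's description; the min/max scaler is an illustrative example, not darts' actual `Scaler`:

```python
# Toy mirror of the transformer hierarchy: BaseDataTransformer defines
# transform(), FittableDataTransformer adds fit() and fit_transform(),
# InvertibleDataTransformer adds inverse_transform(). MinMaxScaler is a
# hypothetical concrete transformer operating on plain lists of floats.
class BaseDataTransformer:
    def transform(self, data):
        raise NotImplementedError

class FittableDataTransformer(BaseDataTransformer):
    def fit(self, data):
        raise NotImplementedError
    def fit_transform(self, data):
        # Fit the transformer, then transform the data, in a single call.
        self.fit(data)
        return self.transform(data)

class InvertibleDataTransformer(BaseDataTransformer):
    def inverse_transform(self, data):
        raise NotImplementedError

class MinMaxScaler(FittableDataTransformer, InvertibleDataTransformer):
    def fit(self, data):
        self._min, self._max = min(data), max(data)
        return self
    def transform(self, data):
        span = self._max - self._min
        return [(x - self._min) / span for x in data]
    def inverse_transform(self, data):
        # Invertible: map scaled values back to the original range.
        span = self._max - self._min
        return [x * span + self._min for x in data]

scaler = MinMaxScaler()
scaled = scaler.fit_transform([10.0, 20.0, 30.0])
assert scaled == [0.0, 0.5, 1.0]
assert scaler.inverse_transform(scaled) == [10.0, 20.0, 30.0]
```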

README.md (+3 −3)
```diff
@@ -158,8 +158,8 @@ Some of the models depend on `prophet` and `torch`, which have non-Python depend
 A Conda environment is thus recommended because it will handle all of those in one go.
 
 ### From conda-forge
-Currently only Python 3.7 is fully supported with conda; consider using PyPI if you are running
-into troubles.
+Currently only Linux and macOS on the x86_64 architecture are fully supported with
+conda; consider using PyPI if you are running into trouble.
 
 To create a conda environment for Python 3.7
 (after installing [conda](https://docs.conda.io/en/latest/miniconda.html)):
```
````diff
@@ -225,7 +225,7 @@ To run the tests for specific flavours of the library, replace `_all` with `_cor
 
 ### Documentation
 
-To build documantation locally just run
+To build documentation locally just run
 ```bash
 ./gradlew buildDocs
 ```
````

darts/models/torch_forecasting_model.py (+8 −3)
```diff
@@ -217,7 +217,7 @@ def _batch_collate_fn(self, batch: List[Tuple]) -> Tuple:
             elem = first_sample[i]
             if isinstance(elem, np.ndarray):
                 aggregated.append(
-                    torch.from_numpy(np.stack([sample[i] for sample in batch], axis=0)).to(self.device)
+                    torch.from_numpy(np.stack([sample[i] for sample in batch], axis=0))
                 )
             elif elem is None:
                 aggregated.append(None)
```
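This hunk removes the `.to(self.device)` call from the collate function, so batches are assembled as CPU tensors; that is a prerequisite for the `pin_memory=True` DataLoader setting elsewhere in this commit, since only CPU tensors can have their memory pinned. A simplified NumPy-only stand-in for the collate logic (the real method additionally wraps the stacked array in `torch.from_numpy`):

```python
# Simplified stand-in for _batch_collate_fn using NumPy only: stack the
# i-th field of every sample along a new batch dimension; None fields
# (e.g. absent covariates) stay None. No device transfer happens here.
import numpy as np

def batch_collate(batch):
    """Stack per-sample tuples field-by-field into one batch tuple."""
    first_sample = batch[0]
    aggregated = []
    for i in range(len(first_sample)):
        elem = first_sample[i]
        if isinstance(elem, np.ndarray):
            aggregated.append(np.stack([sample[i] for sample in batch], axis=0))
        elif elem is None:
            aggregated.append(None)
    return tuple(aggregated)

samples = [(np.zeros((4, 2)), None), (np.ones((4, 2)), None)]
stacked, covs = batch_collate(samples)
assert stacked.shape == (2, 4, 2)  # batch dimension added in front
assert covs is None
```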
```diff
@@ -682,7 +682,7 @@ def predict_from_dataset(self,
                                   batch_size=batch_size,
                                   shuffle=False,
                                   num_workers=0,
-                                  pin_memory=False,
+                                  pin_memory=True,
                                   drop_last=False,
                                   collate_fn=self._batch_collate_fn)
         predictions = []
```
```diff
@@ -691,7 +691,7 @@ def predict_from_dataset(self,
         self.model.eval()
         with torch.no_grad():
             for batch_tuple in iterator:
-
+                batch_tuple = self._batch_to_device(batch_tuple)
                 input_data_tuple, batch_input_series = batch_tuple[:-1], batch_tuple[-1]
 
                 # number of individual series to be predicted in current batch
```
```diff
@@ -754,6 +754,10 @@ def _sample_tiling(self, input_data_tuple, batch_sample_size):
                 tiled_input_data.append(None)
         return tuple(tiled_input_data)
 
+    def _batch_to_device(self, batch):
+        batch = [elem.to(self.device) if isinstance(elem, torch.Tensor) else elem for elem in batch]
+        return tuple(batch)
+
     def untrained_model(self):
         return self._load_untrained_model(_get_untrained_models_folder(self.work_dir, self.model_name))
 
```
```diff
@@ -791,6 +795,7 @@ def _train(self,
 
         for batch_idx, train_batch in enumerate(train_loader):
             self.model.train()
+            train_batch = self._batch_to_device(train_batch)
             output = self._produce_train_output(train_batch[:-1])
             target = train_batch[-1]  # By convention target is always the last element returned by datasets
             loss = self._compute_loss(output, target)
```
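Taken together, the changes in this file adopt the standard PyTorch pattern for GPU training: collate builds CPU tensors, the DataLoader pins their memory, and each batch is moved to the device inside the train/predict loop. The new `_batch_to_device` helper moves only tensors and passes everything else (e.g. `None` placeholders or the input-series list) through unchanged. A dependency-free sketch of that logic, with `FakeTensor` standing in for `torch.Tensor`:

```python
# Dependency-free sketch of the _batch_to_device helper added in this commit.
# FakeTensor mimics torch.Tensor's .to(device), which returns a copy on the
# target device; non-tensor elements of the batch tuple pass through as-is.
class FakeTensor:
    def __init__(self, device="cpu"):
        self.device = device
    def to(self, device):
        return FakeTensor(device)

def batch_to_device(batch, device):
    return tuple(
        elem.to(device) if isinstance(elem, FakeTensor) else elem
        for elem in batch
    )

batch = (FakeTensor(), None, FakeTensor())
moved = batch_to_device(batch, "cuda:0")
assert moved[0].device == "cuda:0" and moved[2].device == "cuda:0"
assert moved[1] is None  # non-tensors pass through untouched
```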

setup_u8darts.py (+1 −1)
```diff
@@ -29,7 +29,7 @@ def read_requirements(path):
 
 setup(
     name='u8darts',
-    version="0.10.0",
+    version="0.10.1",
     description='A python library for easy manipulation and forecasting of time series.',
     long_description=LONG_DESCRIPTION,
     long_description_content_type="text/markdown",
```
