Commit c266977
Merge branch 'jdb78:master' into weight-size-bug-fix
bendavidsteel authored Aug 4, 2023
2 parents 5e96e9d + 7c775c1 commit c266977
Showing 49 changed files with 10,980 additions and 9,468 deletions.
1 change: 1 addition & 0 deletions .env
@@ -0,0 +1 @@
PYTORCH_ENABLE_MPS_FALLBACK=1
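
For context, `PYTORCH_ENABLE_MPS_FALLBACK=1` tells PyTorch to fall back to the CPU for operators that are not yet implemented on Apple's MPS backend. A minimal sketch of setting the same flag from Python instead of an `.env` file - an illustration only, not part of the commit, and the device check below is just typical usage:

```python
import os

# Set the fallback flag before torch is used, so missing MPS kernels
# fall back to CPU instead of raising.
os.environ.setdefault("PYTORCH_ENABLE_MPS_FALLBACK", "1")

import torch

# Prefer the Apple GPU when it is available, otherwise stay on CPU.
device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")
print(f"using device: {device}")
```
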
6 changes: 3 additions & 3 deletions .github/workflows/test.yml
@@ -72,9 +72,9 @@ jobs:
shell: bash
run: poetry install -E "github-actions graph mqf2"

- name: Install pytorch geometric dependencies
shell: bash
run: poetry run pip install torch-scatter torch-sparse torch-cluster torch-spline-conv torch-geometric -f https://data.pyg.org/whl/torch-1.12.1+cpu.html
# - name: Install pytorch geometric dependencies
# shell: bash
# run: poetry run pip install pyg_lib torch-scatter torch-sparse torch-cluster torch-spline-conv torch-geometric -f https://data.pyg.org/whl/torch-2.0.0+cpu.html

- name: Run pytest
shell: bash
2 changes: 1 addition & 1 deletion .gitignore
@@ -104,7 +104,7 @@ celerybeat.pid
*.sage.py

# Environments
.env
# .env
.venv
env/
venv/
24 changes: 12 additions & 12 deletions .pre-commit-config.yaml
@@ -2,26 +2,26 @@
# See https://pre-commit.com/hooks.html for more hooks
repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v4.1.0
rev: v4.4.0
hooks:
- id: trailing-whitespace
- id: end-of-file-fixer
- id: check-yaml
- id: check-ast
- repo: https://gitlab.com/pycqa/flake8
rev: "3.9.2"
- id: trailing-whitespace
- id: end-of-file-fixer
- id: check-yaml
- id: check-ast
- repo: https://github.com/pycqa/flake8
rev: 6.0.0
hooks:
- id: flake8
- id: flake8
- repo: https://github.com/pre-commit/mirrors-isort
rev: v5.10.1
hooks:
- id: isort
- id: isort
- repo: https://github.com/psf/black
rev: 22.3.0
rev: 23.1.0
hooks:
- id: black
- id: black
- repo: https://github.com/nbQA-dev/nbQA
rev: 1.3.1
rev: 1.6.4
hooks:
- id: nbqa-black
- id: nbqa-isort
16 changes: 13 additions & 3 deletions CHANGELOG.md
@@ -1,6 +1,16 @@
# Release Notes

## v0.10.4 UNRELEASED (xx/xx/xxxx)
## v1.0.0 Update to pytorch 2.0 (10/04/2023)


### Breaking Changes

- Upgraded to pytorch 2.0 and lightning 2.0. This brings a couple of changes, such as how trainers are configured; see the [lightning upgrade guide](https://lightning.ai/docs/pytorch/latest/upgrade/migration_guide.html). For PyTorch Forecasting this particularly means that, if you are developing your own models, the class method `epoch_end` has been renamed to `on_epoch_end`, `model.summarize()` is replaced by `ModelSummary(model, max_depth=-1)`, and `Tuner(trainer)` is now its own class, so uses of `trainer.tuner` need replacing (#1280). A migration sketch follows this file's diff.
- Changed the `predict()` interface to return a named tuple - see tutorials.

### Changes

- The predict method is now using the lightning predict functionality and allows writing results to disk (#1280).

### Fixed

@@ -81,7 +91,7 @@

### Added

- Added support for running `pytorch_lightning.trainer.test` (#759)
- Added support for running `lightning.trainer.test` (#759)

### Fixed

@@ -402,7 +412,7 @@ This release has only one purpose: Allow usage of PyTorch Lightning 1.0 - all te
- Using `LearningRateMonitor` instead of `LearningRateLogger`
- Use `EarlyStopping` callback in trainer `callbacks` instead of `early_stopping` argument
- Update metric system `update()` and `compute()` methods
- Use `trainer.tuner.lr_find()` instead of `trainer.lr_find()` in tutorials and examples
- Use `Tuner(trainer).lr_find()` instead of `trainer.lr_find()` in tutorials and examples
- Update poetry to 1.1.0

---
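
As referenced in the changelog above, a rough migration sketch for the lightning 2.0 breaking changes - an illustration only, not part of the commit. `MyModel`, the toy dataloader, and the hyperparameters are placeholders standing in for your own forecasting model:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

import lightning.pytorch as pl
from lightning.pytorch.tuner import Tuner
from lightning.pytorch.utilities.model_summary import ModelSummary


class MyModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        # lr_find looks for an `lr`/`learning_rate` attribute to tune
        self.learning_rate = 0.01
        self.layer = torch.nn.Linear(1, 1)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = torch.nn.functional.mse_loss(self.layer(x), y)
        self.log("train_loss", loss)
        return loss

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=self.learning_rate)

    # before: custom models overrode `epoch_end(...)`
    # after: the hook carries the `on_` prefix
    def on_train_epoch_end(self):
        pass


model = MyModel()
train_dataloader = DataLoader(
    TensorDataset(torch.randn(800, 1), torch.randn(800, 1)), batch_size=8
)

# before: model.summarize()
print(ModelSummary(model, max_depth=-1))

trainer = pl.Trainer(max_epochs=1, accelerator="cpu", logger=False, enable_checkpointing=False)

# before: trainer.tuner.lr_find(...)
res = Tuner(trainer).lr_find(model, train_dataloaders=train_dataloader)
print(res.suggestion())
```

The reshaped `predict()` interface mentioned above returns a named tuple rather than a bare tensor; a typical call would look like `predictions = model.predict(val_dataloader, return_x=True)` with the forecast under `predictions.output`, but treat those field names as an assumption and confirm them against the tutorials referenced in the changelog.
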
12 changes: 6 additions & 6 deletions README.md
@@ -34,7 +34,6 @@ Specifically, the package provides
- Multiple neural network architectures for timeseries forecasting that have been enhanced
for real-world deployment and come with in-built interpretation capabilities
- Multi-horizon timeseries metrics
- Ranger optimizer for faster model training
- Hyperparameter tuning with [optuna](https://optuna.readthedocs.io/)

The package is built on [pytorch-lightning](https://pytorch-lightning.readthedocs.io/) to allow training on CPUs, single and multiple GPUs out-of-the-box.
@@ -86,11 +85,12 @@ Networks can be trained with the [PyTorch Lighning Trainer](https://pytorch-ligh

```python
# imports for training
import pytorch_lightning as pl
from pytorch_lightning.loggers import TensorBoardLogger
from pytorch_lightning.callbacks import EarlyStopping, LearningRateMonitor
import lightning.pytorch as pl
from lightning.pytorch.loggers import TensorBoardLogger
from lightning.pytorch.callbacks import EarlyStopping, LearningRateMonitor
# import dataset, network to train and metric to optimize
from pytorch_forecasting import TimeSeriesDataSet, TemporalFusionTransformer, QuantileLoss
from lightning.pytorch.tuner import Tuner

# load data: this is pandas dataframe with at least a column for
# * the target (what you want to predict)
@@ -133,7 +133,7 @@ early_stop_callback = EarlyStopping(monitor="val_loss", min_delta=1e-4, patience
lr_logger = LearningRateMonitor()
trainer = pl.Trainer(
max_epochs=100,
gpus=0, # run on CPU, if on multiple GPUs, use accelerator="ddp"
accelerator="auto", # run on CPU, if on multiple GPUs, use strategy="ddp"
gradient_clip_val=0.1,
limit_train_batches=30, # 30 batches per epoch
callbacks=[lr_logger, early_stop_callback],
@@ -160,7 +160,7 @@ tft = TemporalFusionTransformer.from_dataset(
print(f"Number of parameters in network: {tft.size()/1e3:.1f}k")

# find the optimal learning rate
res = trainer.lr_find(
res = Tuner(trainer).lr_find(
tft, train_dataloaders=train_dataloader, val_dataloaders=val_dataloader, early_stop_threshold=1000.0, max_lr=0.3,
)
# and plot the result - always visually confirm that the suggested learning rate makes sense
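# --- Sketch (not part of the commit): the README hunk is truncated above. A
# --- typical continuation, assuming the standard lightning `lr_find` return
# --- value (an LR finder object with `.suggestion()` and `.plot()`), would be:
suggested_lr = res.suggestion()
print(f"suggested learning rate: {suggested_lr}")
fig = res.plot(show=True, suggest=True)
fig.show()

# hypothetical follow-up: adopt the suggestion and train
tft.hparams.learning_rate = suggested_lr
trainer.fit(tft, train_dataloaders=train_dataloader, val_dataloaders=val_dataloader)
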
12 changes: 7 additions & 5 deletions docs/requirements.txt
@@ -3,15 +3,17 @@ nbsphinx
pandoc
docutils
pydata-sphinx-theme
pytorch-lightning>=0.9.0
lightning>=2.0.0
cloudpickle
torch>=1.6
optuna>=2.0
torch>=2.0
optuna>=3.1.0
scipy
pandas>=1.0
scikit-learn>0.23
pandas>=1.3
scikit-learn>1.2
matplotlib
statsmodels
ipython
nbconvert>=6.3.0
recommonmark>=0.7.1
pytorch-optimizer>=2.5.1
fastapi>0.80
1 change: 0 additions & 1 deletion docs/source/api.rst
@@ -11,5 +11,4 @@ API
data
models
metrics
optim
utils
12 changes: 6 additions & 6 deletions docs/source/getting-started.rst
@@ -46,7 +46,7 @@ The general setup for training and testing a model is
directly if you do not wish to load the entire training dataset at inference time.

#. Instantiate a model using its ``.from_dataset()`` method.
#. Create a ``pytorch_lightning.Trainer()`` object.
#. Create a ``lightning.Trainer()`` object.
#. Find the optimal learning rate with its ``.tuner.lr_find()`` method.
#. Train the model with early stopping on the training dataset and use the tensorboard logs
to understand if it has converged with acceptable accuracy.
@@ -65,9 +65,9 @@ Example

.. code-block:: python
import pytorch_lightning as pl
from pytorch_lightning.callbacks import EarlyStopping, LearningRateMonitor
import lightning.pytorch as pl
from lightning.pytorch.callbacks import EarlyStopping, LearningRateMonitor
from lightning.pytorch.tuner import Tuner
from pytorch_forecasting import TimeSeriesDataSet, TemporalFusionTransformer
# load data
@@ -105,7 +105,7 @@ Example
lr_logger = LearningRateMonitor()
trainer = pl.Trainer(
max_epochs=100,
gpus=0,
accelerator="auto",
gradient_clip_val=0.1,
limit_train_batches=30,
callbacks=[lr_logger, early_stop_callback],
@@ -127,7 +127,7 @@ Example
print(f"Number of parameters in network: {tft.size()/1e3:.1f}k")
# find optimal learning rate (set limit_train_batches to 1.0 and log_interval = -1)
res = trainer.tuner.lr_find(
res = Tuner(trainer).lr_find(
tft, train_dataloaders=train_dataloader, val_dataloaders=val_dataloader, early_stop_threshold=1000.0, max_lr=0.3,
)
1 change: 0 additions & 1 deletion docs/source/index.rst
@@ -27,7 +27,6 @@ Specifically, the package provides
* Multiple neural network architectures for timeseries forecasting that have been enhanced
for real-world deployment and come with in-built interpretation capabilities
* Multi-horizon timeseries metrics
* Ranger optimizer for faster model training
* Hyperparameter tuning with `optuna <https://optuna.readthedocs.io/>`_

The package is built on `PyTorch Lightning <https://pytorch-lightning.readthedocs.io/>`_ to allow