Releases: ashleve/lightning-hydra-template
v2.0.3
What's Changed
- Lightning + Aim dependency fix in conda `environment.yaml` and `setup.py` by @tesfaldet in #575
- Lightning import fix in `instantiators.py` by @tesfaldet in #577
- Fix WandB config improper hierarchical display of keys by @dreaquil in #583
- Removing yaml extension from hydra config names in defaults lists by @tesfaldet in #584
- Docstrings revamp by @tesfaldet in #589
- Rename `pyrootutils` to `rootutils` by @ashleve in #592
- Fixes colorlog issue where `train.log` is saved in project root dir by @tesfaldet in #588
- Fix accelerator in `tests/test_train.py` by @caplett in #595
- Update PyTorch Lightning DDP Documentation Links in `README.md` by @amorehead in #601
- Fix `torch.compile` on `nn.Module` instead of on `LightningModule` by @tesfaldet in #587
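The `torch.compile` fix (#587) compiles the inner `nn.Module` rather than the `LightningModule` wrapper. A minimal sketch of the pattern with a plain-Python stand-in for the Lightning class (the `compile_model` flag name is illustrative, not the template's exact API):

```python
class LitModule:
    """Sketch: compile the wrapped network, not the LightningModule itself."""

    def __init__(self, net, compile_model: bool = False):
        self.net = net  # the inner nn.Module
        self.compile_model = compile_model

    def setup(self, stage: str) -> None:
        # compile once, before fitting; torch.compile returns a new
        # callable wrapping the original module
        if self.compile_model and stage == "fit":
            import torch

            self.net = torch.compile(self.net)
```

Compiling `self.net` keeps Lightning's hooks and checkpointing on the uncompiled wrapper while the forward pass still benefits from compilation.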
Full Changelog: v2.0.2...v2.0.3
v2.0.2
v2.0.1
What's Changed
- Update badges in `README.md` by @amorehead in #553
- Fix filename `instantiatiators.py` -> `instantiators.py` by @Phimos in #558
- Fix dead links in callback configs by @tbazin in #557
Full Changelog: v2.0.0...v2.0.1
v2.0.0
Release for alignment with PyTorch 2.0 and Lightning 2.0.
Changes
🚀 Features
- Support for logging with Aim @tesfaldet (#534)
- Add option for pytorch 2.0 model compilation @ashleve (#550)
🧹 Maintenance
- Update template to Lightning 2.0 @johnnynunez (#548)
- Update pre-commit hooks @ashleve (#549)
- Refactor utils @ashleve (#541)
Full Changelog: v1.5.3...v2.0.0
v1.5.3
Changes
🚀 Features
- Add `__init__.py` to `configs/` folder @ashleve (#539)
- Support for installing dependencies with conda @tesfaldet (#532)
- Encourage resetting all validation metrics when training starts @ashleve (#540)
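The metric-reset change (#540) guards against Lightning's validation sanity check polluting best-metric tracking. A self-contained sketch of the idea (the `MaxMetric` stand-in is illustrative; the template uses torchmetrics):

```python
class MaxMetric:
    """Minimal stand-in for torchmetrics.MaxMetric (illustrative only)."""

    def __init__(self) -> None:
        self.value = float("-inf")

    def update(self, x: float) -> None:
        self.value = max(self.value, x)

    def reset(self) -> None:
        self.value = float("-inf")


class LitModule:
    def __init__(self) -> None:
        self.val_acc_best = MaxMetric()

    def on_train_start(self) -> None:
        # Lightning runs a validation sanity check before training;
        # reset so those throwaway results don't count as a "best" score
        self.val_acc_best.reset()
```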
🧹 Maintenance
- Set hydra version to 1.3 in tests @ashleve (#542)
- Bump hydra-core from 1.3.1 to 1.3.2 @dependabot (#536)
v1.5.2
Changes
🧹 Maintenance
- Deprecate Python 3.7 @ashleve (#523)
- Bump pytorch-lightning from 1.8.3 to 1.9.1 @dependabot (#522)
- Hotfix for isort poetry incompatibility @atong01 (#515)
- Fix use of deprecated LightningLoggerBase class @colobas (#517)
📝️ Documentation
- Update `README.md` @ashleve (#524)
- Fix readme typo @amorehead (#507)
v1.5.1
Changes
🧹 Maintenance
- Add PR authors to release draft config @ashleve (#503)
- Remove object instantiation from `__main__` methods @ashleve (#502)
- Rename `datamodules` folder to `data` @ashleve (#501)
- Refactor tests @ashleve (#498)
- Change root setup to `.project-root` file @ashleve (#496)
- Add `dev` branch to PR tests workflow @ashleve (#497)
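The `.project-root` change replaces heuristics (such as searching for `.git`) with an explicit marker file at the repository root. The core idea, sketched with only the standard library (the template itself delegates this to a helper package):

```python
from pathlib import Path


def find_root(start: Path, indicator: str = ".project-root") -> Path:
    """Walk upward from `start` until a directory containing `indicator` is found."""
    start = start.resolve()
    for directory in (start, *start.parents):
        if (directory / indicator).exists():
            return directory
    raise FileNotFoundError(f"no {indicator!r} found above {start}")
```

Entry scripts can then resolve paths relative to the returned root, so they work no matter which directory they are launched from.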
v1.5.0
Changes
🚀 Features
- Add release drafter (#493)
- Add `codecov.yml` to prevent failing CI pipeline on coverage decrease (#484)
- Add learning rate scheduler example (#439)
- Make use of learning rate scheduler optional (#449)
- Add shellcheck linter (#427)
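The optional-scheduler change (#449) means a model config can simply set its scheduler entry to null. A hedged sketch of what such a config might look like, following hydra's `_partial_` convention (keys and values below are illustrative, not copied from the template):

```yaml
# model config sketch -- set `scheduler: null` to disable scheduling
optimizer:
  _target_: torch.optim.Adam
  _partial_: true
  lr: 0.001

scheduler:
  _target_: torch.optim.lr_scheduler.ReduceLROnPlateau
  _partial_: true
  mode: min
  factor: 0.1
  patience: 10
```

With `_partial_: true`, hydra instantiates a `functools.partial`, so the LightningModule can bind `self.parameters()` (and later the optimizer) inside `configure_optimizers()`, skipping the scheduler branch when the config entry is null.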
🐛 Bug Fixes
- Fix sending hparams to only one logger (#479)
- Fix logging metrics in DDP mode (#426)
- Fix `make clean-logs` command (#430)
- Fix `make sync` command (#423)
- Fix missing CPU trainer (#402)
- Fix typing (#401)
🧹 Maintenance
- Upgrade to hydra 1.3 (#480)
- Rename `step()` to `model_step()` for compatibility with recent lightning release (#472)
- Upgrade deprecated TPU import (#473)
- Upgrade deprecated accuracy metric initialization to recent torchmetrics release (#475)
- Refactor `task_wrapper` decorator (#488)
- Move tasks code inside entry files (#421)
- Pre-commit config updates for jupyter notebooks and flake8 (#435)
- Add separate job for macOS in CI test workflow (#474)
- Add separate job for Windows in CI test workflow (#422)
- Disable ignoring net in mnist module (#481)
- Remove debug from makefile (#482)
- Bump pytorch-lightning from 1.7.1 to 1.8.1 (#468)
- Bump torchmetrics from 0.9.3 to 0.10.0 (#454)
- Bump pytorch-lightning from 1.6.5 to 1.7.1 (#408)
- Bump hydra-core from 1.3.0 to 1.3.1 (#492)
📝️ Documentation & Comments
- Add Vertex AI integration repo to readme (#440)
- Add explicit comment warning to `training_epoch_end()` (#486)
- Improve utils warnings (#483)
- Fix filenames in docstring (#428)
- Update example of using tags command in `README.md` (#465)
- Improve comments (#429, #441, #476)
- Fix broken link of datamodule (#444)
- Update `README.md` (#419, #425, #442)
@ashleve @yipliu @amorehead @atong01 @YuCao16 @Yongtae723 @cauliyang
v1.4.0
What's Changed
- Adapt template to `hydra 1.2` - no more changing the working directory by default
- Rename `test.py` and `test.yaml` to `eval.py` and `eval.yaml` (so as to avoid confusion with project tests)
- Move `train.py` and `eval.py` inside `src/`
- Add `pyrootutils` package for standardizing the project root setup in `train.py` and `eval.py`
- Rename `pipelines` to `tasks`
- Create a separate folder for tasks
- Add `task_name` to main config, which determines hydra output folder path
- Introduce `@task_wrapper` decorator for applying utilities before and after the task is executed
- Standardize what is returned from tasks: `Tuple[metric_dict, object_dict]`
- Add `SimpleDenseNet` config to model config with recursive instantiation
- Add optimizer config to model config using `_partial_: true`
- Remove `_convert_=partial` from trainer instantiation (no longer needed since recent lightning release)
- Add `ckpt_path` to main config, `trainer.fit()` and `trainer.test()`, for compatibility with recent lightning release
- Add resetting `val_acc_best` metric at the start of the training to prevent storing results from validation sanity checks
- Add verifying logger is not None before logging hparams
- Add `tags` to main config
- Add prompting user to input tags when none are provided to `utils.extras()`
- Remove experiment `name` (since tags and `task_name` are enough)
- Rename `config` to `cfg` since it's the standard naming convention in hydra
- Split utils into multiple files: `utils.py`, `rich_utils.py`, `pylogger.py`
- Add `utils.instantiate_callbacks()` and `utils.instantiate_loggers()` to reduce the boilerplate in tasks
- Add `utils.get_metric_value()` for safely retrieving optimized metric
- Rename `utils.finish()` to `utils.close_loggers()`
- Move extra config utils to `configs/extras/default.yaml`
- Replace deprecated `trainer.gpus` argument with `trainer.accelerator` and `trainer.devices`
- Add separate trainer configs for GPU, CPU, simulating DDP on CPU and MPS accelerator (Accelerated PyTorch Training on Mac)
- Add `hydra.mode=MULTIRUN` to `mnist_optuna.yaml` config (so using `-m` is no longer needed when attaching this config)
- Update `mnist_optuna.yaml` for compatibility with new search space syntax in `hydra 1.2`
- Disable callbacks in debug configs by default (fixes `debug/overfit.yaml`)
- Disable hydra command line debug logger in debug configs by default
- Split callbacks config into multiple files
- Replace `setup.cfg` with `pyproject.toml` since it's a more versatile standard (PEP 518)
- Add pre-commit hooks: `pyupgrade` (automatically upgrading python syntax to newer version), `bandit` (security linter), `codespell` (spelling linter), `mdformat` (markdown formatting)
- Redesign testing and add tests covering ddp, multirun, loggers, resuming training and evaluation
- Implement CI workflows with GitHub Actions: executing pytest, test code coverage measuring, code quality testing for main branch and PRs
- Add `dependabot`
- Add pull request template
- Add `setup.py`
- Add `Makefile`
- Update `README.md`
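The `@task_wrapper` idea above can be sketched in plain Python: wrap a task so loggers are always closed, even on failure, while preserving the `(metric_dict, object_dict)` return contract. This is an illustrative reimplementation, not the template's exact code (`close_loggers` here is a stand-in for `utils.close_loggers()`):

```python
from functools import wraps


def close_loggers() -> None:
    # stand-in for utils.close_loggers(); the real utility finalizes
    # experiment trackers such as wandb
    print("closing loggers...")


def task_wrapper(task_func):
    """Apply utilities before/after a task and guarantee cleanup on failure."""

    @wraps(task_func)
    def wrap(cfg):
        try:
            metric_dict, object_dict = task_func(cfg)
        finally:
            # the real decorator also logs any raised exception here,
            # so failed multirun jobs leave a traceback behind
            close_loggers()
        return metric_dict, object_dict

    return wrap


@task_wrapper
def train(cfg):
    # a toy task obeying the (metric_dict, object_dict) contract
    return {"val/acc": 0.97}, {"cfg": cfg}
```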
@nils-werner @johnnynunez @elisim @yu-xiang-wang @yipliu @Gxinhu @binlee52
v1.3.0
The template has been significantly refactored.
List of changes:
- Introduce multiple pipelines, to showcase example of how one can separate training from evaluation, `run.py` has been replaced by `train.py` and `test.py`
- The `mode` group config has been removed since it was confusing, now every run is treated as an experiment, and debugging is moved to a separate config group
- Introduce `debug` config group
- Introduce `log_dir` config group
- Move wandb callbacks to the branch `wandb-callbacks` to make template logger-agnostic
- Refactor rich config printing, now all config groups are always printed instead of just the pre-selected ones, but you can still decide on the print order
- Add `nbstripout` to pre-commit hooks, for automatic clearing of jupyter notebook outputs before commit
- Update packages in `requirements.txt` and `pre-commit-config.yaml` to newest versions
- Remove some of the unimportant default optuna parameters in `mnist_optuna.yaml` and add more explanatory comments
- Remove no longer needed utilities from `utils.extras()`
- Add config flag for skipping training
- Fix hydra package versions in `requirements.txt` for mac compatibility
- Remove redundant parts in filenames: `mnist_model.yaml` -> `mnist.yaml`, `mnist_datamodule.yaml` -> `datamodule.yaml`
- Change `mnist_model.py` -> `mnist_module.py` and `MNISTLitModel` -> `MNISTLitModule`
- Rename folder `modules/` to `components/`
- Change `accelerator="ddp"` to `strategy="ddp"` since it was deprecated by lightning
- Remove trainer arguments deprecated by lightning: `weights_summary` and `progress_bar_refresh_rate`
- Specify black profile for isort inside `.pre-commit-config.yaml` just in case someone deletes the `setup.cfg`
- Specify `testpaths` in `setup.cfg` so pytest knows all test files are placed only in `tests/` folder
- Allow for using relative checkpoint paths
- Rename folder `bash` to `scripts`
- Introduce `vendor` dir as a "best practice" for storing third party code
- Introduce local config files in `configs/local/`, which can be used for storing machine/user specific configurations, e.g. configuration of slurm cluster
- Unify logging directories structure
- Add `RichModelSummary` to default callbacks
- Fix missing parameter in "Accessing datamodule attributes" trick in `README.md`
- General `README.md` improvements
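The `accelerator="ddp"` to `strategy="ddp"` migration above boils down to separating *where* training runs from *how* it is distributed. A hedged sketch of what a DDP trainer config might look like after the change (file path and values are illustrative):

```yaml
# configs/trainer/ddp.yaml (sketch)
defaults:
  - default

# `accelerator` now names the hardware, `strategy` names the
# distribution method -- previously both were crammed into `accelerator`
accelerator: gpu
devices: 4
strategy: ddp
```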
Special thanks to @nils-werner, @charlesincharge and @Steve-Tod for their PRs.