Releases: romulus0914/NASBench-PyTorch
v1.3.1
v1.3
The code was modified to make it easier to reproduce the original results. Previously, only the code structure matched the paper: the hyperparameters were different and the optimizer was SGD, because there were difficulties getting RMSProp training to work.
Now the networks can be successfully trained with RMSProp and with the same hyperparameters as in the paper.
- Added reproducibility section to the readme
- Hyperparameters were modified so that they match those from the NAS-Bench-101 paper
- TensorFlow version of RMSProp is supported
- Gradient clipping can be turned off
Special thanks to @longerhost for helping to reproduce the original training!
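A minimal sketch of what such a training step looks like with plain torch.optim.RMSprop, including the gradient clipping that this release makes optional. The hyperparameter values below are illustrative placeholders, not the official NAS-Bench-101 settings, and the model is a toy stand-in for the repo's networks:

```python
import torch

# Toy stand-in model; the repo trains real NAS-Bench-101 cells instead.
model = torch.nn.Linear(10, 1)

optimizer = torch.optim.RMSprop(
    model.parameters(),
    lr=0.2,        # placeholder learning rate, not the paper's value
    momentum=0.9,  # placeholder momentum
    eps=1.0,       # note: TF-style RMSProp adds eps inside the sqrt, while
                   # torch.optim.RMSprop adds it outside; that mismatch is
                   # why a TensorFlow-compatible variant was needed
)

# One illustrative training step with gradient clipping (optional since v1.3).
x, y = torch.randn(4, 10), torch.randn(4, 1)
loss = torch.nn.functional.mse_loss(model(x), y)
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=5.0)
optimizer.step()
```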
v1.2.3
- fixed a bug where the model couldn't be cast to double (torch.zeros was replaced by torch.zeros_like), fix by @abhash-er
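The difference behind this fix can be shown in a few lines: torch.zeros() always allocates a tensor with the default dtype (float32), ignoring the model's cast, while torch.zeros_like() inherits the dtype and device of its reference tensor. A standalone illustration, not the repo's actual code:

```python
import torch

class PadModule(torch.nn.Module):
    # Hypothetical module illustrating the bug class fixed in v1.2.3.
    def forward(self, x):
        fixed = torch.zeros_like(x)    # follows x's dtype and device
        broken = torch.zeros(x.shape)  # always default dtype (float32)
        return fixed, broken

# Casting the module to double only works if internal zeros follow the input.
m = PadModule().double()
x = torch.randn(3, 4, dtype=torch.float64)
fixed, broken = m(x)
```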
v1.2.2
v1.2.1
- fixed device inconsistencies when training on CUDA - torch.zeros() caused the problem
v1.2
- fixed a bug in training - when the optimizer was None, it wasn't set to SGD properly
- modified the code so that the networks can be passed to torch.jit.script()
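A standalone illustration of what torch.jit.script() compatibility means, using a toy module rather than the repo's actual network class (scripting requires the forward pass to be expressible in TorchScript's Python subset):

```python
import torch

class TinyNet(torch.nn.Module):
    # Toy stand-in for a scriptable network.
    def __init__(self):
        super().__init__()
        self.fc = torch.nn.Linear(4, 2)

    def forward(self, x):
        return torch.relu(self.fc(x))

# Compile the module to TorchScript and run it like a normal module.
scripted = torch.jit.script(TinyNet())
out = scripted(torch.randn(1, 4))
```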
v1.1
- Updated the nasbench query example in the README
- More convenient checkpointing during training. Before, the checkpoint function had to have this signature:
checkpoint_func(network, metrics)
Since this version, it also receives the epoch number as a third argument:
checkpoint_func(network, metrics, epoch)
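A sketch of a callback matching the new three-argument form (the names and the simulated call site below are hypothetical; the real trainer's invocation may differ):

```python
saved = []

def checkpoint_func(network, metrics, epoch):
    # Hypothetical callback: a real implementation might call torch.save()
    # here, tagging the file name with the epoch number. We just record
    # what the trainer passed in.
    saved.append({"epoch": epoch, "metrics": metrics})

# Simulate the trainer invoking the callback after each epoch.
for epoch in range(1, 4):
    checkpoint_func(network=None, metrics={"train_loss": 1.0 / epoch}, epoch=epoch)
```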
v1.0
nasbench_pytorch 1.0
NASBench-PyTorch is now an installable package uploaded to PyPI. You can install it like this (a working PyTorch installation is needed):
pip install nasbench_pytorch
What's Changed
- Refactor the project to a package-like structure by @gabrielasuchopar in #2