[0.1.0] - 2025-10-25
Features
- Implement network pruning logic (#2)
- Implement FLOPs computation for additional layer types (#8)
- Add model comparison script (#9)
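
The pruning (#2), FLOPs (#8), and comparison (#9) entries above are commit titles, so they say nothing about the actual interface. As a rough illustration only, a magnitude-based pruning pass and a per-layer FLOPs count in PyTorch can look like the sketch below; every name here (`prune_by_magnitude`, `count_flops`) is a hypothetical assumption, not the repository's real API.

```python
# Hypothetical sketch only: none of these names come from the
# neural-pruning repository itself.
import torch
import torch.nn as nn

def prune_by_magnitude(model: nn.Module, sparsity: float = 0.5) -> None:
    """Zero out the smallest-magnitude weights in each Conv2d/Linear layer."""
    for module in model.modules():
        if isinstance(module, (nn.Conv2d, nn.Linear)):
            weight = module.weight.data
            k = int(weight.numel() * sparsity)
            if k == 0:
                continue
            # kth smallest absolute value becomes the pruning threshold
            threshold = weight.abs().flatten().kthvalue(k).values
            mask = weight.abs() > threshold
            weight.mul_(mask)

def count_flops(module: nn.Module, out: torch.Tensor) -> int:
    """Rough multiply-accumulate count for a single layer's forward pass."""
    if isinstance(module, nn.Conv2d):
        kh, kw = module.kernel_size
        # out.numel() covers batch, output channels, and spatial positions
        return out.numel() * module.in_channels * kh * kw // module.groups
    if isinstance(module, nn.Linear):
        return out.numel() * module.in_features
    return 0  # other layer types (pooling, activations, etc.) omitted here
```

A model-level total, of the kind a comparison script might report, could then be gathered by registering forward hooks that call `count_flops` on each layer during one dummy forward pass.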
Refactor
- Enhance logging and adjust checkpointing logic for training (#1)
- Improve logging setup and model loading process in testing (#5)
- Rename project to "neural-pruning" (#6)
- Improve modularity, clean up code, and clarify logic (#7)
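
The logging entries above (#1, #5) do not show the resulting setup. Below is a minimal sketch of the kind of shared console-plus-file logger such a refactor typically consolidates; the function name, logger name, and format string are assumptions, not code from the project.

```python
# Hypothetical sketch of a shared logging utility; illustrative only.
import logging
import sys

def setup_logging(log_file: str = "train.log",
                  level: int = logging.INFO) -> logging.Logger:
    """Configure a logger that writes to both the console and a file."""
    logger = logging.getLogger("neural-pruning")
    logger.setLevel(level)
    formatter = logging.Formatter("%(asctime)s %(levelname)s %(message)s")
    for handler in (logging.StreamHandler(sys.stdout),
                    logging.FileHandler(log_file)):
        handler.setFormatter(formatter)
        logger.addHandler(handler)
    return logger
```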
Documentation
- Add README (#11)
Miscellaneous Tasks
- Consolidate configuration parsing and logging utilities (#3)
- Adjust default config paths for training and testing tasks (#4)
- Add example configuration files (#10)
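
The example configuration files (#10) are not reproduced in these notes, so the following sketch only illustrates how consolidated config parsing with an overridable default path (#3, #4) commonly works. The path `configs/train.yaml`, the YAML format, and the `load_config` name are assumptions about the project, not facts from it.

```python
# Hypothetical sketch: config loading with an overridable default path.
import argparse
import yaml  # third-party: pip install pyyaml

def load_config(default_path: str = "configs/train.yaml") -> dict:
    """Parse --config from the command line and load the YAML file."""
    parser = argparse.ArgumentParser()
    parser.add_argument("--config", default=default_path,
                        help="path to a YAML configuration file")
    args = parser.parse_args()
    with open(args.config) as f:
        return yaml.safe_load(f)

if __name__ == "__main__":
    config = load_config()
    print(config)
```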
- Add MIT License file (#12)
All Commits
- refactor: enhance logging and adjust checkpointing logic for training by @VioletsOleander in #1
- feat: implement network pruning logic by @VioletsOleander in #2
- chore: consolidate configuration parsing and logging utilities by @VioletsOleander in #3
- chore: adjust default config paths for training and testing tasks by @VioletsOleander in #4
- refactor: improve logging setup and model loading process in testing by @VioletsOleander in #5
- refactor: rename project to "neural-pruning" by @VioletsOleander in #6
- refactor: improve modularity, clean up code, and clarify logic by @VioletsOleander in #7
- feat: implement FLOPs computation for additional layer types by @VioletsOleander in #8
- feat: add model comparison script by @VioletsOleander in #9
- chore: add example configuration files by @VioletsOleander in #10
- docs: add README by @VioletsOleander in #11
- chore: add MIT License file by @VioletsOleander in #12
- chore: release v0.1.0 by @VioletsOleander in #13
Full Changelog: https://github.com/VioletsOleander/neural-pruning/commits/v0.1.0