### New Features

- [Documentation] Updated, executable ESM-2nv notebooks demonstrating: data preprocessing and model training with custom datasets; fine-tuning on FLIP data; inference on OAS sequences; and pre-training from scratch and continuing training.
- [Documentation] New notebook demonstrating Zero-Shot Protein Design Using ESM-2nv. Thank you to @awlange from A-Alpha Bio for contributing the original version of this recipe!

### Bug fixes and Improvements

- [Geneformer] Fixed a bug in preprocessing caused by a relocation of dependent artifacts.
- [Geneformer] Fixed a bug in fine-tuning so that it uses the newer preprocessing constructor.

## BioNeMo Framework v1.8

### New Features

- [Documentation] Updated, executable MolMIM notebooks demonstrating: training on custom data, inference and downstream prediction, ZINC15 dataset preprocessing, and CMA-ES optimization.
- [Dependencies] Upgraded the framework to [NeMo v1.23](https://github.com/NVIDIA/NeMo/tree/v1.23.0), which updates PyTorch to version 2.2.0a0+81ea7a4 and CUDA to version 12.3.

### Bug fixes and Improvements

- [ESM2] Fixed a bug in gradient accumulation in encoder fine-tuning.
- [MegaMolBART] Made MegaMolBART encoder fine-tuning respect the random seed set by the user.
- [MegaMolBART] Fixed a fine-tuning bug triggered by `val_check_interval=1`.
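
As context for the `val_check_interval` fix, this is the standard PyTorch Lightning `Trainer` argument controlling how often validation runs; an integer value N runs validation every N training batches. A minimal, hypothetical config sketch (all keys other than `val_check_interval` are illustrative, not taken from the release notes):

```yaml
# Hypothetical fine-tuning config sketch.
trainer:
  devices: 1               # illustrative
  max_steps: 1000          # illustrative
  val_check_interval: 1    # integer: validate after every training batch
```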

### Known Issues

### New Features

- [EquiDock] Added removal of steric clashes as a post-processing step after EquiDock inference.
- [Documentation] Updated the Getting Started section, which sequentially describes prerequisites, BioNeMo Framework access, startup instructions, and next steps.

### Known Issues

### New Features

- [Model Fine-tuning] Added the `model.freeze_layers` fine-tuning config parameter to freeze a specified number of layers. Thank you to GitHub user [@nehap25](https://github.com/nehap25)!
- [ESM2] Loading pre-trained ESM-2 weights and continuing pre-training on the MLM objective with a custom FASTA dataset is now supported.
- [OpenFold] Merged the fix for the MLPerf feature 3.2 bug (mha_fused_gemm).
- [OpenFold] Integrated MLPerf feature 3.10 into the BioNeMo framework.
- [DiffDock] Updated the data loading module for DiffDock model training, changing the backend from sqlite3 to webdataset.
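
To illustrate the `model.freeze_layers` parameter mentioned above, a hypothetical YAML override sketch; only `model.freeze_layers` comes from the release note, and the surrounding keys and path are illustrative:

```yaml
# Hypothetical fine-tuning config sketch: freeze the first N layers.
model:
  restore_from_path: /path/to/pretrained.nemo  # illustrative
  freeze_layers: 6   # freeze the first 6 layers during fine-tuning
```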

## BioNeMo Framework v1.5