forked from instadeepai/mlip
# Changelog
## Release 0.0.8
- Improving the performance of the So3krates model
- Fixing the dataloader for the `prefetch_factor == 0` case
- Enabling loading of the entire dataset into memory for smaller datasets
- Adding shared-memory support for multiprocessing
## Release 0.0.7
- Fixing support for custom models.
- Adding charge and spin support for the `LiTEN` model.
- Refactoring the radial basis function and cutoff/radial envelope function, and
removing the incorrectly added cutoff from EquiformerV2 and UMA.
- Adding So3krates model.
## Release 0.0.6
- Bumping the minimum jax/flax/orbax/matscipy/numpy versions and fixing compatibility
issues.
- The dataset is no longer fully loaded into memory before training, to support
larger datasets. To support dropping empty/unseen systems, additional
`*_exclude.npy` files are generated and stored alongside the dataset files
if any systems are excluded during preprocessing.
- Adding multi-task support for the `LiTEN` model.
- Simplifying the `LiTEN` model by removing unused layers.
## Release 0.0.5
- Adding support for spin, charge and task/dataset input.
- Adding support for dropping unseen elements from the model.
- Adding L2MAELoss borrowed from ocpmodels.
- Adding support for using pre-trained models for training.
- Fixing bugs when using `extended_metrics` and two stage training.
- Fixing batched MD simulation with JAX-MD backend (bugs introduced by MLIP).
## Release 0.0.4
- Rebasing dipm on mlip-v0.1.6 to fix bugs (such as incorrect distances in periodic
systems) and add new features.
- Adding support for EquiformerV2 model file conversion.
- Adding TensorBoard and WandB loggers for training.
- Fixing typos and incorrect Wigner-D matrices / Euler angles.
- Changing LICENSE to LGPL-3.0-or-later.
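The periodic-distance bug mentioned above is of the kind where interatomic distances must respect the minimum-image convention. A minimal sketch in plain Python (a hypothetical helper, not the library's actual code), assuming an orthorhombic cell:

```python
# Minimum-image distance in an orthorhombic periodic cell.
# Hypothetical illustration; not the mlip library's actual implementation.

def minimum_image_distance(r1, r2, cell):
    """Distance between two atoms under periodic boundary conditions.

    r1, r2: 3-vectors (lists of floats); cell: orthorhombic box lengths.
    """
    d2 = 0.0
    for a, b, box_len in zip(r1, r2, cell):
        delta = b - a
        # Wrap the displacement component into [-box_len/2, box_len/2].
        delta -= box_len * round(delta / box_len)
        d2 += delta * delta
    return d2 ** 0.5

# A naive Euclidean distance reports 9.0 here; across the periodic
# boundary of a box of length 10, the true separation is 1.0.
print(minimum_image_distance([0.5, 0.0, 0.0], [9.5, 0.0, 0.0],
                             [10.0, 10.0, 10.0]))
```

A cutoff-based neighbor search built on the naive distance would miss such boundary-crossing pairs entirely, which is why this class of bug silently corrupts forces in periodic systems.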
## Release 0.0.3
- Adding support for mixed precision training
- Adding support for force head in EquiformerV2 and UMA models
- Fixing parallel training
- Fixing NaN gradients of the SO2 convolution in special cases and incorrect MACE irreps
- Adapting docs for DIPM
## Release 0.0.2
- Adding model architectures: EquiformerV2, UMA
- Providing tools for downloading, decompressing, splitting, merging, and converting
LMDB and ExtXYZ datasets to HDF5 datasets, and removing ExtXYZ dataset support
- Making the HDF5 format compatible with the format used by the MACE library
- Supporting dataset-directory input, parallel loading, and automatic train/val/test
splitting
## Release 0.0.1
- Adding model architecture: LiTEN
- Changing the repository name and reorganizing the code structure
- Migrating from flax.linen to flax.nnx
- Renaming functions and classes: MlipNetwork -> ForceModel, parse_activation ->
get_activation_fn, RadialEmbeddingBlock -> RadialEmbeddingLayer, etc.
- Deleting the `swish` activation (identical to `silu`)
- Changing the model file format from `.zip` to `.safetensors`
## Release 0.1.6 (MLIP)
- Fixing incorrect instructions for GPU-compatible installation: most shells require
quotes around pip installations with extras.
## Release 0.1.5 (MLIP)
- Adding batched simulations feature for MD simulations and energy minimizations
with the JAX-MD backend.
- Removing now useless `stress_virial` prediction.
- Fixing correctness of `stress` and 0K `pressure` predictions. In 0.1.4,
the stress computation involved a derivative with respect to the
cell but with fixed positions. Now, the strain also acts on positions within
the unit cell, thus deforming the material homogeneously. This rigorously
translation-invariant stress removes the need for any virial-term correction of
cell-boundary effects. See for instance
[Thompson, Plimpton and Mattson 2009, eq (2)](https://doi.org/10.1063/1.3245303).
- Migrating from poetry to uv for dependency and package management.
- Improving inefficient logging strategy in ASE simulation backend.
- Clarifying in the documentation that we recommend a smaller value for the timestep
when running energy minimizations with the JAX-MD simulation backend.
- Removing need for separate install command for JAX-MD dependency.
- Adding easier install method for GPU-compatible JAX.
## Release 0.1.4 (MLIP)
- Removing constraints on some dependencies, such as numpy, jax, and flax. The mlip
library now allows for more flexibility in dependency versions for downstream
projects. This includes support for the newest jax versions 0.6.x and 0.7.x.
- Fixing simulation tutorial notebook by pinning versions of visualization helper
libraries.
- Adding the option to pass the `dataset_info` of a trained model to
`GraphDatasetBuilder`, which is important for downstream tasks. Failure to do so
might lead to silent inconsistencies in the mapping from atomic numbers to species
indices, especially when the downstream data has fewer elements than the training
set (see e.g. the fine-tuning tutorial).
- Fixing the `stress` predictions, with new formulas for the virial stress
and 0 Kelvin pressure term. These features should still be seen as beta for now
as we proceed to test them further (see docstrings for more details).
## Release 0.1.3 (MLIP)
- Adding two new options to our MACE implementation (see `MaceConfig`, these features
should be considered in beta state for now):
+ `gate_nodes: bool` to apply a scalar node gating after the power expansion
layer,
+ `species_embedding_dim: int | None` to optionally encode pairwise node
species of edges in the convolution block.
Making use of these options may improve inference speed at similar accuracy.
- Fixing a bug where stress predictions would override energy and force predictions
to `None` when `predict_stress = True`. Note that stress computations
should not be considered reliable for now, and will be fixed in an upcoming
release.
## Release 0.1.2 (MLIP)
- Fixing the computation of metrics during training, by reweighting the metrics of
each batch to account for a varying number of real graphs per batch; this results
in the metrics being independent of the batching strategy and number of GPUs employed
- In addition to the point above, fixing the computation of RMSE metrics: only MSE
metrics are now computed in the loss, and the square root is taken at the very end
when logging
- Deleting relative and 95th-percentile metrics, as they are not straightforward to
compute on the fly with our dynamic batching strategy; we recommend computing them
separately for a model checkpoint if necessary
- Minor modifications to the README and documentation
## Release 0.1.1 (MLIP)
- Minor modifications to the README and documentation
- Adding link to white paper in README
## Release 0.1.0 (MLIP)
- Implemented model architectures: MACE, NequIP and ViSNet
- Dataset preprocessing
- Training of MLIP models
- Batched inference with trained MLIP models
- MD simulations with MLIP models using JAX-MD and ASE simulation backends
- Energy minimizations with MLIP models using the same simulation backends
- Fine-tuning of pre-trained MLIP models (only for MACE)