This repository contains PET-MAD, a universal interatomic potential for advanced materials modeling across the periodic table. PET-MAD is a Point Edge Transformer (PET) model trained on the Massive Atomic Diversity (MAD) dataset, capable of predicting energies and forces in complex atomistic simulations.
- Universality: PET-MAD is a generally applicable model that can be used for a wide range of materials and molecules.
- Accuracy: PET-MAD achieves accuracy comparable to system-specific models in various types of atomistic simulations of organic and inorganic systems, while remaining fast and efficient.
- Efficiency: PET-MAD combines high computational efficiency with low memory usage, making it suitable for large-scale simulations.
- Infrastructure: PET-MAD integrates with several MD engines (ASE, i-PI, LAMMPS), covering diverse research and application needs.
- HPC Compatibility: PET-MAD runs efficiently in HPC environments, enabling extensive production simulations.
You can install PET-MAD with pip:
pip install git+https://github.com/lab-cosmo/pet-mad.git
Or directly from PyPI (available soon):
pip install pet-mad
Alternatively, you can install PET-MAD using the `conda` package manager, which is especially important for running PET-MAD with LAMMPS.
Warning
We strongly recommend installing PET-MAD using Miniforge as the base `conda` provider, because other `conda` providers (such as Anaconda) may yield undesired behavior when resolving dependencies and are usually slower than Miniforge. Smooth installation and use of PET-MAD is not guaranteed with other `conda` providers.
To install Miniforge on unix-like systems, run:
wget "https://github.com/conda-forge/miniforge/releases/latest/download/Miniforge3-$(uname)-$(uname -m).sh"
bash Miniforge3-$(uname)-$(uname -m).sh
Once Miniforge is installed, create a new conda environment and install PET-MAD with:
conda create -n pet-mad
conda activate pet-mad
conda install -c metatensor -c conda-forge pet-mad
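To quickly check that the installation worked, you can try importing the calculator class used in the examples below (a minimal sanity check; it should exit without output if everything is in place):

python -c "from pet_mad.calculator import PETMADCalculator"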
Currently, we provide the following pre-trained models:
- `latest`: PET-MAD model trained on the MAD dataset, which contains 95,595 structures, including 3D and 2D inorganic crystals, surfaces, molecular crystals, nanoclusters, and molecules.
You can use the PET-MAD calculator, which is compatible with the Atomic Simulation Environment (ASE):
from pet_mad.calculator import PETMADCalculator
from ase.build import bulk

# Build a cubic diamond silicon cell (illustrative lattice constant)
atoms = bulk("Si", crystalstructure="diamond", a=5.43, cubic=True)

# Load the latest PET-MAD model and attach it as an ASE calculator
pet_mad_calculator = PETMADCalculator(version="latest", device="cpu")
atoms.calc = pet_mad_calculator

# Energies are in eV, forces in eV/Å
energy = atoms.get_potential_energy()
forces = atoms.get_forces()
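The same calculator can also drive other ASE workflows. As a minimal sketch, continuing from the snippet above, a geometry relaxation with ASE's BFGS optimizer could look like this (the 0.01 eV/Å force threshold is an arbitrary illustrative choice):

from ase.optimize import BFGS

# Relax the silicon cell using PET-MAD forces (continues from the snippet above)
opt = BFGS(atoms)
opt.run(fmax=0.01)  # illustrative convergence threshold in eV/Å

print("Relaxed energy (eV):", atoms.get_potential_energy())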
These ASE methods are ideal for single-structure evaluations, but they are inefficient when evaluating a large number of pre-defined structures. For that use case, see the batch evaluation workflow described below.
Efficient evaluation of PET-MAD on a desired dataset is also available from the command line via `metatrain`, which is installed as a dependency of PET-MAD.
To evaluate the model, you first need to fetch the PET-MAD model from the HuggingFace repository:
mtt export https://huggingface.co/lab-cosmo/pet-mad/resolve/main/models/pet-mad-latest.ckpt
This command will download the model and convert it to TorchScript format, also collecting the libraries required to run the model in the `extensions` folder. Then you need to create an `options.yaml` file and specify the dataset you want to evaluate the model on (with the dataset stored in `extxyz` format):
systems: your-test-dataset.xyz
targets:
  energy:
    key: "energy"
    unit: "eV"
Then, you can use the `mtt eval` command to evaluate the model on the dataset:
mtt eval pet-mad-latest.pt options.yaml --batch-size=16 --extensions-dir=extensions --output=predictions.xyz
This will create a file called `predictions.xyz` with the predicted energies and forces for each structure in the dataset. More details on how to use `metatrain` can be found in the Metatrain documentation.
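If you want to post-process the predictions in Python, the file can be read back with ASE. This is a minimal sketch; the exact names under which `metatrain` stores the predicted quantities in `predictions.xyz` are not specified here, so inspect `frame.info` and `frame.arrays` to see what is available:

from ase.io import read

# Load all frames written by `mtt eval`
frames = read("predictions.xyz", index=":")

for frame in frames:
    # Print the per-structure and per-atom quantities stored in the file
    print(frame.info.keys(), frame.arrays.keys())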
PET-MAD integrates with the following atomistic simulation engines:
- Atomic Simulation Environment (ASE)
- LAMMPS (including KOKKOS support)
- i-PI
- OpenMM (coming soon)
- GROMACS (coming soon)
To use PET-MAD with LAMMPS, you need to first install PET-MAD from conda
(see the installation instructions above). Then, install LAMMPS-METATENSOR,
which enables PET-MAD support:
conda install -c metatensor -c conda-forge lammps-metatensor
For GPU-accelerated LAMMPS:
conda install -c metatensor -c conda-forge lammps-metatensor=*=cuda*
Different MPI implementations are available:
- `nompi`: serial version (no MPI)
- `openmpi`: OpenMPI
- `mpich`: MPICH
Example for GPU-accelerated OpenMPI version:
conda install -c metatensor -c conda-forge lammps-metatensor=*=cuda*openmpi*
Please note that this version is not KOKKOS-enabled, so it provides limited performance on GPUs. A recipe to install the KOKKOS-enabled version of LAMMPS-METATENSOR is available here, and a direct conda installation will also be available soon.
To use a system-installed MPI on HPC machines, first install the dummy MPI package that matches your MPI implementation:
conda install "mpich=x.y.z=external_*"
conda install "openmpi=x.y.z=external_*"
where `x.y.z` is the version of your system-installed MPI. For example, for OpenMPI 4.1.4, use:
conda install "openmpi=4.1.4=external_*"
Then, install LAMMPS-METATENSOR with the `openmpi` variant:
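By analogy with the GPU example above, the CPU OpenMPI build can presumably be selected with a build-string pattern like the following (the exact pattern is an assumption; check the builds available on the `metatensor` channel):

conda install -c metatensor -c conda-forge "lammps-metatensor=*=*openmpi*"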
Fetch the PET-MAD checkpoint from the HuggingFace repository:
mtt export https://huggingface.co/lab-cosmo/pet-mad/resolve/main/models/pet-mad-latest.ckpt
This will download the model and convert it to TorchScript format compatible with LAMMPS, using the `metatensor` and `metatrain` libraries, which PET-MAD is based on.
Prepare a `lammps.in` file that uses the `metatensor` pair_style and defines the mapping from the LAMMPS atom types in the data file to the elements PET-MAD can handle, via the `pair_coeff * *` syntax. Here, type 1 is mapped to atomic number 14 (Si).
units metal
atom_style atomic
read_data silicon.data

# PET-MAD model exported by `mtt export`, evaluated on CPU
pair_style metatensor pet-mad-latest.pt &
    device cpu &
    extensions extensions
# Map LAMMPS atom type 1 to atomic number 14 (Si)
pair_coeff * * 14

neighbor 2.0 bin
timestep 0.001

dump myDump all xyz 10 trajectory.xyz
dump_modify myDump element Si

thermo_style multi
thermo 1

# NVT molecular dynamics at 300 K
velocity all create 300 87287 mom yes rot yes
fix 1 all nvt temp 300 300 0.10
run 100
Create the `silicon.data` data file for the silicon system:
# LAMMPS data file for Silicon unit cell

8 atoms
1 atom types

0.0 5.43 xlo xhi
0.0 5.43 ylo yhi
0.0 5.43 zlo zhi

Masses

1 28.084999992775295 # Si

Atoms # atomic

1 1 0 0 0
2 1 1.3575 1.3575 1.3575
3 1 0 2.715 2.715
4 1 1.3575 4.0725 4.0725
5 1 2.715 0 2.715
6 1 4.0725 1.3575 4.0725
7 1 2.715 2.715 0
8 1 4.0725 4.0725 1.3575
Run LAMMPS. By default, LAMMPS installs the `lmp_serial` executable for the serial version and `lmp_mpi` for the MPI version, so the run command depends on the version you installed:
lmp_serial -in lammps.in # Serial version
mpirun -np 1 lmp_mpi -in lammps.in # MPI version
- For CPU calculations, use a single MPI task unless simulating large systems (30+ Å box size). Multi-threading can be enabled via:
  export OMP_NUM_THREADS=4
- For GPU calculations, use one MPI task per GPU.
You can combine the PET-MAD calculator with the torch-based implementation of the D3 dispersion correction from pfnet-research, `torch-dftd`. Within the PET-MAD environment, you can install `torch-dftd` via:
pip install torch-dftd
Then you can use the `TorchDFTD3Calculator` class together with ASE's `SumCalculator` to combine the two calculators:
import torch

from ase.calculators.mixing import SumCalculator
from pet_mad.calculator import PETMADCalculator
from torch_dftd.torch_dftd3_calculator import TorchDFTD3Calculator

# Use the GPU if available
device = "cuda:0" if torch.cuda.is_available() else "cpu"

calc_MAD = PETMADCalculator(version="latest", device=device)
dft_d3 = TorchDFTD3Calculator(device=device, xc="pbesol", damping="bj")
combined_calc = SumCalculator([calc_MAD, dft_d3])

# assign the calculator to the atoms object
# atoms.calc = combined_calc
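For example, continuing the silicon example from the ASE section above (a minimal sketch reusing the same illustrative lattice constant):

from ase.build import bulk

atoms = bulk("Si", crystalstructure="diamond", a=5.43, cubic=True)
atoms.calc = combined_calc

# The total energy now includes the D3 dispersion contribution
print(atoms.get_potential_energy())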
More examples for ASE, i-PI, and LAMMPS are available in the Atomistic Cookbook.
PET-MAD can be fine-tuned using the Metatrain library.
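As a rough illustration of what a fine-tuning run can look like, here is a hedged sketch of an options file and the `mtt train` command; the YAML keys mirror the dataset definition used for `mtt eval` above, but the exact schema, and in particular how the PET-MAD checkpoint is supplied as the starting point, should be taken from the Metatrain documentation:

# options.yaml (sketch; key names are assumptions, see the Metatrain docs)
architecture:
  name: pet
training_set:
  systems: my-finetuning-dataset.xyz
  targets:
    energy:
      key: "energy"
      unit: "eV"

and then train with:

mtt train options.yaml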
Additional documentation can be found in the Metatensor and Metatrain repositories.
If you use PET-MAD in your research, please cite:
@misc{PET-MAD-2025,
title={PET-MAD, a universal interatomic potential for advanced materials modeling},
author={Arslan Mazitov and Filippo Bigi and Matthias Kellner and Paolo Pegolo and Davide Tisi and Guillaume Fraux and Sergey Pozdnyakov and Philip Loche and Michele Ceriotti},
year={2025},
eprint={2503.14118},
archivePrefix={arXiv},
primaryClass={cond-mat.mtrl-sci},
url={https://arxiv.org/abs/2503.14118}
}