Model Card for mace-universal / mace-mp

MACE-MP is a pretrained general-purpose foundational interatomic potential published with the preprint arXiv:2401.00096.

This repository archives the pretrained checkpoints, for manual loading with MACECalculator or for further fine-tuning. The easiest way to use the models is now to follow the documentation for foundational models. All models are trained on MPTrj, the Materials Project relaxation trajectories compiled by the CHGNet authors, covering 89 elements and 1.6M configurations. These checkpoints were used for materials stability prediction on Matbench Discovery and in the associated preprint.
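For instance, with a recent MACE release the foundation-model route is a one-liner: the mace_mp helper downloads and caches a pretrained MACE-MP checkpoint and returns a ready ASE calculator. A minimal sketch (argument values are illustrative; check the keyword names against your installed version):

from mace.calculators import mace_mp

# Fetches and caches a pretrained MACE-MP checkpoint,
# wrapped as an ASE calculator.
calc = mace_mp(model="medium", device="cuda", default_dtype="float64")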

MACE (Multiple Atomic Cluster Expansion) is a machine-learning interatomic potential (MLIP) with higher-order equivariant message passing. For more information about the MACE formalism, please see the authors' paper.

Usage

  1. (optional) Install PyTorch and ASE prerequisites at your preferred versions
  2. Install MACE from GitHub (not from PyPI):
pip install git+https://github.com/ACEsuit/mace.git
  3. Use MACECalculator:
from mace.calculators import MACECalculator
from ase import units
from ase.build import bulk
from ase.md.npt import NPT

atoms = bulk("NaCl", crystalstructure="rocksalt", a=3.54, cubic=True)

calculator = MACECalculator(
  model_paths="/path/to/pretrained.model",  # downloaded checkpoint
  device="cuda",  # or "cpu"
  default_dtype="float64",  # "float32" is faster, "float64" more accurate
)

atoms.calc = calculator

# Illustrative MD settings; tune them for your system.
dyn = NPT(
  atoms=atoms,
  timestep=2 * units.fs,
  temperature_K=300,
  externalstress=0.0,  # target stress (ambient pressure)
  ttime=25 * units.fs,  # thermostat characteristic time
  pfactor=(75 * units.fs) ** 2 * units.GPa,  # barostat: ptime^2 * bulk modulus
)

dyn.run(1000)  # number of MD steps
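
To record the trajectory and thermodynamic quantities during the run, you can attach ASE's standard writers to the dynamics object before calling run (file names here are arbitrary):

from ase.io.trajectory import Trajectory
from ase.md import MDLogger

# Snapshot every 10 steps; log energies, temperature, and stress alongside.
dyn.attach(Trajectory("nacl_npt.traj", "w", atoms).write, interval=10)
dyn.attach(MDLogger(dyn, atoms, "nacl_npt.log", stress=True), interval=10)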

Citing

If you use the pretrained models in this repository, please cite all the following:

@article{batatia2023foundation,
  title={A foundation model for atomistic materials chemistry},
  author={Batatia, Ilyes and Benner, Philipp and Chiang, Yuan and Elena, Alin M and Kov{\'a}cs, D{\'a}vid P and Riebesell, Janosh and Advincula, Xavier R and Asta, Mark and Baldwin, William J and Bernstein, Noam and others},
  journal={arXiv preprint arXiv:2401.00096},
  year={2023}
}

@inproceedings{Batatia2022mace,
  title={{MACE}: Higher Order Equivariant Message Passing Neural Networks for Fast and Accurate Force Fields},
  author={Ilyes Batatia and David Peter Kovacs and Gregor N. C. Simm and Christoph Ortner and Gabor Csanyi},
  booktitle={Advances in Neural Information Processing Systems},
  editor={Alice H. Oh and Alekh Agarwal and Danielle Belgrave and Kyunghyun Cho},
  year={2022},
  url={https://openreview.net/forum?id=YPpSngE-ZU}
}

@article{riebesell2023matbench,
  title={Matbench Discovery--An evaluation framework for machine learning crystal stability prediction},
  author={Riebesell, Janosh and Goodall, Rhys EA and Jain, Anubhav and Benner, Philipp and Persson, Kristin A and Lee, Alpha A},
  journal={arXiv preprint arXiv:2308.14920},
  year={2023}
}

@misc{yuan_chiang_2023,
  author={Chiang, Yuan and Benner, Philipp},
  title={mace-universal (Revision e5ebd9b)},
  year={2023},
  url={https://huggingface.co/cyrusyc/mace-universal},
  doi={10.57967/hf/1202},
  publisher={Hugging Face}
}

@article{deng2023chgnet,
  title={CHGNet as a pretrained universal neural network potential for charge-informed atomistic modelling},
  author={Deng, Bowen and Zhong, Peichen and Jun, KyuJung and Riebesell, Janosh and Han, Kevin and Bartel, Christopher J and Ceder, Gerbrand},
  journal={Nature Machine Intelligence},
  pages={1--11},
  year={2023},
  publisher={Nature Publishing Group UK London}
}

Training Guide

Training Data

For now, please download the MPTrj data from Figshare. We may upload it to Hugging Face Datasets in the future.
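
MACE's training pipeline consumes extended-XYZ files, while MPTrj is distributed as a JSON of pymatgen structures, so a conversion step is needed. Below is a minimal sketch; the file name and the per-frame keys ("structure", "uncorrected_total_energy", "force") are assumptions, so verify them against the release you download:

import json

import numpy as np
from ase.io import write
from pymatgen.core import Structure
from pymatgen.io.ase import AseAtomsAdaptor

# File name and record keys are assumptions -- check the downloaded release.
with open("MPtrj_2022.9_full.json") as f:
    mptrj = json.load(f)

frames = []
for material in mptrj.values():  # one entry per Materials Project ID
    for frame in material.values():  # one entry per relaxation step
        atoms = AseAtomsAdaptor.get_atoms(Structure.from_dict(frame["structure"]))
        atoms.info["energy"] = frame["uncorrected_total_energy"]
        atoms.arrays["forces"] = np.array(frame["force"])
        frames.append(atoms)

write("mptrj.extxyz", frames)  # extended XYZ readable by MACE's training scripts

Note that the full dataset holds ~1.6M frames, so you may want to write in chunks rather than accumulating everything in memory.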

Fine-tuning

We provide an example multi-GPU training script, 2023-08-14-mace-universal.sbatch, which uses 40 A100 GPUs on NERSC Perlmutter. Please see the MACE multi-GPU branch for more detailed instructions.
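
For a smaller-scale run, fine-tuning from one of these checkpoints goes through MACE's mace_run_train entry point. The invocation below is a sketch: flag names (notably --foundation_model) and defaults vary across MACE versions, and train.extxyz stands in for an extended-XYZ file you have prepared:

mace_run_train \
  --name="mace_mp_finetune" \
  --foundation_model="/path/to/pretrained.model" \
  --train_file="train.extxyz" \
  --valid_fraction=0.05 \
  --energy_weight=1.0 \
  --forces_weight=10.0 \
  --lr=0.0001 \
  --batch_size=4 \
  --max_num_epochs=50 \
  --device=cuda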
