update readme
.github/README.md +10 -10
README.md +0 -47
.github/README.md
CHANGED
@@ -3,31 +3,33 @@
 <a href="https://zenodo.org/doi/10.5281/zenodo.13704399"><img src="https://zenodo.org/badge/776930320.svg" alt="DOI"></a>
 </div>
 
-
 > [!CAUTION]
 > MLIP Arena is currently in pre-alpha. The results are not stable. Please interpret them with care.
 
 > [!NOTE]
-> If you're interested in joining the effort, please reach out to Yuan at [cyrusyc@berkeley.edu](mailto:cyrusyc@berkeley.edu).
+> If you're interested in joining the effort, please reach out to Yuan at [cyrusyc@berkeley.edu](mailto:cyrusyc@berkeley.edu). See the [project page](https://github.com/orgs/atomind-ai/projects/1) for outstanding tasks.
 
 MLIP Arena is an open-source platform for benchmarking machine learning interatomic potentials (MLIPs). The platform provides a unified interface for users to evaluate the performance of their models on a variety of tasks, including single-point density functional theory calculations and molecular dynamics simulations. The platform is designed to be extensible, allowing users to contribute new models, benchmarks, and training data to the platform.
 
 ## Contribute
 
-MLIP Arena is now in pre-alpha. If you're interested in joining the effort, please reach out to Yuan at [cyrusyc@berkeley.edu](mailto:cyrusyc@berkeley.edu).
+MLIP Arena is now in pre-alpha. If you're interested in joining the effort, please reach out to Yuan at [cyrusyc@berkeley.edu](mailto:cyrusyc@berkeley.edu). See the [project page](https://github.com/orgs/atomind-ai/projects/1) for outstanding tasks.
 
 ### Add new MLIP models
 
-If you have pretrained MLIP models that you would like to contribute to the MLIP Arena and show benchmark results in real time, please follow these steps:
+If you have pretrained MLIP models that you would like to contribute to the MLIP Arena and show benchmark results in real time, there are two ways:
+
+#### Hugging Face Model
 
-1. Create a new [Hugging Face Model](https://huggingface.co/new) repository and upload the model file.
-2. Follow the template to code the I/O interface for your model, and upload the script along with metadata to the MLIP Arena [here]().
+0. Inherit the Hugging Face [ModelHubMixin](https://huggingface.co/docs/huggingface_hub/en/package_reference/mixins) class in your model class definition. We recommend [PyTorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/en/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin).
+1. Create a new [Hugging Face Model](https://huggingface.co/new) repository and upload the model file using the [push_to_hub function](https://huggingface.co/docs/huggingface_hub/en/package_reference/mixins#huggingface_hub.ModelHubMixin.push_to_hub); a minimal sketch of steps 0 and 1 follows this list.
+2. Follow the template to code the I/O interface for your model, and upload the script along with metadata to the MLIP Arena [here](../mlip_arena/models/README.md).
 3. CPU benchmarking will be performed automatically. Due to the limited amount of GPU compute, if you would like to be considered for GPU benchmarking, please create a pull request to demonstrate the offline performance of your model (published paper or preprint). We will review and select the models to be benchmarked on GPU.
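For illustration, a minimal sketch of steps 0 and 1 above, assuming a toy PyTorch model: the class `MyMLIP`, its one-layer architecture, and the repo id `your-username/my-mlip` are placeholders, not part of MLIP Arena. The mixin supplies `push_to_hub` and `from_pretrained`:

```python
# Hypothetical example; only PyTorchModelHubMixin and its methods are real API.
import torch
from torch import nn
from huggingface_hub import PyTorchModelHubMixin


class MyMLIP(nn.Module, PyTorchModelHubMixin):
    """Placeholder MLIP; the mixin serializes the __init__ kwargs as config."""

    def __init__(self, hidden_dim: int = 64):
        super().__init__()
        self.net = nn.Linear(hidden_dim, 1)  # stand-in for a real interatomic potential

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


model = MyMLIP(hidden_dim=64)
# Uploads weights and config to a model repo; requires `huggingface-cli login`.
model.push_to_hub("your-username/my-mlip")
# Anyone can then restore the model, config included, straight from the Hub.
reloaded = MyMLIP.from_pretrained("your-username/my-mlip")
```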
 
 ### Add new benchmark tasks
 
 1. Create a new [Hugging Face Dataset](https://huggingface.co/new-dataset) repository and upload the reference data (e.g. DFT, AIMD, experimental measurements such as RDF).
-2. Follow the task template to implement the task class and upload the script along with metadata to the MLIP Arena [here]().
+2. Follow the task template to implement the task class and upload the script along with metadata to the MLIP Arena [here](../mlip_arena/tasks/README.md).
 3. Code a benchmark script to evaluate the performance of your model on the task. The script should be able to load the model and the dataset, and output the evaluation metrics (a sketch follows this list).
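A sketch of what step 3 might look like, under stated assumptions: the dataset id `your-username/my-benchmark-data`, its columns (`numbers`, `positions`, `forces`), and the model's `predict_forces` method are hypothetical; the real schema and I/O interface come from the task template above:

```python
# Hypothetical benchmark script; dataset id, column names, and the
# model's prediction API are assumptions for illustration.
import numpy as np
from datasets import load_dataset

from my_mlip import MyMLIP  # placeholder: the model class sketched earlier

model = MyMLIP.from_pretrained("your-username/my-mlip")
dataset = load_dataset("your-username/my-benchmark-data", split="train")

errors = []
for row in dataset:
    # Predict forces for one structure and compare against the reference data.
    predicted = model.predict_forces(row["numbers"], row["positions"])  # hypothetical method
    errors.append(np.abs(np.asarray(predicted) - np.asarray(row["forces"])).mean())

# Output the evaluation metrics, e.g. mean absolute error on forces.
print({"force_mae": float(np.mean(errors))})
```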
 
 #### Molecular dynamics calculations
@@ -44,6 +46,4 @@ If you have pretrained MLIP models that you would like to contribute to the MLIP
 
 ### Add new training datasets
 
-[Hugging Face Auto-Train](https://huggingface.co/docs/hub/webhooks-guide-auto-retrain)
-
-
+[Hugging Face Auto-Train](https://huggingface.co/docs/hub/webhooks-guide-auto-retrain)
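The guide linked above wires a Hugging Face webhook to a retraining job. A minimal sketch with huggingface_hub's webhook helpers, where `launch_retraining_job` is a hypothetical stand-in for your training pipeline:

```python
# Sketch of a webhook that retrains when a training-dataset repo is updated.
# `webhook_endpoint` and `WebhookPayload` are huggingface_hub APIs;
# `launch_retraining_job` is a hypothetical placeholder.
from huggingface_hub import WebhookPayload, webhook_endpoint


@webhook_endpoint
async def retrain_on_dataset_update(payload: WebhookPayload) -> None:
    # Fires for every event from a configured webhook; act only on dataset updates.
    if payload.repo.type == "dataset" and payload.event.action == "update":
        launch_retraining_job(payload.repo.name)  # hypothetical: queue a training run
```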
README.md
CHANGED
@@ -6,51 +6,4 @@ sdk_version: 1.36.0 # The latest supported version
 app_file: serve/app.py
 ---
 
-<div align="center">
-<h1>MLIP Arena</h1>
-<a href="https://zenodo.org/doi/10.5281/zenodo.13704399"><img src="https://zenodo.org/badge/776930320.svg" alt="DOI"></a>
-</div>
-
-> [!CAUTION]
-> MLIP Arena is currently in pre-alpha. The results are not stable. Please interpret them with care.
-
-> [!NOTE]
-> If you're interested in joining the effort, please reach out to Yuan at [cyrusyc@berkeley.edu](mailto:cyrusyc@berkeley.edu).
-
-MLIP Arena is an open-source platform for benchmarking machine learning interatomic potentials (MLIPs). The platform provides a unified interface for users to evaluate the performance of their models on a variety of tasks, including single-point density functional theory calculations and molecular dynamics simulations. The platform is designed to be extensible, allowing users to contribute new models, benchmarks, and training data to the platform.
-
-## Contribute
-
-MLIP Arena is now in pre-alpha. If you're interested in joining the effort, please reach out to Yuan at [cyrusyc@berkeley.edu](mailto:cyrusyc@berkeley.edu).
-
-### Add new MLIP models
-
-If you have pretrained MLIP models that you would like to contribute to the MLIP Arena and show benchmark results in real time, please follow these steps:
-
-1. Create a new [Hugging Face Model](https://huggingface.co/new) repository and upload the model file.
-2. Follow the template to code the I/O interface for your model, and upload the script along with metadata to the MLIP Arena [here]().
-3. CPU benchmarking will be performed automatically. Due to the limited amount of GPU compute, if you would like to be considered for GPU benchmarking, please create a pull request to demonstrate the offline performance of your model (published paper or preprint). We will review and select the models to be benchmarked on GPU.
-
-### Add new benchmark tasks
-
-1. Create a new [Hugging Face Dataset](https://huggingface.co/new-dataset) repository and upload the reference data (e.g. DFT, AIMD, experimental measurements such as RDF).
-2. Follow the task template to implement the task class and upload the script along with metadata to the MLIP Arena [here]().
-3. Code a benchmark script to evaluate the performance of your model on the task. The script should be able to load the model and the dataset, and output the evaluation metrics.
-
-#### Molecular dynamics calculations
-
-- [ ] [MD17](http://www.sgdml.org/#datasets)
-- [ ] [MD22](http://www.sgdml.org/#datasets)
-
-
-#### Single-point density functional theory calculations
-
-- [ ] MPTrj
-- [ ] QM9
-- [ ] [Alexandria](https://alexandria.icams.rub.de/)
-
-### Add new training datasets
-
-[Hugging Face Auto-Train](https://huggingface.co/docs/hub/webhooks-guide-auto-retrain)
-
 