katielink committed on
Commit
fa7948b
1 Parent(s): 8ed35a5

Update README.md

Files changed (1)
  1. README.md +82 -84
README.md CHANGED
@@ -3,90 +3,88 @@ license: apache-2.0
  tags:
  - biology
  - chemistry
  ---
 
- # MoLFormer
 
- **MoLFormer** is a large-scale chemical language model trained on small molecules represented as SMILES strings. MoLFormer leverages masked language modeling and employs a linear-attention Transformer combined with rotary embeddings.
-
- ![MoLFormer](https://media.github.ibm.com/user/4935/files/594363e6-497b-4b91-9493-36ed46f623a2)
-
- The image above gives an overview of the MoLFormer pipeline. A transformer-based neural network is trained in a self-supervised fashion on a large collection of chemical molecules represented as SMILES sequences drawn from two public chemical datasets, PubChem and ZINC. The MoLFormer architecture was designed with an efficient linear attention mechanism and relative positional embeddings, with the goal of learning a meaningful and compressed representation of chemical molecules. After pretraining, the MoLFormer foundation model was adapted to different downstream molecular property prediction tasks via fine-tuning on task-specific data. To further test its representational power, MoLFormer encodings were used to recover molecular similarity, and the correspondence between interatomic spatial distances and attention values was analyzed for given molecules.
-
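The following is a minimal sketch of obtaining per-molecule embeddings from a pretrained MoLFormer checkpoint with the Hugging Face `transformers` library. The checkpoint ID `ibm/MoLFormer-XL-both-10pct` and the need for `trust_remote_code=True` are assumptions not stated above; substitute whichever checkpoint you actually use.

```python
# Hedged sketch: embed SMILES strings with a pretrained MoLFormer encoder.
# The checkpoint ID below is an assumption; replace it with the one you use.
import torch
from transformers import AutoModel, AutoTokenizer

checkpoint = "ibm/MoLFormer-XL-both-10pct"  # assumed checkpoint ID
tokenizer = AutoTokenizer.from_pretrained(checkpoint, trust_remote_code=True)
model = AutoModel.from_pretrained(checkpoint, trust_remote_code=True)
model.eval()

smiles = ["Cn1c(=O)c2c(ncn2C)n(C)c1=O", "CC(=O)Oc1ccccc1C(=O)O"]  # caffeine, aspirin
inputs = tokenizer(smiles, padding=True, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Mask-aware mean pooling over token embeddings gives one vector per molecule.
mask = inputs["attention_mask"].unsqueeze(-1).float()
embeddings = (outputs.last_hidden_state * mask).sum(dim=1) / mask.sum(dim=1)
print(embeddings.shape)  # (2, hidden_size)
```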
- ## Finetuning Datasets
- As with the pretraining data, the code expects the finetuning datasets to follow the hierarchy below; they are provided in finetune_datasets.zip (a loading sketch follows the directory tree).
-
- ```
- data/
- ├── bace
- │   ├── test.csv
- │   ├── train.csv
- │   └── valid.csv
- ├── bbbp
- │   ├── test.csv
- │   ├── train.csv
- │   └── valid.csv
- ├── clintox
- │   ├── test.csv
- │   ├── train.csv
- │   └── valid.csv
- ├── esol
- │   ├── test.csv
- │   ├── train.csv
- │   └── valid.csv
- ├── freesolv
- │   ├── test.csv
- │   ├── train.csv
- │   └── valid.csv
- ├── hiv
- │   ├── test.csv
- │   ├── train.csv
- │   └── valid.csv
- ├── lipo
- │   ├── lipo_test.csv
- │   ├── lipo_train.csv
- │   └── lipo_valid.csv
- ├── qm9
- │   ├── qm9.csv
- │   ├── qm9_test.csv
- │   ├── qm9_train.csv
- │   └── qm9_valid.csv
- ├── sider
- │   ├── test.csv
- │   ├── train.csv
- │   └── valid.csv
- └── tox21
-     ├── test.csv
-     ├── tox21.csv
-     ├── train.csv
-     └── valid.csv
- ```
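As referenced above, here is a minimal sketch of reading one of these finetuning splits with pandas, following the directory layout shown. The column names vary by task and are not stated here, so they are left as assumptions to inspect.

```python
# Hedged sketch: load one finetuning split from the "data/" tree above.
# Column names are assumptions; check each dataset's CSV header before use.
import pandas as pd

train = pd.read_csv("data/bace/train.csv")
valid = pd.read_csv("data/bace/valid.csv")
test = pd.read_csv("data/bace/test.csv")

print(train.columns.tolist())  # inspect the actual column names
# e.g., a SMILES column and a label column (names assumed, not confirmed above):
# smiles, labels = train["smiles"], train["Class"]
```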
-
-
- ## Citations
- ```
- @article{10.1038/s42256-022-00580-7,
- year = {2022},
- title = {{Large-scale chemical language representations capture molecular structure and properties}},
- author = {Ross, Jerret and Belgodere, Brian and Chenthamarakshan, Vijil and Padhi, Inkit and Mroueh, Youssef and Das, Payel},
- journal = {Nature Machine Intelligence},
- doi = {10.1038/s42256-022-00580-7},
- abstract = {{Models based on machine learning can enable accurate and fast molecular property predictions, which is of interest in drug discovery and material design. Various supervised machine learning models have demonstrated promising performance, but the vast chemical space and the limited availability of property labels make supervised learning challenging. Recently, unsupervised transformer-based language models pretrained on a large unlabelled corpus have produced state-of-the-art results in many downstream natural language processing tasks. Inspired by this development, we present molecular embeddings obtained by training an efficient transformer encoder model, MoLFormer, which uses rotary positional embeddings. This model employs a linear attention mechanism, coupled with highly distributed training, on SMILES sequences of 1.1 billion unlabelled molecules from the PubChem and ZINC datasets. We show that the learned molecular representation outperforms existing baselines, including supervised and self-supervised graph neural networks and language models, on several downstream tasks from ten benchmark datasets. They perform competitively on two others. Further analyses, specifically through the lens of attention, demonstrate that MoLFormer trained on chemical SMILES indeed learns the spatial relationships between atoms within a molecule. These results provide encouraging evidence that large-scale molecular language models can capture sufficient chemical and structural information to predict various distinct molecular properties, including quantum-chemical properties. Large language models have recently emerged with extraordinary capabilities, and these methods can be applied to model other kinds of sequence, such as string representations of molecules. Ross and colleagues have created a transformer-based model, trained on a large dataset of molecules, which provides good results on property prediction tasks.}},
- pages = {1256--1264},
- number = {12},
- volume = {4}
- }
- ```
-
- ```
- @misc{https://doi.org/10.48550/arxiv.2106.09553,
- doi = {10.48550/ARXIV.2106.09553},
- url = {https://arxiv.org/abs/2106.09553},
- author = {Ross, Jerret and Belgodere, Brian and Chenthamarakshan, Vijil and Padhi, Inkit and Mroueh, Youssef and Das, Payel},
- keywords = {Machine Learning (cs.LG), Computation and Language (cs.CL), Biomolecules (q-bio.BM), FOS: Computer and information sciences, FOS: Computer and information sciences, FOS: Biological sciences, FOS: Biological sciences},
- title = {Large-Scale Chemical Language Representations Capture Molecular Structure and Properties},
- publisher = {arXiv},
- year = {2021},
- copyright = {arXiv.org perpetual, non-exclusive license}
- }
- ```
 
  tags:
  - biology
  - chemistry
+ configs:
+ - config_name: bace
+   data_files:
+   - split: train
+     path: bace/train.csv
+   - split: test
+     path: bace/test.csv
+   - split: val
+     path: bace/valid.csv
+ - config_name: bbbp
+   data_files:
+   - split: train
+     path: bbbp/train.csv
+   - split: test
+     path: bbbp/test.csv
+   - split: val
+     path: bbbp/valid.csv
+ - config_name: clintox
+   data_files:
+   - split: train
+     path: clintox/train.csv
+   - split: test
+     path: clintox/test.csv
+   - split: val
+     path: clintox/valid.csv
+ - config_name: esol
+   data_files:
+   - split: train
+     path: esol/train.csv
+   - split: test
+     path: esol/test.csv
+   - split: val
+     path: esol/valid.csv
+ - config_name: freesolv
+   data_files:
+   - split: train
+     path: freesolv/train.csv
+   - split: test
+     path: freesolv/test.csv
+   - split: val
+     path: freesolv/valid.csv
+ - config_name: hiv
+   data_files:
+   - split: train
+     path: hiv/train.csv
+   - split: test
+     path: hiv/test.csv
+   - split: val
+     path: hiv/valid.csv
+ - config_name: lipo
+   data_files:
+   - split: train
+     path: lipo/train.csv
+   - split: test
+     path: lipo/test.csv
+   - split: val
+     path: lipo/valid.csv
+ - config_name: qm9
+   data_files:
+   - split: train
+     path: qm9/train.csv
+   - split: test
+     path: qm9/test.csv
+   - split: val
+     path: qm9/valid.csv
+ - config_name: sider
+   data_files:
+   - split: train
+     path: sider/train.csv
+   - split: test
+     path: sider/test.csv
+   - split: val
+     path: sider/valid.csv
+ - config_name: tox21
+   data_files:
+   - split: train
+     path: tox21/train.csv
+   - split: test
+     path: tox21/test.csv
+   - split: val
+     path: tox21/valid.csv
  ---
 
+ # MoleculeNet Benchmark
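With the `configs` front matter added above, each task can be loaded by name through the `datasets` library. A minimal sketch follows; the repository ID is a placeholder, since it is not stated in this commit.

```python
# Hedged sketch: load one config declared in the YAML front matter above.
# The repo_id below is a placeholder for this dataset repository's actual ID.
from datasets import load_dataset

repo_id = "<user-or-org>/<this-dataset-repo>"  # placeholder, not confirmed here
bace = load_dataset(repo_id, name="bace")

print(bace)              # DatasetDict with "train", "test", and "val" splits
print(bace["train"][0])  # first training example as a dict of column values
```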