sagawa committed
Commit 319b295
1 Parent(s): 55faa25

Update README.md

Files changed (1)
  1. README.md +57 -2
README.md CHANGED
@@ -1,5 +1,60 @@
  ---
  license: mit
+ datasets:
+ - sagawa/pubchem-10m-canonicalized
+ metrics:
+ - accuracy
+ model-index:
+ - name: PubChem-10m-t5
+   results:
+   - task:
+       name: Masked Language Modeling
+       type: fill-mask
+     dataset:
+       name: sagawa/pubchem-10m-canonicalized
+       type: sagawa/pubchem-10m-canonicalized
+     metrics:
+     - name: Accuracy
+       type: accuracy
+       value: 0.9189779162406921
  ---
- # PubChem-10m-t5-v2
- We trained T5 on SMILES from PubChem using the task of masked-language modeling (MLM). Compared to PubChem-t5, PubChemC-t5-v2 uses a character-level tokenizer. This model can be used for the prediction of molecules' properties, reactions, or interactions with proteins by changing the way of finetuning.
+
+ # PubChem-10m-t5
+
+ This model is a fine-tuned version of [google/t5-v1_1-base](https://huggingface.co/google/t5-v1_1-base) on the sagawa/pubchem-10m-canonicalized dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.2165
+ - Accuracy: 0.9190
+
+
+ ## Model description
+
+ We trained T5 on SMILES from PubChem using the masked-language-modeling (MLM) task. Compared to PubChem-10m-t5, PubChem-10m-t5-v2 uses a character-level tokenizer and was also trained on PubChem.
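+
+ As a quick check of the pretraining objective, the checkpoint can be asked to recover a masked span of a SMILES string. The sketch below assumes the standard `transformers` seq2seq API and that the character-level tokenizer keeps T5's `<extra_id_*>` sentinel tokens; the repository id is illustrative.
+
+ ```python
+ from transformers import AutoTokenizer, T5ForConditionalGeneration
+
+ # Repository id is assumed here; replace it with the actual checkpoint name.
+ model_name = "sagawa/PubChem-10m-t5-v2"
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
+ model = T5ForConditionalGeneration.from_pretrained(model_name)
+
+ # Mask part of a SMILES string with a T5 sentinel token and let the model fill it in.
+ masked = "c1ccc(<extra_id_0>)cc1"
+ inputs = tokenizer(masked, return_tensors="pt")
+ outputs = model.generate(**inputs, max_length=20)
+ print(tokenizer.decode(outputs[0], skip_special_tokens=False))
+ ```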
+
+
+ ## Intended uses & limitations
+
+ This model can be used to predict molecules' properties, reactions, or interactions with proteins, depending on how it is fine-tuned.
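+
+ For example, one lightweight route to property prediction is to use the pretrained encoder to embed SMILES and train a small head on top of the pooled features. This is only a sketch under assumed names (repository id, mean pooling), not the exact fine-tuning recipe used here.
+
+ ```python
+ import torch
+ from transformers import AutoTokenizer, T5EncoderModel
+
+ model_name = "sagawa/PubChem-10m-t5-v2"  # assumed repository id
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
+ encoder = T5EncoderModel.from_pretrained(model_name)
+
+ smiles = ["CCO", "c1ccccc1O"]
+ batch = tokenizer(smiles, padding=True, return_tensors="pt")
+ with torch.no_grad():
+     hidden = encoder(**batch).last_hidden_state  # (batch, seq_len, d_model)
+
+ # Mean-pool over non-padding tokens to get one embedding per molecule.
+ mask = batch["attention_mask"].unsqueeze(-1).float()
+ features = (hidden * mask).sum(dim=1) / mask.sum(dim=1)
+ # `features` can then feed a regression or classification head for property prediction.
+ ```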
+
+ ## Training and evaluation data
+
+ We downloaded the [PubChem data](https://drive.google.com/file/d/1ygYs8dy1-vxD1Vx6Ux7ftrXwZctFjpV3/view), canonicalized the SMILES with RDKit, and dropped duplicates. The resulting dataset contains 9,999,960 molecules, which were randomly split into train and validation sets at a 10:1 ratio.
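+
+ A minimal sketch of that preprocessing, assuming the download is read into a one-column table of SMILES (the file and column names below are placeholders):
+
+ ```python
+ import pandas as pd
+ from rdkit import Chem
+
+ df = pd.read_csv("pubchem-10m.txt", names=["smiles"])  # placeholder file name
+
+ def canonicalize(smiles):
+     # Return the RDKit canonical SMILES, or None when the string cannot be parsed.
+     mol = Chem.MolFromSmiles(smiles)
+     return Chem.MolToSmiles(mol) if mol is not None else None
+
+ df["smiles"] = df["smiles"].map(canonicalize)
+ df = df.dropna().drop_duplicates(subset="smiles")
+
+ # Random 10:1 train/validation split, matching the ratio described above.
+ valid = df.sample(frac=1 / 11, random_state=42)
+ train = df.drop(valid.index)
+ ```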
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training (a sketch of the corresponding configuration follows the list):
+ - learning_rate: 5e-03
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - num_epochs: 10.0
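+
+ For reference, the values above map onto `transformers.TrainingArguments` roughly as follows; batch size and other unlisted settings are omitted, so this is a sketch rather than the exact training command.
+
+ ```python
+ from transformers import TrainingArguments
+
+ # Only the hyperparameters listed above are filled in; everything else stays at its default.
+ training_args = TrainingArguments(
+     output_dir="pubchem-10m-t5",  # placeholder
+     learning_rate=5e-3,
+     seed=42,
+     adam_beta1=0.9,
+     adam_beta2=0.999,
+     adam_epsilon=1e-8,
+     lr_scheduler_type="linear",
+     num_train_epochs=10.0,
+ )
+ ```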
+
+ ### Training results
+
+ | Training Loss | Step   | Accuracy | Validation Loss |
+ |:-------------:|:------:|:--------:|:---------------:|
+ | 0.2592        | 100000 | 0.8997   | 0.2784          |
+ | 0.2790        | 200000 | 0.9095   | 0.2468          |
+ | 0.2278        | 300000 | 0.9162   | 0.2256          |