shahrukhx01
committed
Commit • 8790142
Parent(s): 8d40725
Update README.md
README.md CHANGED
@@ -22,4 +22,4 @@ Introduces contrastive learning alongside multi-task regression, and masked lang
 ### Pretraining steps for this model:

-- Pretrain BERT model with Masked language modeling with masked proportion set to 15% on Guacamol
+- Pretrain BERT model with Masked language modeling with masked proportion set to 15% on Guacamol dataset. For more details, please see our [github repository](https://github.com/uds-lsv/enumeration-aware-molecule-transformers).
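The added line describes masked language modeling (MLM) pretraining with the masking proportion set to 15% on the Guacamol dataset. Below is a minimal sketch of what that step looks like with the Hugging Face Transformers API; it is not the authors' training script (see the linked repository for that), and the `bert-base-cased` checkpoint and the SMILES strings are placeholder assumptions.

```python
import torch
from torch.utils.data import Dataset
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

# Placeholder SMILES strings; in practice these would come from Guacamol.
smiles = ["CCO", "c1ccccc1", "CC(=O)Oc1ccccc1C(=O)O"]

# Placeholder checkpoint; the actual model uses a chemistry-specific vocabulary.
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-cased")


class SmilesDataset(Dataset):
    """Tokenizes SMILES strings for MLM pretraining."""

    def __init__(self, smiles, tokenizer):
        self.encodings = tokenizer(smiles, truncation=True, max_length=128)

    def __len__(self):
        return len(self.encodings["input_ids"])

    def __getitem__(self, idx):
        return {k: torch.tensor(v[idx]) for k, v in self.encodings.items()}


# The collator performs the random token masking; mlm_probability=0.15
# corresponds to the 15% masked proportion mentioned in the README.
collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer, mlm=True, mlm_probability=0.15
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="mlm-guacamol", num_train_epochs=1),
    data_collator=collator,
    train_dataset=SmilesDataset(smiles, tokenizer),
)
trainer.train()
```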