This model was further trained on top of scibert-base using a masked language modeling (MLM) loss. The training corpus consists of abstracts from roughly 270,000 earth-science publications.
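For reference, continued MLM pretraining of this kind can be set up with the Hugging Face `transformers` library. The sketch below is a minimal, assumed configuration: the base checkpoint (`allenai/scibert_scivocab_uncased`), the corpus file name, and the hyperparameters are illustrative placeholders, not the exact setup used to train this model.

```python
from transformers import (
    AutoTokenizer,
    AutoModelForMaskedLM,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)
from datasets import load_dataset

# Assumed SciBERT base checkpoint.
base = "allenai/scibert_scivocab_uncased"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForMaskedLM.from_pretrained(base)

# Hypothetical corpus file: one earth-science abstract per line.
dataset = load_dataset("text", data_files={"train": "abstracts.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

# The collator randomly masks tokens to produce the MLM objective.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="scibert-earth-mlm", num_train_epochs=1),
    train_dataset=tokenized,
    data_collator=collator,
)
trainer.train()
```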

The tokenizer is loaded via AutoTokenizer and was trained on the same corpus.
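The model can then be used like any other masked language model. The checkpoint id below is a placeholder; substitute this repository's model id.

```python
from transformers import pipeline

# Placeholder model id; replace with this repository's actual checkpoint.
fill_mask = pipeline("fill-mask", model="your-username/scibert-earth-science")
print(fill_mask("Sea level rise is driven largely by thermal [MASK] of the ocean."))
```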

Stay tuned for downstream-task evaluations and further updates to the model.

In the works:
- MLM + NSP task loss
- Add more data sources for training
- Test using downstream tasks