Transformer language model for Croatian and Serbian

Trained for two epochs (500k steps) on 6 GB of Croatian and Serbian text drawn from the Leipzig, OSCAR and srWac datasets.

| Model | #params | Arch. | Training data |
|---|---|---|---|
| Andrija/SRoBERTa-L | 80M | Third | Leipzig Corpus, OSCAR and srWac (6 GB of text) |
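
To load the model programmatically, the Hub id from the table can be passed to the transformers auto classes. A minimal sketch, assuming the transformers library is installed:

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Load the tokenizer and masked-LM model by the Hub id shown in the table above.
tokenizer = AutoTokenizer.from_pretrained("Andrija/SRoBERTa-L")
model = AutoModelForMaskedLM.from_pretrained("Andrija/SRoBERTa-L")
```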

Task: Fill-Mask

Mask token: `<mask>`
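
Since the task is Fill-Mask, the model can be tried with the transformers fill-mask pipeline. A minimal sketch; the Croatian example sentence is an illustrative assumption, not from the card:

```python
from transformers import pipeline

# Build a fill-mask pipeline for this model; weights are downloaded on first use.
fill_mask = pipeline("fill-mask", model="Andrija/SRoBERTa-L")

# Example sentence using the model's <mask> token ("Zagreb is the capital <mask> of Croatia").
for prediction in fill_mask("Zagreb je glavni <mask> Hrvatske."):
    print(prediction["token_str"], prediction["score"])
```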