Transformer language model for Croatian and Serbian

Trained on a 28 GB dataset of Croatian and Serbian text for one epoch (3 million steps). Training data: Leipzig Corpus, OSCAR, srWac, hrWac, cc100-hr and cc100-sr.

| Model | #params | Arch. | Training data |
|---|---|---|---|
| Andrija/SRoBERTa-XL | 80M | Fourth | Leipzig Corpus, OSCAR, srWac, hrWac, cc100-hr and cc100-sr (28 GB of text) |


This is a fill-mask model; its mask token is `<mask>`.
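Since this is a fill-mask model, it can be used through the `transformers` library's `fill-mask` pipeline. A minimal sketch, assuming the `transformers` package is installed; the Croatian example sentence is purely illustrative and not from the model card:

```python
# Minimal sketch: masked-token prediction with the Hugging Face
# `transformers` fill-mask pipeline. The example sentence below is an
# illustrative assumption, not part of the model card.
from transformers import pipeline

fill = pipeline("fill-mask", model="Andrija/SRoBERTa-XL")

# The model's mask token is <mask>; predictions are ranked by score.
for pred in fill("Zagreb je glavni grad <mask>."):
    print(pred["token_str"], round(pred["score"], 3))
```

Each prediction is a dict containing the filled-in token (`token_str`), its probability (`score`), and the completed sequence.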