Transformer language model for Croatian and Serbian

Trained for two epochs on 3 GB of Croatian and Serbian text drawn from the Leipzig and OSCAR datasets.

Dataset information

| Model | #params | Arch. | Training data |
|---|---|---|---|
| Andrija/SRoBERTa-base | 80M | Second | Leipzig Corpus and OSCAR (3 GB of text) |


Mask token: <mask>
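A minimal sketch of querying the model for masked-token prediction with the Transformers `fill-mask` pipeline; the Croatian example sentence is illustrative, not from the training data:

```python
from transformers import pipeline

# Load the masked-language-model pipeline for this checkpoint.
fill_mask = pipeline("fill-mask", model="Andrija/SRoBERTa-base")

# The model's mask token is <mask>; predictions are returned
# as a list of candidates sorted by score.
results = fill_mask("Zagreb je glavni <mask> Hrvatske.")
for r in results:
    print(r["token_str"], round(r["score"], 4))
```

Each result dict also carries the full filled `sequence`, which is convenient when post-processing candidates.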