Transformer language model for Croatian and Serbian

Trained for two epochs on 3 GB of Croatian and Serbian text drawn from the Leipzig and OSCAR datasets.

Dataset information

| Model | #params | Arch. | Training data |
|---|---|---|---|
| Andrija/SRoBERTa-base | 80M | Second | Leipzig Corpus and OSCAR (3 GB of text) |
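As a RoBERTa-style masked language model, it can be queried with the Hugging Face `fill-mask` pipeline. A minimal sketch (the example sentence is illustrative, not from the training data; RoBERTa models use the `<mask>` token):

```python
from transformers import pipeline

# Load the model from the Hugging Face Hub by its card name.
fill_mask = pipeline("fill-mask", model="Andrija/SRoBERTa-base")

# Predict the masked word in a Croatian sentence (illustrative example).
predictions = fill_mask("Zagreb je glavni <mask> Hrvatske.")

# Each prediction carries the candidate token and its score.
for p in predictions:
    print(p["token_str"], round(p["score"], 3))
```

The pipeline returns the top candidate tokens ranked by score, which is a quick way to sanity-check the model on Croatian or Serbian text.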
