
This model was developed in support of the University of Belgrade doctoral dissertation "Composite pseudogrammars based on parallel language models of Serbian" by Mihailo Škorić.

This small GPT-2 model was trained on several corpora of Serbian, including "The Corpus of Contemporary Serbian", SrpELTeC, and WikiKorpus, by JeRTeh – Society for Language Resources and Technologies.
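Since this is a GPT-2 model, it can be loaded with the Hugging Face `transformers` library. A minimal sketch follows; the `model_id` below is a placeholder, not necessarily this model's actual Hub ID, and the prompt is illustrative only.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Placeholder Hub ID -- substitute the actual repository ID of this model.
model_id = "gpt2"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Encode a prompt and generate a short continuation.
inputs = tokenizer("Beograd je", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20, do_sample=True)
text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(text)
```

The same pattern works for any causal language model on the Hub; only the repository ID changes.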
