# IceBERT

IceBERT was trained with fairseq using the RoBERTa-base architecture. The training data is shown in the table below.

| Dataset                                               | Size    | Tokens |
|-------------------------------------------------------|--------:|-------:|
| Icelandic Gigaword Corpus v20.05 (IGC)                | 8.2 GB  | 1,388M |
| Icelandic Common Crawl Corpus (IC3)                   | 4.9 GB  | 824M   |
| Greynir News articles                                 | 456 MB  | 76M    |
| Icelandic Sagas                                       | 9 MB    | 1.7M   |
| Open Icelandic e-books (Rafbókavefurinn)              | 14 MB   | 2.6M   |
| Data from the medical library of Landspitali          | 33 MB   | 5.2M   |
| Student theses from Icelandic universities (Skemman)  | 2.2 GB  | 367M   |
| **Total**                                             | **15.8 GB** | **2,664M** |
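
Since the model was trained with fairseq, the checkpoint can be queried directly through fairseq's RoBERTa hub interface. The sketch below is a minimal example, not a prescribed setup: the checkpoint directory and file name are assumptions, so point them at wherever the IceBERT checkpoint actually lives.

```python
from fairseq.models.roberta import RobertaModel

# Load the fairseq checkpoint. The directory and file name below are
# assumptions -- replace them with the actual checkpoint location.
icebert = RobertaModel.from_pretrained(
    "checkpoints/icebert", checkpoint_file="model.pt"
)
icebert.eval()  # disable dropout for inference

# fill_mask() returns (filled sentence, score, predicted token) tuples,
# e.g. for "The capital of Iceland is <mask>."
print(icebert.fill_mask("Höfuðborg Íslands er <mask>.", topk=3))
```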
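The model can also be used for masked-token prediction through the Hugging Face Transformers fill-mask pipeline. This is a minimal sketch assuming the model is published on the Hub; the identifier `mideind/IceBERT` is an assumption, so substitute the actual model name or a local path. As a RoBERTa-style model, IceBERT uses `<mask>` as its mask token.

```python
from transformers import pipeline

# Hub identifier is an assumption -- adjust to the actual model name
# or a local directory containing the converted checkpoint.
fill_mask = pipeline("fill-mask", model="mideind/IceBERT")

# RoBERTa-style models expect the "<mask>" placeholder token.
for pred in fill_mask("Höfuðborg Íslands er <mask>."):
    print(f"{pred['token_str']}\t{pred['score']:.3f}")
```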