

This model was trained with fairseq using the RoBERTa-base architecture, with xlm-roberta-base as the starting point (a warm start). It is one of several models we have trained for Icelandic; see the paper referenced below for further details. The training data is shown in the table below.

| Dataset | Size | Tokens |
|---------|------|--------|
| Icelandic Common Crawl Corpus (IC3) | 4.9 GB | 824M |
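As a minimal usage sketch, the model can be loaded for masked-token prediction with the Hugging Face `transformers` library. The model identifier `vesteinn/IceBERT` below is an assumption standing in for this model's actual Hub id; substitute the id shown on this page.

```python
# Minimal fill-mask sketch with transformers.
# NOTE: "vesteinn/IceBERT" is an assumed stand-in for this model's
# actual Hub identifier -- replace it with the real model id.
from transformers import pipeline

fill = pipeline("fill-mask", model="vesteinn/IceBERT")

# RoBERTa-style models use the <mask> token for the blank.
results = fill("Ég heiti <mask> og ég bý í Reykjavík.")
for r in results:
    print(r["token_str"], round(r["score"], 3))
```

Each result is a dict containing the predicted token (`token_str`), its probability (`score`), and the filled-in sentence.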


The model is described in the paper [A Warm Start and a Clean Crawled Corpus - A Recipe for Good Language Models](https://arxiv.org/abs/2201.05601). Please cite the paper if you make use of the model.

```bibtex
@article{DBLP:journals/corr/abs-2201-05601,
  author    = {V{\'{e}}steinn Sn{\ae}bjarnarson and
               Haukur Barri S{\'{\i}}monarson and
               P{\'{e}}tur Orri Ragnarsson and
               Svanhv{\'{\i}}t Lilja Ing{\'{o}}lfsd{\'{o}}ttir and
               Haukur P{\'{a}}ll J{\'{o}}nsson and
               Vilhj{\'{a}}lmur {\TH}orsteinsson and
               Hafsteinn Einarsson},
  title     = {A Warm Start and a Clean Crawled Corpus - {A} Recipe for Good Language
               Models},
  journal   = {CoRR},
  volume    = {abs/2201.05601},
  year      = {2022},
  url       = {https://arxiv.org/abs/2201.05601},
  eprinttype = {arXiv},
  eprint    = {2201.05601},
  timestamp = {Thu, 20 Jan 2022 14:21:35 +0100},
  biburl    = {https://dblp.org/rec/journals/corr/abs-2201-05601.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}
```