vesteinn committed on
Commit 232cccc
1 Parent(s): 2bcae27

Update README.md

Files changed (1):
  1. README.md +18 -19
README.md CHANGED
@@ -32,25 +32,24 @@ This model was trained with fairseq using the RoBERTa-base architecture. It is o
  The model is described in this paper [https://arxiv.org/abs/2201.05601](https://arxiv.org/abs/2201.05601). Please cite the paper if you make use of the model.
  
  ```
- @article{DBLP:journals/corr/abs-2201-05601,
-   author = {V{\'{e}}steinn Sn{\ae}bjarnarson and
-             Haukur Barri S{\'{\i}}monarson and
-             P{\'{e}}tur Orri Ragnarsson and
-             Svanhv{\'{\i}}t Lilja Ing{\'{o}}lfsd{\'{o}}ttir and
-             Haukur P{\'{a}}ll J{\'{o}}nsson and
-             Vilhj{\'{a}}lmur {\TH}orsteinsson and
-             Hafsteinn Einarsson},
-   title = {A Warm Start and a Clean Crawled Corpus - {A} Recipe for Good Language
-            Models},
-   journal = {CoRR},
-   volume = {abs/2201.05601},
-   year = {2022},
-   url = {https://arxiv.org/abs/2201.05601},
-   eprinttype = {arXiv},
-   eprint = {2201.05601},
-   timestamp = {Thu, 20 Jan 2022 14:21:35 +0100},
-   biburl = {https://dblp.org/rec/journals/corr/abs-2201-05601.bib},
-   bibsource = {dblp computer science bibliography, https://dblp.org}
+ @inproceedings{snaebjarnarson-etal-2022-warm,
+   title = "A Warm Start and a Clean Crawled Corpus - A Recipe for Good Language Models",
+   author = "Sn{\ae}bjarnarson, V{\'e}steinn and
+     S{\'\i}monarson, Haukur Barri and
+     Ragnarsson, P{\'e}tur Orri and
+     Ing{\'o}lfsd{\'o}ttir, Svanhv{\'\i}t Lilja and
+     J{\'o}nsson, Haukur and
+     Thorsteinsson, Vilhjalmur and
+     Einarsson, Hafsteinn",
+   booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference",
+   month = jun,
+   year = "2022",
+   address = "Marseille, France",
+   publisher = "European Language Resources Association",
+   url = "https://aclanthology.org/2022.lrec-1.464",
+   pages = "4356--4366",
+   abstract = "We train several language models for Icelandic, including IceBERT, that achieve state-of-the-art performance in a variety of downstream tasks, including part-of-speech tagging, named entity recognition, grammatical error detection and constituency parsing. To train the models we introduce a new corpus of Icelandic text, the Icelandic Common Crawl Corpus (IC3), a collection of high quality texts found online by targeting the Icelandic top-level-domain .is. Several other public data sources are also collected for a total of 16GB of Icelandic text. To enhance the evaluation of model performance and to raise the bar in baselines for Icelandic, we manually translate and adapt the WinoGrande commonsense reasoning dataset. Through these efforts we demonstrate that a properly cleaned crawled corpus is sufficient to achieve state-of-the-art results in NLP applications for low to medium resource languages, by comparison with models trained on a curated corpus. We further show that initializing models using existing multilingual models can lead to state-of-the-art results for some downstream tasks.",
  }
+ 
  ```
  
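
The hunk context above notes that the README describes a fairseq-trained, RoBERTa-base model. As a minimal sketch of how such a checkpoint is typically loaded with Hugging Face transformers (the repo id `vesteinn/IceBERT` is an assumption inferred from the committer's namespace, not confirmed by this diff):

```python
# Minimal usage sketch for the model this README describes.
# Assumption: a transformers-compatible export exists under the
# (hypothetical) repo id "vesteinn/IceBERT"; substitute the real id.
from transformers import AutoModelForMaskedLM, AutoTokenizer, pipeline

model_id = "vesteinn/IceBERT"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

# RoBERTa-style checkpoints use <mask> as the fill-in token.
fill = pipeline("fill-mask", model=model, tokenizer=tokenizer)
print(fill("Höfuðborg Íslands er <mask>."))  # "The capital of Iceland is <mask>."
```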