Multilinguality: multilingual
Size Categories: 10M<n<100M, 1M<n<10M
Language Creators: found
Annotations Creators: no-annotation
Source Datasets: original
Files changed (2)
  1. README.md +31 -8
  2. cc100.py +23 -7
README.md CHANGED
@@ -138,7 +138,7 @@ task_ids:
  - language-modeling
  - masked-language-modeling
  paperswithcode_id: cc100
- pretty_name: CC100
+ pretty_name: CC-100
  dataset_info:
  - config_name: am
  features:
@@ -181,7 +181,7 @@ config_names:
  - sr
  ---

- # Dataset Card for CC100
+ # Dataset Card for CC-100

  ## Table of Contents
  - [Dataset Description](#dataset-description)
@@ -210,8 +210,11 @@ config_names:
  ## Dataset Description

  - **Homepage:** https://data.statmt.org/cc-100/
- - **Repository:** None
- - **Paper:** https://www.aclweb.org/anthology/2020.acl-main.747.pdf, https://www.aclweb.org/anthology/2020.lrec-1.494.pdf
+ - **Repository:** [More Information Needed]
+ - **Paper:** https://aclanthology.org/2020.acl-main.747/
+ - **Paper:** https://aclanthology.org/2020.lrec-1.494/
+ - **Paper:** https://arxiv.org/abs/1911.02116
+ - **Paper:** https://arxiv.org/abs/1911.00359
  - **Leaderboard:** [More Information Needed]
  - **Point of Contact:** [More Information Needed]

@@ -221,7 +224,7 @@ This corpus is an attempt to recreate the dataset used for training XLM-R. This

  ### Supported Tasks and Leaderboards

- CC-100 is mainly inteded to pretrain language models and word represantations.
+ CC-100 is mainly intended to pretrain language models and word representations.

  ### Languages

@@ -319,6 +322,8 @@ Statistical Machine Translation at the University of Edinburgh makes no claims o

  ### Citation Information

+ Please cite the following if you found the resources in this corpus useful:
+
  ```bibtex
  @inproceedings{conneau-etal-2020-unsupervised,
  title = "Unsupervised Cross-lingual Representation Learning at Scale",
@@ -332,12 +337,16 @@ Statistical Machine Translation at the University of Edinburgh makes no claims o
  Ott, Myle and
  Zettlemoyer, Luke and
  Stoyanov, Veselin",
+ editor = "Jurafsky, Dan and
+ Chai, Joyce and
+ Schluter, Natalie and
+ Tetreault, Joel",
  booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
  month = jul,
  year = "2020",
  address = "Online",
  publisher = "Association for Computational Linguistics",
- url = "https://www.aclweb.org/anthology/2020.acl-main.747",
+ url = "https://aclanthology.org/2020.acl-main.747",
  doi = "10.18653/v1/2020.acl-main.747",
  pages = "8440--8451",
  abstract = "This paper shows that pretraining multilingual language models at scale leads to significant performance gains for a wide range of cross-lingual transfer tasks. We train a Transformer-based masked language model on one hundred languages, using more than two terabytes of filtered CommonCrawl data. Our model, dubbed XLM-R, significantly outperforms multilingual BERT (mBERT) on a variety of cross-lingual benchmarks, including +14.6{\%} average accuracy on XNLI, +13{\%} average F1 score on MLQA, and +2.4{\%} F1 score on NER. XLM-R performs particularly well on low-resource languages, improving 15.7{\%} in XNLI accuracy for Swahili and 11.4{\%} for Urdu over previous XLM models. We also present a detailed empirical analysis of the key factors that are required to achieve these gains, including the trade-offs between (1) positive transfer and capacity dilution and (2) the performance of high and low resource languages at scale. Finally, we show, for the first time, the possibility of multilingual modeling without sacrificing per-language performance; XLM-R is very competitive with strong monolingual models on the GLUE and XNLI benchmarks. We will make our code and models publicly available.",
@@ -354,12 +363,26 @@ Statistical Machine Translation at the University of Edinburgh makes no claims o
  Guzm{\'a}n, Francisco and
  Joulin, Armand and
  Grave, Edouard",
- booktitle = "Proceedings of the 12th Language Resources and Evaluation Conference",
+ editor = "Calzolari, Nicoletta and
+ B{\'e}chet, Fr{\'e}d{\'e}ric and
+ Blache, Philippe and
+ Choukri, Khalid and
+ Cieri, Christopher and
+ Declerck, Thierry and
+ Goggi, Sara and
+ Isahara, Hitoshi and
+ Maegaard, Bente and
+ Mariani, Joseph and
+ Mazo, H{\'e}l{\`e}ne and
+ Moreno, Asuncion and
+ Odijk, Jan and
+ Piperidis, Stelios",
+ booktitle = "Proceedings of the Twelfth Language Resources and Evaluation Conference",
  month = may,
  year = "2020",
  address = "Marseille, France",
  publisher = "European Language Resources Association",
- url = "https://www.aclweb.org/anthology/2020.lrec-1.494",
+ url = "https://aclanthology.org/2020.lrec-1.494",
  pages = "4003--4012",
  abstract = "Pre-training text representations have led to significant improvements in many areas of natural language processing. The quality of these models benefits greatly from the size of the pretraining corpora as long as its quality is preserved. In this paper, we describe an automatic pipeline to extract massive high-quality monolingual datasets from Common Crawl for a variety of languages. Our pipeline follows the data processing introduced in fastText (Mikolov et al., 2017; Grave et al., 2018), that deduplicates documents and identifies their language. We augment this pipeline with a filtering step to select documents that are close to high quality corpora like Wikipedia.",
  language = "English",
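
The card changes above document per-language configs (`config_name: am` through `sr`) and note that CC-100 is mainly intended for pretraining language models. As a quick orientation for reviewers, here is a minimal loading sketch; the `lang` keyword, the streaming flag, and `trust_remote_code` are assumptions about how this script-based loader is usually invoked, not something shown in this diff:

```python
from datasets import load_dataset

# Hedged sketch: stream one CC-100 language config ("am" is the first config
# named in the card). `trust_remote_code=True` may be required on recent
# `datasets` releases because cc100.py is a script-based loader.
ds = load_dataset("cc100", lang="am", split="train",
                  streaming=True, trust_remote_code=True)

# Peek at a single record; the card's `features:` block is truncated in this
# diff, so we simply print whatever the loader yields.
print(next(iter(ds)))
```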
cc100.py CHANGED
@@ -29,20 +29,23 @@ _CITATION = """\
  Goyal, Naman and
  Chaudhary, Vishrav and
  Wenzek, Guillaume and
- Guzm{'a}n, Francisco and
+ Guzm{\\'a}n, Francisco and
  Grave, Edouard and
  Ott, Myle and
  Zettlemoyer, Luke and
  Stoyanov, Veselin",
+ editor = "Jurafsky, Dan and
+ Chai, Joyce and
+ Schluter, Natalie and
+ Tetreault, Joel",
  booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
  month = jul,
  year = "2020",
  address = "Online",
  publisher = "Association for Computational Linguistics",
- url = "https://www.aclweb.org/anthology/2020.acl-main.747",
+ url = "https://aclanthology.org/2020.acl-main.747",
  doi = "10.18653/v1/2020.acl-main.747",
  pages = "8440--8451",
- abstract = "This paper shows that pretraining multilingual language models at scale leads to significant performance gains for a wide range of cross-lingual transfer tasks. We train a Transformer-based masked language model on one hundred languages, using more than two terabytes of filtered CommonCrawl data. Our model, dubbed XLM-R, significantly outperforms multilingual BERT (mBERT) on a variety of cross-lingual benchmarks, including +14.6{%} average accuracy on XNLI, +13{%} average F1 score on MLQA, and +2.4{%} F1 score on NER. XLM-R performs particularly well on low-resource languages, improving 15.7{%} in XNLI accuracy for Swahili and 11.4{%} for Urdu over previous XLM models. We also present a detailed empirical analysis of the key factors that are required to achieve these gains, including the trade-offs between (1) positive transfer and capacity dilution and (2) the performance of high and low resource languages at scale. Finally, we show, for the first time, the possibility of multilingual modeling without sacrificing per-language performance; XLM-R is very competitive with strong monolingual models on the GLUE and XNLI benchmarks. We will make our code and models publicly available.",
  }
  @inproceedings{wenzek-etal-2020-ccnet,
  title = "{CCN}et: Extracting High Quality Monolingual Datasets from Web Crawl Data",
@@ -50,17 +53,30 @@ _CITATION = """\
  Lachaux, Marie-Anne and
  Conneau, Alexis and
  Chaudhary, Vishrav and
- Guzm{'a}n, Francisco and
+ Guzm{\\'a}n, Francisco and
  Joulin, Armand and
  Grave, Edouard",
- booktitle = "Proceedings of the 12th Language Resources and Evaluation Conference",
+ editor = "Calzolari, Nicoletta and
+ B{\\'e}chet, Fr{\\'e}d{\\'e}ric and
+ Blache, Philippe and
+ Choukri, Khalid and
+ Cieri, Christopher and
+ Declerck, Thierry and
+ Goggi, Sara and
+ Isahara, Hitoshi and
+ Maegaard, Bente and
+ Mariani, Joseph and
+ Mazo, H{\\'e}l{\\`e}ne and
+ Moreno, Asuncion and
+ Odijk, Jan and
+ Piperidis, Stelios",
+ booktitle = "Proceedings of the Twelfth Language Resources and Evaluation Conference",
  month = may,
  year = "2020",
  address = "Marseille, France",
  publisher = "European Language Resources Association",
- url = "https://www.aclweb.org/anthology/2020.lrec-1.494",
+ url = "https://aclanthology.org/2020.lrec-1.494",
  pages = "4003--4012",
- abstract = "Pre-training text representations have led to significant improvements in many areas of natural language processing. The quality of these models benefits greatly from the size of the pretraining corpora as long as its quality is preserved. In this paper, we describe an automatic pipeline to extract massive high-quality monolingual datasets from Common Crawl for a variety of languages. Our pipeline follows the data processing introduced in fastText (Mikolov et al., 2017; Grave et al., 2018), that deduplicates documents and identifies their language. We augment this pipeline with a filtering step to select documents that are close to high quality corpora like Wikipedia.",
  language = "English",
  ISBN = "979-10-95546-34-4",
  }
  }