system HF staff committed on
Commit 8d09a36
1 Parent(s): 9a25f80

Update files from the datasets library (from 1.18.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.18.0

Files changed (3)
  1. README.md +19 -18
  2. cc100.py +2 -2
  3. dataset_infos.json +1 -1
README.md CHANGED
@@ -1,6 +1,6 @@
  ---
  annotations_creators:
- - found
+ - no-annotation
  language_creators:
  - found
  languages:
@@ -167,7 +167,7 @@ pretty_name: CC100
 
  ## Dataset Description
 
- - **Homepage:** http://data.statmt.org/cc-100/
+ - **Homepage:** https://data.statmt.org/cc-100/
  - **Repository:** None
  - **Paper:** https://www.aclweb.org/anthology/2020.acl-main.747.pdf, https://www.aclweb.org/anthology/2020.lrec-1.494.pdf
  - **Leaderboard:** [More Information Needed]
@@ -175,20 +175,19 @@ pretty_name: CC100
 
  ### Dataset Summary
 
- To load a language which isn't part of the config, all you need to do is specify the language code in the config.
- You can find the valid languages in Homepage section of Dataset Description: http://data.statmt.org/cc-100/
- E.g.
-
- `dataset = load_dataset("cc100", lang="en")`
-
+ This corpus is an attempt to recreate the dataset used for training XLM-R. It comprises monolingual data for 100+ languages and also includes data for romanized languages (indicated by *_rom). It was constructed using the URLs and paragraph indices provided by the CC-Net repository by processing January-December 2018 Commoncrawl snapshots.
 
  ### Supported Tasks and Leaderboards
 
- [More Information Needed]
+ CC-100 is mainly intended to pretrain language models and word representations.
 
  ### Languages
 
- [More Information Needed]
+ To load a language which isn't part of the config, all you need to do is specify the language code in the config.
+ You can find the valid languages in the Homepage section of the Dataset Description: https://data.statmt.org/cc-100/
+ E.g.
+
+ `dataset = load_dataset("cc100", lang="en")`
 
  ## Dataset Structure
 
@@ -200,6 +199,8 @@ An example from the `am` configuration:
  {'id': '0', 'text': 'ተለዋዋጭ የግድግዳ አንግል ሙቅ አንቀሳቅሷል ቲ-አሞሌ አጥቅሼ ...\n'}
  ```
 
+ Each data point is a paragraph of text. The paragraphs are presented in the original (unshuffled) order. Documents are separated by a data point consisting of a single newline character.
+
  ### Data Fields
 
  The data fields are:
@@ -232,23 +233,23 @@ Sizes of some configurations:
 
  #### Who are the source language producers?
 
- [More Information Needed]
+ The data comes from multiple web pages in a large variety of languages.
 
  ### Annotations
 
- [More Information Needed]
+ The dataset does not contain any additional annotations.
 
  #### Annotation process
 
- [More Information Needed]
+ [N/A]
 
  #### Who are the annotators?
 
- [More Information Needed]
+ [N/A]
 
  ### Personal and Sensitive Information
 
- [More Information Needed]
+ Being constructed from Common Crawl, personal and sensitive information might be present. This **must** be considered before training deep learning models with CC-100, especially in the case of text-generation models.
 
  ## Considerations for Using the Data
 
@@ -268,11 +269,11 @@ Sizes of some configurations:
 
  ### Dataset Curators
 
- [More Information Needed]
+ This dataset was prepared by [Statistical Machine Translation at the University of Edinburgh](https://www.statmt.org/ued/) using the [CC-Net](https://github.com/facebookresearch/cc_net) toolkit by Facebook Research.
 
  ### Licensing Information
 
- [More Information Needed]
+ Statistical Machine Translation at the University of Edinburgh makes no claims of intellectual property on the work of preparation of the corpus. By using this, you are also bound by the [Common Crawl terms of use](https://commoncrawl.org/terms-of-use/) in respect of the content contained in the dataset.
 
  ### Citation Information
 
@@ -326,4 +327,4 @@ Sizes of some configurations:
 
  ### Contributions
 
- Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset.
+ Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset.
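
To make the Languages and Data Instances sections above concrete, here is a minimal sketch of the loading pattern they describe. The `lang` keyword and the single-newline document separator come from the README text; the `split="train"` name and the `"am"` code follow dataset_infos.json, and the regrouping loop itself is illustrative, not part of this commit.

```python
from datasets import load_dataset

# Any language code listed at https://data.statmt.org/cc-100/ works here,
# even if it is not a predefined config (per the Languages section above).
dataset = load_dataset("cc100", lang="am", split="train")

# Each data point is one paragraph; a data point whose text is a single
# newline marks a document boundary (per the Data Instances section above).
documents, current = [], []
for example in dataset:
    if example["text"] == "\n":
        if current:
            documents.append("".join(current))
            current = []
    else:
        current.append(example["text"])
if current:
    documents.append("".join(current))

print(f"Reconstructed {len(documents)} documents")
```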
cc100.py CHANGED
@@ -20,7 +20,7 @@ import datasets
  _DESCRIPTION = """\
  This corpus is an attempt to recreate the dataset used for training XLM-R. This corpus comprises of monolingual data for 100+ languages and also includes data for romanized languages (indicated by *_rom). This was constructed using the urls and paragraph indices provided by the CC-Net repository by processing January-December 2018 Commoncrawl snapshots. Each file comprises of documents separated by double-newlines and paragraphs within the same document separated by a newline. The data is generated using the open source CC-Net repository. No claims of intellectual property are made on the work of preparation of the corpus.
  """
- _HOMEPAGE_URL = "http://data.statmt.org/cc-100/"
+ _HOMEPAGE_URL = "https://data.statmt.org/cc-100/"
  _CITATION = """\
  @inproceedings{conneau-etal-2020-unsupervised,
      title = "Unsupervised Cross-lingual Representation Learning at Scale",
@@ -67,7 +67,7 @@ _CITATION = """\
  """
 
  _VERSION = "1.0.0"
- _BASE_URL = "http://data.statmt.org/cc-100/{}.txt.xz"
+ _BASE_URL = "https://data.statmt.org/cc-100/{}.txt.xz"
 
  # Please note: due to the size of the data, only few examples are provided.
  # However, you can pass the lang parameter in config to fetch data of any language in the corpus
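
The constant changed above drives every download: `_BASE_URL` is a format string keyed by language code. A small illustrative helper shows how a per-language archive URL is composed (`download_url` is not a function in cc100.py; it only demonstrates the format-string usage):

```python
# Sketch only: `download_url` is illustrative, not part of cc100.py.
_BASE_URL = "https://data.statmt.org/cc-100/{}.txt.xz"

def download_url(lang: str) -> str:
    """Build the per-language archive URL, e.g. for 'en' or a romanized code like 'hi_rom'."""
    return _BASE_URL.format(lang)

print(download_url("am"))  # -> https://data.statmt.org/cc-100/am.txt.xz
```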
dataset_infos.json CHANGED
@@ -1 +1 @@
- {"am": {"description": "This corpus is an attempt to recreate the dataset used for training XLM-R. This corpus comprises of monolingual data for 100+ languages and also includes data for romanized languages (indicated by *_rom). This was constructed using the urls and paragraph indices provided by the CC-Net repository by processing January-December 2018 Commoncrawl snapshots. Each file comprises of documents separated by double-newlines and paragraphs within the same document separated by a newline. The data is generated using the open source CC-Net repository. No claims of intellectual property are made on the work of preparation of the corpus.\n", "citation": "@inproceedings{conneau-etal-2020-unsupervised,\n    title = \"Unsupervised Cross-lingual Representation Learning at Scale\",\n    author = \"Conneau, Alexis  and\n      Khandelwal, Kartikay  and\n      Goyal, Naman  and\n      Chaudhary, Vishrav  and\n      Wenzek, Guillaume  and\n      Guzm{'a}n, Francisco  and\n      Grave, Edouard  and\n      Ott, Myle  and\n      Zettlemoyer, Luke  and\n      Stoyanov, Veselin\",\n    booktitle = \"Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics\",\n    month = jul,\n    year = \"2020\",\n    address = \"Online\",\n    publisher = \"Association for Computational Linguistics\",\n    url = \"https://www.aclweb.org/anthology/2020.acl-main.747\",\n    doi = \"10.18653/v1/2020.acl-main.747\",\n    pages = \"8440--8451\",\n    abstract = \"This paper shows that pretraining multilingual language models at scale leads to significant performance gains for a wide range of cross-lingual transfer tasks. We train a Transformer-based masked language model on one hundred languages, using more than two terabytes of filtered CommonCrawl data. Our model, dubbed XLM-R, significantly outperforms multilingual BERT (mBERT) on a variety of cross-lingual benchmarks, including +14.6{%} average accuracy on XNLI, +13{%} average F1 score on MLQA, and +2.4{%} F1 score on NER. XLM-R performs particularly well on low-resource languages, improving 15.7{%} in XNLI accuracy for Swahili and 11.4{%} for Urdu over previous XLM models. We also present a detailed empirical analysis of the key factors that are required to achieve these gains, including the trade-offs between (1) positive transfer and capacity dilution and (2) the performance of high and low resource languages at scale. Finally, we show, for the first time, the possibility of multilingual modeling without sacrificing per-language performance; XLM-R is very competitive with strong monolingual models on the GLUE and XNLI benchmarks. We will make our code and models publicly available.\",\n}\n@inproceedings{wenzek-etal-2020-ccnet,\n    title = \"{CCN}et: Extracting High Quality Monolingual Datasets from Web Crawl Data\",\n    author = \"Wenzek, Guillaume  and\n      Lachaux, Marie-Anne  and\n      Conneau, Alexis  and\n      Chaudhary, Vishrav  and\n      Guzm{'a}n, Francisco  and\n      Joulin, Armand  and\n      Grave, Edouard\",\n    booktitle = \"Proceedings of the 12th Language Resources and Evaluation Conference\",\n    month = may,\n    year = \"2020\",\n    address = \"Marseille, France\",\n    publisher = \"European Language Resources Association\",\n    url = \"https://www.aclweb.org/anthology/2020.lrec-1.494\",\n    pages = \"4003--4012\",\n    abstract = \"Pre-training text representations have led to significant improvements in many areas of natural language processing. The quality of these models benefits greatly from the size of the pretraining corpora as long as its quality is preserved. In this paper, we describe an automatic pipeline to extract massive high-quality monolingual datasets from Common Crawl for a variety of languages. Our pipeline follows the data processing introduced in fastText (Mikolov et al., 2017; Grave et al., 2018), that deduplicates documents and identifies their language. We augment this pipeline with a filtering step to select documents that are close to high quality corpora like Wikipedia.\",\n    language = \"English\",\n    ISBN = \"979-10-95546-34-4\",\n}\n", "homepage": "http://data.statmt.org/cc-100/", "license": "", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "text": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "cc100", "config_name": "am", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 935440775, "num_examples": 3124561, "dataset_name": "cc100"}}, "download_checksums": {"http://data.statmt.org/cc-100/am.txt.xz": {"num_bytes": 138821056, "checksum": "97102e1118dee22103349ca38315aecee8cdcc62cb7d7f70d803d37c73525cf5"}}, "download_size": 138821056, "post_processing_size": null, "dataset_size": 935440775, "size_in_bytes": 1074261831}, "sr": {"description": "This corpus is an attempt to recreate the dataset used for training XLM-R. This corpus comprises of monolingual data for 100+ languages and also includes data for romanized languages (indicated by *_rom). This was constructed using the urls and paragraph indices provided by the CC-Net repository by processing January-December 2018 Commoncrawl snapshots. Each file comprises of documents separated by double-newlines and paragraphs within the same document separated by a newline. The data is generated using the open source CC-Net repository. No claims of intellectual property are made on the work of preparation of the corpus.\n", "citation": "@inproceedings{conneau-etal-2020-unsupervised,\n    title = \"Unsupervised Cross-lingual Representation Learning at Scale\",\n    author = \"Conneau, Alexis  and\n      Khandelwal, Kartikay  and\n      Goyal, Naman  and\n      Chaudhary, Vishrav  and\n      Wenzek, Guillaume  and\n      Guzm{'a}n, Francisco  and\n      Grave, Edouard  and\n      Ott, Myle  and\n      Zettlemoyer, Luke  and\n      Stoyanov, Veselin\",\n    booktitle = \"Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics\",\n    month = jul,\n    year = \"2020\",\n    address = \"Online\",\n    publisher = \"Association for Computational Linguistics\",\n    url = \"https://www.aclweb.org/anthology/2020.acl-main.747\",\n    doi = \"10.18653/v1/2020.acl-main.747\",\n    pages = \"8440--8451\",\n    abstract = \"This paper shows that pretraining multilingual language models at scale leads to significant performance gains for a wide range of cross-lingual transfer tasks. We train a Transformer-based masked language model on one hundred languages, using more than two terabytes of filtered CommonCrawl data. Our model, dubbed XLM-R, significantly outperforms multilingual BERT (mBERT) on a variety of cross-lingual benchmarks, including +14.6{%} average accuracy on XNLI, +13{%} average F1 score on MLQA, and +2.4{%} F1 score on NER. XLM-R performs particularly well on low-resource languages, improving 15.7{%} in XNLI accuracy for Swahili and 11.4{%} for Urdu over previous XLM models. We also present a detailed empirical analysis of the key factors that are required to achieve these gains, including the trade-offs between (1) positive transfer and capacity dilution and (2) the performance of high and low resource languages at scale. Finally, we show, for the first time, the possibility of multilingual modeling without sacrificing per-language performance; XLM-R is very competitive with strong monolingual models on the GLUE and XNLI benchmarks. We will make our code and models publicly available.\",\n}\n@inproceedings{wenzek-etal-2020-ccnet,\n    title = \"{CCN}et: Extracting High Quality Monolingual Datasets from Web Crawl Data\",\n    author = \"Wenzek, Guillaume  and\n      Lachaux, Marie-Anne  and\n      Conneau, Alexis  and\n      Chaudhary, Vishrav  and\n      Guzm{'a}n, Francisco  and\n      Joulin, Armand  and\n      Grave, Edouard\",\n    booktitle = \"Proceedings of the 12th Language Resources and Evaluation Conference\",\n    month = may,\n    year = \"2020\",\n    address = \"Marseille, France\",\n    publisher = \"European Language Resources Association\",\n    url = \"https://www.aclweb.org/anthology/2020.lrec-1.494\",\n    pages = \"4003--4012\",\n    abstract = \"Pre-training text representations have led to significant improvements in many areas of natural language processing. The quality of these models benefits greatly from the size of the pretraining corpora as long as its quality is preserved. In this paper, we describe an automatic pipeline to extract massive high-quality monolingual datasets from Common Crawl for a variety of languages. Our pipeline follows the data processing introduced in fastText (Mikolov et al., 2017; Grave et al., 2018), that deduplicates documents and identifies their language. We augment this pipeline with a filtering step to select documents that are close to high quality corpora like Wikipedia.\",\n    language = \"English\",\n    ISBN = \"979-10-95546-34-4\",\n}\n", "homepage": "http://data.statmt.org/cc-100/", "license": "", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "text": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "cc100", "config_name": "sr", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 10299427460, "num_examples": 35747957, "dataset_name": "cc100"}}, "download_checksums": {"http://data.statmt.org/cc-100/sr.txt.xz": {"num_bytes": 1578989320, "checksum": "e90b3df3955f7da69ccf2dd703aa89e54a4b05ee9a1e6f2bf9b34f11f11b4262"}}, "download_size": 1578989320, "post_processing_size": null, "dataset_size": 10299427460, "size_in_bytes": 11878416780}, "ka": {"description": "This corpus is an attempt to recreate the dataset used for training XLM-R. This corpus comprises of monolingual data for 100+ languages and also includes data for romanized languages (indicated by *_rom). This was constructed using the urls and paragraph indices provided by the CC-Net repository by processing January-December 2018 Commoncrawl snapshots. Each file comprises of documents separated by double-newlines and paragraphs within the same document separated by a newline. The data is generated using the open source CC-Net repository. No claims of intellectual property are made on the work of preparation of the corpus.\n", "citation": "@inproceedings{conneau-etal-2020-unsupervised,\n    title = \"Unsupervised Cross-lingual Representation Learning at Scale\",\n    author = \"Conneau, Alexis  and\n      Khandelwal, Kartikay  and\n      Goyal, Naman  and\n      Chaudhary, Vishrav  and\n      Wenzek, Guillaume  and\n      Guzm{'a}n, Francisco  and\n      Grave, Edouard  and\n      Ott, Myle  and\n      Zettlemoyer, Luke  and\n      Stoyanov, Veselin\",\n    booktitle = \"Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics\",\n    month = jul,\n    year = \"2020\",\n    address = \"Online\",\n    publisher = \"Association for Computational Linguistics\",\n    url = \"https://www.aclweb.org/anthology/2020.acl-main.747\",\n    doi = \"10.18653/v1/2020.acl-main.747\",\n    pages = \"8440--8451\",\n    abstract = \"This paper shows that pretraining multilingual language models at scale leads to significant performance gains for a wide range of cross-lingual transfer tasks. We train a Transformer-based masked language model on one hundred languages, using more than two terabytes of filtered CommonCrawl data. Our model, dubbed XLM-R, significantly outperforms multilingual BERT (mBERT) on a variety of cross-lingual benchmarks, including +14.6{%} average accuracy on XNLI, +13{%} average F1 score on MLQA, and +2.4{%} F1 score on NER. XLM-R performs particularly well on low-resource languages, improving 15.7{%} in XNLI accuracy for Swahili and 11.4{%} for Urdu over previous XLM models. We also present a detailed empirical analysis of the key factors that are required to achieve these gains, including the trade-offs between (1) positive transfer and capacity dilution and (2) the performance of high and low resource languages at scale. Finally, we show, for the first time, the possibility of multilingual modeling without sacrificing per-language performance; XLM-R is very competitive with strong monolingual models on the GLUE and XNLI benchmarks. We will make our code and models publicly available.\",\n}\n@inproceedings{wenzek-etal-2020-ccnet,\n    title = \"{CCN}et: Extracting High Quality Monolingual Datasets from Web Crawl Data\",\n    author = \"Wenzek, Guillaume  and\n      Lachaux, Marie-Anne  and\n      Conneau, Alexis  and\n      Chaudhary, Vishrav  and\n      Guzm{'a}n, Francisco  and\n      Joulin, Armand  and\n      Grave, Edouard\",\n    booktitle = \"Proceedings of the 12th Language Resources and Evaluation Conference\",\n    month = may,\n    year = \"2020\",\n    address = \"Marseille, France\",\n    publisher = \"European Language Resources Association\",\n    url = \"https://www.aclweb.org/anthology/2020.lrec-1.494\",\n    pages = \"4003--4012\",\n    abstract = \"Pre-training text representations have led to significant improvements in many areas of natural language processing. The quality of these models benefits greatly from the size of the pretraining corpora as long as its quality is preserved. In this paper, we describe an automatic pipeline to extract massive high-quality monolingual datasets from Common Crawl for a variety of languages. Our pipeline follows the data processing introduced in fastText (Mikolov et al., 2017; Grave et al., 2018), that deduplicates documents and identifies their language. We augment this pipeline with a filtering step to select documents that are close to high quality corpora like Wikipedia.\",\n    language = \"English\",\n    ISBN = \"979-10-95546-34-4\",\n}\n", "homepage": "http://data.statmt.org/cc-100/", "license": "", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "text": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "cc100", "config_name": "ka", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 10228918845, "num_examples": 31708119, "dataset_name": "cc100"}}, "download_checksums": {"http://data.statmt.org/cc-100/ka.txt.xz": {"num_bytes": 1100446372, "checksum": "e4d155ce56a203d819787b3d7e5c97383e6132b20f924680502a21eea216528f"}}, "download_size": 1100446372, "post_processing_size": null, "dataset_size": 10228918845, "size_in_bytes": 11329365217}}
+ {"am": {"description": "This corpus is an attempt to recreate the dataset used for training XLM-R. This corpus comprises of monolingual data for 100+ languages and also includes data for romanized languages (indicated by *_rom). This was constructed using the urls and paragraph indices provided by the CC-Net repository by processing January-December 2018 Commoncrawl snapshots. Each file comprises of documents separated by double-newlines and paragraphs within the same document separated by a newline. The data is generated using the open source CC-Net repository. No claims of intellectual property are made on the work of preparation of the corpus.\n", "citation": "@inproceedings{conneau-etal-2020-unsupervised,\n    title = \"Unsupervised Cross-lingual Representation Learning at Scale\",\n    author = \"Conneau, Alexis  and\n      Khandelwal, Kartikay  and\n      Goyal, Naman  and\n      Chaudhary, Vishrav  and\n      Wenzek, Guillaume  and\n      Guzm{'a}n, Francisco  and\n      Grave, Edouard  and\n      Ott, Myle  and\n      Zettlemoyer, Luke  and\n      Stoyanov, Veselin\",\n    booktitle = \"Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics\",\n    month = jul,\n    year = \"2020\",\n    address = \"Online\",\n    publisher = \"Association for Computational Linguistics\",\n    url = \"https://www.aclweb.org/anthology/2020.acl-main.747\",\n    doi = \"10.18653/v1/2020.acl-main.747\",\n    pages = \"8440--8451\",\n    abstract = \"This paper shows that pretraining multilingual language models at scale leads to significant performance gains for a wide range of cross-lingual transfer tasks. We train a Transformer-based masked language model on one hundred languages, using more than two terabytes of filtered CommonCrawl data. Our model, dubbed XLM-R, significantly outperforms multilingual BERT (mBERT) on a variety of cross-lingual benchmarks, including +14.6{%} average accuracy on XNLI, +13{%} average F1 score on MLQA, and +2.4{%} F1 score on NER. XLM-R performs particularly well on low-resource languages, improving 15.7{%} in XNLI accuracy for Swahili and 11.4{%} for Urdu over previous XLM models. We also present a detailed empirical analysis of the key factors that are required to achieve these gains, including the trade-offs between (1) positive transfer and capacity dilution and (2) the performance of high and low resource languages at scale. Finally, we show, for the first time, the possibility of multilingual modeling without sacrificing per-language performance; XLM-R is very competitive with strong monolingual models on the GLUE and XNLI benchmarks. We will make our code and models publicly available.\",\n}\n@inproceedings{wenzek-etal-2020-ccnet,\n    title = \"{CCN}et: Extracting High Quality Monolingual Datasets from Web Crawl Data\",\n    author = \"Wenzek, Guillaume  and\n      Lachaux, Marie-Anne  and\n      Conneau, Alexis  and\n      Chaudhary, Vishrav  and\n      Guzm{'a}n, Francisco  and\n      Joulin, Armand  and\n      Grave, Edouard\",\n    booktitle = \"Proceedings of the 12th Language Resources and Evaluation Conference\",\n    month = may,\n    year = \"2020\",\n    address = \"Marseille, France\",\n    publisher = \"European Language Resources Association\",\n    url = \"https://www.aclweb.org/anthology/2020.lrec-1.494\",\n    pages = \"4003--4012\",\n    abstract = \"Pre-training text representations have led to significant improvements in many areas of natural language processing. The quality of these models benefits greatly from the size of the pretraining corpora as long as its quality is preserved. In this paper, we describe an automatic pipeline to extract massive high-quality monolingual datasets from Common Crawl for a variety of languages. Our pipeline follows the data processing introduced in fastText (Mikolov et al., 2017; Grave et al., 2018), that deduplicates documents and identifies their language. We augment this pipeline with a filtering step to select documents that are close to high quality corpora like Wikipedia.\",\n    language = \"English\",\n    ISBN = \"979-10-95546-34-4\",\n}\n", "homepage": "https://data.statmt.org/cc-100/", "license": "", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "text": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "cc100", "config_name": "am", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 935440775, "num_examples": 3124561, "dataset_name": "cc100"}}, "download_checksums": {"https://data.statmt.org/cc-100/am.txt.xz": {"num_bytes": 138821056, "checksum": "97102e1118dee22103349ca38315aecee8cdcc62cb7d7f70d803d37c73525cf5"}}, "download_size": 138821056, "post_processing_size": null, "dataset_size": 935440775, "size_in_bytes": 1074261831}, "sr": {"description": "This corpus is an attempt to recreate the dataset used for training XLM-R. This corpus comprises of monolingual data for 100+ languages and also includes data for romanized languages (indicated by *_rom). This was constructed using the urls and paragraph indices provided by the CC-Net repository by processing January-December 2018 Commoncrawl snapshots. Each file comprises of documents separated by double-newlines and paragraphs within the same document separated by a newline. The data is generated using the open source CC-Net repository. No claims of intellectual property are made on the work of preparation of the corpus.\n", "citation": "@inproceedings{conneau-etal-2020-unsupervised,\n    title = \"Unsupervised Cross-lingual Representation Learning at Scale\",\n    author = \"Conneau, Alexis  and\n      Khandelwal, Kartikay  and\n      Goyal, Naman  and\n      Chaudhary, Vishrav  and\n      Wenzek, Guillaume  and\n      Guzm{'a}n, Francisco  and\n      Grave, Edouard  and\n      Ott, Myle  and\n      Zettlemoyer, Luke  and\n      Stoyanov, Veselin\",\n    booktitle = \"Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics\",\n    month = jul,\n    year = \"2020\",\n    address = \"Online\",\n    publisher = \"Association for Computational Linguistics\",\n    url = \"https://www.aclweb.org/anthology/2020.acl-main.747\",\n    doi = \"10.18653/v1/2020.acl-main.747\",\n    pages = \"8440--8451\",\n    abstract = \"This paper shows that pretraining multilingual language models at scale leads to significant performance gains for a wide range of cross-lingual transfer tasks. We train a Transformer-based masked language model on one hundred languages, using more than two terabytes of filtered CommonCrawl data. Our model, dubbed XLM-R, significantly outperforms multilingual BERT (mBERT) on a variety of cross-lingual benchmarks, including +14.6{%} average accuracy on XNLI, +13{%} average F1 score on MLQA, and +2.4{%} F1 score on NER. XLM-R performs particularly well on low-resource languages, improving 15.7{%} in XNLI accuracy for Swahili and 11.4{%} for Urdu over previous XLM models. We also present a detailed empirical analysis of the key factors that are required to achieve these gains, including the trade-offs between (1) positive transfer and capacity dilution and (2) the performance of high and low resource languages at scale. Finally, we show, for the first time, the possibility of multilingual modeling without sacrificing per-language performance; XLM-R is very competitive with strong monolingual models on the GLUE and XNLI benchmarks. We will make our code and models publicly available.\",\n}\n@inproceedings{wenzek-etal-2020-ccnet,\n    title = \"{CCN}et: Extracting High Quality Monolingual Datasets from Web Crawl Data\",\n    author = \"Wenzek, Guillaume  and\n      Lachaux, Marie-Anne  and\n      Conneau, Alexis  and\n      Chaudhary, Vishrav  and\n      Guzm{'a}n, Francisco  and\n      Joulin, Armand  and\n      Grave, Edouard\",\n    booktitle = \"Proceedings of the 12th Language Resources and Evaluation Conference\",\n    month = may,\n    year = \"2020\",\n    address = \"Marseille, France\",\n    publisher = \"European Language Resources Association\",\n    url = \"https://www.aclweb.org/anthology/2020.lrec-1.494\",\n    pages = \"4003--4012\",\n    abstract = \"Pre-training text representations have led to significant improvements in many areas of natural language processing. The quality of these models benefits greatly from the size of the pretraining corpora as long as its quality is preserved. In this paper, we describe an automatic pipeline to extract massive high-quality monolingual datasets from Common Crawl for a variety of languages. Our pipeline follows the data processing introduced in fastText (Mikolov et al., 2017; Grave et al., 2018), that deduplicates documents and identifies their language. We augment this pipeline with a filtering step to select documents that are close to high quality corpora like Wikipedia.\",\n    language = \"English\",\n    ISBN = \"979-10-95546-34-4\",\n}\n", "homepage": "https://data.statmt.org/cc-100/", "license": "", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "text": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "cc100", "config_name": "sr", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 10299427460, "num_examples": 35747957, "dataset_name": "cc100"}}, "download_checksums": {"https://data.statmt.org/cc-100/sr.txt.xz": {"num_bytes": 1578989320, "checksum": "e90b3df3955f7da69ccf2dd703aa89e54a4b05ee9a1e6f2bf9b34f11f11b4262"}}, "download_size": 1578989320, "post_processing_size": null, "dataset_size": 10299427460, "size_in_bytes": 11878416780}, "ka": {"description": "This corpus is an attempt to recreate the dataset used for training XLM-R. This corpus comprises of monolingual data for 100+ languages and also includes data for romanized languages (indicated by *_rom). This was constructed using the urls and paragraph indices provided by the CC-Net repository by processing January-December 2018 Commoncrawl snapshots. Each file comprises of documents separated by double-newlines and paragraphs within the same document separated by a newline. The data is generated using the open source CC-Net repository. No claims of intellectual property are made on the work of preparation of the corpus.\n", "citation": "@inproceedings{conneau-etal-2020-unsupervised,\n    title = \"Unsupervised Cross-lingual Representation Learning at Scale\",\n    author = \"Conneau, Alexis  and\n      Khandelwal, Kartikay  and\n      Goyal, Naman  and\n      Chaudhary, Vishrav  and\n      Wenzek, Guillaume  and\n      Guzm{'a}n, Francisco  and\n      Grave, Edouard  and\n      Ott, Myle  and\n      Zettlemoyer, Luke  and\n      Stoyanov, Veselin\",\n    booktitle = \"Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics\",\n    month = jul,\n    year = \"2020\",\n    address = \"Online\",\n    publisher = \"Association for Computational Linguistics\",\n    url = \"https://www.aclweb.org/anthology/2020.acl-main.747\",\n    doi = \"10.18653/v1/2020.acl-main.747\",\n    pages = \"8440--8451\",\n    abstract = \"This paper shows that pretraining multilingual language models at scale leads to significant performance gains for a wide range of cross-lingual transfer tasks. We train a Transformer-based masked language model on one hundred languages, using more than two terabytes of filtered CommonCrawl data. Our model, dubbed XLM-R, significantly outperforms multilingual BERT (mBERT) on a variety of cross-lingual benchmarks, including +14.6{%} average accuracy on XNLI, +13{%} average F1 score on MLQA, and +2.4{%} F1 score on NER. XLM-R performs particularly well on low-resource languages, improving 15.7{%} in XNLI accuracy for Swahili and 11.4{%} for Urdu over previous XLM models. We also present a detailed empirical analysis of the key factors that are required to achieve these gains, including the trade-offs between (1) positive transfer and capacity dilution and (2) the performance of high and low resource languages at scale. Finally, we show, for the first time, the possibility of multilingual modeling without sacrificing per-language performance; XLM-R is very competitive with strong monolingual models on the GLUE and XNLI benchmarks. We will make our code and models publicly available.\",\n}\n@inproceedings{wenzek-etal-2020-ccnet,\n    title = \"{CCN}et: Extracting High Quality Monolingual Datasets from Web Crawl Data\",\n    author = \"Wenzek, Guillaume  and\n      Lachaux, Marie-Anne  and\n      Conneau, Alexis  and\n      Chaudhary, Vishrav  and\n      Guzm{'a}n, Francisco  and\n      Joulin, Armand  and\n      Grave, Edouard\",\n    booktitle = \"Proceedings of the 12th Language Resources and Evaluation Conference\",\n    month = may,\n    year = \"2020\",\n    address = \"Marseille, France\",\n    publisher = \"European Language Resources Association\",\n    url = \"https://www.aclweb.org/anthology/2020.lrec-1.494\",\n    pages = \"4003--4012\",\n    abstract = \"Pre-training text representations have led to significant improvements in many areas of natural language processing. The quality of these models benefits greatly from the size of the pretraining corpora as long as its quality is preserved. In this paper, we describe an automatic pipeline to extract massive high-quality monolingual datasets from Common Crawl for a variety of languages. Our pipeline follows the data processing introduced in fastText (Mikolov et al., 2017; Grave et al., 2018), that deduplicates documents and identifies their language. We augment this pipeline with a filtering step to select documents that are close to high quality corpora like Wikipedia.\",\n    language = \"English\",\n    ISBN = \"979-10-95546-34-4\",\n}\n", "homepage": "https://data.statmt.org/cc-100/", "license": "", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "text": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "cc100", "config_name": "ka", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 10228918845, "num_examples": 31708119, "dataset_name": "cc100"}}, "download_checksums": {"https://data.statmt.org/cc-100/ka.txt.xz": {"num_bytes": 1100446372, "checksum": "e4d155ce56a203d819787b3d7e5c97383e6132b20f924680502a21eea216528f"}}, "download_size": 1100446372, "post_processing_size": null, "dataset_size": 10228918845, "size_in_bytes": 11329365217}}
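
Each `download_checksums` entry above pairs an archive URL with its byte size and a digest. Assuming the `checksum` value is a SHA-256 hex digest (consistent with its 64-character length), a downloaded archive can be verified locally with a sketch like the following; the local file name is illustrative:

```python
import hashlib

# Values copied from the "am" entry in dataset_infos.json above.
EXPECTED_SHA256 = "97102e1118dee22103349ca38315aecee8cdcc62cb7d7f70d803d37c73525cf5"

def sha256sum(path: str) -> str:
    """Stream the file in 1 MiB chunks to avoid loading 100+ MB into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# "am.txt.xz" assumes the archive was fetched from
# https://data.statmt.org/cc-100/am.txt.xz into the working directory.
assert sha256sum("am.txt.xz") == EXPECTED_SHA256, "checksum mismatch"
```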