system HF staff committed on
Commit a9db7bf • 1 Parent(s): 2166edd

Update files from the datasets library (from 1.9.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.9.0

Files changed (1)
  1. README.md +74 -12
README.md CHANGED
@@ -12,7 +12,7 @@ languages:
  - be
  - bg
  - bn
- - bn_rom
+ - bn-Latn
  - br
  - bs
  - ca
@@ -39,7 +39,7 @@ languages:
  - ha
  - he
  - hi
- - hi_rom
+ - hi-Latn
  - hr
  - ht
  - hu
@@ -71,7 +71,7 @@ languages:
  - mr
  - ms
  - my
- - my_zaw
+ - my-x-zawgyi
  - ne
  - nl
  - 'no'
@@ -100,9 +100,9 @@ languages:
  - sv
  - sw
  - ta
- - ta_rom
+ - ta-Latn
  - te
- - te_rom
+ - te-Latn
  - th
  - tl
  - tn
@@ -110,7 +110,7 @@ languages:
  - ug
  - uk
  - ur
- - ur_rom
+ - ur-Latn
  - uz
  - vi
  - wo
@@ -125,7 +125,10 @@ licenses:
  multilinguality:
  - multilingual
  size_categories:
- - n>1M
+ am:
+ - 1M<n<10M
+ sr:
+ - 10M<n<100M
  source_datasets:
  - original
  task_categories:
@@ -133,9 +136,10 @@ task_categories:
  task_ids:
  - language-modeling
  paperswithcode_id: cc100
+ pretty_name: CC100
  ---

- # Dataset Card Creation Guide
+ # Dataset Card for CC100

  ## Table of Contents
  - [Dataset Description](#dataset-description)
@@ -190,15 +194,27 @@ E.g.

  ### Data Instances

- [More Information Needed]
+ An example from the `am` configuration:
+
+ ```
+ {'id': '0', 'text': 'ተለዋዋጭ የግድግዳ አንግል ሙቅ አንቀሳቅሷል ቲ-አሞሌ አጥቅሼ ...\n'}
+ ```

  ### Data Fields

- [More Information Needed]
+ The data fields are:
+
+ - id: id of the example
+ - text: content as a string

  ### Data Splits

- [More Information Needed]
+ Sizes of some configurations:
+
+ | name |train|
+ |----------|----:|
+ |am|3124561|
+ |sr|35747957|

  ## Dataset Creation

@@ -260,7 +276,53 @@ E.g.

  ### Citation Information

- [More Information Needed]
+ ```bibtex
+ @inproceedings{conneau-etal-2020-unsupervised,
+ title = "Unsupervised Cross-lingual Representation Learning at Scale",
+ author = "Conneau, Alexis and
+ Khandelwal, Kartikay and
+ Goyal, Naman and
+ Chaudhary, Vishrav and
+ Wenzek, Guillaume and
+ Guzm{\'a}n, Francisco and
+ Grave, Edouard and
+ Ott, Myle and
+ Zettlemoyer, Luke and
+ Stoyanov, Veselin",
+ booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
+ month = jul,
+ year = "2020",
+ address = "Online",
+ publisher = "Association for Computational Linguistics",
+ url = "https://www.aclweb.org/anthology/2020.acl-main.747",
+ doi = "10.18653/v1/2020.acl-main.747",
+ pages = "8440--8451",
+ abstract = "This paper shows that pretraining multilingual language models at scale leads to significant performance gains for a wide range of cross-lingual transfer tasks. We train a Transformer-based masked language model on one hundred languages, using more than two terabytes of filtered CommonCrawl data. Our model, dubbed XLM-R, significantly outperforms multilingual BERT (mBERT) on a variety of cross-lingual benchmarks, including +14.6{\%} average accuracy on XNLI, +13{\%} average F1 score on MLQA, and +2.4{\%} F1 score on NER. XLM-R performs particularly well on low-resource languages, improving 15.7{\%} in XNLI accuracy for Swahili and 11.4{\%} for Urdu over previous XLM models. We also present a detailed empirical analysis of the key factors that are required to achieve these gains, including the trade-offs between (1) positive transfer and capacity dilution and (2) the performance of high and low resource languages at scale. Finally, we show, for the first time, the possibility of multilingual modeling without sacrificing per-language performance; XLM-R is very competitive with strong monolingual models on the GLUE and XNLI benchmarks. We will make our code and models publicly available.",
+ }
+ ```
+
+ ```bibtex
+ @inproceedings{wenzek-etal-2020-ccnet,
+ title = "{CCN}et: Extracting High Quality Monolingual Datasets from Web Crawl Data",
+ author = "Wenzek, Guillaume and
+ Lachaux, Marie-Anne and
+ Conneau, Alexis and
+ Chaudhary, Vishrav and
+ Guzm{\'a}n, Francisco and
+ Joulin, Armand and
+ Grave, Edouard",
+ booktitle = "Proceedings of the 12th Language Resources and Evaluation Conference",
+ month = may,
+ year = "2020",
+ address = "Marseille, France",
+ publisher = "European Language Resources Association",
+ url = "https://www.aclweb.org/anthology/2020.lrec-1.494",
+ pages = "4003--4012",
+ abstract = "Pre-training text representations have led to significant improvements in many areas of natural language processing. The quality of these models benefits greatly from the size of the pretraining corpora as long as its quality is preserved. In this paper, we describe an automatic pipeline to extract massive high-quality monolingual datasets from Common Crawl for a variety of languages. Our pipeline follows the data processing introduced in fastText (Mikolov et al., 2017; Grave et al., 2018), that deduplicates documents and identifies their language. We augment this pipeline with a filtering step to select documents that are close to high quality corpora like Wikipedia.",
+ language = "English",
+ ISBN = "979-10-95546-34-4",
+ }
+ ```

  ### Contributions

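
For readers of the updated card, a minimal sketch (not part of the commit) of how the `am` configuration described in the new Data Instances, Data Fields, and Data Splits sections could be loaded and inspected with the `datasets` library. The `lang` keyword mirrors the configuration parameter used by the cc100 loading script; treat the exact call signature as an assumption.

```python
# Minimal sketch, assuming the cc100 loading script accepts a `lang`
# configuration parameter for selecting a language configuration.
from datasets import load_dataset

# Download and prepare the Amharic ("am") configuration; cc100 exposes only a "train" split.
dataset = load_dataset("cc100", lang="am", split="train")

print(dataset[0])        # one example, e.g. {'id': '0', 'text': '...'}
print(dataset.features)  # the two fields described in the card: 'id' and 'text'
print(dataset.num_rows)  # 3124561 for "am", matching the Data Splits table
```

The row counts printed this way correspond to the per-configuration `size_categories` added to the YAML header (`am`: 1M<n<10M, `sr`: 10M<n<100M).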