---
---

# Dataset Card for "lince"

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits Sample Size](#data-splits-sample-size)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## [Dataset Description](#dataset-description)

- **Homepage:** [http://ritual.uh.edu/lince](http://ritual.uh.edu/lince)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 8.67 MB
- **Size of the generated dataset:** 53.81 MB
- **Total amount of disk used:** 62.48 MB

### [Dataset Summary](#dataset-summary)

LinCE is a centralized Linguistic Code-switching Evaluation benchmark
(https://ritual.uh.edu/lince/) that contains data for training and evaluating
NLP systems on code-switching tasks.

### [Supported Tasks](#supported-tasks)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Languages](#languages)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## [Dataset Structure](#dataset-structure)

We show detailed information for up to 5 configurations of the dataset.

### [Data Instances](#data-instances)

#### lid_hineng

- **Size of downloaded dataset files:** 0.41 MB
- **Size of the generated dataset:** 2.28 MB
- **Total amount of disk used:** 2.69 MB

An example of 'validation' looks as follows.
```
{
    "idx": 0,
    "lid": ["other", "other", "lang1", "lang1", "lang1", "other", "lang1", "lang1", "lang1", "lang1", "lang1", "lang1", "lang1", "mixed", "lang1", "lang1", "other"],
    "words": ["@ZahirJ", "@BinyavangaW", "Loved", "the", "ending", "!", "I", "could", "have", "offered", "you", "some", "ironic", "chai-tea", "for", "it", ";)"]
}
```
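The `lid` labels align one-to-one with `words`, so a token/label view can be built with a plain `zip`. A minimal sketch, using the validation instance shown above (the dict is copied here for illustration; in practice it would come from the loaded dataset):

```python
# Validation instance from the lid_hineng config, copied from the example above.
example = {
    "idx": 0,
    "lid": ["other", "other", "lang1", "lang1", "lang1", "other", "lang1",
            "lang1", "lang1", "lang1", "lang1", "lang1", "lang1", "mixed",
            "lang1", "lang1", "other"],
    "words": ["@ZahirJ", "@BinyavangaW", "Loved", "the", "ending", "!", "I",
              "could", "have", "offered", "you", "some", "ironic", "chai-tea",
              "for", "it", ";)"],
}

# Each word carries exactly one language-ID tag, so the two lists align 1:1.
assert len(example["words"]) == len(example["lid"])

tagged = list(zip(example["words"], example["lid"]))
# e.g. ("Loved", "lang1") for English, ("chai-tea", "mixed") for a mixed token
```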

#### lid_msaea

- **Size of downloaded dataset files:** 0.77 MB
- **Size of the generated dataset:** 4.66 MB
- **Total amount of disk used:** 5.43 MB

An example of 'train' looks as follows.
```
This example was too long and was cropped:

{
    "idx": 0,
    "lid": ["ne", "lang2", "other", "lang2", "lang2", "other", "other", "lang2", "lang2", "lang2", "lang2", "lang2", "lang2", "lang2", "lang2", "lang2", "lang2", "lang2", "lang2", "lang2", "lang2", "other", "lang2", "lang2", "lang2", "ne", "lang2", "lang2"],
    "words": "[\"علاء\", \"بخير\", \"،\", \"معنوياته\", \"كويسة\", \".\", \"..\", \"اسخف\", \"حاجة\", \"بس\", \"ان\", \"كل\", \"واحد\", \"منهم\", \"بييقى\", \"مقفول\", \"عليه\"..."
}
```

#### lid_nepeng

- **Size of downloaded dataset files:** 0.52 MB
- **Size of the generated dataset:** 3.06 MB
- **Total amount of disk used:** 3.58 MB

An example of 'validation' looks as follows.
```
{
    "idx": 1,
    "lid": ["other", "lang2", "lang2", "lang2", "lang2", "lang1", "lang1", "lang1", "lang1", "lang1", "lang2", "lang2", "other", "mixed", "lang2", "lang2", "other", "other", "other", "other"],
    "words": ["@nirvikdada", "la", "hamlai", "bhetna", "paayeko", "will", "be", "your", "greatest", "gift", "ni", "dada", ";P", "#TreatChaiyo", "j", "hos", ";)", "@zappylily", "@AsthaGhm", "@ayacs_asis"]
}
```

#### lid_spaeng

- **Size of downloaded dataset files:** 1.13 MB
- **Size of the generated dataset:** 6.51 MB
- **Total amount of disk used:** 7.64 MB

An example of 'train' looks as follows.
```
{
    "idx": 0,
    "lid": ["other", "other", "lang1", "lang1", "lang1", "other", "lang1", "lang1"],
    "words": ["11:11", ".....", "make", "a", "wish", ".......", "night", "night"]
}
```

#### ner_hineng

- **Size of downloaded dataset files:** 0.13 MB
- **Size of the generated dataset:** 0.75 MB
- **Total amount of disk used:** 0.88 MB

An example of 'train' looks as follows.
```
{
    "idx": 1,
    "lid": ["en", "en", "en", "en", "en", "en", "hi", "hi", "hi", "hi", "hi", "hi", "hi", "en", "en", "en", "en", "rest"],
    "ner": ["O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-PERSON", "I-PERSON", "O", "O", "O", "B-PERSON", "I-PERSON"],
    "words": ["I", "liked", "a", "@YouTube", "video", "https://t.co/DmVqhZbdaI", "Kabhi", "Palkon", "Pe", "Aasoon", "Hai-", "Kishore", "Kumar", "-Vocal", "Cover", "By", "Stephen", "Qadir"]
}
```
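The `ner` column uses BIO tags, so entity spans can be recovered by grouping `B-`/`I-` runs. A minimal sketch run on the train instance shown above (the `bio_spans` helper is illustrative, not part of the dataset API):

```python
def bio_spans(words, tags):
    """Group BIO tags into (entity_type, entity_text) spans."""
    spans, current, ctype = [], [], None
    for word, tag in zip(words, tags):
        if tag.startswith("B-"):
            if current:                       # close any open span
                spans.append((ctype, " ".join(current)))
            current, ctype = [word], tag[2:]  # open a new span
        elif tag.startswith("I-") and current:
            current.append(word)              # continue the open span
        else:                                 # "O" tag: close any open span
            if current:
                spans.append((ctype, " ".join(current)))
            current, ctype = [], None
    if current:                               # span running to end of sentence
        spans.append((ctype, " ".join(current)))
    return spans

# Train instance from the ner_hineng config, copied from the example above.
words = ["I", "liked", "a", "@YouTube", "video", "https://t.co/DmVqhZbdaI",
         "Kabhi", "Palkon", "Pe", "Aasoon", "Hai-", "Kishore", "Kumar",
         "-Vocal", "Cover", "By", "Stephen", "Qadir"]
ner = ["O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O",
       "B-PERSON", "I-PERSON", "O", "O", "O", "B-PERSON", "I-PERSON"]

entities = bio_spans(words, ner)
# → [("PERSON", "Kishore Kumar"), ("PERSON", "Stephen Qadir")]
```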

### [Data Fields](#data-fields)

The data fields are the same among all splits.

#### lid_hineng
- `idx`: an `int32` feature.
- `words`: a `list` of `string` features.
- `lid`: a `list` of `string` features.

#### lid_msaea
- `idx`: an `int32` feature.
- `words`: a `list` of `string` features.
- `lid`: a `list` of `string` features.

#### lid_nepeng
- `idx`: an `int32` feature.
- `words`: a `list` of `string` features.
- `lid`: a `list` of `string` features.

#### lid_spaeng
- `idx`: an `int32` feature.
- `words`: a `list` of `string` features.
- `lid`: a `list` of `string` features.

#### ner_hineng
- `idx`: an `int32` feature.
- `words`: a `list` of `string` features.
- `lid`: a `list` of `string` features.
- `ner`: a `list` of `string` features.

### [Data Splits Sample Size](#data-splits-sample-size)

| name |train|validation|test|
|----------|----:|---------:|---:|
|lid_hineng| 4823| 744|1854|
|lid_msaea | 8464| 1116|1663|
|lid_nepeng| 8451| 1332|3228|
|lid_spaeng|21030| 3332|8289|
|ner_hineng| 1243| 314| 522|
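The split sizes above can be tallied programmatically, e.g. to get the total number of examples per configuration. A small sketch with the table values copied into a dict (the dict itself is illustrative, not exposed by the dataset):

```python
# Split sizes copied from the table above.
splits = {
    "lid_hineng": {"train": 4823, "validation": 744, "test": 1854},
    "lid_msaea": {"train": 8464, "validation": 1116, "test": 1663},
    "lid_nepeng": {"train": 8451, "validation": 1332, "test": 3228},
    "lid_spaeng": {"train": 21030, "validation": 3332, "test": 8289},
    "ner_hineng": {"train": 1243, "validation": 314, "test": 522},
}

# Total examples per configuration across train/validation/test.
totals = {name: sum(sizes.values()) for name, sizes in splits.items()}
```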

## [Dataset Creation](#dataset-creation)

### [Curation Rationale](#curation-rationale)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Source Data](#source-data)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Annotations](#annotations)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Personal and Sensitive Information](#personal-and-sensitive-information)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## [Considerations for Using the Data](#considerations-for-using-the-data)

### [Social Impact of Dataset](#social-impact-of-dataset)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Discussion of Biases](#discussion-of-biases)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Other Known Limitations](#other-known-limitations)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## [Additional Information](#additional-information)

### [Dataset Curators](#dataset-curators)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Licensing Information](#licensing-information)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Citation Information](#citation-information)

```
@inproceedings{molina-etal-2016-overview,
    title = "Overview for the Second Shared Task on Language Identification in Code-Switched Data",
    author = "Molina, Giovanni and
      AlGhamdi, Fahad and
      Ghoneim, Mahmoud and
      Hawwari, Abdelati and
      Rey-Villamizar, Nicolas and
      Diab, Mona and
      Solorio, Thamar",
    booktitle = "Proceedings of the Second Workshop on Computational Approaches to Code Switching",
    month = nov,
    year = "2016",
    address = "Austin, Texas",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/W16-5805",
    doi = "10.18653/v1/W16-5805",
    pages = "40--49",
}

@inproceedings{aguilar-etal-2020-lince,
    title = "{L}in{CE}: A Centralized Benchmark for Linguistic Code-switching Evaluation",
    author = "Aguilar, Gustavo and
      Kar, Sudipta and
      Solorio, Thamar",
    booktitle = "Proceedings of The 12th Language Resources and Evaluation Conference",
    month = may,
    year = "2020",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://www.aclweb.org/anthology/2020.lrec-1.223",
    pages = "1803--1813",
    language = "English",
    ISBN = "979-10-95546-34-4",
}
```

Note that each dataset contained in LinCE has its own citation. Please see the source for the correct citation of each contained dataset.

### Contributions

Thanks to [@lhoestq](https://github.com/lhoestq), [@thomwolf](https://github.com/thomwolf), [@gaguilar](https://github.com/gaguilar) for adding this dataset.