Datasets
Tasks: Text Classification
Modalities: Text
Formats: csv
Sub-tasks: multi-class-classification
Size: 10K - 100K
parquet-converter committed c0aa92e
Parent(s): aa56583
Update parquet files
.gitattributes
DELETED
@@ -1,28 +0,0 @@
-*.7z filter=lfs diff=lfs merge=lfs -text
-*.arrow filter=lfs diff=lfs merge=lfs -text
-*.bin filter=lfs diff=lfs merge=lfs -text
-*.bin.* filter=lfs diff=lfs merge=lfs -text
-*.bz2 filter=lfs diff=lfs merge=lfs -text
-*.ftz filter=lfs diff=lfs merge=lfs -text
-*.gz filter=lfs diff=lfs merge=lfs -text
-*.h5 filter=lfs diff=lfs merge=lfs -text
-*.joblib filter=lfs diff=lfs merge=lfs -text
-*.lfs.* filter=lfs diff=lfs merge=lfs -text
-*.model filter=lfs diff=lfs merge=lfs -text
-*.msgpack filter=lfs diff=lfs merge=lfs -text
-*.onnx filter=lfs diff=lfs merge=lfs -text
-*.ot filter=lfs diff=lfs merge=lfs -text
-*.parquet filter=lfs diff=lfs merge=lfs -text
-*.pb filter=lfs diff=lfs merge=lfs -text
-*.pt filter=lfs diff=lfs merge=lfs -text
-*.pth filter=lfs diff=lfs merge=lfs -text
-*.rar filter=lfs diff=lfs merge=lfs -text
-saved_model/**/* filter=lfs diff=lfs merge=lfs -text
-*.tar.* filter=lfs diff=lfs merge=lfs -text
-*.tflite filter=lfs diff=lfs merge=lfs -text
-*.tgz filter=lfs diff=lfs merge=lfs -text
-*.xz filter=lfs diff=lfs merge=lfs -text
-*.zip filter=lfs diff=lfs merge=lfs -text
-*.zstandard filter=lfs diff=lfs merge=lfs -text
-*tfevents* filter=lfs diff=lfs merge=lfs -text
-train.csv filter=lfs diff=lfs merge=lfs -text
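The deleted `.gitattributes` routes matching files through Git LFS. As a rough sketch of how those glob patterns select files, the fragment below checks file names against a subset of them with Python's `fnmatch`; note that real gitattributes matching has its own semantics (e.g. `**` and path-relative rules), so this is only an approximation for flat file names.

```python
from fnmatch import fnmatch

# Subset of the LFS patterns from the deleted .gitattributes above.
lfs_patterns = ["*.7z", "*.arrow", "*.bin", "*.parquet", "*.zip", "train.csv"]

def tracked_by_lfs(path):
    """Return True if the file name matches any LFS pattern (fnmatch approximation)."""
    return any(fnmatch(path, pat) for pat in lfs_patterns)

print(tracked_by_lfs("train.csv"))     # True: listed explicitly
print(tracked_by_lfs("data.parquet"))  # True: matches *.parquet
print(tracked_by_lfs("README.md"))     # False: no pattern matches
```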
README.md
DELETED
@@ -1,153 +0,0 @@
----
-annotations_creators: []
-language_creators: []
-language:
-- ar
-- bg
-- de
-- el
-- en
-- es
-- fr
-- hi
-- it
-- ja
-- nl
-- pl
-- pt
-- ru
-- sw
-- th
-- tr
-- ur
-- vi
-- zh
-license: []
-multilinguality:
-- multilingual
-pretty_name: Language Identification dataset
-size_categories:
-- unknown
-source_datasets:
-- extended|amazon_reviews_multi
-- extended|xnli
-- extended|stsb_multi_mt
-task_categories:
-- text-classification
-task_ids:
-- multi-class-classification
----
-
-# Dataset Card for Language Identification dataset
-
-## Table of Contents
-- [Table of Contents](#table-of-contents)
-- [Dataset Description](#dataset-description)
-  - [Dataset Summary](#dataset-summary)
-  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
-  - [Languages](#languages)
-- [Dataset Structure](#dataset-structure)
-  - [Data Instances](#data-instances)
-  - [Data Fields](#data-fields)
-  - [Data Splits](#data-splits)
-- [Dataset Creation](#dataset-creation)
-  - [Curation Rationale](#curation-rationale)
-  - [Source Data](#source-data)
-  - [Annotations](#annotations)
-  - [Personal and Sensitive Information](#personal-and-sensitive-information)
-- [Considerations for Using the Data](#considerations-for-using-the-data)
-  - [Social Impact of Dataset](#social-impact-of-dataset)
-  - [Discussion of Biases](#discussion-of-biases)
-  - [Other Known Limitations](#other-known-limitations)
-- [Additional Information](#additional-information)
-  - [Dataset Curators](#dataset-curators)
-  - [Licensing Information](#licensing-information)
-  - [Citation Information](#citation-information)
-  - [Contributions](#contributions)
-
-## Dataset Description
-
-- **Homepage:**
-- **Repository:**
-- **Paper:**
-- **Leaderboard:**
-- **Point of Contact:**
-
-### Dataset Summary
-
-The Language Identification dataset is a collection of 90k samples, each consisting of a text passage and the corresponding language label.
-This dataset was created by collecting data from 3 sources: [Multilingual Amazon Reviews Corpus](https://huggingface.co/datasets/amazon_reviews_multi), [XNLI](https://huggingface.co/datasets/xnli), and [STSb Multi MT](https://huggingface.co/datasets/stsb_multi_mt).
-
-### Supported Tasks and Leaderboards
-
-The dataset can be used to train a model for language identification, which is a **multi-class text classification** task.
-The model [papluca/xlm-roberta-base-language-detection](https://huggingface.co/papluca/xlm-roberta-base-language-detection), a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base), was trained on this dataset and currently achieves 99.6% accuracy on the test set.
-
-### Languages
-
-The Language Identification dataset contains text in 20 languages:
-
-`arabic (ar), bulgarian (bg), german (de), modern greek (el), english (en), spanish (es), french (fr), hindi (hi), italian (it), japanese (ja), dutch (nl), polish (pl), portuguese (pt), russian (ru), swahili (sw), thai (th), turkish (tr), urdu (ur), vietnamese (vi), and chinese (zh)`
-
-## Dataset Structure
-
-### Data Instances
-
-For each instance, there is a string for the text and a string for the label (the language tag). Here is an example:
-
-`{'labels': 'fr', 'text': 'Conforme à la description, produit pratique.'}`
-
-### Data Fields
-
-- **labels:** a string indicating the language label.
-- **text:** a string consisting of one or more sentences in one of the 20 languages listed above.
-
-### Data Splits
-
-The Language Identification dataset has 3 splits: *train*, *valid*, and *test*.
-The train set contains 70k samples, while the validation and test sets contain 10k each.
-All splits are perfectly balanced: the train set contains 3500 samples per language, and the validation and test sets contain 500 each.
-
-## Dataset Creation
-
-### Curation Rationale
-
-This dataset was built during *The Hugging Face Course Community Event*, which took place in November 2021, with the goal of collecting enough samples per language to train a robust language detection model.
-
-### Source Data
-
-The Language Identification dataset was created by collecting data from 3 sources: [Multilingual Amazon Reviews Corpus](https://huggingface.co/datasets/amazon_reviews_multi), [XNLI](https://huggingface.co/datasets/xnli), and [STSb Multi MT](https://huggingface.co/datasets/stsb_multi_mt).
-
-### Personal and Sensitive Information
-
-The dataset does not contain any personal information about the authors or the crowdworkers.
-
-## Considerations for Using the Data
-
-### Social Impact of Dataset
-
-This dataset was developed as a benchmark for evaluating (balanced) multi-class text classification models.
-
-### Discussion of Biases
-
-The possible biases correspond to those of the 3 datasets on which this dataset is based.
-
-## Additional Information
-
-### Dataset Curators
-
-[More Information Needed]
-
-### Licensing Information
-
-[More Information Needed]
-
-### Citation Information
-
-[More Information Needed]
-
-### Contributions
-
-Thanks to [@LucaPapariello](https://github.com/LucaPapariello) for adding this dataset.
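The card above describes instances with a `labels` string and a `text` string. As a minimal sketch of working with that schema, the snippet below counts per-language samples over a hand-made toy list (the real data would normally be loaded from the Hub, which needs network access, so these three example instances are assumptions standing in for it; the first is the instance shown in the card).

```python
from collections import Counter

# Toy instances following the card's {'labels': ..., 'text': ...} schema.
# Only the first one comes from the dataset card; the others are invented.
samples = [
    {"labels": "fr", "text": "Conforme à la description, produit pratique."},
    {"labels": "en", "text": "Matches the description, a practical product."},
    {"labels": "fr", "text": "Très bon rapport qualité-prix."},
]

# Per-language counts; on the real train split every one of the
# 20 languages appears exactly 3500 times (70k samples total).
counts = Counter(s["labels"] for s in samples)
print(counts["fr"])  # 2
```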
train.csv → papluca--language-identification/csv-test.parquet
RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:6267555ee0d5ec49c7551158c04667a2be648beed96b2aaf0d433347b553cf7e
+size 1326491
papluca--language-identification/csv-train.parquet
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ac29cd1a61d3c98262e2310afa19814dc0f9bd70c38a4d6d8beae280357685d3
+size 9291505
papluca--language-identification/csv-validation.parquet
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c6d4acbb90f45a32d0f74ebf6e92982123810d67111adbe949800749b81407b6
+size 1342617
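The parquet files in these diffs are stored as Git LFS pointer files: three `key value` lines giving the spec version, the content hash, and the byte size. A minimal parser sketch for that format, using the validation-split pointer shown above as input:

```python
# The Git LFS pointer for csv-validation.parquet, as shown in the diff above.
pointer_text = """\
version https://git-lfs.github.com/spec/v1
oid sha256:c6d4acbb90f45a32d0f74ebf6e92982123810d67111adbe949800749b81407b6
size 1342617
"""

def parse_lfs_pointer(text):
    """Split each non-empty line on the first space into a key/value dict."""
    return dict(line.split(" ", 1) for line in text.strip().splitlines())

pointer = parse_lfs_pointer(pointer_text)
print(pointer["size"])                       # "1342617"
print(pointer["oid"].startswith("sha256:"))  # True
```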
test.csv
DELETED
The diff for this file is too large to render.
valid.csv
DELETED
The diff for this file is too large to render.