leondz committed cc93b53 (1 parent: 91a4d9f): add dataset card

---
annotations_creators:
- found
language_creators:
- found
languages:
- da
- nn
- nb
- fo
- is
- sv
licenses:
- cc-by-sa-3.0
multilinguality:
- multilingual
pretty_name: Nordic Language ID for Distinguishing between Similar Languages
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- language-identification
---

# Dataset Card for nordic_langid

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)

## Dataset Description

- **Homepage:** [https://github.com/renhaa/NordicDSL](https://github.com/renhaa/NordicDSL)
- **Repository:** [https://github.com/renhaa/NordicDSL](https://github.com/renhaa/NordicDSL)
- **Paper:** [https://aclanthology.org/2021.vardial-1.8/](https://aclanthology.org/2021.vardial-1.8/)
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Leon Derczynski](mailto:leod@itu.dk)

### Dataset Summary

Automatic language identification is a challenging problem, and discriminating between closely related languages is especially difficult. This dataset supports machine learning approaches to automatic language identification for the Nordic languages, which often suffer miscategorisation by existing state-of-the-art tools. Concretely, the focus is on discriminating between six Nordic languages: Danish, Swedish, Norwegian (Nynorsk), Norwegian (Bokmål), Faroese and Icelandic.

This is the data for the tasks. Two variants are provided, 10K and 50K, holding 10,000 and 50,000 examples per language respectively.

For more info, see the paper: [Discriminating Between Similar Nordic Languages](https://aclanthology.org/2021.vardial-1.8/).

### Supported Tasks and Leaderboards

- `language-identification`: the dataset can be used to train models that discriminate between six closely related Nordic languages.

### Languages

This dataset is in six similar Nordic languages:

- Danish, `da`
- Faroese, `fo`
- Icelandic, `is`
- Norwegian Bokmål, `nb`
- Norwegian Nynorsk, `nn`
- Swedish, `sv`

## Dataset Structure

The dataset has two parts, one with 10K samples per language and another with 50K per language.
The original splits and data allocation used in the paper are presented here.

### Data Instances

[Needs More Information]

### Data Fields

- `id`: the sentence's unique identifier, a `string`
- `sentence`: the text to be classified, a `string`
- `language`: the class label, one of `da`, `fo`, `is`, `nb`, `nn`, `sv`

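Given these fields, a single record can be sketched as follows; the concrete `id` and `sentence` values are hypothetical, since no example instance is documented above:

```python
# Hypothetical record following the fields above; the id format and
# sentence text are illustrative, not taken from the actual dataset.
example = {
    "id": "0",
    "sentence": "Dette er en kort dansk sætning.",
    "language": "da",
}

# The label must be one of the six supported language codes.
LANG_CODES = {"da", "fo", "is", "nb", "nn", "sv"}
assert set(example) == {"id", "sentence", "language"}
assert example["language"] in LANG_CODES
```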
### Data Splits

Train and Test splits are provided, divided using the code provided with the paper.

## Dataset Creation

### Curation Rationale

Data is taken from Wikipedia and Tatoeba for each of these six languages.

### Source Data

#### Initial Data Collection and Normalization

**Data collection** Data was scraped from Wikipedia. We downloaded summaries for randomly chosen Wikipedia articles in each of the languages, saved as raw text in six .txt files of about 10MB each.
The 50K section is extended with Tatoeba data, which provides a different register from Wikipedia text, and is then topped up with more Wikipedia data.

**Extracting Sentences** The first pass in sentence tokenisation is splitting by line breaks. We then extract shorter sentences with the sentence tokenizer (`sent_tokenize`) function from NLTK (Loper and Bird, 2002). This does a better job than just splitting on '.', because abbreviations, which can appear in a legitimate sentence, typically include a period symbol.
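
The two-pass extraction above can be sketched as follows; a naive splitter stands in for NLTK's `sent_tokenize` so the sketch is self-contained, and the sample text is hypothetical:

```python
import re

def naive_sent_tokenize(text):
    # Stand-in for nltk.tokenize.sent_tokenize: only split on a period
    # followed by whitespace and a capital letter, so abbreviations
    # such as "f.eks." are less likely to be cut mid-sentence.
    parts = re.split(r"(?<=\.)\s+(?=[A-ZÆØÅÐÞÍÓÚÁÉÄÖ])", text)
    return [p.strip() for p in parts if p.strip()]

def extract_sentences(raw):
    sentences = []
    for line in raw.splitlines():      # first pass: split by line breaks
        line = line.strip()
        if line:
            # second pass: sentence-tokenise each non-empty line
            sentences.extend(naive_sent_tokenize(line))
    return sentences

raw = "Dette er en sætning. Her er f.eks. en til.\nOg en tredje."
print(extract_sentences(raw))
# → ['Dette er en sætning.', 'Her er f.eks. en til.', 'Og en tredje.']
```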

**Cleaning characters** The initial dataset contains many characters that do not belong to the alphabets of the languages we work with. The Wikipedia pages for people or places often contain names in foreign languages; for example, a summary might contain Chinese or Russian characters, which are not strong signals for the purpose of discriminating between the target languages.
Further, some characters in the target languages can be mis-encoded. These mis-encodings are also not likely to be intrinsically strong or stable signals.
To simplify feature extraction, and to reduce the size of the vocabulary, the raw data is converted to lowercase and stripped of all characters which are not part of the standard alphabet of the six languages, using a character whitelist.
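
A minimal sketch of this cleaning step; the exact whitelist below is an assumption approximating the combined alphabets of the six languages, not the paper's actual character set:

```python
# Assumed whitelist approximating the six languages' combined alphabets
# (the paper's exact character set may differ).
WHITELIST = set("abcdefghijklmnopqrstuvwxyzáäåæðéíóöøúýþ ")

def clean(text):
    # Lowercase first, then drop every character not on the whitelist.
    return "".join(ch for ch in text.lower() if ch in WHITELIST)

print(clean("Reykjavík er höfuðborg Íslands!"))
# → 'reykjavík er höfuðborg íslands'
```

Digits, punctuation, and any non-Nordic script are simply dropped, which also shrinks the character vocabulary seen by downstream feature extraction.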

#### Who are the source language producers?

The source language data comes from Wikipedia contributors and Tatoeba contributors.

### Annotations

#### Annotation process

The annotations were found.

#### Who are the annotators?

The annotations were found. They are determined by which language section a contributor posts their content to.

### Personal and Sensitive Information

The data hasn't been checked for PII, and is already all public. Tatoeba is based on translations of synthetic conversational turns and is unlikely to bear personal or sensitive information.

## Considerations for Using the Data

### Social Impact of Dataset

This dataset is intended to help correctly identify content in six Nordic languages. Existing systems often confuse these, especially Bokmål and Danish or Icelandic and Faroese. However, some dialects (for example Bornholmsk) are not covered, and the closed nature of the classification task thus excludes speakers of these varieties without recognising their existence.

### Discussion of Biases

The text comes from only two genres, so models trained on it might not transfer well to other domains.

### Other Known Limitations

[Needs More Information]

## Additional Information

### Dataset Curators

[Needs More Information]

### Licensing Information

The data here is licensed CC-BY-SA 3.0. If you use this data, you MUST state its origin.

### Citation Information

```
@inproceedings{haas-derczynski-2021-discriminating,
    title = "Discriminating Between Similar Nordic Languages",
    author = "Haas, Ren{\'e}  and
      Derczynski, Leon",
    booktitle = "Proceedings of the Eighth Workshop on NLP for Similar Languages, Varieties and Dialects",
    month = apr,
    year = "2021",
    address = "Kiyv, Ukraine",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.vardial-1.8",
    pages = "67--75",
}
```