bassie96code committed ae59990 (parent: 1f7e999): Update README.md

Files changed: README.md (+264 -9)
README.md CHANGED
@@ -1,14 +1,269 @@
- annotations_creators: []
- language: []
- language_creators: []
- license: []
  multilinguality:
  - monolingual
- pretty_name: Label_lijsten
- size_categories: []
- source_datasets: []
- tags: []
  task_categories:
  - token-classification
  task_ids:
- - named-entity-recognition
---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- other
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
task_categories:
- token-classification
task_ids:
- named-entity-recognition
paperswithcode_id: toktekst-met-labels
pretty_name: Toktekst-met-labels
dataset_info:
  features:
  - name: id
    dtype: string
  - name: tok_wettekst
    sequence: string
  - name: label-lijsten
    sequence:
      class_label:
        names:
          '0': O
          '1': B-subj
          '2': I-subj
          '3': Betr
  config_name: label_lijsten
  splits:
  - name: train
    num_bytes: 6931345
    num_examples: 90
  - name: validation
    num_bytes: 1739223
    num_examples: 5
  - name: test
    num_bytes: 1582054
    num_examples: 5
  download_size: 982975
  dataset_size: 10252622
train-eval-index:
- config: toktekst-met-labels
  task: token-classification
  task_id: element-extraction
  splits:
    train_split: train
    eval_split: test
  col_mapping:
    tok_wettekst: tokens
    label-lijsten: tags
  metrics:
  - type: seqeval
    name: seqeval
---
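The `class_label` block in the metadata above fixes the integer-to-name mapping for `label-lijsten`. As a minimal sketch of how such a mapping is applied (plain Python mirroring what a class-label feature does; the helper names are illustrative, not part of any library):

```python
# Transcribed from the class_label names in the metadata above.
LABEL_NAMES = ["O", "B-subj", "I-subj", "Betr"]

def int2str(ids):
    """Map integer label ids to their string names."""
    return [LABEL_NAMES[i] for i in ids]

def str2int(labels):
    """Map string label names back to integer ids."""
    return [LABEL_NAMES.index(label) for label in labels]

print(int2str([0, 1, 2, 3]))  # ['O', 'B-subj', 'I-subj', 'Betr']
```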
# Dataset Card for "conll2003"

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [https://www.aclweb.org/anthology/W03-0419/](https://www.aclweb.org/anthology/W03-0419/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 4.85 MB
- **Size of the generated dataset:** 10.26 MB
- **Total amount of disk used:** 15.11 MB
### Dataset Summary

The shared task of CoNLL-2003 concerns language-independent named entity recognition. The task concentrates on four types of named entities: persons, locations, organizations, and names of miscellaneous entities that do not belong to the previous three groups.

The CoNLL-2003 shared task data files contain four columns separated by a single space. Each word has been put on a separate line and there is an empty line after each sentence. The first item on each line is a word, the second a part-of-speech (POS) tag, the third a syntactic chunk tag and the fourth the named entity tag. The chunk tags and the named entity tags have the format I-TYPE, which means that the word is inside a phrase of type TYPE. Only if two phrases of the same type immediately follow each other does the first word of the second phrase receive the tag B-TYPE, to show that it starts a new phrase. A word with tag O is not part of a phrase. Note that this dataset uses the IOB2 tagging scheme, whereas the original dataset uses IOB1.

For more details see https://www.clips.uantwerpen.be/conll2003/ner/ and https://www.aclweb.org/anthology/W03-0419
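The difference between the two schemes can be made concrete with a small conversion sketch (a hypothetical helper, not part of the dataset tooling): IOB1 uses B- only when two same-type phrases are adjacent, while IOB2 marks every phrase-initial token with B-.

```python
def iob1_to_iob2(tags):
    """Convert an IOB1 tag sequence to IOB2 by promoting every
    phrase-initial I-TYPE tag to B-TYPE."""
    out = []
    prev = "O"
    for tag in tags:
        if tag.startswith("I-"):
            # In IOB2 a phrase-initial token always gets a B- prefix:
            # promote I- when it follows O or a different entity type.
            if prev == "O" or prev[2:] != tag[2:]:
                tag = "B-" + tag[2:]
        out.append(tag)
        prev = tag
    return out

print(iob1_to_iob2(["I-ORG", "I-ORG", "O", "I-PER"]))
# ['B-ORG', 'I-ORG', 'O', 'B-PER']
```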
### Supported Tasks and Leaderboards

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Languages

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Dataset Structure

### Data Instances

#### conll2003

- **Size of downloaded dataset files:** 4.85 MB
- **Size of the generated dataset:** 10.26 MB
- **Total amount of disk used:** 15.11 MB

An example of 'train' looks as follows.

```
{
  "chunk_tags": [11, 12, 12, 21, 13, 11, 11, 21, 13, 11, 12, 13, 11, 21, 22, 11, 12, 17, 11, 21, 17, 11, 12, 12, 21, 22, 22, 13, 11, 0],
  "id": "0",
  "ner_tags": [0, 3, 4, 0, 0, 0, 0, 0, 0, 7, 0, 0, 0, 0, 0, 7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
  "pos_tags": [12, 22, 22, 38, 15, 22, 28, 38, 15, 16, 21, 35, 24, 35, 37, 16, 21, 15, 24, 41, 15, 16, 21, 21, 20, 37, 40, 35, 21, 7],
  "tokens": ["The", "European", "Commission", "said", "on", "Thursday", "it", "disagreed", "with", "German", "advice", "to", "consumers", "to", "shun", "British", "lamb", "until", "scientists", "determine", "whether", "mad", "cow", "disease", "can", "be", "transmitted", "to", "sheep", "."]
}
```

The original data files contain `-DOCSTART-` lines, special markers that act as boundaries between two different documents; these lines are filtered out in this implementation.
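Reading the raw shared-task files with this filtering applied can be sketched as follows (a hypothetical reader for the four-column format described above; the sample lines are illustrative):

```python
def read_conll(lines):
    """Parse CoNLL-2003-style lines (word POS chunk NER per line, blank
    line between sentences) into per-sentence dicts, skipping the
    -DOCSTART- document-boundary lines."""
    sentences, current = [], []
    for line in list(lines) + [""]:        # trailing sentinel flushes the last sentence
        line = line.strip()
        if line.startswith("-DOCSTART-"):
            continue
        if not line:
            if current:
                cols = list(zip(*current))  # transpose rows into columns
                sentences.append({
                    "tokens": list(cols[0]),
                    "pos_tags": list(cols[1]),
                    "chunk_tags": list(cols[2]),
                    "ner_tags": list(cols[3]),
                })
                current = []
            continue
        current.append(line.split())
    return sentences

sample = [
    "-DOCSTART- -X- -X- O",
    "",
    "EU NNP B-NP B-ORG",
    "rejects VBZ B-VP O",
    ". . O O",
]
print(read_conll(sample))
```

Note that a real loader would additionally map the string tags to the integer class-label ids shown in the example instance above.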
### Data Fields

The data fields are the same among all splits.

#### conll2003
- `id`: a `string` feature.
- `tokens`: a `list` of `string` features.
- `pos_tags`: a `list` of classification labels (`int`). Full tagset with indices:

```python
{'"': 0, "''": 1, '#': 2, '$': 3, '(': 4, ')': 5, ',': 6, '.': 7, ':': 8, '``': 9, 'CC': 10, 'CD': 11, 'DT': 12,
 'EX': 13, 'FW': 14, 'IN': 15, 'JJ': 16, 'JJR': 17, 'JJS': 18, 'LS': 19, 'MD': 20, 'NN': 21, 'NNP': 22, 'NNPS': 23,
 'NNS': 24, 'NN|SYM': 25, 'PDT': 26, 'POS': 27, 'PRP': 28, 'PRP$': 29, 'RB': 30, 'RBR': 31, 'RBS': 32, 'RP': 33,
 'SYM': 34, 'TO': 35, 'UH': 36, 'VB': 37, 'VBD': 38, 'VBG': 39, 'VBN': 40, 'VBP': 41, 'VBZ': 42, 'WDT': 43,
 'WP': 44, 'WP$': 45, 'WRB': 46}
```

- `chunk_tags`: a `list` of classification labels (`int`). Full tagset with indices:

```python
{'O': 0, 'B-ADJP': 1, 'I-ADJP': 2, 'B-ADVP': 3, 'I-ADVP': 4, 'B-CONJP': 5, 'I-CONJP': 6, 'B-INTJ': 7, 'I-INTJ': 8,
 'B-LST': 9, 'I-LST': 10, 'B-NP': 11, 'I-NP': 12, 'B-PP': 13, 'I-PP': 14, 'B-PRT': 15, 'I-PRT': 16, 'B-SBAR': 17,
 'I-SBAR': 18, 'B-UCP': 19, 'I-UCP': 20, 'B-VP': 21, 'I-VP': 22}
```

- `ner_tags`: a `list` of classification labels (`int`). Full tagset with indices:

```python
{'O': 0, 'B-PER': 1, 'I-PER': 2, 'B-ORG': 3, 'I-ORG': 4, 'B-LOC': 5, 'I-LOC': 6, 'B-MISC': 7, 'I-MISC': 8}
```
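Given these tagsets, the integer `ner_tags` of an example can be decoded back to label strings, and entity spans can be read off the B-/I- prefixes. A minimal sketch (the `NER_NAMES` list is transcribed from the tagset above; the helper names are illustrative):

```python
NER_NAMES = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC", "B-MISC", "I-MISC"]

def decode_ner(ids):
    """Map integer ner_tags to their string labels."""
    return [NER_NAMES[i] for i in ids]

def entity_spans(tokens, ids):
    """Collect (entity_text, entity_type) pairs from IOB2-tagged ids."""
    spans, current, kind = [], [], None
    for tok, tag in zip(tokens, decode_ner(ids)):
        if tag.startswith("B-"):            # B- always opens a new span
            if current:
                spans.append((" ".join(current), kind))
            current, kind = [tok], tag[2:]
        elif tag.startswith("I-") and current:
            current.append(tok)             # continue the open span
        else:                               # O tag closes any open span
            if current:
                spans.append((" ".join(current), kind))
            current, kind = [], None
    if current:
        spans.append((" ".join(current), kind))
    return spans

tokens = ["The", "European", "Commission", "said", "it", "disagreed", "with", "German", "advice", "."]
ids = [0, 3, 4, 0, 0, 0, 0, 7, 0, 0]
print(entity_spans(tokens, ids))
# [('European Commission', 'ORG'), ('German', 'MISC')]
```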
### Data Splits

|   name    | train | validation | test |
|-----------|------:|-----------:|-----:|
| conll2003 | 14041 |       3250 | 3453 |
## Dataset Creation

### Curation Rationale

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the source language producers?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Annotations

#### Annotation process

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the annotators?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Personal and Sensitive Information

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Discussion of Biases

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Other Known Limitations

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Additional Information

### Dataset Curators

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Licensing Information

From the [CoNLL2003 shared task](https://www.clips.uantwerpen.be/conll2003/ner/) page:

> The English data is a collection of news wire articles from the Reuters Corpus. The annotation has been done by people of the University of Antwerp. Because of copyright reasons we only make available the annotations. In order to build the complete data sets you will need access to the Reuters Corpus. It can be obtained for research purposes without any charge from NIST.

The copyrights are defined below, from the [Reuters Corpus page](https://trec.nist.gov/data/reuters/reuters.html):

> The stories in the Reuters Corpus are under the copyright of Reuters Ltd and/or Thomson Reuters, and their use is governed by the following agreements:
>
> [Organizational agreement](https://trec.nist.gov/data/reuters/org_appl_reuters_v4.html)
>
> This agreement must be signed by the person responsible for the data at your organization, and sent to NIST.
>
> [Individual agreement](https://trec.nist.gov/data/reuters/ind_appl_reuters_v4.html)
>
> This agreement must be signed by all researchers using the Reuters Corpus at your organization, and kept on file at your organization.
### Citation Information

```
@inproceedings{tjong-kim-sang-de-meulder-2003-introduction,
    title = "Introduction to the {C}o{NLL}-2003 Shared Task: Language-Independent Named Entity Recognition",
    author = "Tjong Kim Sang, Erik F. and
      De Meulder, Fien",
    booktitle = "Proceedings of the Seventh Conference on Natural Language Learning at {HLT}-{NAACL} 2003",
    year = "2003",
    url = "https://www.aclweb.org/anthology/W03-0419",
    pages = "142--147",
}
```

### Contributions

Thanks to [@jplu](https://github.com/jplu)