mohammedriza-rahman committed
Commit c86721b
1 Parent(s): d007179

Upload 4 files

Files changed (4)
  1. README.md +356 -0
  2. conll2003.py +244 -0
  3. dataset_infos.json +1 -0
  4. gitattributes +27 -0
README.md ADDED
@@ -0,0 +1,356 @@
+ ---
+ annotations_creators:
+ - crowdsourced
+ language_creators:
+ - found
+ language:
+ - en
+ license:
+ - other
+ multilinguality:
+ - monolingual
+ size_categories:
+ - 10K<n<100K
+ source_datasets:
+ - extended|other-reuters-corpus
+ task_categories:
+ - token-classification
+ task_ids:
+ - named-entity-recognition
+ - part-of-speech
+ paperswithcode_id: conll-2003
+ pretty_name: CoNLL-2003
+ dataset_info:
+   features:
+   - name: id
+     dtype: string
+   - name: tokens
+     sequence: string
+   - name: pos_tags
+     sequence:
+       class_label:
+         names:
+           '0': '"'
+           '1': ''''''
+           '2': '#'
+           '3': $
+           '4': (
+           '5': )
+           '6': ','
+           '7': .
+           '8': ':'
+           '9': '``'
+           '10': CC
+           '11': CD
+           '12': DT
+           '13': EX
+           '14': FW
+           '15': IN
+           '16': JJ
+           '17': JJR
+           '18': JJS
+           '19': LS
+           '20': MD
+           '21': NN
+           '22': NNP
+           '23': NNPS
+           '24': NNS
+           '25': NN|SYM
+           '26': PDT
+           '27': POS
+           '28': PRP
+           '29': PRP$
+           '30': RB
+           '31': RBR
+           '32': RBS
+           '33': RP
+           '34': SYM
+           '35': TO
+           '36': UH
+           '37': VB
+           '38': VBD
+           '39': VBG
+           '40': VBN
+           '41': VBP
+           '42': VBZ
+           '43': WDT
+           '44': WP
+           '45': WP$
+           '46': WRB
+   - name: chunk_tags
+     sequence:
+       class_label:
+         names:
+           '0': O
+           '1': B-ADJP
+           '2': I-ADJP
+           '3': B-ADVP
+           '4': I-ADVP
+           '5': B-CONJP
+           '6': I-CONJP
+           '7': B-INTJ
+           '8': I-INTJ
+           '9': B-LST
+           '10': I-LST
+           '11': B-NP
+           '12': I-NP
+           '13': B-PP
+           '14': I-PP
+           '15': B-PRT
+           '16': I-PRT
+           '17': B-SBAR
+           '18': I-SBAR
+           '19': B-UCP
+           '20': I-UCP
+           '21': B-VP
+           '22': I-VP
+   - name: ner_tags
+     sequence:
+       class_label:
+         names:
+           '0': O
+           '1': B-PER
+           '2': I-PER
+           '3': B-ORG
+           '4': I-ORG
+           '5': B-LOC
+           '6': I-LOC
+           '7': B-MISC
+           '8': I-MISC
+   config_name: conll2003
+   splits:
+   - name: train
+     num_bytes: 6931345
+     num_examples: 14041
+   - name: validation
+     num_bytes: 1739223
+     num_examples: 3250
+   - name: test
+     num_bytes: 1582054
+     num_examples: 3453
+   download_size: 982975
+   dataset_size: 10252622
+ train-eval-index:
+ - config: conll2003
+   task: token-classification
+   task_id: entity_extraction
+   splits:
+     train_split: train
+     eval_split: test
+   col_mapping:
+     tokens: tokens
+     ner_tags: tags
+   metrics:
+   - type: seqeval
+     name: seqeval
+ ---
+
+ # Dataset Card for "conll2003"
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+   - [Contributions](#contributions)
+
+ ## Dataset Description
+
+ - **Homepage:** [https://www.aclweb.org/anthology/W03-0419/](https://www.aclweb.org/anthology/W03-0419/)
+ - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ - **Size of downloaded dataset files:** 4.85 MB
+ - **Size of the generated dataset:** 10.26 MB
+ - **Total amount of disk used:** 15.11 MB
+
+ ### Dataset Summary
+
+ The shared task of CoNLL-2003 concerns language-independent named entity recognition. We will concentrate on
+ four types of named entities: persons, locations, organizations and names of miscellaneous entities that do
+ not belong to the previous three groups.
+
+ The CoNLL-2003 shared task data files contain four columns separated by a single space. Each word has been put on
+ a separate line and there is an empty line after each sentence. The first item on each line is a word, the second
+ a part-of-speech (POS) tag, the third a syntactic chunk tag and the fourth the named entity tag. The chunk tags
+ and the named entity tags have the format I-TYPE, which means that the word is inside a phrase of type TYPE. Only
+ if two phrases of the same type immediately follow each other will the first word of the second phrase carry the
+ tag B-TYPE, showing that it starts a new phrase. A word with tag O is not part of a phrase. Note that this dataset
+ uses the IOB2 tagging scheme, whereas the original dataset uses IOB1.
+
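+ As an illustration, a sentence in this column format might look like the following (a sketch adapted from the well-known example on the shared task page, with the tags written in the IOB2 scheme used here):
+
+ ```
+ U.N. NNP B-NP B-ORG
+ official NN I-NP O
+ Ekeus NNP I-NP B-PER
+ heads VBZ B-VP O
+ for IN B-PP O
+ Baghdad NNP B-NP B-LOC
+ . . O O
+ ```
+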
+ For more details, see https://www.clips.uantwerpen.be/conll2003/ner/ and https://www.aclweb.org/anthology/W03-0419
+
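+ As a quick, minimal sketch using the `datasets` library (the identifier below assumes the canonical `conll2003` dataset id; for a copy such as this repository, substitute its full repo id):
+
+ ```python
+ from datasets import load_dataset
+
+ dataset = load_dataset("conll2003")
+ example = dataset["train"][0]
+ print(example["tokens"][:5])    # first five tokens of the first sentence
+ print(example["ner_tags"][:5])  # corresponding NER label ids (integers)
+ ```
+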
+ ### Supported Tasks and Leaderboards
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
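+
+ That said, the `train-eval-index` in the YAML header above configures this dataset for token classification evaluated with `seqeval`. A minimal sketch of computing seqeval metrics on IOB2-tagged sequences (assuming the `evaluate` library is installed):
+
+ ```python
+ import evaluate
+
+ seqeval = evaluate.load("seqeval")
+ # toy prediction/reference pair in IOB2 format, for illustration only
+ predictions = [["O", "B-PER", "I-PER", "O"]]
+ references = [["O", "B-PER", "I-PER", "O"]]
+ results = seqeval.compute(predictions=predictions, references=references)
+ print(results["overall_f1"])
+ ```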
+
+ ### Languages
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ #### conll2003
+
+ - **Size of downloaded dataset files:** 4.85 MB
+ - **Size of the generated dataset:** 10.26 MB
+ - **Total amount of disk used:** 15.11 MB
+
+ An example from the 'train' split looks as follows.
+
+ ```
+ {
+     "chunk_tags": [11, 12, 12, 21, 13, 11, 11, 21, 13, 11, 12, 13, 11, 21, 22, 11, 12, 17, 11, 21, 17, 11, 12, 12, 21, 22, 22, 13, 11, 0],
+     "id": "0",
+     "ner_tags": [0, 3, 4, 0, 0, 0, 0, 0, 0, 7, 0, 0, 0, 0, 0, 7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
+     "pos_tags": [12, 22, 22, 38, 15, 22, 28, 38, 15, 16, 21, 35, 24, 35, 37, 16, 21, 15, 24, 41, 15, 16, 21, 21, 20, 37, 40, 35, 21, 7],
+     "tokens": ["The", "European", "Commission", "said", "on", "Thursday", "it", "disagreed", "with", "German", "advice", "to", "consumers", "to", "shun", "British", "lamb", "until", "scientists", "determine", "whether", "mad", "cow", "disease", "can", "be", "transmitted", "to", "sheep", "."]
+ }
+ ```
+
+ The original data files contain `-DOCSTART-` lines that act as boundaries between documents; these lines are filtered out in this implementation.
+
+ ### Data Fields
+
+ The data fields are the same across all splits.
+
+ #### conll2003
+ - `id`: a `string` feature.
+ - `tokens`: a `list` of `string` features.
+ - `pos_tags`: a `list` of classification labels (`int`). Full tagset with indices:
+
+ ```python
+ {'"': 0, "''": 1, '#': 2, '$': 3, '(': 4, ')': 5, ',': 6, '.': 7, ':': 8, '``': 9, 'CC': 10, 'CD': 11, 'DT': 12,
+ 'EX': 13, 'FW': 14, 'IN': 15, 'JJ': 16, 'JJR': 17, 'JJS': 18, 'LS': 19, 'MD': 20, 'NN': 21, 'NNP': 22, 'NNPS': 23,
+ 'NNS': 24, 'NN|SYM': 25, 'PDT': 26, 'POS': 27, 'PRP': 28, 'PRP$': 29, 'RB': 30, 'RBR': 31, 'RBS': 32, 'RP': 33,
+ 'SYM': 34, 'TO': 35, 'UH': 36, 'VB': 37, 'VBD': 38, 'VBG': 39, 'VBN': 40, 'VBP': 41, 'VBZ': 42, 'WDT': 43,
+ 'WP': 44, 'WP$': 45, 'WRB': 46}
+ ```
+
+ - `chunk_tags`: a `list` of classification labels (`int`). Full tagset with indices:
+
+ ```python
+ {'O': 0, 'B-ADJP': 1, 'I-ADJP': 2, 'B-ADVP': 3, 'I-ADVP': 4, 'B-CONJP': 5, 'I-CONJP': 6, 'B-INTJ': 7, 'I-INTJ': 8,
+ 'B-LST': 9, 'I-LST': 10, 'B-NP': 11, 'I-NP': 12, 'B-PP': 13, 'I-PP': 14, 'B-PRT': 15, 'I-PRT': 16, 'B-SBAR': 17,
+ 'I-SBAR': 18, 'B-UCP': 19, 'I-UCP': 20, 'B-VP': 21, 'I-VP': 22}
+ ```
+
+ - `ner_tags`: a `list` of classification labels (`int`). Full tagset with indices:
+
+ ```python
+ {'O': 0, 'B-PER': 1, 'I-PER': 2, 'B-ORG': 3, 'I-ORG': 4, 'B-LOC': 5, 'I-LOC': 6, 'B-MISC': 7, 'I-MISC': 8}
+ ```
+
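+ Because these columns are `ClassLabel` features, the integer ids can be mapped back to tag names. A minimal sketch:
+
+ ```python
+ from datasets import load_dataset
+
+ dataset = load_dataset("conll2003")
+ ner_feature = dataset["train"].features["ner_tags"].feature
+ example = dataset["train"][0]
+ # int2str turns label ids (e.g. 3) back into tag names (e.g. "B-ORG")
+ tags = [ner_feature.int2str(i) for i in example["ner_tags"]]
+ print(list(zip(example["tokens"], tags))[:3])
+ ```
+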
+ ### Data Splits
+
+ | name      | train | validation | test |
+ |-----------|------:|-----------:|-----:|
+ | conll2003 | 14041 |       3250 | 3453 |
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ #### Who are the source language producers?
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### Annotations
+
+ #### Annotation process
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ #### Who are the annotators?
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### Discussion of Biases
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### Other Known Limitations
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### Licensing Information
+
+ From the [CoNLL2003 shared task](https://www.clips.uantwerpen.be/conll2003/ner/) page:
+
+ > The English data is a collection of news wire articles from the Reuters Corpus. The annotation has been done by people of the University of Antwerp. Because of copyright reasons we only make available the annotations. In order to build the complete data sets you will need access to the Reuters Corpus. It can be obtained for research purposes without any charge from NIST.
+
+ The copyrights are defined below, from the [Reuters Corpus page](https://trec.nist.gov/data/reuters/reuters.html):
+
+ > The stories in the Reuters Corpus are under the copyright of Reuters Ltd and/or Thomson Reuters, and their use is governed by the following agreements:
+ >
+ > [Organizational agreement](https://trec.nist.gov/data/reuters/org_appl_reuters_v4.html)
+ >
+ > This agreement must be signed by the person responsible for the data at your organization, and sent to NIST.
+ >
+ > [Individual agreement](https://trec.nist.gov/data/reuters/ind_appl_reuters_v4.html)
+ >
+ > This agreement must be signed by all researchers using the Reuters Corpus at your organization, and kept on file at your organization.
+
+ ### Citation Information
+
+ ```
+ @inproceedings{tjong-kim-sang-de-meulder-2003-introduction,
+     title = "Introduction to the {C}o{NLL}-2003 Shared Task: Language-Independent Named Entity Recognition",
+     author = "Tjong Kim Sang, Erik F. and
+       De Meulder, Fien",
+     booktitle = "Proceedings of the Seventh Conference on Natural Language Learning at {HLT}-{NAACL} 2003",
+     year = "2003",
+     url = "https://www.aclweb.org/anthology/W03-0419",
+     pages = "142--147",
+ }
+ ```
+
+ ### Contributions
+
+ Thanks to [@jplu](https://github.com/jplu), [@vblagoje](https://github.com/vblagoje), and [@lhoestq](https://github.com/lhoestq) for adding this dataset.
conll2003.py ADDED
@@ -0,0 +1,244 @@
+ # coding=utf-8
+ # Copyright 2020 HuggingFace Datasets Authors.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ # Lint as: python3
+ """Introduction to the CoNLL-2003 Shared Task: Language-Independent Named Entity Recognition"""
+
+ import os
+
+ import datasets
+
+
+ logger = datasets.logging.get_logger(__name__)
+
+
+ _CITATION = """\
+ @inproceedings{tjong-kim-sang-de-meulder-2003-introduction,
+     title = "Introduction to the {C}o{NLL}-2003 Shared Task: Language-Independent Named Entity Recognition",
+     author = "Tjong Kim Sang, Erik F. and
+       De Meulder, Fien",
+     booktitle = "Proceedings of the Seventh Conference on Natural Language Learning at {HLT}-{NAACL} 2003",
+     year = "2003",
+     url = "https://www.aclweb.org/anthology/W03-0419",
+     pages = "142--147",
+ }
+ """
+
+ _DESCRIPTION = """\
+ The shared task of CoNLL-2003 concerns language-independent named entity recognition. We will concentrate on
+ four types of named entities: persons, locations, organizations and names of miscellaneous entities that do
+ not belong to the previous three groups.
+
+ The CoNLL-2003 shared task data files contain four columns separated by a single space. Each word has been put on
+ a separate line and there is an empty line after each sentence. The first item on each line is a word, the second
+ a part-of-speech (POS) tag, the third a syntactic chunk tag and the fourth the named entity tag. The chunk tags
+ and the named entity tags have the format I-TYPE which means that the word is inside a phrase of type TYPE. Only
+ if two phrases of the same type immediately follow each other, the first word of the second phrase will have tag
+ B-TYPE to show that it starts a new phrase. A word with tag O is not part of a phrase. Note the dataset uses IOB2
+ tagging scheme, whereas the original dataset uses IOB1.
+
+ For more details see https://www.clips.uantwerpen.be/conll2003/ner/ and https://www.aclweb.org/anthology/W03-0419
+ """
+
+ _URL = "https://data.deepai.org/conll2003.zip"
+ _TRAINING_FILE = "train.txt"
+ _DEV_FILE = "valid.txt"
+ _TEST_FILE = "test.txt"
+
+
+ class Conll2003Config(datasets.BuilderConfig):
+     """BuilderConfig for Conll2003"""
+
+     def __init__(self, **kwargs):
+         """BuilderConfig for Conll2003.
+
+         Args:
+             **kwargs: keyword arguments forwarded to super.
+         """
+         super(Conll2003Config, self).__init__(**kwargs)
+
+
+ class Conll2003(datasets.GeneratorBasedBuilder):
+     """Conll2003 dataset."""
+
+     BUILDER_CONFIGS = [
+         Conll2003Config(name="conll2003", version=datasets.Version("1.0.0"), description="Conll2003 dataset"),
+     ]
+
+     def _info(self):
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=datasets.Features(
+                 {
+                     "id": datasets.Value("string"),
+                     "tokens": datasets.Sequence(datasets.Value("string")),
+                     "pos_tags": datasets.Sequence(
+                         datasets.features.ClassLabel(
+                             names=[
+                                 '"',
+                                 "''",
+                                 "#",
+                                 "$",
+                                 "(",
+                                 ")",
+                                 ",",
+                                 ".",
+                                 ":",
+                                 "``",
+                                 "CC",
+                                 "CD",
+                                 "DT",
+                                 "EX",
+                                 "FW",
+                                 "IN",
+                                 "JJ",
+                                 "JJR",
+                                 "JJS",
+                                 "LS",
+                                 "MD",
+                                 "NN",
+                                 "NNP",
+                                 "NNPS",
+                                 "NNS",
+                                 "NN|SYM",
+                                 "PDT",
+                                 "POS",
+                                 "PRP",
+                                 "PRP$",
+                                 "RB",
+                                 "RBR",
+                                 "RBS",
+                                 "RP",
+                                 "SYM",
+                                 "TO",
+                                 "UH",
+                                 "VB",
+                                 "VBD",
+                                 "VBG",
+                                 "VBN",
+                                 "VBP",
+                                 "VBZ",
+                                 "WDT",
+                                 "WP",
+                                 "WP$",
+                                 "WRB",
+                             ]
+                         )
+                     ),
+                     "chunk_tags": datasets.Sequence(
+                         datasets.features.ClassLabel(
+                             names=[
+                                 "O",
+                                 "B-ADJP",
+                                 "I-ADJP",
+                                 "B-ADVP",
+                                 "I-ADVP",
+                                 "B-CONJP",
+                                 "I-CONJP",
+                                 "B-INTJ",
+                                 "I-INTJ",
+                                 "B-LST",
+                                 "I-LST",
+                                 "B-NP",
+                                 "I-NP",
+                                 "B-PP",
+                                 "I-PP",
+                                 "B-PRT",
+                                 "I-PRT",
+                                 "B-SBAR",
+                                 "I-SBAR",
+                                 "B-UCP",
+                                 "I-UCP",
+                                 "B-VP",
+                                 "I-VP",
+                             ]
+                         )
+                     ),
+                     "ner_tags": datasets.Sequence(
+                         datasets.features.ClassLabel(
+                             names=[
+                                 "O",
+                                 "B-PER",
+                                 "I-PER",
+                                 "B-ORG",
+                                 "I-ORG",
+                                 "B-LOC",
+                                 "I-LOC",
+                                 "B-MISC",
+                                 "I-MISC",
+                             ]
+                         )
+                     ),
+                 }
+             ),
+             supervised_keys=None,
+             homepage="https://www.aclweb.org/anthology/W03-0419/",
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         """Returns SplitGenerators."""
+         downloaded_file = dl_manager.download_and_extract(_URL)
+         data_files = {
+             "train": os.path.join(downloaded_file, _TRAINING_FILE),
+             "dev": os.path.join(downloaded_file, _DEV_FILE),
+             "test": os.path.join(downloaded_file, _TEST_FILE),
+         }
+
+         return [
+             datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": data_files["train"]}),
+             datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs={"filepath": data_files["dev"]}),
+             datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"filepath": data_files["test"]}),
+         ]
+
+     def _generate_examples(self, filepath):
+         logger.info("⏳ Generating examples from = %s", filepath)
+         with open(filepath, encoding="utf-8") as f:
+             guid = 0
+             tokens = []
+             pos_tags = []
+             chunk_tags = []
+             ner_tags = []
+             for line in f:
+                 # blank lines mark sentence boundaries; -DOCSTART- lines mark
+                 # document boundaries and are not emitted as examples
+                 if line.startswith("-DOCSTART-") or line == "" or line == "\n":
+                     if tokens:
+                         yield guid, {
+                             "id": str(guid),
+                             "tokens": tokens,
+                             "pos_tags": pos_tags,
+                             "chunk_tags": chunk_tags,
+                             "ner_tags": ner_tags,
+                         }
+                         guid += 1
+                         tokens = []
+                         pos_tags = []
+                         chunk_tags = []
+                         ner_tags = []
+                 else:
+                     # conll2003 tokens are space separated: word POS chunk NER
+                     splits = line.split(" ")
+                     tokens.append(splits[0])
+                     pos_tags.append(splits[1])
+                     chunk_tags.append(splits[2])
+                     ner_tags.append(splits[3].rstrip())
+             # last example
+             if tokens:
+                 yield guid, {
+                     "id": str(guid),
+                     "tokens": tokens,
+                     "pos_tags": pos_tags,
+                     "chunk_tags": chunk_tags,
+                     "ner_tags": ner_tags,
+                 }
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"conll2003": {"description": "The shared task of CoNLL-2003 concerns language-independent named entity recognition. We will concentrate on\nfour types of named entities: persons, locations, organizations and names of miscellaneous entities that do\nnot belong to the previous three groups.\n\nThe CoNLL-2003 shared task data files contain four columns separated by a single space. Each word has been put on\na separate line and there is an empty line after each sentence. The first item on each line is a word, the second\na part-of-speech (POS) tag, the third a syntactic chunk tag and the fourth the named entity tag. The chunk tags\nand the named entity tags have the format I-TYPE which means that the word is inside a phrase of type TYPE. Only\nif two phrases of the same type immediately follow each other, the first word of the second phrase will have tag\nB-TYPE to show that it starts a new phrase. A word with tag O is not part of a phrase. Note the dataset uses IOB2\ntagging scheme, whereas the original dataset uses IOB1.\n\nFor more details see https://www.clips.uantwerpen.be/conll2003/ner/ and https://www.aclweb.org/anthology/W03-0419\n", "citation": "@inproceedings{tjong-kim-sang-de-meulder-2003-introduction,\n title = \"Introduction to the {C}o{NLL}-2003 Shared Task: Language-Independent Named Entity Recognition\",\n author = \"Tjong Kim Sang, Erik F. and\n De Meulder, Fien\",\n booktitle = \"Proceedings of the Seventh Conference on Natural Language Learning at {HLT}-{NAACL} 2003\",\n year = \"2003\",\n url = \"https://www.aclweb.org/anthology/W03-0419\",\n pages = \"142--147\",\n}\n", "homepage": "https://www.aclweb.org/anthology/W03-0419/", "license": "", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "tokens": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "pos_tags": {"feature": {"num_classes": 47, "names": ["\"", "''", "#", "$", "(", ")", ",", ".", ":", "``", "CC", "CD", "DT", "EX", "FW", "IN", "JJ", "JJR", "JJS", "LS", "MD", "NN", "NNP", "NNPS", "NNS", "NN|SYM", "PDT", "POS", "PRP", "PRP$", "RB", "RBR", "RBS", "RP", "SYM", "TO", "UH", "VB", "VBD", "VBG", "VBN", "VBP", "VBZ", "WDT", "WP", "WP$", "WRB"], "id": null, "_type": "ClassLabel"}, "length": -1, "id": null, "_type": "Sequence"}, "chunk_tags": {"feature": {"num_classes": 23, "names": ["O", "B-ADJP", "I-ADJP", "B-ADVP", "I-ADVP", "B-CONJP", "I-CONJP", "B-INTJ", "I-INTJ", "B-LST", "I-LST", "B-NP", "I-NP", "B-PP", "I-PP", "B-PRT", "I-PRT", "B-SBAR", "I-SBAR", "B-UCP", "I-UCP", "B-VP", "I-VP"], "id": null, "_type": "ClassLabel"}, "length": -1, "id": null, "_type": "Sequence"}, "ner_tags": {"feature": {"num_classes": 9, "names": ["O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC", "B-MISC", "I-MISC"], "id": null, "_type": "ClassLabel"}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "conll2003", "config_name": "conll2003", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 6931345, "num_examples": 14041, "dataset_name": "conll2003"}, "validation": {"name": "validation", "num_bytes": 1739223, "num_examples": 3250, "dataset_name": "conll2003"}, "test": {"name": "test", "num_bytes": 1582054, "num_examples": 3453, "dataset_name": "conll2003"}}, "download_checksums": {"https://data.deepai.org/conll2003.zip": {"num_bytes": 982975, "checksum": "96a104d174ddae7558bab603f19382c5fe02ff1da5c077a7f3ce2ced1578a2c3"}}, "download_size": 982975, "post_processing_size": null, "dataset_size": 10252622, "size_in_bytes": 11235597}}
gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text