Datasets · Modalities: Text · Languages: Spanish · Libraries: Datasets

parquet-converter committed 80c9079 · 1 parent: 51b8918

Update parquet files
.gitattributes DELETED
@@ -1,53 +0,0 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.lz4 filter=lfs diff=lfs merge=lfs -text
- *.mlmodel filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.npy filter=lfs diff=lfs merge=lfs -text
- *.npz filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pickle filter=lfs diff=lfs merge=lfs -text
- *.pkl filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- *.safetensors filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.wasm filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zst filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
- # Audio files - uncompressed
- *.pcm filter=lfs diff=lfs merge=lfs -text
- *.sam filter=lfs diff=lfs merge=lfs -text
- *.raw filter=lfs diff=lfs merge=lfs -text
- # Audio files - compressed
- *.aac filter=lfs diff=lfs merge=lfs -text
- *.flac filter=lfs diff=lfs merge=lfs -text
- *.mp3 filter=lfs diff=lfs merge=lfs -text
- *.ogg filter=lfs diff=lfs merge=lfs -text
- *.wav filter=lfs diff=lfs merge=lfs -text
- # Image files - uncompressed
- *.bmp filter=lfs diff=lfs merge=lfs -text
- *.gif filter=lfs diff=lfs merge=lfs -text
- *.png filter=lfs diff=lfs merge=lfs -text
- *.tiff filter=lfs diff=lfs merge=lfs -text
- # Image files - compressed
- *.jpg filter=lfs diff=lfs merge=lfs -text
- *.jpeg filter=lfs diff=lfs merge=lfs -text
- *.webp filter=lfs diff=lfs merge=lfs -text
CoNLL-NERC-es.py DELETED
@@ -1,224 +0,0 @@
- # coding=utf-8
- # Copyright 2020 HuggingFace Datasets Authors.
- #
- # Licensed under the Apache License, Version 2.0 (the "License");
- # you may not use this file except in compliance with the License.
- # You may obtain a copy of the License at
- #
- #     http://www.apache.org/licenses/LICENSE-2.0
- #
- # Unless required by applicable law or agreed to in writing, software
- # distributed under the License is distributed on an "AS IS" BASIS,
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- # See the License for the specific language governing permissions and
- # limitations under the License.
-
- # Lint as: python3
- """Introduction to the CoNLL-2002 Shared Task: Language-Independent Named Entity Recognition"""
-
- import datasets
-
-
- logger = datasets.logging.get_logger(__name__)
-
-
- _CITATION = """\
- @inproceedings{tjong-kim-sang-2002-introduction,
-     title = "Introduction to the {C}o{NLL}-2002 Shared Task: Language-Independent Named Entity Recognition",
-     author = "Tjong Kim Sang, Erik F.",
-     booktitle = "{COLING}-02: The 6th Conference on Natural Language Learning 2002 ({C}o{NLL}-2002)",
-     year = "2002",
-     url = "https://www.aclweb.org/anthology/W02-2024",
- }
- """
-
- _DESCRIPTION = """\
- Named entities are phrases that contain the names of persons, organizations, locations, times and quantities.
-
- Example:
- [PER Wolff] , currently a journalist in [LOC Argentina] , played with [PER Del Bosque] in the final years of the seventies in [ORG Real Madrid] .
-
- The shared task of CoNLL-2002 concerns language-independent named entity recognition.
- We will concentrate on four types of named entities: persons, locations, organizations and names of miscellaneous entities that do not belong to the previous three groups.
- The participants of the shared task will be offered training and test data for at least two languages.
- They will use the data for developing a named-entity recognition system that includes a machine learning component.
- Information sources other than the training data may be used in this shared task.
- We are especially interested in methods that can use additional unannotated data for improving their performance (for example co-training).
-
- The train/validation/test sets are available in Spanish and Dutch.
-
- For more details see https://www.clips.uantwerpen.be/conll2002/ner/ and https://www.aclweb.org/anthology/W02-2024/
- """
-
- _URL = "https://www.cs.upc.edu/~nlp/tools/nerc/"
- _TRAINING_FILE = "esp.train.gz"
- _DEV_FILE = "esp.testa.gz"
- _TEST_FILE = "esp.testb.gz"
-
-
- class Conll2002Config(datasets.BuilderConfig):
-     """BuilderConfig for Conll2002"""
-
-     def __init__(self, **kwargs):
-         """BuilderConfig for Conll2002.
-
-         Args:
-             **kwargs: keyword arguments forwarded to super.
-         """
-         super(Conll2002Config, self).__init__(**kwargs)
-
-
- class Conll2002(datasets.GeneratorBasedBuilder):
-     """Conll2002 dataset."""
-
-     BUILDER_CONFIGS = [
-         Conll2002Config(name="es", version=datasets.Version("1.0.0"), description="Conll2002 Spanish dataset"),
-     ]
-
-     def _info(self):
-         return datasets.DatasetInfo(
-             description=_DESCRIPTION,
-             features=datasets.Features(
-                 {
-                     "id": datasets.Value("string"),
-                     "tokens": datasets.Sequence(datasets.Value("string")),
-                     "pos_tags": datasets.Sequence(
-                         datasets.features.ClassLabel(
-                             names=[
-                                 "AO", "AQ", "CC", "CS", "DA", "DE", "DD", "DI", "DN", "DP",
-                                 "DT", "Faa", "Fat", "Fc", "Fd", "Fe", "Fg", "Fh", "Fia", "Fit",
-                                 "Fp", "Fpa", "Fpt", "Fs", "Ft", "Fx", "Fz", "I", "NC", "NP",
-                                 "P0", "PD", "PI", "PN", "PP", "PR", "PT", "PX", "RG", "RN",
-                                 "SP", "VAI", "VAM", "VAN", "VAP", "VAS", "VMG", "VMI", "VMM", "VMN",
-                                 "VMP", "VMS", "VSG", "VSI", "VSM", "VSN", "VSP", "VSS", "Y", "Z",
-                             ]
-                         )
-                         if self.config.name == "es"
-                         else datasets.features.ClassLabel(
-                             names=["Adj", "Adv", "Art", "Conj", "Int", "Misc", "N", "Num", "Prep", "Pron", "Punc", "V"]
-                         )
-                     ),
-                     "ner_tags": datasets.Sequence(
-                         datasets.features.ClassLabel(
-                             names=["O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC", "B-MISC", "I-MISC"]
-                         )
-                     ),
-                 }
-             ),
-             supervised_keys=None,
-             homepage="https://www.aclweb.org/anthology/W02-2024/",
-             citation=_CITATION,
-         )
-
-     def _split_generators(self, dl_manager):
-         """Returns SplitGenerators."""
-         urls_to_download = {
-             "train": f"{_URL}{_TRAINING_FILE}",
-             "dev": f"{_URL}{_DEV_FILE}",
-             "test": f"{_URL}{_TEST_FILE}",
-         }
-         downloaded_files = dl_manager.download_and_extract(urls_to_download)
-
-         return [
-             datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": downloaded_files["train"]}),
-             datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs={"filepath": downloaded_files["dev"]}),
-             datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"filepath": downloaded_files["test"]}),
-         ]
-
-     def _generate_examples(self, filepath):
-         logger.info("⏳ Generating examples from = %s", filepath)
-         with open(filepath, encoding="latin-1") as f:
-             guid = 0
-             tokens = []
-             pos_tags = []
-             ner_tags = []
-             for line in f:
-                 if line.startswith("-DOCSTART-") or line == "" or line == "\n":
-                     if tokens:
-                         yield guid, {
-                             "id": str(guid),
-                             "tokens": tokens,
-                             "pos_tags": pos_tags,
-                             "ner_tags": ner_tags,
-                         }
-                         guid += 1
-                         tokens = []
-                         pos_tags = []
-                         ner_tags = []
-                 else:
-                     # conll2002 tokens are space separated
-                     splits = line.split(" ")
-                     tokens.append(splits[0])
-                     pos_tags.append(splits[1])
-                     ner_tags.append(splits[2].rstrip())
-             # last example
-             yield guid, {
-                 "id": str(guid),
-                 "tokens": tokens,
-                 "pos_tags": pos_tags,
-                 "ner_tags": ner_tags,
-             }
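The heart of the deleted loader is the `_generate_examples` loop: accumulate `token pos ner` triples until a blank line or `-DOCSTART-` marker closes a sentence. The same logic can be sketched as a standalone function (the name `parse_conll` and the dict layout are illustrative, not part of the script):

```python
def parse_conll(lines):
    """Group whitespace-separated `token pos ner` lines into sentences,
    splitting on blank lines and -DOCSTART- markers (mirrors the
    _generate_examples loop in the deleted loading script)."""
    sentences, tokens, pos_tags, ner_tags = [], [], [], []
    for line in lines:
        if line.startswith("-DOCSTART-") or line.strip() == "":
            if tokens:  # close the current sentence, if any
                sentences.append({"tokens": tokens, "pos_tags": pos_tags, "ner_tags": ner_tags})
                tokens, pos_tags, ner_tags = [], [], []
        else:
            token, pos, ner = line.rstrip("\n").split(" ")
            tokens.append(token)
            pos_tags.append(pos)
            ner_tags.append(ner)
    if tokens:  # flush the last sentence (no trailing blank line needed)
        sentences.append({"tokens": tokens, "pos_tags": pos_tags, "ner_tags": ner_tags})
    return sentences

sample = ["El DA O", "Abogado NC B-PER", "", "hoy RG O"]
sents = parse_conll(sample)
print(len(sents), sents[0]["ner_tags"])
```

The trailing flush matters: `esp.testb` may not end with a blank line, which is why the original script also yields one last example after the loop.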
README.md DELETED
@@ -1,196 +0,0 @@
1
- ---
2
- YAML tags:
3
-
4
- annotations_creators:
5
- - expert-generated
6
- language:
7
- - es
8
- language_creators:
9
- - found
10
- multilinguality:
11
- - monolingual
12
- pretty_name: CoNLL-NERC-es
13
- size_categories: []
14
- source_datasets: []
15
- tags: []
16
- task_categories:
17
- - token-classification
18
- task_ids:
19
- - part-of-speech
20
-
21
- ---
22
-
23
-
24
- # CoNLL-NERC-es
25
-
26
- ## Table of Contents
27
- - [Table of Contents](#table-of-contents)
28
- - [Dataset Description](#dataset-description)
29
- - [Dataset Summary](#dataset-summary)
30
- - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
31
- - [Languages](#languages)
32
- - [Dataset Structure](#dataset-structure)
33
- - [Data Instances](#data-instances)
34
- - [Data Fields](#data-fields)
35
- - [Data Splits](#data-splits)
36
- - [Dataset Creation](#dataset-creation)
37
- - [Curation Rationale](#curation-rationale)
38
- - [Source Data](#source-data)
39
- - [Annotations](#annotations)
40
- - [Personal and Sensitive Information](#personal-and-sensitive-information)
41
- - [Considerations for Using the Data](#considerations-for-using-the-data)
42
- - [Social Impact of Dataset](#social-impact-of-dataset)
43
- - [Discussion of Biases](#discussion-of-biases)
44
- - [Other Known Limitations](#other-known-limitations)
45
- - [Additional Information](#additional-information)
46
- - [Dataset Curators](#dataset-curators)
47
- - [Licensing Information](#licensing-information)
48
- - [Citation Information](#citation-information)
49
- - [Contributions](#contributions)
50
-
51
-
52
- ## Dataset Description
53
- - **Website:** https://www.cs.upc.edu/~nlp/tools/nerc/nerc.html
54
- - **Point of Contact:** [Xavier Carreras](carreras@lsi.upc.es)
55
-
56
-
57
- ### Dataset Summary
58
-
59
- CoNLL-NERC is the Spanish dataset of the CoNLL-2002 Shared Task [(Tjong Kim Sang, 2002)](https://aclanthology.org/W02-2024.pdf). The dataset is annotated with four types of named entities --persons, locations, organizations, and other miscellaneous entities-- formatted in the standard Beginning-Inside-Outside (BIO) format. The corpus consists of 8,324 train sentences with 19,400 named entities, 1,916 development sentences with 4,568 named entities, and 1,518 test sentences with 3,644 named entities.
60
-
61
- We use this corpus as part of the EvalEs Spanish language benchmark.
62
-
63
- ### Supported Tasks and Leaderboards
64
-
65
- Named Entity Recognition and Classification
66
-
67
- ### Languages
68
-
69
- The dataset is in Spanish (`es-ES`)
70
-
71
- ## Dataset Structure
72
-
73
- ### Data Instances
74
-
75
- <pre>
76
- El DA O
77
- Abogado NC B-PER
78
- General AQ I-PER
79
- del SP I-PER
80
- Estado NC I-PER
81
- , Fc O
82
- Daryl VMI B-PER
83
- Williams NC I-PER
84
- , Fc O
85
- subrayó VMI O
86
- hoy RG O
87
- la DA O
88
- necesidad NC O
89
- de SP O
90
- tomar VMN O
91
- medidas NC O
92
- para SP O
93
- proteger VMN O
94
- al SP O
95
- sistema NC O
96
- judicial AQ O
97
- australiano AQ O
98
- frente RG O
99
- a SP O
100
- una DI O
101
- página NC O
102
- de SP O
103
- internet NC O
104
- que PR O
105
- imposibilita VMI O
106
- el DA O
107
- cumplimiento NC O
108
- de SP O
109
- los DA O
110
- principios NC O
111
- básicos AQ O
112
- de SP O
113
- la DA O
114
- Ley NC B-MISC
115
- . Fp O
116
- </pre>
117
-
118
- ### Data Fields
119
-
120
- Every file has two columns, with the word form or punctuation symbol in the first one and the corresponding IOB tag in the second one. The different files are separated by an empty line.
121
-
122
- ### Data Splits
123
-
124
- - esp.train: 273037 lines
125
- - esp.testa: 54837 lines (used as dev)
126
- - esp.testb: 53049 lines (used as test)
127
-
128
- ## Dataset Creation
129
-
130
- ### Curation Rationale
131
- [N/A]
132
-
133
- ### Source Data
134
-
135
- The data is a collection of news wire articles made available by the Spanish EFE News Agency. The articles are from May 2000.
136
-
137
- #### Initial Data Collection and Normalization
138
-
139
- For more information visit the paper from the CoNLL-2002 Shared Task [(Tjong Kim Sang, 2002)](https://aclanthology.org/W02-2024.pdf).
140
-
141
- #### Who are the source language producers?
142
-
143
- For more information visit the paper from the CoNLL-2002 Shared Task [(Tjong Kim Sang, 2002)](https://aclanthology.org/W02-2024.pdf).
144
-
145
- ### Annotations
146
-
147
- #### Annotation process
148
-
149
- For more information visit the paper from the CoNLL-2002 Shared Task [(Tjong Kim Sang, 2002)](https://aclanthology.org/W02-2024.pdf).
150
-
151
- #### Who are the annotators?
152
-
153
- The annotation was carried out by the TALP Research Center2 of the Technical University of Catalonia (UPC) and the Center of Language and Computation (CLiC3 ) of the University of Barcelona (UB), and funded by the European Commission through the NAMIC pro ject (IST-1999-12392).
154
-
155
- For more information visit the paper from the CoNLL-2002 Shared Task [(Tjong Kim Sang, 2002)](https://aclanthology.org/W02-2024.pdf).
156
-
157
- ### Personal and Sensitive Information
158
-
159
- [N/A]
160
-
161
- ## Considerations for Using the Data
162
-
163
- ### Social Impact of Dataset
164
-
165
- This dataset contributes to the development of language models in Spanish.
166
-
167
- ### Discussion of Biases
168
-
169
- [N/A]
170
-
171
- ### Other Known Limitations
172
-
173
- [N/A]
174
-
175
-
176
- ## Additional Information
177
-
178
-
179
- ### Dataset curators
180
-
181
-
182
- ### Licensing information
183
-
184
-
185
- ### Citation Information
186
-
187
- The following paper must be cited when using this corpus:
188
-
189
- Erik F. Tjong Kim Sang. 2002. Introduction to the CoNLL-2002 Shared Task: Language-Independent Named Entity Recognition. In COLING-02: The 6th Conference on Natural Language Learning 2002 (CoNLL-2002).
190
-
191
-
192
- ### Contributions
193
-
194
- [N/A]
195
-
196
-
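The BIO tags shown in the README's data instance decode straightforwardly into entity spans: a `B-` tag opens a span, following `I-` tags extend it, and anything else closes it. A minimal sketch (the `bio_spans` helper is illustrative, not part of the dataset; it ignores malformed `I-` tags that switch labels mid-span):

```python
def bio_spans(tokens, tags):
    """Collect (entity_text, label) pairs from parallel token/BIO-tag lists."""
    spans, current, label = [], [], None
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current:  # a new B- tag closes any open span
                spans.append((" ".join(current), label))
            current, label = [token], tag[2:]
        elif tag.startswith("I-") and current:
            current.append(token)  # extend the open span
        else:
            if current:  # O tag (or stray I-) closes the open span
                spans.append((" ".join(current), label))
            current, label = [], None
    if current:  # flush a span that runs to the end of the sentence
        spans.append((" ".join(current), label))
    return spans

tokens = ["El", "Abogado", "General", "del", "Estado", ",", "Daryl", "Williams"]
tags = ["O", "B-PER", "I-PER", "I-PER", "I-PER", "O", "B-PER", "I-PER"]
print(bio_spans(tokens, tags))
# [('Abogado General del Estado', 'PER'), ('Daryl Williams', 'PER')]
```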
es/co_nll-nerc-es-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c95db008623aeb3ff0f4ff2c727164675d91298cff33c0050f7ca33d7a187360
+ size 237448
es/co_nll-nerc-es-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:96ae8fcb06648060feeb687eb169c1b52039200a9ad8b86a2fcf506477e40299
+ size 1207226
es/co_nll-nerc-es-validation.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1b72e23031f01214e60caed96d1ec80cfd1e144535fa4bd8f43eabc14a9e6310
+ size 250793