system (HF staff) committed
Commit 2e9b81d (0 parents)

Update files from the datasets library (from 1.2.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.0

.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,222 @@
+ ---
+ annotations_creators:
+   raw_ca:
+   - no-annotation
+   raw_en:
+   - no-annotation
+   raw_es:
+   - no-annotation
+   tagged_ca:
+   - machine-generated
+   tagged_en:
+   - machine-generated
+   tagged_es:
+   - machine-generated
+ language_creators:
+ - found
+ languages:
+   raw_ca:
+   - ca
+   raw_en:
+   - en
+   raw_es:
+   - es
+   tagged_ca:
+   - ca
+   tagged_en:
+   - en
+   tagged_es:
+   - es
+ licenses:
+ - gfdl-1-1
+ multilinguality:
+ - monolingual
+ size_categories:
+   raw_ca:
+   - 100K<n<1M
+   raw_en:
+   - n>1M
+   raw_es:
+   - 100K<n<1M
+   tagged_ca:
+   - n>1M
+   tagged_en:
+   - n>1M
+   tagged_es:
+   - n>1M
+ source_datasets:
+ - original
+ task_categories:
+   raw_ca:
+   - sequence-modeling
+   raw_en:
+   - sequence-modeling
+   raw_es:
+   - sequence-modeling
+   tagged_ca:
+   - structure-prediction
+   - text-classification
+   tagged_en:
+   - structure-prediction
+   - text-classification
+   tagged_es:
+   - structure-prediction
+   - text-classification
+ task_ids:
+   raw_ca:
+   - language-modeling
+   raw_en:
+   - language-modeling
+   raw_es:
+   - language-modeling
+   tagged_ca:
+   - structure-prediction-other-lemmatization
+   - structure-prediction-other-pos-tagging
+   - text-classification-other-word-sense-disambiguation
+   tagged_en:
+   - structure-prediction-other-lemmatization
+   - structure-prediction-other-pos-tagging
+   - text-classification-other-word-sense-disambiguation
+   tagged_es:
+   - structure-prediction-other-lemmatization
+   - structure-prediction-other-pos-tagging
+   - text-classification-other-word-sense-disambiguation
+ ---
+
+ # Dataset Card for Wikicorpus
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+
+ ## Dataset Description
+
+ - **Homepage:** https://www.cs.upc.edu/~nlp/wikicorpus/
+ - **Repository:**
+ - **Paper:** https://www.cs.upc.edu/~nlp/papers/reese10.pdf
+ - **Leaderboard:**
+ - **Point of Contact:**
+
+ ### Dataset Summary
+
+ The Wikicorpus is a trilingual corpus (Catalan, Spanish, English) that contains large portions of Wikipedia (based on a 2006 dump) and has been automatically enriched with linguistic information. In its present version, it contains over 750 million words.
+
+ The corpora have been annotated with lemma and part-of-speech information using the open-source library FreeLing. They have also been sense-annotated with the state-of-the-art Word Sense Disambiguation algorithm UKB. As UKB assigns WordNet senses, and WordNet has been aligned across languages via the InterLingual Index, this sort of annotation opens the way to massive explorations in lexical semantics that were not possible before.
+
+ ### Supported Tasks and Leaderboards
+
+ [More Information Needed]
+
+ ### Languages
+
+ Each sub-dataset is monolingual, in one of the following languages:
+ - ca: Catalan
+ - en: English
+ - es: Spanish
+
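Each configuration name combines a form (`raw` or `tagged`) with one of these language codes. As a minimal sketch (not part of the dataset card itself), a config can be loaded with the `datasets` library this release targets; the config names come from the YAML header above, and the only split is `train` per `dataset_infos.json` below:

```python
from datasets import load_dataset

# Config names follow the "{form}_{language}" pattern, e.g. "raw_ca",
# "tagged_en"; each config exposes a single "train" split.
raw_ca = load_dataset("wikicorpus", "raw_ca", split="train")
print(raw_ca[0]["title"])  # raw-form fields: id, title, text
```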
+ ## Dataset Structure
+
+ ### Data Instances
+
+ [More Information Needed]
+
+ ### Data Fields
+
+ [More Information Needed]
+
+ ### Data Splits
+
+ [More Information Needed]
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [More Information Needed]
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ [More Information Needed]
+
+ #### Who are the source language producers?
+
+ [More Information Needed]
+
+ ### Annotations
+
+ #### Annotation process
+
+ [More Information Needed]
+
+ #### Who are the annotators?
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [More Information Needed]
+
+ ### Licensing Information
+
+ The Wikicorpus is licensed under the same license as Wikipedia, that is, the [GNU Free Documentation License](http://www.fsf.org/licensing/licenses/fdl.html).
+
+ ### Citation Information
+
+ ```
+ @inproceedings{reese-etal-2010-wikicorpus,
+ title = "{W}ikicorpus: A Word-Sense Disambiguated Multilingual {W}ikipedia Corpus",
+ author = "Reese, Samuel and
+ Boleda, Gemma and
+ Cuadros, Montse and
+ Padr{\'o}, Llu{\'i}s and
+ Rigau, German",
+ booktitle = "Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}'10)",
+ month = may,
+ year = "2010",
+ address = "Valletta, Malta",
+ publisher = "European Language Resources Association (ELRA)",
+ url = "http://www.lrec-conf.org/proceedings/lrec2010/pdf/222_Paper.pdf",
+ abstract = "This article presents a new freely available trilingual corpus (Catalan, Spanish, English) that contains large portions of the Wikipedia and has been automatically enriched with linguistic information. To our knowledge, this is the largest such corpus that is freely available to the community: In its present version, it contains over 750 million words. The corpora have been annotated with lemma and part of speech information using the open source library FreeLing. Also, they have been sense annotated with the state of the art Word Sense Disambiguation algorithm UKB. As UKB assigns WordNet senses, and WordNet has been aligned across languages via the InterLingual Index, this sort of annotation opens the way to massive explorations in lexical semantics that were not possible before. We present a first attempt at creating a trilingual lexical resource from the sense-tagged Wikipedia corpora, namely, WikiNet. Moreover, we present two by-products of the project that are of use for the NLP community: An open source Java-based parser for Wikipedia pages developed for the construction of the corpus, and the integration of the WSD algorithm UKB in FreeLing.",
+ }
+ ```
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"raw_ca": {"description": "The Wikicorpus is a trilingual corpus (Catalan, Spanish, English) that contains large portions of the Wikipedia (based on a 2006 dump) and has been automatically enriched with linguistic information. \nIn its present version, it contains over 750 million words.\n", "citation": "@inproceedings{reese-etal-2010-wikicorpus,\n title = \"{W}ikicorpus: A Word-Sense Disambiguated Multilingual {W}ikipedia Corpus\",\n author = \"Reese, Samuel and\n Boleda, Gemma and\n Cuadros, Montse and\n Padr{'o}, Llu{'\\i}s and\n Rigau, German\",\n booktitle = \"Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}'10)\",\n month = may,\n year = \"2010\",\n address = \"Valletta, Malta\",\n publisher = \"European Language Resources Association (ELRA)\",\n url = \"http://www.lrec-conf.org/proceedings/lrec2010/pdf/222_Paper.pdf\",\n abstract = \"This article presents a new freely available trilingual corpus (Catalan, Spanish, English) that contains large portions of the Wikipedia and has been automatically enriched with linguistic information. To our knowledge, this is the largest such corpus that is freely available to the community: In its present version, it contains over 750 million words. The corpora have been annotated with lemma and part of speech information using the open source library FreeLing. Also, they have been sense annotated with the state of the art Word Sense Disambiguation algorithm UKB. As UKB assigns WordNet senses, and WordNet has been aligned across languages via the InterLingual Index, this sort of annotation opens the way to massive explorations in lexical semantics that were not possible before. We present a first attempt at creating a trilingual lexical resource from the sense-tagged Wikipedia corpora, namely, WikiNet. Moreover, we present two by-products of the project that are of use for the NLP community: An open source Java-based parser for Wikipedia pages developed for the construction of the corpus, and the integration of the WSD algorithm UKB in FreeLing.\",\n}\n", "homepage": "https://www.cs.upc.edu/~nlp/wikicorpus/", "license": "GNU Free Documentation License", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "title": {"dtype": "string", "id": null, "_type": "Value"}, "text": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "wikicorpus", "config_name": "raw_ca", "version": {"version_str": "0.0.0", "description": null, "major": 0, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 263170192, "num_examples": 143883, "dataset_name": "wikicorpus"}}, "download_checksums": {"https://www.cs.upc.edu/~nlp/wikicorpus/raw.ca.tgz": {"num_bytes": 96437841, "checksum": "592f75fd594805caf25580f6e79e0cb076c60ac41cc8e2c9d74adf7904b8732c"}}, "download_size": 96437841, "post_processing_size": null, "dataset_size": 263170192, "size_in_bytes": 359608033}, "raw_es": {"description": "The Wikicorpus is a trilingual corpus (Catalan, Spanish, English) that contains large portions of the Wikipedia (based on a 2006 dump) and has been automatically enriched with linguistic information. 
\nIn its present version, it contains over 750 million words.\n", "citation": "@inproceedings{reese-etal-2010-wikicorpus,\n title = \"{W}ikicorpus: A Word-Sense Disambiguated Multilingual {W}ikipedia Corpus\",\n author = \"Reese, Samuel and\n Boleda, Gemma and\n Cuadros, Montse and\n Padr{'o}, Llu{'\\i}s and\n Rigau, German\",\n booktitle = \"Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}'10)\",\n month = may,\n year = \"2010\",\n address = \"Valletta, Malta\",\n publisher = \"European Language Resources Association (ELRA)\",\n url = \"http://www.lrec-conf.org/proceedings/lrec2010/pdf/222_Paper.pdf\",\n abstract = \"This article presents a new freely available trilingual corpus (Catalan, Spanish, English) that contains large portions of the Wikipedia and has been automatically enriched with linguistic information. To our knowledge, this is the largest such corpus that is freely available to the community: In its present version, it contains over 750 million words. The corpora have been annotated with lemma and part of speech information using the open source library FreeLing. Also, they have been sense annotated with the state of the art Word Sense Disambiguation algorithm UKB. As UKB assigns WordNet senses, and WordNet has been aligned across languages via the InterLingual Index, this sort of annotation opens the way to massive explorations in lexical semantics that were not possible before. We present a first attempt at creating a trilingual lexical resource from the sense-tagged Wikipedia corpora, namely, WikiNet. Moreover, we present two by-products of the project that are of use for the NLP community: An open source Java-based parser for Wikipedia pages developed for the construction of the corpus, and the integration of the WSD algorithm UKB in FreeLing.\",\n}\n", "homepage": "https://www.cs.upc.edu/~nlp/wikicorpus/", "license": "GNU Free Documentation License", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "title": {"dtype": "string", "id": null, "_type": "Value"}, "text": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "wikicorpus", "config_name": "raw_es", "version": {"version_str": "0.0.0", "description": null, "major": 0, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 671295359, "num_examples": 259409, "dataset_name": "wikicorpus"}}, "download_checksums": {"https://www.cs.upc.edu/~nlp/wikicorpus/raw.es.tgz": {"num_bytes": 252926918, "checksum": "d82ae769253f460c48520f97be2be36f4399af62aca4307ad19f8d7442f5b395"}}, "download_size": 252926918, "post_processing_size": null, "dataset_size": 671295359, "size_in_bytes": 924222277}, "raw_en": {"description": "The Wikicorpus is a trilingual corpus (Catalan, Spanish, English) that contains large portions of the Wikipedia (based on a 2006 dump) and has been automatically enriched with linguistic information. 
\nIn its present version, it contains over 750 million words.\n", "citation": "@inproceedings{reese-etal-2010-wikicorpus,\n title = \"{W}ikicorpus: A Word-Sense Disambiguated Multilingual {W}ikipedia Corpus\",\n author = \"Reese, Samuel and\n Boleda, Gemma and\n Cuadros, Montse and\n Padr{'o}, Llu{'\\i}s and\n Rigau, German\",\n booktitle = \"Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}'10)\",\n month = may,\n year = \"2010\",\n address = \"Valletta, Malta\",\n publisher = \"European Language Resources Association (ELRA)\",\n url = \"http://www.lrec-conf.org/proceedings/lrec2010/pdf/222_Paper.pdf\",\n abstract = \"This article presents a new freely available trilingual corpus (Catalan, Spanish, English) that contains large portions of the Wikipedia and has been automatically enriched with linguistic information. To our knowledge, this is the largest such corpus that is freely available to the community: In its present version, it contains over 750 million words. The corpora have been annotated with lemma and part of speech information using the open source library FreeLing. Also, they have been sense annotated with the state of the art Word Sense Disambiguation algorithm UKB. As UKB assigns WordNet senses, and WordNet has been aligned across languages via the InterLingual Index, this sort of annotation opens the way to massive explorations in lexical semantics that were not possible before. We present a first attempt at creating a trilingual lexical resource from the sense-tagged Wikipedia corpora, namely, WikiNet. Moreover, we present two by-products of the project that are of use for the NLP community: An open source Java-based parser for Wikipedia pages developed for the construction of the corpus, and the integration of the WSD algorithm UKB in FreeLing.\",\n}\n", "homepage": "https://www.cs.upc.edu/~nlp/wikicorpus/", "license": "GNU Free Documentation License", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "title": {"dtype": "string", "id": null, "_type": "Value"}, "text": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "wikicorpus", "config_name": "raw_en", "version": {"version_str": "0.0.0", "description": null, "major": 0, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 3388801074, "num_examples": 1359146, "dataset_name": "wikicorpus"}}, "download_checksums": {"https://www.cs.upc.edu/~nlp/wikicorpus/raw.en.tgz": {"num_bytes": 1346378932, "checksum": "cab102a59f7e5bfb4ac832c15210bd22840bbed5508899f2dee04b857830e858"}}, "download_size": 1346378932, "post_processing_size": null, "dataset_size": 3388801074, "size_in_bytes": 4735180006}, "tagged_ca": {"description": "The Wikicorpus is a trilingual corpus (Catalan, Spanish, English) that contains large portions of the Wikipedia (based on a 2006 dump) and has been automatically enriched with linguistic information. 
\nIn its present version, it contains over 750 million words.\n", "citation": "@inproceedings{reese-etal-2010-wikicorpus,\n title = \"{W}ikicorpus: A Word-Sense Disambiguated Multilingual {W}ikipedia Corpus\",\n author = \"Reese, Samuel and\n Boleda, Gemma and\n Cuadros, Montse and\n Padr{'o}, Llu{'\\i}s and\n Rigau, German\",\n booktitle = \"Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}'10)\",\n month = may,\n year = \"2010\",\n address = \"Valletta, Malta\",\n publisher = \"European Language Resources Association (ELRA)\",\n url = \"http://www.lrec-conf.org/proceedings/lrec2010/pdf/222_Paper.pdf\",\n abstract = \"This article presents a new freely available trilingual corpus (Catalan, Spanish, English) that contains large portions of the Wikipedia and has been automatically enriched with linguistic information. To our knowledge, this is the largest such corpus that is freely available to the community: In its present version, it contains over 750 million words. The corpora have been annotated with lemma and part of speech information using the open source library FreeLing. Also, they have been sense annotated with the state of the art Word Sense Disambiguation algorithm UKB. As UKB assigns WordNet senses, and WordNet has been aligned across languages via the InterLingual Index, this sort of annotation opens the way to massive explorations in lexical semantics that were not possible before. We present a first attempt at creating a trilingual lexical resource from the sense-tagged Wikipedia corpora, namely, WikiNet. Moreover, we present two by-products of the project that are of use for the NLP community: An open source Java-based parser for Wikipedia pages developed for the construction of the corpus, and the integration of the WSD algorithm UKB in FreeLing.\",\n}\n", "homepage": "https://www.cs.upc.edu/~nlp/wikicorpus/", "license": "GNU Free Documentation License", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "title": {"dtype": "string", "id": null, "_type": "Value"}, "sentence": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "lemmas": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "pos_tags": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "wordnet_senses": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "wikicorpus", "config_name": "tagged_ca", "version": {"version_str": "0.0.0", "description": null, "major": 0, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 1666129919, "num_examples": 2016221, "dataset_name": "wikicorpus"}}, "download_checksums": {"https://www.cs.upc.edu/~nlp/wikicorpus/tagged.ca.tgz": {"num_bytes": 226390380, "checksum": "c6f33b8d4db38188302f738b251f7aa9c86e9bf099e42a34dfdd57829db8b666"}}, "download_size": 226390380, "post_processing_size": null, "dataset_size": 1666129919, "size_in_bytes": 1892520299}, "tagged_es": {"description": "The Wikicorpus is a trilingual corpus (Catalan, Spanish, English) that contains large portions of the Wikipedia (based on a 2006 dump) and has been automatically enriched with linguistic information. 
\nIn its present version, it contains over 750 million words.\n", "citation": "@inproceedings{reese-etal-2010-wikicorpus,\n title = \"{W}ikicorpus: A Word-Sense Disambiguated Multilingual {W}ikipedia Corpus\",\n author = \"Reese, Samuel and\n Boleda, Gemma and\n Cuadros, Montse and\n Padr{'o}, Llu{'\\i}s and\n Rigau, German\",\n booktitle = \"Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}'10)\",\n month = may,\n year = \"2010\",\n address = \"Valletta, Malta\",\n publisher = \"European Language Resources Association (ELRA)\",\n url = \"http://www.lrec-conf.org/proceedings/lrec2010/pdf/222_Paper.pdf\",\n abstract = \"This article presents a new freely available trilingual corpus (Catalan, Spanish, English) that contains large portions of the Wikipedia and has been automatically enriched with linguistic information. To our knowledge, this is the largest such corpus that is freely available to the community: In its present version, it contains over 750 million words. The corpora have been annotated with lemma and part of speech information using the open source library FreeLing. Also, they have been sense annotated with the state of the art Word Sense Disambiguation algorithm UKB. As UKB assigns WordNet senses, and WordNet has been aligned across languages via the InterLingual Index, this sort of annotation opens the way to massive explorations in lexical semantics that were not possible before. We present a first attempt at creating a trilingual lexical resource from the sense-tagged Wikipedia corpora, namely, WikiNet. Moreover, we present two by-products of the project that are of use for the NLP community: An open source Java-based parser for Wikipedia pages developed for the construction of the corpus, and the integration of the WSD algorithm UKB in FreeLing.\",\n}\n", "homepage": "https://www.cs.upc.edu/~nlp/wikicorpus/", "license": "GNU Free Documentation License", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "title": {"dtype": "string", "id": null, "_type": "Value"}, "sentence": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "lemmas": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "pos_tags": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "wordnet_senses": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "wikicorpus", "config_name": "tagged_es", "version": {"version_str": "0.0.0", "description": null, "major": 0, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 4100040390, "num_examples": 5039367, "dataset_name": "wikicorpus"}}, "download_checksums": {"https://www.cs.upc.edu/~nlp/wikicorpus/tagged.es.tgz": {"num_bytes": 604910899, "checksum": "0cda6a7991874e662a37c1c7c4afb46d05757d04eb1ad3250516c902d94244f5"}}, "download_size": 604910899, "post_processing_size": null, "dataset_size": 4100040390, "size_in_bytes": 4704951289}, "tagged_en": {"description": "The Wikicorpus is a trilingual corpus (Catalan, Spanish, English) that contains large portions of the Wikipedia (based on a 2006 dump) and has been automatically enriched with linguistic information. 
\nIn its present version, it contains over 750 million words.\n", "citation": "@inproceedings{reese-etal-2010-wikicorpus,\n title = \"{W}ikicorpus: A Word-Sense Disambiguated Multilingual {W}ikipedia Corpus\",\n author = \"Reese, Samuel and\n Boleda, Gemma and\n Cuadros, Montse and\n Padr{'o}, Llu{'\\i}s and\n Rigau, German\",\n booktitle = \"Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}'10)\",\n month = may,\n year = \"2010\",\n address = \"Valletta, Malta\",\n publisher = \"European Language Resources Association (ELRA)\",\n url = \"http://www.lrec-conf.org/proceedings/lrec2010/pdf/222_Paper.pdf\",\n abstract = \"This article presents a new freely available trilingual corpus (Catalan, Spanish, English) that contains large portions of the Wikipedia and has been automatically enriched with linguistic information. To our knowledge, this is the largest such corpus that is freely available to the community: In its present version, it contains over 750 million words. The corpora have been annotated with lemma and part of speech information using the open source library FreeLing. Also, they have been sense annotated with the state of the art Word Sense Disambiguation algorithm UKB. As UKB assigns WordNet senses, and WordNet has been aligned across languages via the InterLingual Index, this sort of annotation opens the way to massive explorations in lexical semantics that were not possible before. We present a first attempt at creating a trilingual lexical resource from the sense-tagged Wikipedia corpora, namely, WikiNet. Moreover, we present two by-products of the project that are of use for the NLP community: An open source Java-based parser for Wikipedia pages developed for the construction of the corpus, and the integration of the WSD algorithm UKB in FreeLing.\",\n}\n", "homepage": "https://www.cs.upc.edu/~nlp/wikicorpus/", "license": "GNU Free Documentation License", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "title": {"dtype": "string", "id": null, "_type": "Value"}, "sentence": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "lemmas": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "pos_tags": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "wordnet_senses": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "wikicorpus", "config_name": "tagged_en", "version": "0.0.0", "splits": {"train": {"name": "train", "num_bytes": 18077275300, "num_examples": 26350272, "dataset_name": "wikicorpus"}}, "download_checksums": {"https://www.cs.upc.edu/~nlp/wikicorpus/tagged.en.tgz": {"num_bytes": 2477450893, "checksum": "dd0c537a591513a068d86737798f6c7af3c55a5b5059686321e6e534c471a4e7"}}, "download_size": 2477450893, "post_processing_size": null, "dataset_size": 18077275300, "size_in_bytes": 20554726193}}
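The JSON above records, for each config, the feature schema (`id`, `title`, `text` for the raw configs; `id`, `title`, `sentence`, `lemmas`, `pos_tags`, `wordnet_senses` for the tagged ones), the download checksums, and the size of the single `train` split. A small sketch for summarizing it locally, assuming the file sits in the working directory:

```python
import json

# Print each config with its feature columns and number of training examples,
# as declared in dataset_infos.json.
with open("dataset_infos.json", encoding="utf-8") as f:
    infos = json.load(f)

for name, info in infos.items():
    n_examples = info["splits"]["train"]["num_examples"]
    print(name, sorted(info["features"]), f"{n_examples:,} examples")
```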
dummy/raw_ca/0.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3bd991e94c4b4a74ba1ca51b10b652564c1869e869c38e1f2c4c1b30b069e22f
+ size 5868
dummy/raw_en/0.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f97414e2ac893959034a2d4ec5c89e50b5c65f7bf5e2afe04f6c4ae9edef8b54
+ size 9546
dummy/raw_es/0.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:21e61d50ac4b42e58a7893df60b8651c4b42ab51487bfb4a1e2621bdcff811d3
+ size 5522
dummy/tagged_ca/0.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:42ee6af4990dbabfeaa4421d86d3b130dd4ea1eef198e2c44dafc598b5363e44
+ size 2436
dummy/tagged_en/0.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:11ae850a0f5ec8fd83f7b0085173f6aa7c30589414580b782d4d382f4ac30d4d
+ size 2221
dummy/tagged_es/0.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f277ef1c6b6c0c4463ac9107d70b1df7f3ce5172b68cd44949e2aac41186d451
+ size 2176
wikicorpus.py ADDED
@@ -0,0 +1,179 @@
+ # coding=utf-8
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """Wikicorpus dataset."""
+
+ import re
+ from pathlib import Path
+
+ import datasets
+
+
+ _CITATION = """\
+ @inproceedings{reese-etal-2010-wikicorpus,
+ title = "{W}ikicorpus: A Word-Sense Disambiguated Multilingual {W}ikipedia Corpus",
+ author = "Reese, Samuel and
+ Boleda, Gemma and
+ Cuadros, Montse and
+ Padr{\'o}, Llu{\'i}s and
+ Rigau, German",
+ booktitle = "Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}'10)",
+ month = may,
+ year = "2010",
+ address = "Valletta, Malta",
+ publisher = "European Language Resources Association (ELRA)",
+ url = "http://www.lrec-conf.org/proceedings/lrec2010/pdf/222_Paper.pdf",
+ abstract = "This article presents a new freely available trilingual corpus (Catalan, Spanish, English) that contains large portions of the Wikipedia and has been automatically enriched with linguistic information. To our knowledge, this is the largest such corpus that is freely available to the community: In its present version, it contains over 750 million words. The corpora have been annotated with lemma and part of speech information using the open source library FreeLing. Also, they have been sense annotated with the state of the art Word Sense Disambiguation algorithm UKB. As UKB assigns WordNet senses, and WordNet has been aligned across languages via the InterLingual Index, this sort of annotation opens the way to massive explorations in lexical semantics that were not possible before. We present a first attempt at creating a trilingual lexical resource from the sense-tagged Wikipedia corpora, namely, WikiNet. Moreover, we present two by-products of the project that are of use for the NLP community: An open source Java-based parser for Wikipedia pages developed for the construction of the corpus, and the integration of the WSD algorithm UKB in FreeLing.",
+ }
+ """
+
+ _DESCRIPTION = """\
+ The Wikicorpus is a trilingual corpus (Catalan, Spanish, English) that contains large portions of the Wikipedia (based on a 2006 dump) and has been automatically enriched with linguistic information. In its present version, it contains over 750 million words.
+ """
+
+ _HOMEPAGE = "https://www.cs.upc.edu/~nlp/wikicorpus/"
+
+ _LICENSE = "GNU Free Documentation License"
+
+ _URLs = "https://www.cs.upc.edu/~nlp/wikicorpus/{form}.{language}.tgz"
+
+ _LANGUAGES = ["ca", "es", "en"]
+ _FORMS = ["raw", "tagged"]
+
+ METADATA_PATTERN = re.compile(r'.+id="(?P<id>[^"]+)".+title="(?P<title>[^"]+)".+')
+
+
+ class WikicorpusConfig(datasets.BuilderConfig):
+     """BuilderConfig for Wikicorpus."""
+
+     def __init__(self, form=None, language=None, **kwargs):
+         """
+         Args:
+             form: form of the dataset.
+             language: language of the dataset.
+             **kwargs: keyword arguments forwarded to super.
+         """
+         super().__init__(
+             name=f"{form}_{language}",
+             description=f"Wikicorpus dataset in {form} form and {language} language.",
+             **kwargs,
+         )
+         self.form = form
+         self.language = language
+
+
+ class Wikicorpus(datasets.GeneratorBasedBuilder):
+     """Wikicorpus dataset."""
+
+     VERSION = datasets.Version("1.0.0")
+     BUILDER_CONFIG_CLASS = WikicorpusConfig
+     BUILDER_CONFIGS = [WikicorpusConfig(form=form, language=language) for form in _FORMS for language in _LANGUAGES]
+
+     def _info(self):
+         if self.config.form == "raw":
+             features = datasets.Features(
+                 {
+                     "id": datasets.Value("string"),
+                     "title": datasets.Value("string"),
+                     "text": datasets.Value("string"),
+                 }
+             )
+         elif self.config.form == "tagged":
+             features = datasets.Features(
+                 {
+                     "id": datasets.Value("string"),
+                     "title": datasets.Value("string"),
+                     "sentence": datasets.Sequence(datasets.Value("string")),
+                     "lemmas": datasets.Sequence(datasets.Value("string")),
+                     "pos_tags": datasets.Sequence(datasets.Value("string")),
+                     "wordnet_senses": datasets.Sequence(datasets.Value("string")),
+                 }
+             )
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=features,
+             supervised_keys=None,
+             homepage=_HOMEPAGE,
+             license=_LICENSE,
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         url_to_download = _URLs.format(form=self.config.form, language=self.config.language)
+         downloaded_dir = dl_manager.download_and_extract(url_to_download)
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={
+                     "dirpath": downloaded_dir,
+                 },
+             ),
+         ]
+
+     def _generate_examples(self, dirpath):
+         for filepath in sorted(Path(dirpath).iterdir()):
+             with open(filepath, encoding="latin-1") as f:
+                 example = {}
+                 # buffer for the raw form
+                 text = []
+                 # buffers for the tagged form
+                 words = []
+                 lemmas = []
+                 pos_tags = []
+                 wordnet_senses = []
+                 for id_, row in enumerate(f):
+                     if self.config.form == "raw":
+                         if row.startswith("<doc id"):
+                             metadata_match = METADATA_PATTERN.match(row)
+                             example["id"] = metadata_match.group("id") if metadata_match else ""
+                             example["title"] = metadata_match.group("title") if metadata_match else ""
+                         elif row.startswith("</doc>"):
+                             pass
+                         elif row.startswith("ENDOFARTICLE"):
+                             yield id_, {
+                                 "id": example["id"],
+                                 "title": example["title"],
+                                 "text": "\n".join(text).strip(),
+                             }
+                             example = {}
+                             text = []
+                         else:
+                             text.append(row)
+                     elif self.config.form == "tagged":
+                         if row.startswith("<doc id"):
+                             metadata_match = METADATA_PATTERN.match(row)
+                             example["id"] = metadata_match.group("id") if metadata_match else ""
+                             example["title"] = metadata_match.group("title") if metadata_match else ""
+                         elif row.startswith("</doc>"):
+                             pass
+                         elif row.startswith("ENDOFARTICLE") or row.startswith("\n"):
+                             if len(words) > 1:  # some content besides only (. . Fp 0)
+                                 yield id_, {
+                                     "id": example["id"],
+                                     "title": example["title"],
+                                     "sentence": words,
+                                     "lemmas": lemmas,
+                                     "pos_tags": pos_tags,
+                                     "wordnet_senses": wordnet_senses,
+                                 }
+                             words = []
+                             lemmas = []
+                             pos_tags = []
+                             wordnet_senses = []
+                             if row.startswith("ENDOFARTICLE"):
+                                 example = {}
+                         else:
+                             splits = row.split()
+                             for tag, tags in zip(splits, [words, lemmas, pos_tags, wordnet_senses]):
+                                 tags.append(tag)
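
For reference, a standalone sketch of the parsing that `_generate_examples` performs on the downloaded files. The regex is copied verbatim from the script; the two sample lines are hypothetical, modelled on the `<doc id=... title=...>` headers and the whitespace-separated word/lemma/POS-tag/sense rows that the tagged files are expected to contain:

```python
import re

METADATA_PATTERN = re.compile(r'.+id="(?P<id>[^"]+)".+title="(?P<title>[^"]+)".+')

# Hypothetical article header in the shape the script matches.
header = '<doc id="12345" title="Barcelona" dbindex="7">'
match = METADATA_PATTERN.match(header)
print(match.group("id"), match.group("title"))  # -> 12345 Barcelona

# Hypothetical tagged row: word, lemma, POS tag, WordNet sense.
word, lemma, pos_tag, wordnet_sense = "ciudad ciudad NCFS000 04509811-n".split()
print(word, lemma, pos_tag, wordnet_sense)
```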