---
pretty_name: Wikicorpus
annotations_creators:
- machine-generated
- no-annotation
language_creators:
- found
language:
- ca
- en
- es
license:
- gfdl
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
- 10M<n<100M
- 1M<n<10M
source_datasets:
- original
task_categories:
- fill-mask
- text-classification
- text-generation
- token-classification
task_ids:
- language-modeling
- masked-language-modeling
- part-of-speech
paperswithcode_id: null
configs:
- raw_ca
- raw_en
- raw_es
- tagged_ca
- tagged_en
- tagged_es
tags:
- word-sense-disambiguation
- lemmatization
dataset_info:
- config_name: raw_ca
  features:
  - name: id
    dtype: string
  - name: title
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 263170192
    num_examples: 143883
  download_size: 96437841
  dataset_size: 263170192
- config_name: raw_es
  features:
  - name: id
    dtype: string
  - name: title
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 671295359
    num_examples: 259409
  download_size: 252926918
  dataset_size: 671295359
- config_name: raw_en
  features:
  - name: id
    dtype: string
  - name: title
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 3388801074
    num_examples: 1359146
  download_size: 1346378932
  dataset_size: 3388801074
- config_name: tagged_ca
  features:
  - name: id
    dtype: string
  - name: title
    dtype: string
  - name: sentence
    sequence: string
  - name: lemmas
    sequence: string
  - name: pos_tags
    sequence: string
  - name: wordnet_senses
    sequence: string
  splits:
  - name: train
    num_bytes: 1666129919
    num_examples: 2016221
  download_size: 226390380
  dataset_size: 1666129919
- config_name: tagged_es
  features:
  - name: id
    dtype: string
  - name: title
    dtype: string
  - name: sentence
    sequence: string
  - name: lemmas
    sequence: string
  - name: pos_tags
    sequence: string
  - name: wordnet_senses
    sequence: string
  splits:
  - name: train
    num_bytes: 4100040390
    num_examples: 5039367
  download_size: 604910899
  dataset_size: 4100040390
- config_name: tagged_en
  features:
  - name: id
    dtype: string
  - name: title
    dtype: string
  - name: sentence
    sequence: string
  - name: lemmas
    sequence: string
  - name: pos_tags
    sequence: string
  - name: wordnet_senses
    sequence: string
  splits:
  - name: train
    num_bytes: 18077275300
    num_examples: 26350272
  download_size: 2477450893
  dataset_size: 18077275300
---

# Dataset Card for Wikicorpus

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** https://www.cs.upc.edu/~nlp/wikicorpus/
- **Repository:**
- **Paper:** https://www.cs.upc.edu/~nlp/papers/reese10.pdf
- **Leaderboard:**
- **Point of Contact:**

### Dataset Summary

The Wikicorpus is a trilingual corpus (Catalan, Spanish, English) that contains large portions of Wikipedia (based on a 2006 dump) and has been automatically enriched with linguistic information. In its present version, it contains over 750 million words.

The corpora have been annotated with lemma and part-of-speech information using the open-source library FreeLing. They have also been sense-annotated with the state-of-the-art word sense disambiguation algorithm UKB. Since UKB assigns WordNet senses, and WordNet has been aligned across languages via the InterLingual Index, this annotation opens the way to large-scale explorations in lexical semantics that were not possible before.
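
A minimal usage sketch with the 🤗 `datasets` library (this assumes the dataset is reachable on the Hugging Face Hub under the `wikicorpus` identifier; depending on your `datasets` version, loading a script-based dataset may additionally require `trust_remote_code=True`):

```python
from datasets import load_dataset

# Configuration names combine the annotation level and the language code,
# e.g. "raw_ca", "raw_en", "raw_es", "tagged_ca", "tagged_en", "tagged_es".
raw_ca = load_dataset("wikicorpus", "raw_ca", split="train")

article = raw_ca[0]
print(article["title"])        # article title
print(article["text"][:200])   # beginning of the raw article text
```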

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

Each configuration is monolingual, covering one of the following languages:
- ca: Catalan
- en: English
- es: Spanish

## Dataset Structure

### Data Instances

[More Information Needed]

### Data Fields

The data fields differ between the `raw_*` and `tagged_*` configurations (they are also listed in the YAML metadata above).

`raw_*` configurations (one example per article):
- `id` (`string`): article identifier.
- `title` (`string`): article title.
- `text` (`string`): raw text of the article.

`tagged_*` configurations (one example per annotated sentence):
- `id` (`string`): article identifier.
- `title` (`string`): article title.
- `sentence` (sequence of `string`): the word forms of the sentence.
- `lemmas` (sequence of `string`): the lemma assigned to each word by FreeLing.
- `pos_tags` (sequence of `string`): the part-of-speech tag assigned to each word by FreeLing.
- `wordnet_senses` (sequence of `string`): the WordNet sense assigned to each word by UKB.
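
A short sketch of reading a tagged configuration in streaming mode (streaming is an assumption made here simply to avoid downloading the full archive; the example also relies on the token-level alignment of the sequence fields described above):

```python
from datasets import load_dataset

# Stream the Catalan tagged configuration instead of downloading it entirely.
tagged_ca = load_dataset("wikicorpus", "tagged_ca", split="train", streaming=True)

example = next(iter(tagged_ca))
for word, lemma, pos, sense in zip(
    example["sentence"], example["lemmas"], example["pos_tags"], example["wordnet_senses"]
):
    print(f"{word}\t{lemma}\t{pos}\t{sense}")
```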

### Data Splits

Each configuration has a single `train` split:

| Configuration | Examples |
|:--------------|---------:|
| `raw_ca` | 143,883 |
| `raw_es` | 259,409 |
| `raw_en` | 1,359,146 |
| `tagged_ca` | 2,016,221 |
| `tagged_es` | 5,039,367 |
| `tagged_en` | 26,350,272 |

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

The Wikicorpus is licensed under the same license as Wikipedia, that is, the [GNU Free Documentation License](http://www.fsf.org/licensing/licenses/fdl.html).

### Citation Information

```
@inproceedings{reese-etal-2010-wikicorpus,
    title = "{W}ikicorpus: A Word-Sense Disambiguated Multilingual {W}ikipedia Corpus",
    author = "Reese, Samuel  and
      Boleda, Gemma  and
      Cuadros, Montse  and
      Padr{\'o}, Llu{\'i}s  and
      Rigau, German",
    booktitle = "Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}'10)",
    month = may,
    year = "2010",
    address = "Valletta, Malta",
    publisher = "European Language Resources Association (ELRA)",
    url = "http://www.lrec-conf.org/proceedings/lrec2010/pdf/222_Paper.pdf",
    abstract = "This article presents a new freely available trilingual corpus (Catalan, Spanish, English) that contains large portions of the Wikipedia and has been automatically enriched with linguistic information. To our knowledge, this is the largest such corpus that is freely available to the community: In its present version, it contains over 750 million words. The corpora have been annotated with lemma and part of speech information using the open source library FreeLing. Also, they have been sense annotated with the state of the art Word Sense Disambiguation algorithm UKB. As UKB assigns WordNet senses, and WordNet has been aligned across languages via the InterLingual Index, this sort of annotation opens the way to massive explorations in lexical semantics that were not possible before. We present a first attempt at creating a trilingual lexical resource from the sense-tagged Wikipedia corpora, namely, WikiNet. Moreover, we present two by-products of the project that are of use for the NLP community: An open source Java-based parser for Wikipedia pages developed for the construction of the corpus, and the integration of the WSD algorithm UKB in FreeLing.",
}
```

### Contributions

Thanks to [@albertvillanova](https://github.com/albertvillanova) for adding this dataset.