Datasets: Update README.md #1
by izhx - opened

README.md CHANGED
@@ -1,3 +1,213 @@
---
language:
- af
- ar
- az
- bg
- bn
- de
- el
- en
- es
- et
- eu
- fa
- fi
- fr
- gu
- he
- hi
- ht
- hu
- id
- it
- ja
- jv
- ka
- kk
- ko
- lt
- ml
- mr
- ms
- my
- nl
- pa
- pl
- pt
- qu
- ro
- ru
- sw
- ta
- te
- th
- tl
- tr
- uk
- ur
- vi
- wo
- yo
- zh
license: apache-2.0
pretty_name: Mewsli-X
task_categories:
- text-retrieval
task_ids:
- entity-linking-retrieval
configs:
- config_name: wikipedia_pairs
  data_files:
  - split: train
    path: wikipedia_pairs/train.jsonl.tar.gz
  - split: validation
    path: wikipedia_pairs/dev.jsonl.tar.gz
- config_name: ar
  data_files:
  - split: validation
    path: wikinews_mentions/ar/dev.jsonl
  - split: test
    path: wikinews_mentions/ar/test.jsonl
- config_name: de
  data_files:
  - split: validation
    path: wikinews_mentions/de/dev.jsonl
  - split: test
    path: wikinews_mentions/de/test.jsonl
- config_name: en
  data_files:
  - split: validation
    path: wikinews_mentions/en/dev.jsonl
  - split: test
    path: wikinews_mentions/en/test.jsonl
- config_name: es
  data_files:
  - split: validation
    path: wikinews_mentions/es/dev.jsonl
  - split: test
    path: wikinews_mentions/es/test.jsonl
- config_name: fa
  data_files:
  - split: validation
    path: wikinews_mentions/fa/dev.jsonl
  - split: test
    path: wikinews_mentions/fa/test.jsonl
- config_name: ja
  data_files:
  - split: validation
    path: wikinews_mentions/ja/dev.jsonl
  - split: test
    path: wikinews_mentions/ja/test.jsonl
- config_name: pl
  data_files:
  - split: validation
    path: wikinews_mentions/pl/dev.jsonl
  - split: test
    path: wikinews_mentions/pl/test.jsonl
- config_name: ro
  data_files:
  - split: validation
    path: wikinews_mentions/ro/dev.jsonl
  - split: test
    path: wikinews_mentions/ro/test.jsonl
- config_name: ta
  data_files:
  - split: validation
    path: wikinews_mentions/ta/dev.jsonl
  - split: test
    path: wikinews_mentions/ta/test.jsonl
- config_name: tr
  data_files:
  - split: validation
    path: wikinews_mentions/tr/dev.jsonl
  - split: test
    path: wikinews_mentions/tr/test.jsonl
- config_name: uk
  data_files:
  - split: validation
    path: wikinews_mentions/uk/dev.jsonl
  - split: test
    path: wikinews_mentions/uk/test.jsonl
- config_name: candidate_entities
  data_files:
  - split: test
    path: candidate_entities.jsonl.tar.gz
size_categories:
- 100K<n<1M
---

I generated the dataset following [mewsli-x.md#getting-started](https://github.com/google-research/google-research/blob/master/dense_representations_for_entity_retrieval/mel/mewsli-x.md#getting-started)
and converted it into several parts (see [`process.py`](process.py)):
- `wikinews_mentions` dev and test sets for ar/de/en/es/fa/ja/pl/ro/ta/tr/uk (from `wikinews_mentions-dev/test.jsonl`)
- candidate entities covering 50 languages (from `candidate_set_entities.jsonl`)
- English `wikipedia_pairs` for fine-tuning models (from `wikipedia_pairs-dev/train.jsonl`)

+
Raw data files are in [`raw.tar.gz`](raw.tar.gz), which contains:
|
147 |
+
```
|
148 |
+
[...] 535M Feb 24 22:06 candidate_set_entities.jsonl
|
149 |
+
[...] 9.8M Feb 24 22:06 wikinews_mentions-dev.jsonl
|
150 |
+
[...] 35M Feb 24 22:06 wikinews_mentions-test.jsonl
|
151 |
+
[...] 24M Feb 24 22:06 wikipedia_pairs-dev.jsonl
|
152 |
+
[...] 283M Feb 24 22:06 wikipedia_pairs-train.jsonl
|
153 |
+
```
|
154 |
+
|
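If you prefer the unprocessed files, the archive can be unpacked with the standard library and read record by record. A short sketch, assuming only that each file is JSON Lines (one JSON object per line):

```python
import json
import tarfile

# Unpack the raw archive into ./raw/ (file paths inside the tar are assumed
# to match the listing above).
with tarfile.open("raw.tar.gz", "r:gz") as tar:
    tar.extractall("raw")

# Read the first record and inspect its schema rather than assuming it.
with open("raw/wikinews_mentions-dev.jsonl", encoding="utf-8") as f:
    first = json.loads(next(f))
print(sorted(first.keys()))
```
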
**Below is from the original [readme](https://github.com/google-research/google-research/blob/master/dense_representations_for_entity_retrieval/mel/mewsli-x.md).**

# Mewsli-X

Mewsli-X is a multilingual dataset of entity mentions appearing in
[WikiNews](https://www.wikinews.org/) and
[Wikipedia](https://www.wikipedia.org/) articles that have been automatically
linked to [WikiData](https://www.wikidata.org/) entries.

The primary use case is to evaluate transfer learning in the zero-shot
cross-lingual setting of the
[XTREME-R benchmark suite](https://sites.research.google/xtremer):

1. Fine-tune a pretrained model on English Wikipedia examples;
2. Evaluate on WikiNews in other languages: **given an *entity mention* in a
   WikiNews article, retrieve the correct *entity* from the predefined
   candidate set by means of its textual description.**

Mewsli-X constitutes a *doubly zero-shot* task by construction: at test time, a
model has to contend with different languages and a different set of entities
from those observed during fine-tuning.

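The retrieval protocol can be pictured with a small dual-encoder sketch: embed each mention in context and each candidate entity description with the same encoder, then rank candidates by similarity. This is only an illustration, not the official XTREME-R baseline; the encoder choice and the toy mention/description strings below are assumptions:

```python
# Illustrative dual-encoder retrieval sketch (not the official baseline).
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("sentence-transformers/LaBSE")  # any multilingual encoder

mentions = ["[Berlin] hosted the summit last week."]  # mention in context (toy example)
gold_ids = ["Q64"]                                    # its WikiData entity
cand_ids = ["Q64", "Q1726", "Q90"]                    # candidate entity ids
cand_descs = [
    "Berlin: capital and largest city of Germany",
    "Munich: capital of Bavaria, Germany",
    "Paris: capital of France",
]

# Encode both sides with the same model; normalized vectors make the dot
# product a cosine similarity.
m = encoder.encode(mentions, normalize_embeddings=True)
e = encoder.encode(cand_descs, normalize_embeddings=True)
scores = m @ e.T  # shape: (num_mentions, num_candidates)

# Rank candidates per mention and report where the gold entity lands.
for i, gold in enumerate(gold_ids):
    order = np.argsort(-scores[i])
    rank = [cand_ids[j] for j in order].index(gold) + 1
    print(f"gold {gold} ranked {rank} of {len(cand_ids)}")
```
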
👉 For data examples and other editions of Mewsli, see
[README.md](https://github.com/google-research/google-research/blob/master/dense_representations_for_entity_retrieval/mel/README.md).

👉 Consider submitting to the
**[XTREME-R leaderboard](https://sites.research.google/xtremer)**. The XTREME-R
[repository](https://github.com/google-research/xtreme) includes code for
getting started with training and evaluating a baseline model in PyTorch.

👉 Please cite this paper if you use the data/code in your work: *[XTREME-R:
Towards More Challenging and Nuanced Multilingual Evaluation (Ruder et al.,
2021)](https://aclanthology.org/2021.emnlp-main.802.pdf)*.

> _**NOTE:** New evaluation results on Mewsli-X are **not** directly comparable
> to those reported in the paper because the dataset required further updates,
> as detailed [below](#updated-dataset). This does not affect the overall
> findings of the paper._

```
@inproceedings{ruder-etal-2021-xtreme,
    title = "{XTREME}-{R}: Towards More Challenging and Nuanced Multilingual Evaluation",
    author = "Ruder, Sebastian  and
      Constant, Noah  and
      Botha, Jan  and
      Siddhant, Aditya  and
      Firat, Orhan  and
      Fu, Jinlan  and
      Liu, Pengfei  and
      Hu, Junjie  and
      Garrette, Dan  and
      Neubig, Graham  and
      Johnson, Melvin",
    booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2021",
    address = "Online and Punta Cana, Dominican Republic",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.emnlp-main.802",
    doi = "10.18653/v1/2021.emnlp-main.802",
    pages = "10215--10245",
}
```