---
license: apache-2.0
language:
- af
- am
- ar
- as
- az
- be
- bg
- bn
- br
- bs
- ca
- cs
- cy
- da
- de
- en
- el
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- he
- hi
- hr
- hu
- hy
- id
- is
- it
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lo
- lt
- lv
- mg
- mk
- ml
- mn
- mr
- ms
- my
- ne
- nl
- nb
- om
- or
- pa
- pl
- ps
- pt
- ro
- ru
- sa
- sd
- si
- sk
- sl
- so
- sq
- sr
- su
- sv
- sw
- ta
- te
- th
- tl
- tr
- ug
- uk
- ur
- uz
- vi
- xh
- yi
- zh
size_categories:
- 100M<n<1B
---
# Dataset Card for EntityCS
## Dataset Description
- Repository: https://github.com/huawei-noah/noah-research/tree/master/NLP/EntityCS
- Paper: https://aclanthology.org/2022.findings-emnlp.499.pdf
- Point of Contact: efstathia.christopoulou@huawei.com
### Dataset Summary
We use the English Wikipedia and leverage entity information from Wikidata to construct an entity-based Code Switching corpus.
To achieve this, we make use of wikilinks in Wikipedia, i.e. links from one page to another.
We use the English [Wikipedia dump](https://dumps.wikimedia.org/enwiki/latest/) (November 2021) and extract raw text with [WikiExtractor](https://github.com/attardi/wikiextractor) while keeping track of wikilinks.
Since we are interested in creating entity-level CS instances, we only keep sentences containing at least one wikilink.
Given an English sentence with wikilinks, we first map the entity in each wikilink to its corresponding Wikidata ID and
retrieve its available translations from Wikidata.
For each sentence, we check which languages have translations for all entities in that sentence, and consider those as candidates for code-switching.
We ensure all entities are code-switched to the same target language in a single sentence, avoiding noise from including too many languages.
To control the size of the corpus, we generate up to five code-switched sentences for each English sentence.
In particular, if fewer than five languages have translations available for all the entities in a sentence, we create code-switched instances with all of them.
Otherwise, we randomly select five target languages from the candidates.
If no candidate languages can be found, we do not code-switch the sentence; instead, we keep it as part of the English corpus.
Finally, we surround each entity with entity indicators (`<e>`, `</e>`).
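The selection and generation steps above can be summarised in a short sketch. This is an illustrative reconstruction rather than the released pipeline: the `entities` and `translations` inputs (the wikilink-to-Wikidata lookup) are assumed, and the entity indicators are written as language-specific tags, following the data instance shown further below.

```python
import random

def code_switch(sentence, entities, translations, max_langs=5):
    """Generate up to `max_langs` code-switched versions of one English sentence.

    `entities` maps each entity surface form in `sentence` to its Wikidata ID;
    `translations` maps a Wikidata ID to a {language: label} dict.
    Both inputs are assumed to come from the wikilink/Wikidata lookup step.
    """
    # A candidate target language must have translations for *all* entities.
    candidates = None
    for qid in entities.values():
        langs = set(translations.get(qid, {}))
        candidates = langs if candidates is None else candidates & langs

    if not candidates:
        # No common target language: keep the sentence as plain English.
        return []

    # Cap the number of code-switched copies per English sentence.
    chosen = random.sample(sorted(candidates), min(max_langs, len(candidates)))

    cs_sentences = []
    for lang in chosen:
        cs = sentence
        for surface, qid in entities.items():
            # Replace the English surface form with its target-language label,
            # wrapped in entity indicators.
            cs = cs.replace(surface, f"<{lang}>{translations[qid][lang]}</{lang}>")
        cs_sentences.append({"language": lang, "cs_sentence": cs})
    return cs_sentences
```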
### Supported Tasks and Leaderboards
The dataset was developed for intermediate pre-training of language models and can be used for any downstream task.
In the paper, its effectiveness is demonstrated on entity-centric tasks, such as NER.
### Languages
The dataset covers 93 languages in total, including English.
## Dataset Structure
### Data Statistics
| Statistic | Count |
|:------------------------------|------------:|
| Languages | 93 |
| English Sentences | 54,469,214 |
| English Entities | 104,593,076 |
| Average Sentence Length | 23.37 |
| Average Entities per Sentence | 2 |
| CS Sentences per EN Sentence | ≤ 5 |
| CS Sentences | 231,124,422 |
| CS Entities | 420,907,878 |
### Data Fields
Each instance contains 4 fields:
- `id`: Unique ID of each sentence
- `language`: The target language chosen for entity code-switching of the given sentence
- `en_sentence`: The original English sentence
- `cs_sentence`: The code-switched sentence
An example of what a data instance looks like:
```
{
'id': 19,
'en_sentence': 'The subs then enter a <en>coral reef</en> with many bright reflective colors.',
'cs_sentence': 'The subs then enter a <de>Korallenriff</de> with many bright reflective colors.',
'language': 'de'
}
```
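If you need to recover the code-switched entities or strip the markers, the tags can be parsed with a simple regular expression. A minimal sketch, assuming the tag format shown in the instance above:

```python
import re

# Matches one tagged entity: captures the language code and the entity span.
ENTITY_TAG = re.compile(r"<(\w{2,3})>(.*?)</\1>")

def extract_entities(cs_sentence):
    """Return (language, entity) pairs and the sentence with the markers removed."""
    pairs = ENTITY_TAG.findall(cs_sentence)
    plain = ENTITY_TAG.sub(lambda m: m.group(2), cs_sentence)
    return pairs, plain

pairs, plain = extract_entities(
    "The subs then enter a <de>Korallenriff</de> with many bright reflective colors."
)
# pairs == [('de', 'Korallenriff')]
# plain == 'The subs then enter a Korallenriff with many bright reflective colors.'
```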
### Data Splits
There is a single data split for each language. You can randomly select a few examples to serve as a validation set.
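For example, a held-out validation set can be created from the single split with the 🤗 `datasets` library. The repository path and configuration name below are placeholders; substitute the actual values for this dataset.

```python
from datasets import load_dataset

# Placeholder dataset path and (assumed per-language) configuration name.
dataset = load_dataset("huawei-noah/EntityCS", "de", split="train")

# Hold out 1% of the examples as a validation set.
splits = dataset.train_test_split(test_size=0.01, seed=42)
train_set, valid_set = splits["train"], splits["test"]
```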
### Limitations
An important limitation of the work is that the morphological inflection of an entity is not checked before code-switching.
This can lead to errors where the form of the CS entity does not agree with its surrounding context (e.g. plural).
Such cases should be rare, since we only switch entities; nonetheless, this should be improved in a later version of the corpus.
Secondly, the diversity of languages used to construct the EntityCS corpus is restricted to the overlap between the languages available in Wikidata and those covered by XLM-R pre-training.
This choice was made to allow a fair comparison between models; however, it is possible to extend the corpus with languages that XLM-R does not cover, following
the procedure presented in the paper.
### Citation
```bibtex
@inproceedings{whitehouse-etal-2022-entitycs,
title = "{E}ntity{CS}: Improving Zero-Shot Cross-lingual Transfer with Entity-Centric Code Switching",
author = "Whitehouse, Chenxi and
Christopoulou, Fenia and
Iacobacci, Ignacio",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2022",
month = dec,
year = "2022",
address = "Abu Dhabi, United Arab Emirates",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.findings-emnlp.499",
pages = "6698--6714"
}
```