---
annotations_creators:
- crowdsourced
- expert-generated
language:
- ar
- de
- en
- es
- fi
- fr
- hi
- hu
- ja
- pl
- ru
- tr
- zh
language_creators:
- found
license:
- other
multilinguality:
- translation
pretty_name: The Multilingual TAC Relation Extraction Dataset
size_categories:
- 100K<n<1M
source_datasets:
- extended|other
tags:
- relation extraction
task_categories:
- text-classification
task_ids:
- multi-class-classification
---
# Dataset Card for "multilingual_tacred"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://nlp.stanford.edu/projects/tacred](https://nlp.stanford.edu/projects/tacred)
- **Paper:** [Position-aware Attention and Supervised Data Improve Slot Filling](https://aclanthology.org/D17-1004/)
- **Point of Contact:** See [https://nlp.stanford.edu/projects/tacred/](https://nlp.stanford.edu/projects/tacred/)
- **Size of downloaded dataset files:** 62.3 MB
- **Size of the generated dataset:** 40.9 MB
- **Total amount of disk used:** 103.2 MB
### Dataset Summary
NOTE: This dataset reader supports a reduced version of the original TACRED JSON format with the following change:
- Removed fields: stanford_pos, stanford_ner, stanford_head, stanford_deprel, docid
These fields were dropped because they were not required or not available for the additional languages supported here. The reader expects a configuration name specifying the variant (original or revised) and the language (as a two-letter ISO code); the default config is 'original-en' (see the loading sketch at the end of this section).
You can find the TACRED dataset reader for the original version of the
dataset [here](https://huggingface.co/datasets/DFKI-SLT/tacred).
The TAC Relation Extraction Dataset (TACRED) is a large-scale relation extraction dataset with 106,264 examples built over newswire and web text from the corpus used in the yearly TAC Knowledge Base Population (TAC KBP) challenges. Examples in TACRED cover 41 relation types as used in the TAC KBP challenges (e.g., per:schools_attended
and org:members), or are labeled as no_relation if no defined relation holds. The examples were created by combining available human annotations from the TAC
KBP challenges with crowdsourcing. Please see [Stanford's EMNLP paper](https://nlp.stanford.edu/pubs/zhang2017tacred.pdf) or their [EMNLP slides](https://nlp.stanford.edu/projects/tacred/files/position-emnlp2017.pdf) for full details.
Note: There is currently a [label-corrected version](https://github.com/DFKI-NLP/tacrev) of the TACRED dataset, which you should consider using instead of
the original version released in 2017. For more details on this new version, see the [TACRED Revisited paper](https://aclanthology.org/2020.acl-main.142/)
published at ACL 2020.
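The sketch below shows how such a configuration could be loaded. It is only an illustration: the Hub repository ID and the need for a local `data_dir` (the LDC-licensed TACRED files) are assumptions carried over from the original DFKI-SLT/tacred reader; only the `original-en` default config name is stated in this card.
```python
from datasets import load_dataset

# Minimal loading sketch (assumptions: Hub ID and data_dir requirement,
# mirroring the original DFKI-SLT/tacred reader).
ds = load_dataset(
    "DFKI-SLT/multilingual_tacred",
    name="original-en",          # "<variant>-<two-letter ISO code>"
    data_dir="path/to/tacred",   # hypothetical path to the licensed JSON files
)
print(ds["train"][0]["relation"])
```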
### Supported Tasks and Leaderboards
- **Tasks:** Relation Classification
- **Leaderboards:** [https://paperswithcode.com/sota/relation-extraction-on-tacred](https://paperswithcode.com/sota/relation-extraction-on-tacred)
### Languages
The languages in the dataset are Arabic, German, English, Spanish, Finnish, French, Hindi, Hungarian, Japanese, Polish, Russian, Turkish, and Chinese.
All languages except English were machine-translated using either DeepL's or Google's translation APIs.
## Dataset Structure
### Data Instances
- **Size of downloaded dataset files:** 62.3 MB
- **Size of the generated dataset:** 40.9 MB
- **Total amount of disk used:** 103.2 MB
An example of 'train' looks as follows:
```json
{
"id": "61b3a5c8c9a882dcfcd2",
"relation": "org:founded_by",
"token": ["Tom", "Thabane", "resigned", "in", "October", "last", "year", "to", "form", "the", "All", "Basotho", "Convention", "-LRB-", "ABC", "-RRB-", ",", "crossing", "the", "floor", "with", "17", "members", "of", "parliament", ",", "causing", "constitutional", "monarch", "King", "Letsie", "III", "to", "dissolve", "parliament", "and", "call", "the", "snap", "election", "."],
"subj_start": 10,
"subj_end": 13,
"obj_start": 0,
"obj_end": 2,
"subj_type": "ORGANIZATION",
"obj_type": "PERSON"
}
```
### Data Fields
The data fields are the same among all splits.
- `id`: the instance id of this sentence, a `string` feature.
- `token`: the list of tokens of this sentence, obtained with the StanfordNLP toolkit, a `list` of `string` features.
- `relation`: the relation label of this instance, a `string` classification label.
- `subj_start`: the 0-based index of the start token of the relation subject mention, an `int` feature.
- `subj_end`: the 0-based index of the end token of the relation subject mention, exclusive, an `int` feature.
- `subj_type`: the NER type of the subject mention, among 23 fine-grained types used in the [Stanford NER system](https://stanfordnlp.github.io/CoreNLP/ner.html), a `string` feature.
- `obj_start`: the 0-based index of the start token of the relation object mention, an `int` feature.
- `obj_end`: the 0-based index of the end token of the relation object mention, exclusive, an `int` feature.
- `obj_type`: the NER type of the object mention, among 23 fine-grained types used in the [Stanford NER system](https://stanfordnlp.github.io/CoreNLP/ner.html), a `string` feature.
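Because the end indices are exclusive, they map directly to Python slices. A minimal sketch (not part of the dataset reader) for recovering the mention strings of an instance:
```python
def mention_spans(example):
    """Return the subject and object surface strings of an instance.

    subj_end / obj_end are exclusive, so Python slicing applies directly.
    """
    tokens = example["token"]
    subj = " ".join(tokens[example["subj_start"]:example["subj_end"]])
    obj = " ".join(tokens[example["obj_start"]:example["obj_end"]])
    return subj, obj

# For the 'train' instance shown above this yields
# ("All Basotho Convention", "Tom Thabane") with relation "org:founded_by".
```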
### Data Splits
To minimize dataset bias, TACRED is stratified across the years in which the TAC KBP challenge was run.
Statistics for the splits differ across languages because not all instances could be translated with the
subject and object entity markup still intact; such instances were discarded.
| Language (Translation Engine: D = DeepL, G = Google) | Train | Dev | Test |
| ----- | ------ | ----- | ---- |
| en (original) | 68,124 (TAC KBP 2009-2012) | 22,631 (TAC KBP 2013) | 15,509 (TAC KBP 2014) |
| ar (G) | 67,736 | 22,502 | 15,425 |
| de (D) | 67,205 | 22,343 | 15,282 |
| es (D) | 65,247 | 21,697 | 14,908 |
| fi (D) | 66,751 | 22,268 | 15,083 |
| fr (D) | 66,856 | 22,248 | 15,237 |
| hi (G) | 67,751 | 22,511 | 15,440 |
| hu (G) | 67,766 | 22,519 | 15,436 |
| ja (D) | 61,571 | 20,290 | 13,701 |
| pl (G) | 68,124 | 22,631 | 15,509 |
| ru (D) | 66,413 | 21,998 | 14,995 |
| tr (G) | 67,652 | 22,510 | 15,429 |
| zh (D) | 65,211 | 21,490 | 14,694 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
See the Stanford paper and the TACRED Revisited paper, including their appendices.
To ensure that models trained on TACRED are not biased towards predicting false positives on real-world text,
all sampled sentences where no relation was found between the mention pairs were fully annotated to be negative examples. As a result, 79.5% of the examples
are labeled as no_relation.
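This class balance can be checked on a loaded split; the sketch below assumes `ds` is the `DatasetDict` returned by the loading example above.
```python
from collections import Counter

# Sketch: share of negative (no_relation) examples in the train split.
labels = Counter(example["relation"] for example in ds["train"])
negative_share = labels["no_relation"] / sum(labels.values())
print(f"no_relation share: {negative_share:.1%}")  # roughly 79.5% per the paper
```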
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
To respect the copyright of the underlying TAC KBP corpus, TACRED is released via the
Linguistic Data Consortium ([LDC License](https://catalog.ldc.upenn.edu/license/ldc-non-members-agreement.pdf)).
You can download TACRED from the [LDC TACRED webpage](https://catalog.ldc.upenn.edu/LDC2018T24).
If you are an LDC member, access is free; otherwise, an access fee of $25 applies.
### Citation Information
The original dataset:
```
@inproceedings{zhang2017tacred,
author = {Zhang, Yuhao and Zhong, Victor and Chen, Danqi and Angeli, Gabor and Manning, Christopher D.},
booktitle = {Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP 2017)},
title = {Position-aware Attention and Supervised Data Improve Slot Filling},
url = {https://nlp.stanford.edu/pubs/zhang2017tacred.pdf},
pages = {35--45},
year = {2017}
}
```
For the revised version, please also cite:
```
@inproceedings{alt-etal-2020-tacred,
title = "{TACRED} Revisited: A Thorough Evaluation of the {TACRED} Relation Extraction Task",
author = "Alt, Christoph and
Gabryszak, Aleksandra and
Hennig, Leonhard",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.acl-main.142",
doi = "10.18653/v1/2020.acl-main.142",
pages = "1558--1569",
}
```
### Contributions
Thanks to [@leonhardhennig](https://github.com/leonhardhennig) for adding this dataset.