---
annotations_creators:
  - crowdsourced
  - expert-generated
language:
  - ar
  - de
  - en
  - es
  - fi
  - fr
  - hi
  - hu
  - ja
  - pl
  - ru
  - tr
  - zh
language_creators:
  - found
license:
  - other
multilinguality:
  - translation
pretty_name: The Multilingual TAC Relation Extraction Dataset
size_categories:
  - 100K<n<1M
source_datasets:
  - extended|other
tags:
  - relation extraction
task_categories:
  - text-classification
task_ids:
  - multi-class-classification
---

# Dataset Card for "multilingual_tacred"

## Table of Contents

- [Dataset Description](#dataset-description)
- [Dataset Structure](#dataset-structure)
- [Dataset Creation](#dataset-creation)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Additional Information](#additional-information)

## Dataset Description

### Dataset Summary

TODO: initial dataset reader for MultiTACRED, needs to be tested / finalized with the dataset. Should use the TACRED dataset reader (https://huggingface.co/datasets/DFKI-SLT/tacred) code but supply other download URLs per language, with language = config item, e.g. 'en-original', 'de-revised', 'ar-retacred', etc.

NOTE: This dataset reader supports a reduced version of the original TACRED JSON format with the following changes:

- Removed fields: `stanford_pos`, `stanford_ner`, `stanford_head`, `stanford_deprel`, `docid`

The motivation for this is to support additional languages, for which these fields were not required or available. The reader expects a language-specific configuration that specifies the variant (original or revised) and the language (as a two-letter ISO code). The default config is 'original-en'.

You can find the TACRED dataset reader for the original version of the dataset [here](https://huggingface.co/datasets/DFKI-SLT/tacred).
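A minimal usage sketch with the Hugging Face `datasets` library is shown below. Since the reader above is still marked as a TODO, the repository id, the exact config name, and the need to point `data_dir` at a local copy of the LDC-licensed source files are assumptions modeled on the original TACRED reader.

```python
from datasets import load_dataset

# Sketch only: the repository id is hypothetical and the config name follows the
# '<variant>-<language>' scheme described above; adjust both to the final reader.
# Like the original TACRED reader, the LDC-licensed source data is assumed to be
# provided locally via data_dir rather than downloaded automatically.
multitacred = load_dataset(
    "DFKI-SLT/multitacred",          # hypothetical repository id
    name="original-en",              # the stated default config
    data_dir="path/to/tacred/json",  # local copy of the LDC data
)

print(multitacred["train"][0]["relation"])
```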

The TAC Relation Extraction Dataset (TACRED) is a large-scale relation extraction dataset with 106,264 examples built over newswire and web text from the corpus used in the yearly TAC Knowledge Base Population (TAC KBP) challenges. Examples in TACRED cover 41 relation types as used in the TAC KBP challenges (e.g., `per:schools_attended` and `org:members`) or are labeled as `no_relation` if no defined relation holds. These examples are created by combining available human annotations from the TAC KBP challenges and crowdsourcing. Please see Stanford's EMNLP paper or their EMNLP slides for full details.

Note: There is currently a label-corrected version of the TACRED dataset, which you should consider using instead of the original version released in 2017. For more details on this new version, see the TACRED Revisited paper published at ACL 2020.

### Supported Tasks and Leaderboards

### Languages

The languages in the dataset are Arabic, German, English, Spanish, Finnish, French, Hindi, Hungarian, Japanese, Polish, Russian, Turkish, and Chinese. All languages except English are machine-translated with either the DeepL or the Google translation API.

## Dataset Structure

### Data Instances

- Size of downloaded dataset files: 62.3 MB
- Size of the generated dataset: 40.9 MB
- Total amount of disk used: 103.2 MB

An example of 'train' looks as follows:

```json
{
  "id": "61b3a5c8c9a882dcfcd2",
  "relation": "org:founded_by",
  "token": ["Tom", "Thabane", "resigned", "in", "October", "last", "year", "to", "form", "the", "All", "Basotho", "Convention", "-LRB-", "ABC", "-RRB-", ",", "crossing", "the", "floor", "with", "17", "members", "of", "parliament", ",", "causing", "constitutional", "monarch", "King", "Letsie", "III", "to", "dissolve", "parliament", "and", "call", "the", "snap", "election", "."],
  "subj_start": 10,
  "subj_end": 13,
  "obj_start": 0,
  "obj_end": 2,
  "subj_type": "ORGANIZATION",
  "obj_type": "PERSON"
}
```

### Data Fields

The data fields are the same among all splits.

- `id`: the instance id of this sentence, a string feature.
- `token`: the list of tokens of this sentence, a list of string features.
- `relation`: the relation label of this instance, a string classification label.
- `subj_start`: the 0-based index of the start token of the relation subject mention, an int feature.
- `subj_end`: the 0-based index of the end token of the relation subject mention, exclusive, an int feature.
- `subj_type`: the NER type of the subject mention, among 23 fine-grained types used in the Stanford NER system, a string feature.
- `obj_start`: the 0-based index of the start token of the relation object mention, an int feature.
- `obj_end`: the 0-based index of the end token of the relation object mention, exclusive, an int feature.
- `obj_type`: the NER type of the object mention, among 23 fine-grained types used in the Stanford NER system, a string feature.
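Because the end indices are exclusive, the subject and object mention strings can be recovered from `token` with plain Python slicing. A minimal sketch using a truncated copy of the train example shown above:

```python
# Truncated copy of the example instance from the "Data Instances" section.
example = {
    "token": ["Tom", "Thabane", "resigned", "in", "October", "last", "year", "to",
              "form", "the", "All", "Basotho", "Convention", "-LRB-", "ABC", "-RRB-"],
    "relation": "org:founded_by",
    "subj_start": 10, "subj_end": 13, "subj_type": "ORGANIZATION",
    "obj_start": 0, "obj_end": 2, "obj_type": "PERSON",
}

# End indices are exclusive, so a slice yields exactly the mention tokens.
subj = " ".join(example["token"][example["subj_start"]:example["subj_end"]])
obj = " ".join(example["token"][example["obj_start"]:example["obj_end"]])
print(f"{subj} --{example['relation']}--> {obj}")
# All Basotho Convention --org:founded_by--> Tom Thabane
```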

### Data Splits

To minimize dataset bias, TACRED is stratified across the years in which the TAC KBP challenge was run. Per-language statistics for the splits differ because not all instances could be translated with the subject and object entity markup still intact; such instances were discarded.

| Language (Translation Engine: D = DeepL, G = Google) | Train | Dev | Test |
| --- | --- | --- | --- |
| English (en) | 68,124 (TAC KBP 2009-2012) | 22,631 (TAC KBP 2013) | 15,509 (TAC KBP 2014) |
| ar (G) | 67,736 | 22,502 | 15,425 |
| de (D) | 67,205 | 22,343 | 15,282 |
| es (D) | 65,247 | 21,697 | 14,908 |
| fi (D) | 66,751 | 22,268 | 15,083 |
| fr (D) | 66,856 | 22,248 | 15,237 |
| hi (G) | 67,751 | 22,511 | 15,440 |
| hu (G) | 67,766 | 22,519 | 15,436 |
| ja (D) | 61,571 | 20,290 | 13,701 |
| pl (G) | 68,124 | 22,631 | 15,509 |
| ru (D) | 66,413 | 21,998 | 14,995 |
| tr (G) | 67,652 | 22,510 | 15,429 |
| zh (D) | 65,211 | 21,490 | 14,694 |

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

See the Stanford paper and the TACRED Revisited paper, plus their appendices.

To ensure that models trained on TACRED are not biased towards predicting false positives on real-world text, all sampled sentences where no relation was found between the mention pairs were fully annotated to be negative examples. As a result, 79.5% of the examples are labeled as no_relation.

Tokenization of the English data was done with Stanford CoreNLP by the authors of the original dataset. The translated versions were tokenized with language-specific spaCy models (spaCy 3.1), or with Trankit where no spaCy model was available for a given language (Hungarian, Turkish, Arabic, Hindi).
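For illustration, a tokenization step along those lines might look like the sketch below. This is not the authors' exact pipeline; the model name is simply the standard small German spaCy model and must be installed separately.

```python
import spacy

# Illustrative sketch of language-specific tokenization with spaCy.
# Requires: python -m spacy download de_core_news_sm
nlp = spacy.load("de_core_news_sm")
doc = nlp("Tom Thabane trat im Oktober letzten Jahres zurück.")
print([token.text for token in doc])
```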

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

To respect the copyright of the underlying TAC KBP corpus, TACRED is released via the Linguistic Data Consortium (LDC license). You can download TACRED from the LDC TACRED webpage. If you are an LDC member, access is free; otherwise, an access fee of $25 applies.

### Citation Information

The original dataset:

```bibtex
@inproceedings{zhang2017tacred,
  author = {Zhang, Yuhao and Zhong, Victor and Chen, Danqi and Angeli, Gabor and Manning, Christopher D.},
  booktitle = {Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP 2017)},
  title = {Position-aware Attention and Supervised Data Improve Slot Filling},
  url = {https://nlp.stanford.edu/pubs/zhang2017tacred.pdf},
  pages = {35--45},
  year = {2017}
}
```

For the revised version, please also cite:

```bibtex
@inproceedings{alt-etal-2020-tacred,
    title = "{TACRED} Revisited: A Thorough Evaluation of the {TACRED} Relation Extraction Task",
    author = "Alt, Christoph  and
      Gabryszak, Aleksandra  and
      Hennig, Leonhard",
    booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
    month = jul,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/2020.acl-main.142",
    doi = "10.18653/v1/2020.acl-main.142",
    pages = "1558--1569",
}
```

### Contributions

Thanks to @leonhardhennig for adding this dataset.