dfki-nlp committed
Commit 9293f1a
1 Parent(s): f10b571

Update README.md

Files changed (1)
  1. README.md +18 -46
README.md CHANGED
@@ -32,43 +32,6 @@ task_categories: # Full list at https://github.com/huggingface/hub-docs/blob/ma
  task_ids:
  - multi-class-classification
  paperswithcode_id: multitacred
- configs: # Optional for datasets with multiple configurations like glue.
- - original-ar
- - original-de
- - original-es
- - original-fi
- - original-fr
- - original-hi
- - original-hu
- - original-ja
- - original-pl
- - original-ru
- - original-tr
- - original-zh
- - revisited-ar
- - revisited-de
- - revisited-es
- - revisited-fi
- - revisited-fr
- - revisited-hi
- - revisited-hu
- - revisited-ja
- - revisited-pl
- - revisited-ru
- - revisited-tr
- - revisited-zh
- - retacred-ar
- - retacred-de
- - retacred-es
- - retacred-fi
- - retacred-fr
- - retacred-hi
- - retacred-hu
- - retacred-ja
- - retacred-pl
- - retacred-ru
- - retacred-tr
- - retacred-zh
  dataset_info:
  - config_name: original-ar
    features:
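
The names dropped from the YAML `configs:` block above are still the configuration names used to select a language/variant when loading the dataset with the `datasets` library. Below is a minimal loading sketch; the Hub id `DFKI-SLT/multitacred` and the `data_dir` argument pointing at a local copy of the LDC-licensed source data are assumptions, not stated in this diff.

```python
# Minimal sketch, not official usage: the Hub id and data_dir layout are assumptions.
from datasets import load_dataset

dataset = load_dataset(
    "DFKI-SLT/multitacred",              # assumed Hub dataset id
    name="original-de",                  # any config name from the list above
    data_dir="path/to/ldc/multitacred",  # hypothetical path to the LDC-licensed download
)
print(dataset["train"][0])
```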
@@ -4865,12 +4828,16 @@ subject and object entity markup still intact, these were discarded.

  ## Dataset Creation
  ### Curation Rationale
- [More Information Needed]
+ To enable more research on multilingual Relation Extraction, we generate translations of the TAC relation extraction
+ dataset using DeepL and Google Translate.
  ### Source Data
  #### Initial Data Collection and Normalization
- [More Information Needed]
+ The instances of this dataset are sentences from the
+ [original TACRED dataset](https://nlp.stanford.edu/projects/tacred/), which in turn
+ are sampled from the [corpus](https://catalog.ldc.upenn.edu/LDC2018T03) used in the yearly
+ [TAC Knowledge Base Population (TAC KBP) challenges](https://tac.nist.gov/2017/KBP/index.html).
  #### Who are the source language producers?
- [More Information Needed]
+ Newswire and web texts collected for the [TAC Knowledge Base Population (TAC KBP) challenges](https://tac.nist.gov/2017/KBP/index.html).
  ### Annotations
  #### Annotation process
  See the Stanford paper, the TACRED Revisited paper, and the Re-TACRED paper, plus their appendices, for
@@ -4879,19 +4846,24 @@ details on the original annotation process. The translated versions do not chang
  Translations were tokenized with language-specific Spacy models (Spacy 3.1, 'core_news/web_sm' models)
  or Trankit (Trankit 1.1.0) when there was no Spacy model for a given language (Hungarian, Turkish, Arabic, Hindi).
  #### Who are the annotators?
- [More Information Needed]
+ The original TACRED dataset was annotated by crowd workers, see the [TACRED paper](https://nlp.stanford.edu/pubs/zhang2017tacred.pdf).
  ### Personal and Sensitive Information
- [More Information Needed]
+ The [authors](https://nlp.stanford.edu/pubs/zhang2017tacred.pdf) of the original TACRED dataset
+ have not stated measures that prevent collecting sensitive or offensive text. Therefore, we do
+ not rule out the possible risk of sensitive/offensive content in the translated data.
  ## Considerations for Using the Data
  ### Social Impact of Dataset
- [More Information Needed]
+ not applicable
  ### Discussion of Biases
- [More Information Needed]
+ The dataset is drawn from web and newswire text, and thus reflects any biases of these original
+ texts, as well as biases introduced by the MT models.
  ### Other Known Limitations
- [More Information Needed]
+ not applicable
  ## Additional Information
  ### Dataset Curators
- [More Information Needed]
+ The dataset was created by members of the
+ [DFKI SLT team: Leonhard Hennig, Philippe Thomas, Sebastian Möller, Gabriel Kressin](https://www.dfki.de/en/web/research/research-departments/speech-and-language-technology/speech-and-language-technology-staff-members)
+
  ### Licensing Information
  To respect the copyright of the underlying TACRED dataset, MultiTACRED is released via the
  Linguistic Data Consortium ([LDC License](https://catalog.ldc.upenn.edu/license/ldc-non-members-agreement.pdf)).
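
The tokenization described in the annotation-process section above (language-specific spaCy `core_news/web_sm` models where available, Trankit for Hungarian, Turkish, Arabic, and Hindi) can be approximated as follows. This is an illustrative sketch, not the authors' original preprocessing code; the specific model and language names are examples.

```python
# Rough sketch of the tokenization setup described in the card above; not the authors' code.
import spacy                  # spaCy 3.x with a language-specific *_core_news_sm model installed
from trankit import Pipeline  # Trankit for languages without a spaCy model (e.g. Hungarian)

nlp_de = spacy.load("de_core_news_sm")  # example spaCy pipeline (German)
trankit_hu = Pipeline("hungarian")      # example Trankit pipeline (Hungarian)

def spacy_tokens(text: str) -> list[str]:
    # spaCy: run the pipeline and collect the surface token strings.
    return [tok.text for tok in nlp_de(text)]

def trankit_tokens(text: str) -> list[str]:
    # Trankit: tokenize() returns sentences, each holding a list of token dicts.
    out = trankit_hu.tokenize(text)
    return [tok["text"] for sent in out["sentences"] for tok in sent["tokens"]]
```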