dfki-nlp committed
Commit 9c9990b
1 Parent(s): 1d506d8

Update README.md

Files changed (1)
  1. README.md +3 -1
README.md CHANGED
@@ -120,7 +120,7 @@ The data fields are the same among all splits.
120
 
121
 
122
  - `id`: the instance id of this sentence, a `string` feature.
123
- - `token`: the list of tokens of this sentence, obtained with the StanfordNLP toolkit, a `list` of `string` features.
124
  - `relation`: the relation label of this instance, a `string` classification label.
125
  - `subj_start`: the 0-based index of the start token of the relation subject mention, an `ìnt` feature.
126
  - `subj_end`: the 0-based index of the end token of the relation subject mention, exclusive, an `ìnt` feature.
@@ -164,6 +164,8 @@ See the Stanford paper and the Tacred Revisited paper, plus their appendices.
164
  To ensure that models trained on TACRED are not biased towards predicting false positives on real-world text,
165
  all sampled sentences where no relation was found between the mention pairs were fully annotated to be negative examples. As a result, 79.5% of the examples
166
  are labeled as no_relation.
 
 
167
  #### Who are the annotators?
168
  [More Information Needed]
169
  ### Personal and Sensitive Information
 
120
 
121
 
122
  - `id`: the instance id of this sentence, a `string` feature.
123
+ - `token`: the list of tokens of this sentence, a `list` of `string` features.
124
  - `relation`: the relation label of this instance, a `string` classification label.
125
  - `subj_start`: the 0-based index of the start token of the relation subject mention, an `ìnt` feature.
126
  - `subj_end`: the 0-based index of the end token of the relation subject mention, exclusive, an `ìnt` feature.
 
164
  To ensure that models trained on TACRED are not biased towards predicting false positives on real-world text,
165
  all sampled sentences where no relation was found between the mention pairs were fully annotated to be negative examples. As a result, 79.5% of the examples
166
  are labeled as no_relation.
167
+
168
+ Tokenization of the English data was done with Stanford CoreNLP by the authors of the original dataset. The translated versions were tokenized with language-specific Spacy models (Spacy 3.1) or Trankit when there was no Spacy model for a given language (Hungarian, Turkish, Arabic, Hindi).
169
  #### Who are the annotators?
170
  [More Information Needed]
171
  ### Personal and Sensitive Information
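
The field descriptions in the diff above (0-based `subj_start`, exclusive `subj_end`) map directly onto Python slicing. A minimal sketch with a made-up instance — the id, sentence, and label below are illustrative, not taken from the dataset:

```python
# Hypothetical TACRED-style instance, shaped like the fields documented above.
# Because `subj_end` is exclusive, the subject mention is exactly
# token[subj_start:subj_end] — no off-by-one adjustment needed.
instance = {
    "id": "example-0001",  # made-up id
    "token": ["Douglas", "Flint", "will", "become", "chairman", "."],
    "relation": "per:title",
    "subj_start": 0,
    "subj_end": 2,  # exclusive end index
}

subject = instance["token"][instance["subj_start"]:instance["subj_end"]]
print(subject)  # the subject mention tokens
```
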
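
The added tokenization note can be sketched as follows. This is a hypothetical reconstruction, not the script the dataset authors used: it assumes spaCy is installed and uses a blank pipeline (which provides only the rule-based tokenizer) rather than the full language-specific spaCy 3.1 models the note mentions, so that no model download is needed:

```python
import spacy


def tokenize(text: str, lang: str) -> list[str]:
    # spacy.blank(lang) builds a pipeline with the language's rule-based
    # tokenizer but no trained components. For languages without spaCy
    # support (Hungarian, Turkish, Arabic, Hindi), the note above says
    # Trankit was used instead; that path is not sketched here.
    nlp = spacy.blank(lang)
    return [token.text for token in nlp(text)]


# e.g. German, one of the translated languages
print(tokenize("Das ist ein Beispiel.", "de"))
```
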