mathiascreutz committed • Commit e110f3a • 1 Parent(s): d6a4efb

Minor modifications
README.md CHANGED
@@ -155,7 +155,9 @@ data = load_dataset("GEM/opusparcus", "de.100")
 data = load_dataset("GEM/opusparcus", "fr.90")
 ```
 
-
+Remark regarding the optimal choice of training set qualities:
+Previous work suggests that a larger and noisier set is better than a
+smaller and clean set. See Sjöblom et al. (2018). [Paraphrase Detection on Noisy Subtitles in Six Languages](http://noisy-text.github.io/2018/pdf/W-NUT20189.pdf). In Proceedings of the 2018 EMNLP Workshop W-NUT: The 4th Workshop on Noisy User-generated Text, and Vahtola et al. (2021). [Coping with Noisy Training Data Labels in Paraphrase Detection](https://aclanthology.org/2021.wnut-1.32/). In Proceedings of the 7th Workshop on Noisy User-generated Text.
 
 ### Data Instances
 
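Not part of the diff itself: a minimal sketch of how the quality remark added above can be explored, assuming the standard `datasets` API and the `<lang>.<quality>` config naming shown in the README excerpt. The config `fr.90` appears in the excerpt; `fr.95` is assumed here as an additional, cleaner cut.

```python
# Sketch only: compare training-set sizes across quality cuts of the French data.
# The config pattern "<lang>.<quality>" follows the README examples ("de.100", "fr.90");
# "fr.95" is assumed to exist as a smaller, cleaner cut.
from datasets import load_dataset

for config in ("fr.95", "fr.90"):
    data = load_dataset("GEM/opusparcus", config)
    train = data["train"]
    print(f"{config}: {len(train)} training pairs, columns: {train.column_names}")
```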
@@ -246,9 +248,8 @@ up in the datasets.
 The training sets were not annotated manually. This is indicated by
 the value 0.0 in the `annot_score` field.
 
-For an assessment of of inter-annotator agreement, see
-paraphrases using a new web
+For an assessment of inter-annotator agreement, see Aulamo et
+al. (2019). [Annotation of subtitle paraphrases using a new web
 tool.](http://ceur-ws.org/Vol-2364/3_paper.pdf) In Proceedings of the
 Digital Humanities in the Nordic Countries 4th Conference,
 Copenhagen, Denmark.
@@ -312,16 +313,16 @@ approximately 1000 sentence pairs that have been verified to be
 acceptable paraphrases by two independent annotators.
 
 The `annot_score` field reflects the judgments made by the annotators.
-If
+If the annotators fully agreed on the category (4.0: dark green, 3.0:
+light green, 2.0: yellow, 1.0: red), the value of `annot_score` is
+4.0, 3.0, 2.0 or 1.0. If the annotators chose adjacent categories,
+the value in this field will be 3.5, 2.5 or 1.5. For instance, a
+value of 2.5 means that one annotator gave a score of 3 ("mostly
+good"), indicating a possible paraphrase pair, whereas the other
+annotator scored this as a 2 ("mostly bad"), that is, unlikely to be a
+paraphrase pair. If the annotators disagreed by more than one
+category, the sentence pair was discarded and won't show up in the
+datasets.
 
 #### Who are the annotators?
 
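Also not part of the diff: a hedged sketch that makes the `annot_score` semantics added above concrete by keeping only pairs that both annotators rated as paraphrases (scores of 3.0 and higher). It assumes the manually annotated pairs are exposed in a standard `validation` split and that `annot_score` is stored as a float, as the 0.0/2.5/3.5 values suggest.

```python
# Sketch only: filter the annotated pairs using the annot_score scheme described
# above (4.0/3.0/2.0/1.0 = full agreement, 3.5/2.5/1.5 = adjacent categories,
# 0.0 = unannotated training data).
from datasets import load_dataset

data = load_dataset("GEM/opusparcus", "fr.90")
validation = data["validation"]  # assumed split name for the annotated pairs

# Keep pairs judged "good" or "mostly good" by both annotators (score >= 3.0).
clear_paraphrases = validation.filter(lambda ex: ex["annot_score"] >= 3.0)
print(f"{len(clear_paraphrases)} of {len(validation)} pairs have annot_score >= 3.0")
```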