Update README.md

README.md CHANGED

@@ -1,57 +1,35 @@
 ---
 license: apache-2.0
+task_categories:
+- sentence-similarity
+- text-classification
+language:
+- en
+tags:
+- sts
 ---
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-Wiebe. Semeval-2016 Task 1: Semantic Textual Similarity,
-Monolingual and Cross-Lingual Evaluation. Proceedings of SemEval
-2016.
-
-Eneko Agirre, Daniel Cer, Mona Diab, Iñigo Lopez-Gazpio, Lucia
-Specia. Semeval-2017 Task 1: Semantic Textual Similarity
-Multilingual and Crosslingual Focused Evaluation. Proceedings of
-SemEval 2017.
-
-Paul Clough and Mark Stevenson. 2011. Developing a corpus of
-plagiarised short answers. Language Resources and Evaluation,
-45(1):5-24
-http://ir.shef.ac.uk/cloughie/resources/plagiarism_corpus.html
-
-Weiwei Guo, Hao Li, Heng Ji and Mona Diab. 2013. Linking Tweets to
-News: A Framework to Enrich Online Short Text Data in Social Media.
-In Proceedings of the 51st Annual Meeting of the Association for
-Computational Linguistics
-
-Eduard Hovy, Mitchell Marcus, Martha Palmer, Lance Ramshaw, and Ralph
-Weischedel. 2006. Ontonotes: The 90% solution. In Proceedings of
-the Human Language Technology Conference of the North American
-Chapter of the ACL.
-
-Lucia Specia. 2011. Exploiting Objective Annotations for Measuring
-Translation Post-editing Effort. In Proceedings of the 15th
-Conference of the European Association for Machine Translation
-(EAMT 2011).
-http://staffwww.dcs.shef.ac.uk/people/L.Specia/resources
+https://ixa2.si.ehu.eus/stswiki/index.php/STSbenchmark
+
+The companion datasets to the STS Benchmark comprise the rest of the English datasets used in the STS tasks organized by us in the context of SemEval between 2012 and 2017.
+We collated two datasets: one with pairs of sentences related to machine translation evaluation, and another with the rest of the datasets, which can be used for domain adaptation studies.
+
+
+```bib
+@inproceedings{cer-etal-2017-semeval,
+  title = "{S}em{E}val-2017 Task 1: Semantic Textual Similarity Multilingual and Crosslingual Focused Evaluation",
+  author = "Cer, Daniel and
+    Diab, Mona and
+    Agirre, Eneko and
+    Lopez-Gazpio, I{\~n}igo and
+    Specia, Lucia",
+  booktitle = "Proceedings of the 11th International Workshop on Semantic Evaluation ({S}em{E}val-2017)",
+  month = aug,
+  year = "2017",
+  address = "Vancouver, Canada",
+  publisher = "Association for Computational Linguistics",
+  url = "https://aclanthology.org/S17-2001",
+  doi = "10.18653/v1/S17-2001",
+  pages = "1--14",
+  abstract = "Semantic Textual Similarity (STS) measures the meaning similarity of sentences. Applications include machine translation (MT), summarization, generation, question answering (QA), short answer grading, semantic search, dialog and conversational systems. The STS shared task is a venue for assessing the current state-of-the-art. The 2017 task focuses on multilingual and cross-lingual pairs with one sub-track exploring MT quality estimation (MTQE) data. The task obtained strong participation from 31 teams, with 17 participating in \textit{all language tracks}. We summarize performance and review a selection of well performing methods. Analysis highlights common errors, providing insight into the limitations of existing models. To support ongoing work on semantic representations, the \textit{STS Benchmark} is introduced as a new shared training and evaluation set carefully selected from the corpus of English STS shared task data (2012-2017).",
+}
 ```
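
To make the card's new description concrete, here is a minimal usage sketch with the Hugging Face `datasets` library. The repository id (`user/sts-companion`), the configuration names (`mt`, `other`), and the split name are placeholders assumed for illustration; the card does not specify them, so adjust to the actual files in this repository.

```python
# Minimal usage sketch. The repo id "user/sts-companion", the config names
# "mt"/"other", and the split name are placeholders (assumptions), not
# identifiers documented by this dataset card.
from datasets import load_dataset

# One subset with machine-translation evaluation pairs, and one with the
# remaining English STS 2012-2017 data (e.g. for domain adaptation studies).
mt_pairs = load_dataset("user/sts-companion", "mt", split="train")
other_pairs = load_dataset("user/sts-companion", "other", split="train")

for example in mt_pairs.select(range(3)):
    # STS-style examples are typically a sentence pair plus a gold
    # similarity score on a 0-5 scale.
    print(example)
```

STS gold scores are typically on a 0-5 scale; rescaling them to [0, 1] is a common preprocessing step when training sentence-similarity models on this kind of data.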