|
--- |
|
license: cc-by-nc-sa-4.0 |
|
task_categories: |
|
- text-classification |
|
task_ids: |
|
- natural-language-inference |
|
- multi-input-text-classification |
|
language: |
|
- el |
|
size_categories: |
|
- 1K<n<10K |
|
--- |
|
|
|
# Dataset Card for the Machine-Translated Modern Greek SICK Dataset
|
|
|
|
|
## Dataset Description |
|
|
|
- **Homepage:** |
|
- **Repository:** |
|
- **Paper:** |
|
- **Leaderboard:** |
|
- **Point of Contact:** |
|
|
|
### Dataset Summary |
|
|
|
This repository contains a machine-translated Modern Greek version of the [SICK](https://huggingface.co/datasets/sick) (Sentences Involving Compositional Knowledge) dataset. The goal is to predict textual entailment (does sentence A imply, contradict, or remain neutral with respect to sentence B), a three-way sentence-pair classification task. Apart from machine-translating the sentence pairs, all other information (pair ID, labels, source dataset of each sentence, train/dev/test partition) has been kept intact from the original English dataset.
|
|
|
For convenience, the dataset is formatted in the same manner (TSV files) as the widely used [XNLI](https://huggingface.co/datasets/xnli) dataset.
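Since the data ships as XNLI-style TSV files, it can be loaded either directly from the Hub or with the generic CSV loader of the 🤗 `datasets` library. The sketch below is illustrative only: the repository ID and the file names are placeholders, not necessarily the actual names used in this repository.

```python
# Minimal loading sketch. The repository ID and file names below are
# placeholders; replace them with the actual ones from this repository.
from datasets import load_dataset

# Option 1: load directly from the Hub (replace the repo ID).
# dataset = load_dataset("<this-repo-id>")

# Option 2: load the TSV files with the generic CSV loader.
dataset = load_dataset(
    "csv",
    data_files={
        "train": "train.tsv",       # assumed file name
        "validation": "dev.tsv",    # assumed file name
        "test": "test.tsv",         # assumed file name
    },
    sep="\t",
)

example = dataset["train"][0]
print(example["sentence_A"], "|", example["sentence_B"], "->", example["entailment_label"])
```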
|
|
|
### Supported Tasks and Leaderboards |
|
|
|
This dataset can be used for the task of Natural Language Inference (NLI), also known as Recognizing Textual Entailment (RTE), which is a sentence-pair classification task. |
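For illustration, a sentence pair can be encoded for a three-way sequence-classification model as in the sketch below. The checkpoint name and the example pair are assumptions for demonstration purposes and are not taken from this dataset; the classification head is untrained and would need fine-tuning on the train split.

```python
# Hedged sketch: encoding a sentence pair (premise, hypothesis) for a
# three-way NLI classifier. The Greek BERT checkpoint is an assumption;
# its classification head is randomly initialised until fine-tuned.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "nlpaueb/bert-base-greek-uncased-v1"  # assumption: any Greek or multilingual encoder works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=3)

sentence_a = "Ένας άντρας παίζει κιθάρα."    # made-up premise ("A man is playing a guitar.")
sentence_b = "Κάποιος παίζει ένα όργανο."    # made-up hypothesis ("Someone is playing an instrument.")

inputs = tokenizer(sentence_a, sentence_b, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1))  # meaningless until the model is fine-tuned on the train split
```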
|
|
|
### Languages |
|
|
|
The dataset is in Modern Greek (`el`); the sentences were machine-translated from the original English SICK sentences, which are also kept in the `original_SICK_sentence_A` and `original_SICK_sentence_B` fields.
|
|
|
## Dataset Structure |
|
|
|
### Data Instances |
|
|
|
[More Information Needed] |
|
|
|
### Data Fields |
|
|
|
- `pair_ID`: Sentence pair ID. |
|
- `sentence_A`: Sentence A, also known as premise in other NLI datasets. |
|
- `sentence_B`: Sentence B, also known as hypothesis in other NLI datasets. |
|
- `entailment_label`: Textual entailment gold label (NEUTRAL, ENTAILMENT, or CONTRADICTION).
|
- `entailment_AB`: Entailment label for the A-B order (A_neutral_B, A_entails_B, or A_contradicts_B). |
|
- `entailment_BA`: Entailment label for the B-A order (B_neutral_A, B_entails_A, or B_contradicts_A). |
|
- `original_SICK_sentence_A`: The original premise from the English source dataset. |
|
- `original_SICK_sentence_B`: The original hypothesis from the English source dataset. |
|
- `sentence_A_dataset`: The dataset from which the original sentence A was extracted (FLICKR vs. SEMEVAL). |
|
- `sentence_B_dataset`: The dataset from which the original sentence B was extracted (FLICKR vs. SEMEVAL). |
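A quick way to inspect these fields is to read one of the TSV files with pandas, as in the hedged sketch below (the file name is an assumption; the column names follow the list above).

```python
# Sketch: inspecting the fields listed above. The file name is an assumption.
import pandas as pd

df = pd.read_csv("train.tsv", sep="\t")

columns = [
    "pair_ID", "sentence_A", "sentence_B",
    "entailment_label", "entailment_AB", "entailment_BA",
    "original_SICK_sentence_A", "original_SICK_sentence_B",
    "sentence_A_dataset", "sentence_B_dataset",
]
print(df[columns].head())
```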
|
|
|
### Data Splits |
|
|
|
| name |Entailment|Neutral|Contradiction|Total| |
|
|--------|---------:|------:|------------:|------------:| |
|
|train | 1274 | 2524 | 641 | 4439 | |
|
|validation | 143 | 281 | 71 | 495 | |
|
|test | 1404 | 2790 | 712 | 4906 | |
|
|
|
For the A-B order: |
|
| name |A_entails_B|A_neutral_B|A_contradicts_B| |
|
|--------|---------:|------:|------------:| |
|
|train | 1274 | 2381 | 784 | |
|
|validation | 143 | 266 | 86 | |
|
|test | 1404 | 2621 | 881 | |
|
|
|
For the B-A order: |
|
| name |B_entails_A|B_neutral_A|B_contradicts_A| |
|
|--------|---------:|------:|------------:| |
|
|train | 606 | 3072 | 761 | |
|
|validation | 84 | 329 | 82 | |
|
|test | 610 | 3431 | 865 | |
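The per-split counts above can be recomputed, and the relationship between the two directional labels inspected, with a short pandas sketch like the one below (file names are assumptions, as before).

```python
# Sketch: recomputing the label counts per split and cross-tabulating the
# directional labels. The TSV file names are assumptions.
import pandas as pd

splits = {"train": "train.tsv", "validation": "dev.tsv", "test": "test.tsv"}
for name, path in splits.items():
    df = pd.read_csv(path, sep="\t")
    print(name, df["entailment_label"].value_counts().to_dict())
    print(pd.crosstab(df["entailment_AB"], df["entailment_BA"]))
```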
|
|
|
## Dataset Creation |
|
|
|
The dataset was machine-translated from English to Modern Greek using [opus-mt-tc-big](https://huggingface.co/Helsinki-NLP/opus-mt-tc-big-en-el), the latest neural machine translation model available for English-to-Modern Greek at the time.
|
The translation of the sentences was carried out on November 26th, 2023. |
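The translation step could be reproduced roughly as in the sketch below, using the same Helsinki-NLP checkpoint; the exact batching, generation settings, and post-processing used for this dataset are not documented here, so treat it as an approximation.

```python
# Approximate sketch of the translation step with the model named above.
# Generation settings and batching used for the actual dataset may differ.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-en-el")

english_sentences = [
    "A man is playing a guitar.",          # illustrative input, not necessarily from SICK
    "Someone is playing an instrument.",
]
greek_sentences = [out["translation_text"] for out in translator(english_sentences)]
print(greek_sentences)
```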
|
|
|
### Curation Rationale |
|
|
|
[More Information Needed] |
|
|
|
### Source Data |
|
|
|
#### Initial Data Collection and Normalization |
|
|
|
[More Information Needed] |
|
|
|
#### Who are the source language producers? |
|
|
|
[More Information Needed] |
|
|
|
### Annotations |
|
|
|
#### Annotation process |
|
|
|
[More Information Needed] |
|
|
|
#### Who are the annotators? |
|
|
|
[More Information Needed] |
|
|
|
### Personal and Sensitive Information |
|
|
|
[More Information Needed] |
|
|
|
## Considerations for Using the Data |
|
|
|
### Social Impact of Dataset |
|
|
|
[More Information Needed] |
|
|
|
### Discussion of Biases |
|
|
|
[More Information Needed] |
|
|
|
### Other Known Limitations |
|
|
|
[More Information Needed] |
|
|
|
## Additional Information |
|
|
|
### Dataset Curators |
|
|
|
[More Information Needed] |
|
|
|
### Licensing Information |
|
|
|
This machine-translated dataset is distributed under the [CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/) license, as declared in the metadata header above.
|
|
|
### Citation Information |
|
|
|
**BibTeX:** |
|
|
|
````BibTeX |
|
@inproceedings{marelli-etal-2014-sick, |
|
title = "A {SICK} cure for the evaluation of compositional distributional semantic models", |
|
author = "Marelli, Marco and |
|
Menini, Stefano and |
|
Baroni, Marco and |
|
Bentivogli, Luisa and |
|
Bernardi, Raffaella and |
|
Zamparelli, Roberto", |
|
editor = "Calzolari, Nicoletta and |
|
Choukri, Khalid and |
|
Declerck, Thierry and |
|
Loftsson, Hrafn and |
|
Maegaard, Bente and |
|
Mariani, Joseph and |
|
Moreno, Asuncion and |
|
Odijk, Jan and |
|
Piperidis, Stelios", |
|
booktitle = "Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}'14)", |
|
month = may, |
|
year = "2014", |
|
address = "Reykjavik, Iceland", |
|
publisher = "European Language Resources Association (ELRA)", |
|
url = "http://www.lrec-conf.org/proceedings/lrec2014/pdf/363_Paper.pdf", |
|
pages = "216--223", |
|
abstract = "Shared and internationally recognized benchmarks are fundamental for the development of any computational system. We aim to help the research community working on compositional distributional semantic models (CDSMs) by providing SICK (Sentences Involving Compositional Knowldedge), a large size English benchmark tailored for them. SICK consists of about 10,000 English sentence pairs that include many examples of the lexical, syntactic and semantic phenomena that CDSMs are expected to account for, but do not require dealing with other aspects of existing sentential data sets (idiomatic multiword expressions, named entities, telegraphic language) that are not within the scope of CDSMs. By means of crowdsourcing techniques, each pair was annotated for two crucial semantic tasks: relatedness in meaning (with a 5-point rating scale as gold score) and entailment relation between the two elements (with three possible gold labels: entailment, contradiction, and neutral). The SICK data set was used in SemEval-2014 Task 1, and it freely available for research purposes.", |
|
} |
|
|
|
@inproceedings{tiedemann-thottingal-2020-opus, |
|
title = "{OPUS}-{MT} {--} Building open translation services for the World", |
|
author = {Tiedemann, J{\"o}rg and |
|
Thottingal, Santhosh}, |
|
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation", |
|
month = nov, |
|
year = "2020", |
|
address = "Lisboa, Portugal", |
|
publisher = "European Association for Machine Translation", |
|
url = "https://aclanthology.org/2020.eamt-1.61", |
|
pages = "479--480", |
|
abstract = "This paper presents OPUS-MT a project that focuses on the development of free resources and tools for machine translation. The current status is a repository of over 1,000 pre-trained neural machine translation models that are ready to be launched in on-line translation services. For this we also provide open source implementations of web applications that can run efficiently on average desktop hardware with a straightforward setup and installation.", |
|
} |
|
```` |
|
|
|
**ACL:** |
|
|
|
Marco Marelli, Stefano Menini, Marco Baroni, Luisa Bentivogli, Raffaella Bernardi, and Roberto Zamparelli. 2014. [A SICK cure for the evaluation of compositional distributional semantic models](http://www.lrec-conf.org/proceedings/lrec2014/pdf/363_Paper.pdf). In *Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)*, pages 216–223, Reykjavik, Iceland. European Language Resources Association (ELRA). |
|
|
|
Jörg Tiedemann and Santhosh Thottingal. 2020. [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61). In *Proceedings of the 22nd Annual Conference of the European Association for Machine Translation*, pages 479–480, Lisboa, Portugal. European Association for Machine Translation. |
|
|
|
### Acknowledgements |
|
|
|
This translation of the original dataset was done as part of a research project supported by the Defence Innovation Agency (AID) of the Directorate General of Armament (DGA) of the French Ministry of Armed Forces, and by the ICO, _Institut Cybersécurité Occitanie_, funded by Région Occitanie, France. |
|
|
|
### Contributions |
|
|
|
[More Information Needed] |