---
license: cc-by-nc-sa-4.0
task_categories:
- text-classification
task_ids:
- natural-language-inference
- multi-input-text-classification
language:
- fr
size_categories:
- 1K<n<10K
---
# Dataset Card for sick-fr-mt
## Dataset Description
### Dataset Summary
This repository contains a machine-translated French version of the [SICK](https://huggingface.co/datasets/sick) (Sentences Involving Compositional Knowledge) dataset. The task is textual entailment: given two sentences A and B, predict whether A entails, contradicts, or is neutral with respect to B, a three-way sentence-pair classification task. Apart from the machine-translated sentence pairs, all other information (pair ID, labels, source dataset of each sentence, train/dev/test partition) has been kept intact from the original English dataset.
For convenience, the dataset is formatted here in TSV, in a similar manner to the widely used [XNLI](https://huggingface.co/datasets/xnli) dataset.
Given the short average length of the sentences in SICK, the machine translation should be of reasonably good quality.
### Supported Tasks and Leaderboards
This dataset can be used for the task of Natural Language Inference (NLI), also known as Recognizing Textual Entailment (RTE), which is a sentence-pair classification task.
## Dataset Structure
### Data Fields
- `pair_ID`: Sentence pair ID.
- `sentence_A`: Sentence A, also known as premise in other NLI datasets.
- `sentence_B`: Sentence B, also known as hypothesis in other NLI datasets.
- `entailment_label`: textual entailment gold label (NEUTRAL, ENTAILMENT, or CONTRADICTION).
- `entailment_AB`: Entailment label for the A-B order (A_neutral_B, A_entails_B, or A_contradicts_B).
- `entailment_BA`: Entailment label for the B-A order (B_neutral_A, B_entails_A, or B_contradicts_A).
- `original_SICK_sentence_A`: The original premise from the English source dataset.
- `original_SICK_sentence_B`: The original hypothesis from the English source dataset.
- `sentence_A_dataset`: The dataset from which the original sentence A was extracted (FLICKR vs. SEMEVAL).
- `sentence_B_dataset`: The dataset from which the original sentence B was extracted (FLICKR vs. SEMEVAL).
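Since the file layout is XNLI-style TSV, a row can be read with the standard `csv` module. The sample row below is purely illustrative (hypothetical values); only the column names follow the field list above.

```python
import csv
import io

# Illustrative TSV content: hypothetical values, column names taken from the field list above.
tsv = (
    "pair_ID\tsentence_A\tsentence_B\tentailment_label\tentailment_AB\tentailment_BA\t"
    "original_SICK_sentence_A\toriginal_SICK_sentence_B\tsentence_A_dataset\tsentence_B_dataset\n"
    "1\tUn homme joue de la guitare\tUn homme joue d'un instrument\tENTAILMENT\tA_entails_B\tB_neutral_A\t"
    "A man is playing a guitar\tA man is playing an instrument\tFLICKR\tFLICKR\n"
)

rows = list(csv.DictReader(io.StringIO(tsv), delimiter="\t"))
row = rows[0]
print(row["entailment_label"])    # ENTAILMENT
print(row["sentence_A_dataset"])  # FLICKR
```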
### Data Splits
| name |Entailment|Neutral|Contradiction|Total|
|--------|---------:|------:|------------:|------------:|
|train | 1274 | 2524 | 641 | 4439 |
|validation | 143 | 281 | 71 | 495 |
|test | 1404 | 2790 | 712 | 4906 |
For the A-B order:
| name |A_entails_B|A_neutral_B|A_contradicts_B|
|--------|---------:|------:|------------:|
|train | 1274 | 2381 | 784 |
|validation | 143 | 266 | 86 |
|test | 1404 | 2621 | 881 |
For the B-A order:
| name |B_entails_A|B_neutral_A|B_contradicts_A|
|--------|---------:|------:|------------:|
|train | 606 | 3072 | 761 |
|validation | 84 | 329 | 82 |
|test | 610 | 3431 | 865 |
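As the first table shows, the class distribution leans heavily toward NEUTRAL. A quick sanity check of the per-split proportions (counts copied from that table):

```python
# entailment_label counts per split, copied from the first table above.
splits = {
    "train":      {"ENTAILMENT": 1274, "NEUTRAL": 2524, "CONTRADICTION": 641},
    "validation": {"ENTAILMENT": 143,  "NEUTRAL": 281,  "CONTRADICTION": 71},
    "test":       {"ENTAILMENT": 1404, "NEUTRAL": 2790, "CONTRADICTION": 712},
}

for name, counts in splits.items():
    total = sum(counts.values())
    shares = {label: round(n / total, 3) for label, n in counts.items()}
    print(name, total, shares)
# NEUTRAL accounts for roughly 57% of each split,
# which is worth keeping in mind when reporting accuracy.
```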
## Dataset Creation
The dataset was machine translated from English to French using [opus-mt-tc-big-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-tc-big-en-fr), the most recent Helsinki-NLP neural machine translation model available for English-to-French at the time.
The sentences were translated on November 26th, 2023.
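For reference, a translation in the same spirit could be reproduced with the `transformers` pipeline API. This is only a sketch: the batch size and the lazy-loading structure are illustrative choices, not the settings actually used to build this dataset.

```python
from typing import Iterator


def batched(items: list, size: int) -> Iterator[list]:
    """Yield successive fixed-size chunks of a list."""
    for i in range(0, len(items), size):
        yield items[i : i + size]


def translate_en_fr(sentences: list, batch_size: int = 32) -> list:
    """Translate English sentences to French with the same Helsinki-NLP model.

    The batch size is an illustrative choice, not the setting used for this
    dataset. Requires the `transformers` package; the model is downloaded
    on first call.
    """
    from transformers import pipeline  # imported lazily so the module stays light

    translator = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-en-fr")
    translations = []
    for batch in batched(sentences, batch_size):
        translations.extend(item["translation_text"] for item in translator(batch))
    return translations
```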
## Additional Information
### Citation Information
**BibTeX:**
````bibtex
@inproceedings{skandalis-etal-2024-new-datasets,
title = "New Datasets for Automatic Detection of Textual Entailment and of Contradictions between Sentences in {F}rench",
author = "Skandalis, Maximos and
Moot, Richard and
Retor{\'e}, Christian and
Robillard, Simon",
editor = "Calzolari, Nicoletta and
Kan, Min-Yen and
Hoste, Veronique and
Lenci, Alessandro and
Sakti, Sakriani and
Xue, Nianwen",
booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)",
month = may,
year = "2024",
address = "Torino, Italy",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.lrec-main.1065",
pages = "12173--12186",
abstract = "This paper introduces DACCORD, an original dataset in French for automatic detection of contradictions between sentences. It also presents new, manually translated versions of two datasets, namely the well known dataset RTE3 and the recent dataset GQNLI, from English to French, for the task of natural language inference / recognising textual entailment, which is a sentence-pair classification task. These datasets help increase the admittedly limited number of datasets in French available for these tasks. DACCORD consists of 1034 pairs of sentences and is the first dataset exclusively dedicated to this task and covering among others the topic of the Russian invasion in Ukraine. RTE3-FR contains 800 examples for each of its validation and test subsets, while GQNLI-FR is composed of 300 pairs of sentences and focuses specifically on the use of generalised quantifiers. Our experiments on these datasets show that they are more challenging than the two already existing datasets for the mainstream NLI task in French (XNLI, FraCaS). For languages other than English, most deep learning models for NLI tasks currently have only XNLI available as a training set. Additional datasets, such as ours for French, could permit different training and evaluation strategies, producing more robust results and reducing the inevitable biases present in any single dataset.",
}
@inproceedings{marelli-etal-2014-sick,
title = "A {SICK} cure for the evaluation of compositional distributional semantic models",
author = "Marelli, Marco and
Menini, Stefano and
Baroni, Marco and
Bentivogli, Luisa and
Bernardi, Raffaella and
Zamparelli, Roberto",
editor = "Calzolari, Nicoletta and
Choukri, Khalid and
Declerck, Thierry and
Loftsson, Hrafn and
Maegaard, Bente and
Mariani, Joseph and
Moreno, Asuncion and
Odijk, Jan and
Piperidis, Stelios",
booktitle = "Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}'14)",
month = may,
year = "2014",
address = "Reykjavik, Iceland",
publisher = "European Language Resources Association (ELRA)",
url = "http://www.lrec-conf.org/proceedings/lrec2014/pdf/363_Paper.pdf",
pages = "216--223",
abstract = "Shared and internationally recognized benchmarks are fundamental for the development of any computational system. We aim to help the research community working on compositional distributional semantic models (CDSMs) by providing SICK (Sentences Involving Compositional Knowldedge), a large size English benchmark tailored for them. SICK consists of about 10,000 English sentence pairs that include many examples of the lexical, syntactic and semantic phenomena that CDSMs are expected to account for, but do not require dealing with other aspects of existing sentential data sets (idiomatic multiword expressions, named entities, telegraphic language) that are not within the scope of CDSMs. By means of crowdsourcing techniques, each pair was annotated for two crucial semantic tasks: relatedness in meaning (with a 5-point rating scale as gold score) and entailment relation between the two elements (with three possible gold labels: entailment, contradiction, and neutral). The SICK data set was used in SemEval-2014 Task 1, and it freely available for research purposes.",
}
````
**ACL:**
Maximos Skandalis, Richard Moot, Christian Retoré, and Simon Robillard. 2024. [New Datasets for Automatic Detection of Textual Entailment and of Contradictions between Sentences in French](https://aclanthology.org/2024.lrec-main.1065). In *Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)*, pages 12173–12186, Torino, Italy. ELRA and ICCL.
And
Marco Marelli, Stefano Menini, Marco Baroni, Luisa Bentivogli, Raffaella Bernardi, and Roberto Zamparelli. 2014. [A SICK cure for the evaluation of compositional distributional semantic models](http://www.lrec-conf.org/proceedings/lrec2014/pdf/363_Paper.pdf). In *Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)*, pages 216–223, Reykjavik, Iceland. European Language Resources Association (ELRA).
### Acknowledgements
This translation of the original dataset was done as part of a research project supported by the Defence Innovation Agency (AID) of the Directorate General of Armament (DGA) of the French Ministry of Armed Forces, and by the ICO, _Institut Cybersécurité Occitanie_, funded by Région Occitanie, France.