---
license: cc-by-4.0
language:
- fr
- en
- it
- de
task_categories:
- text-classification
task_ids:
- natural-language-inference
- multi-input-text-classification
size_categories:
- 1K<n<10K
---
# Dataset Card for Multilingual RTE-3
## Dataset Description
- **Homepage:**
- **Repository:** https://github.com/mskandalis/rte3-french
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This repository contains all manually translated versions of the RTE-3 dataset, plus the original English one. The languages into which the RTE-3 dataset has been translated so far are Italian (2012), German (2013), and French (2023).

Unlike in other repositories, both our French version and the older Italian and German ones are annotated here with 3 classes (entailment, neutral, contradiction) rather than 2 (entailment, not entailment).

If you want to use the dataset in only one of the languages provided here, you can filter the data on the `language` column, as in the sketch below.
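For example, with the Hugging Face `datasets` library, a minimal sketch (the repository id `maximoss/rte3-multi` and the split name `test` are assumptions; adjust them to match the actual repository):

```python
from datasets import load_dataset

# Repository id and split name are assumptions; adjust them if they differ.
dataset = load_dataset("maximoss/rte3-multi", split="test")

# Keep only the French sentence pairs by filtering on the `language` column.
french_test = dataset.filter(lambda example: example["language"] == "fr")

print(len(french_test))  # 800 per the split table below
print(french_test[0])    # one premise/hypothesis pair with its labels
```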
### Supported Tasks and Leaderboards
This dataset can be used for the task of Natural Language Inference (NLI), also known as Recognizing Textual Entailment (RTE), which is a sentence-pair classification task.
## Dataset Structure
### Data Fields
- `id`: Index number.
- `language`: The language of the sentence pair (`en`, `fr`, `de`, or `it`).
- `premise`: The premise in the corresponding language (the original English text or its translation).
- `hypothesis`: The hypothesis in the corresponding language (the original English text or its translation).
- `label`: The classification label, with possible values 0 (`entailment`), 1 (`neutral`), 2 (`contradiction`).
- `label_text`: The classification label as text, with possible values `entailment` (0), `neutral` (1), `contradiction` (2); see the sketch after this list.
- `task`: The RTE-3 subtask the pair was drawn from (IE: information extraction, IR: information retrieval, QA: question answering, SUM: summarization).
- `length`: The length category of the pair's text (`short` or `long`).
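As a quick illustration of the label encoding described above, a minimal sketch of the integer-to-text mapping (only the mapping itself comes from the field descriptions; the variable names are ours):

```python
# Mapping between the integer `label` and the textual `label_text` fields,
# as documented in the field list above.
LABEL_TEXT = {0: "entailment", 1: "neutral", 2: "contradiction"}
TEXT_LABEL = {text: idx for idx, text in LABEL_TEXT.items()}

assert LABEL_TEXT[0] == "entailment"
assert TEXT_LABEL["contradiction"] == 2
```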
### Data Splits
| name          | development | test |
|---------------|------------:|-----:|
| all_languages |        3200 | 3200 |
| fr            |         800 |  800 |
| de            |         800 |  800 |
| it            |         800 |  800 |

For French RTE-3, distribution by label:

| name | entailment | neutral | contradiction |
|------|-----------:|--------:|--------------:|
| dev  |        412 |     299 |            89 |
| test |        410 |     318 |            72 |

Distribution by length:

| name | short | long |
|------|------:|-----:|
| dev  |   665 |  135 |
| test |   683 |  117 |

Distribution by task:

| name |  IE |  IR |  QA | SUM |
|------|----:|----:|----:|----:|
| dev  | 200 | 200 | 200 | 200 |
| test | 200 | 200 | 200 | 200 |
## Additional Information
### Citation Information
**BibTeX:**
````bibtex
@inproceedings{giampiccolo-etal-2007-third,
    title = "The Third {PASCAL} Recognizing Textual Entailment Challenge",
    author = "Giampiccolo, Danilo and
      Magnini, Bernardo and
      Dagan, Ido and
      Dolan, Bill",
    booktitle = "Proceedings of the {ACL}-{PASCAL} Workshop on Textual Entailment and Paraphrasing",
    month = jun,
    year = "2007",
    address = "Prague",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/W07-1401",
    pages = "1--9",
}
````
**ACL:**
Danilo Giampiccolo, Bernardo Magnini, Ido Dagan, and Bill Dolan. 2007. [The Third PASCAL Recognizing Textual Entailment Challenge](https://aclanthology.org/W07-1401). In *Proceedings of the ACL-PASCAL Workshop on Textual Entailment and Paraphrasing*, pages 1–9, Prague. Association for Computational Linguistics.
### Acknowledgements
This work was supported by the Defence Innovation Agency (AID) of the Directorate General of Armament (DGA) of the French Ministry of Armed Forces, and by the ICO, _Institut Cybersécurité Occitanie_, funded by Région Occitanie, France.