---
license: bsd-2-clause
task_categories:
- text-classification
task_ids:
- natural-language-inference
- multi-input-text-classification
language:
- fr
size_categories:
- 1K<n<10K
---
# Dataset Card for the Machine-Translated French MultiNLI 9/11 Subset

## Dataset Description

- **Homepage:** 
- **Repository:** 
- **Paper:** 
- **Leaderboard:** 
- **Point of Contact:** 

### Dataset Summary

This repository contains a machine-translated French version of the portion of [MultiNLI](https://cims.nyu.edu/~sbowman/multinli) concerning the 9/11 terrorist attacks (2000 examples).
Note that these 2000 examples on the subject of 9/11 included in MultiNLI (and machine translated into French here) are distinct from the 249 examples in the validation subset and the 501 examples in the test subset of XNLI on the same subject.

In the original MultiNLI subset on 9/11, 26 examples were left without a gold label. In this French version, we have assigned a gold label to these examples as well, based on our own reading, so that no example remains without a gold label.

### Supported Tasks and Leaderboards

This dataset can be used for the task of Natural Language Inference (NLI), also known as Recognizing Textual Entailment (RTE), which is a sentence-pair classification task.
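
As an illustration, the sketch below runs a single French premise/hypothesis pair through a multilingual NLI classifier with the `transformers` library. The model name `joeddav/xlm-roberta-large-xnli` is only an example of an XNLI-style checkpoint and is unrelated to this dataset; any NLI model covering French could be substituted.

```python
# Sketch: sentence-pair NLI classification on one French example.
# "joeddav/xlm-roberta-large-xnli" is an example checkpoint, not part of this dataset.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "joeddav/xlm-roberta-large-xnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

premise = "Les pirates de l'air ont pris le contrôle de quatre avions."
hypothesis = "Quatre avions ont été détournés."

inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# Read label names from the model config rather than hardcoding them,
# since the id-to-label order differs between NLI checkpoints.
print(model.config.id2label[int(logits.argmax(dim=-1))])
```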

## Dataset Structure

### Data Fields

- `premise`: The machine translated premise in the target language.
- `hypothesis`: The machine translated hypothesis in the target language.
- `label`: The classification label, with possible values 0 (`entailment`), 1 (`neutral`), 2 (`contradiction`).
- `label_text`: The classification label, with possible values `entailment` (0), `neutral` (1), `contradiction` (2).
- `pairID`: Unique identifier for pair.
- `promptID`: Unique identifier for prompt.
- `premise_original`: The original premise from the English source dataset.
- `hypothesis_original`: The original hypothesis from the English source dataset.
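
For reference, here is a minimal sketch of loading the dataset with the `datasets` library and inspecting these fields. The repository id below is a placeholder and should be replaced by the actual id of this repository.

```python
# Sketch: load the dataset and inspect one example.
# "username/multinli-fr-911" is a placeholder repository id, not the real one.
from datasets import load_dataset

ds = load_dataset("username/multinli-fr-911")
print(ds)  # shows the available split(s) and the fields listed above

split_name = list(ds.keys())[0]        # first (and possibly only) split
example = ds[split_name][0]
print(example["premise"])              # machine translated premise (French)
print(example["hypothesis"])           # machine translated hypothesis (French)
print(example["label"], example["label_text"])
print(example["premise_original"])     # original English premise
print(example["hypothesis_original"])  # original English hypothesis
```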

### Data Splits

|  name  |entailment|neutral|contradiction|
|--------|---------:|------:|------------:|
|mnli_fr |   705    |  641  |     654     |

## Dataset Creation

The dataset was machine translated from English to French using [opus-mt-tc-big](https://huggingface.co/Helsinki-NLP/opus-mt-tc-big-en-fr), the latest neural machine translation model available for French at the time.
The translation of the sentences was carried out on March 29th, 2023.
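
As an indication only, a translation of this kind can be reproduced with the `transformers` translation pipeline along the lines below; the exact batching and generation settings used for this dataset are not documented here, and the values shown are assumptions.

```python
# Sketch: translate English premises/hypotheses to French with the same Opus-MT model.
# Generation settings are illustrative, not the ones used to build this dataset.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-en-fr")

english_sentences = [
    "The hijackers took control of four planes.",
    "Four planes were hijacked.",
]

for src, out in zip(english_sentences, translator(english_sentences)):
    print(src, "->", out["translation_text"])
```

Note that Opus-MT checkpoints covering several target languages expect a target-language token such as `>>fra<<` at the start of each input sentence; for a single-target model like the one linked above this should not be needed.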

## Additional Information

### Citation Information

**BibTeX:**

````BibTeX
@InProceedings{N18-1101,
  author = "Williams, Adina
            and Nangia, Nikita
            and Bowman, Samuel",
  title = "A Broad-Coverage Challenge Corpus for
           Sentence Understanding through Inference",
  booktitle = "Proceedings of the 2018 Conference of
               the North American Chapter of the
               Association for Computational Linguistics:
               Human Language Technologies, Volume 1 (Long
               Papers)",
  year = "2018",
  publisher = "Association for Computational Linguistics",
  pages = "1112--1122",
  location = "New Orleans, Louisiana",
  url = "http://aclweb.org/anthology/N18-1101"
}
````

**ACL:**

Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. [A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference](https://aclanthology.org/N18-1101/). In *Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)*, pages 1112–1122, New Orleans, Louisiana. Association for Computational Linguistics.

### Acknowledgements

This translation of the original dataset was done as part of a research project supported by the Defence Innovation Agency (AID) of the Directorate General of Armament (DGA) of the French Ministry of Armed Forces, and by the ICO, _Institut Cybersécurité Occitanie_, funded by Région Occitanie, France.