---
license: cc-by-nc-sa-4.0
task_categories:
- text-classification
- question-answering
task_ids:
- natural-language-inference
- multi-input-text-classification
language:
- fr
- en
size_categories:
- n<1K
---

# Dataset Card for the French FraCaS Test Suite

## Dataset Description

- **Homepage:** 
- **Repository:** https://gitlab.inria.fr/semagramme-public-projects/resources/french-fracas
- **Paper:** https://aclanthology.org/2020.lrec-1.721/
- **Leaderboard:** 
- **Point of Contact:** 

### Dataset Summary

This repository contains the French version of the FraCaS Test Suite introduced in [this paper](https://aclanthology.org/2020.lrec-1.721.pdf), as well as the original English one, in a TSV format (as opposed to the XML format provided with the original paper).

FraCaS stands for "Framework for Computational Semantics".

### Supported Tasks and Leaderboards

This dataset can be used for the task of Natural Language Inference (NLI), also known as Recognizing Textual Entailment (RTE), which is a sentence-pair classification task. 

It can also be used for the task of Question Answering (QA) (when using the columns `question` and `answer` instead of `hypothesis` and `label`, respectively).
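The column swap described above can be sketched as follows. The record below is illustrative (not taken from the data), but the column names match the "Data Fields" section of this card:

```python
# Illustrative record; field names follow this dataset card, values are made up.
record = {
    "premises": "Un Suédois a gagné un prix Nobel. Tout Suédois est scandinave.",
    "hypothesis": "Un Scandinave a gagné un prix Nobel.",
    "label": 0,
    "question": "Un Scandinave a-t-il gagné un prix Nobel ?",
    "answer": "Yes",
}

def to_nli(rec):
    """Pair the premises with the hypothesis for sentence-pair classification (NLI)."""
    return rec["premises"], rec["hypothesis"], rec["label"]

def to_qa(rec):
    """Pair the premises with the question for question answering (QA)."""
    return rec["premises"], rec["question"], rec["answer"]

print(to_nli(record))
print(to_qa(record))
```

The same premises serve as context in both settings; only the target column and the label column change.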

## Dataset Structure

### Data Fields

- `id`: Index number.
- `premises`: All premises provided for this particular example, concatenated, in French.
- `hypothesis`: The translated hypothesis in the target language (French).
- `label`: The classification label, with possible values 0 (`entailment`), 1 (`neutral`), 2 (`contradiction`), or undef (for undefined).
- `question`: The hypothesis in the form of question, in French.
- `answer`: The answer to the question, with possible values `Yes` (0), `Don't know` / `Unknown` (1), `No` (2), `undef`, or a longer phrase containing qualifications or elaborations such as `Yes, on one reading`.
- `premises_original`: All premises provided for this particular example, concatenated, in their language of origin (English).
- `premise1`: The first premise in French.
- `premise1_original`: The first premise in English.
- `premise2`: When available, the second premise in French.
- `premise2_original`: When available, the second premise in English.
- `premise3`: When available, the third premise in French.
- `premise3_original`: When available, the third premise in English.
- `premise4`: When available, the fourth premise in French.
- `premise4_original`: When available, the fourth premise in English.
- `premise5`: When available, the fifth premise in French.
- `premise5_original`: When available, the fifth premise in English.
- `hypothesis_original`: The hypothesis in English.
- `question_original`: The hypothesis in the form of question, in English.
- `note`: Text from the source document intended to explain or justify the answer, or notes added to a number of problems in order to explain issues which arose during translation.
- `topic`: Problem set / topic.
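A minimal sketch of reading the TSV release with the standard library, mapping the integer labels to names. The inline sample row is hypothetical (in practice you would `open()` the downloaded TSV file); the column names are those listed above:

```python
import csv
import io

# Label mapping as documented in the "Data Fields" section of this card.
LABEL_NAMES = {"0": "entailment", "1": "neutral", "2": "contradiction", "undef": "undefined"}

# Hypothetical sample; a real run would read the downloaded TSV file instead:
#   with open("fracas_fr.tsv", newline="", encoding="utf-8") as f:
sample_tsv = (
    "id\tpremises\thypothesis\tlabel\tquestion\tanswer\n"
    "1\tUn Italien est devenu le plus grand ténor du monde.\t"
    "Un Italien était le plus grand ténor du monde.\t0\t"
    "Un Italien était-il le plus grand ténor du monde ?\tYes\n"
)
with io.StringIO(sample_tsv) as f:
    rows = list(csv.DictReader(f, delimiter="\t"))

for row in rows:
    print(row["id"], LABEL_NAMES[row["label"]], "-", row["hypothesis"])
```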

### Data Splits

The premise counts are distributed as follows:

|     # premises    |# problems|% problems|
|---------:|------:|------------:|
|   1    |  192  |     55.5 %     |
|   2    |  122  |      35.3 %     |
|   3    |   29  |      8.4 %     |
|   4    |  2  |      0.6 %     |
|   5    |  1  |       0.3 %     |
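The distribution above can be recomputed from loaded records by counting non-empty premise columns. The records below are stand-ins; a real run would iterate over all problems:

```python
from collections import Counter

PREMISE_COLS = ("premise1", "premise2", "premise3", "premise4", "premise5")

# Hypothetical records: only the premise columns matter here.
records = [
    {"premise1": "P1", "premise2": "", "premise3": "", "premise4": "", "premise5": ""},
    {"premise1": "P1", "premise2": "P2", "premise3": "", "premise4": "", "premise5": ""},
    {"premise1": "P1", "premise2": "", "premise3": "", "premise4": "", "premise5": ""},
]

def n_premises(rec):
    """Number of non-empty premise columns in one record."""
    return sum(1 for col in PREMISE_COLS if rec[col])

counts = Counter(n_premises(r) for r in records)
total = sum(counts.values())
for n in sorted(counts):
    print(f"{n} premise(s): {counts[n]} problems ({100 * counts[n] / total:.1f} %)")
```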

The answer distribution is roughly as follows:
             
|     Answer    |# problems|% problems|
|---------:|------:|------------:|
|   Yes    |  180  |      52 %     |
|   Don't know    |  94  |      27 %     |
|   No    |   31  |      9 %     |
|   [other / complex]    |  41  |      12 %     |

Here's the breakdown by topic:

|     sec    | topic| start| count|%|single-premise|
|-------------|--:|--:|--:|--:|--:|
|     1     |Quantifiers|1|80|23 %|50|
|     2    |Plurals|81|33|10 %|24|
|     3    |Anaphora|114|28| 8 %|6|
|     4    |Ellipsis|142|55|16 %|25|
|     5    |Adjectives|197|23|7 %|15|
|     6    |Comparatives|220|31|9 %|16|
|     7    |Temporal|251|75|22 %|39|
|     8    |Verbs|326|8|2 %|8|
|     9    |Attitudes|334|13|4 %|9|

## Additional Information

### Citation Information

**BibTeX:**

````BibTeX
@inproceedings{amblard-etal-2020-french,
    title = "A {F}rench Version of the {F}ra{C}a{S} Test Suite",
    author = "Amblard, Maxime  and
      Beysson, Cl{\'e}ment  and
      de Groote, Philippe  and
      Guillaume, Bruno  and
      Pogodalla, Sylvain",
    editor = "Calzolari, Nicoletta  and
      B{\'e}chet, Fr{\'e}d{\'e}ric  and
      Blache, Philippe  and
      Choukri, Khalid  and
      Cieri, Christopher  and
      Declerck, Thierry  and
      Goggi, Sara  and
      Isahara, Hitoshi  and
      Maegaard, Bente  and
      Mariani, Joseph  and
      Mazo, H{\'e}l{\`e}ne  and
      Moreno, Asuncion  and
      Odijk, Jan  and
      Piperidis, Stelios",
    booktitle = "Proceedings of the Twelfth Language Resources and Evaluation Conference",
    month = may,
    year = "2020",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://aclanthology.org/2020.lrec-1.721",
    pages = "5887--5895",
    abstract = "This paper presents a French version of the FraCaS test suite. This test suite, originally written in English, contains problems illustrating semantic inference in natural language. We describe linguistic choices we had to make when translating the FraCaS test suite in French, and discuss some of the issues that were raised by the translation. We also report an experiment we ran in order to test both the translation and the logical semantics underlying the problems of the test suite. This provides a way of checking formal semanticists{'} hypotheses against actual semantic capacity of speakers (in the present case, French speakers), and allow us to compare the results we obtained with the ones of similar experiments that have been conducted for other languages.",
    language = "English",
    ISBN = "979-10-95546-34-4",
}
````

**ACL:**

Maxime Amblard, Clément Beysson, Philippe de Groote, Bruno Guillaume, and Sylvain Pogodalla. 2020. [A French Version of the FraCaS Test Suite](https://aclanthology.org/2020.lrec-1.721/). In *Proceedings of the Twelfth Language Resources and Evaluation Conference*, pages 5887–5895, Marseille, France. European Language Resources Association.