---
license: cc-by-nc-sa-4.0
task_categories:
- text-classification
task_ids:
- natural-language-inference
- multi-input-text-classification
language:
- fr
- en
size_categories:
- n<1K
---

# Dataset Card for the French FraCaS Test Suite

## Dataset Description

- **Homepage:**
- **Repository:** https://gitlab.inria.fr/semagramme-public-projects/resources/french-fracas
- **Paper:** https://aclanthology.org/2020.lrec-1.721
- **Leaderboard:**
- **Point of Contact:**

### Dataset Summary

This repository contains the French version of the FraCaS Test Suite, together with the original English version, in TSV format (rather than the XML format distributed with the original paper).

### Supported Tasks and Leaderboards

This dataset can be used for Natural Language Inference (NLI), also known as Recognizing Textual Entailment (RTE): a sentence-pair classification task in which a model must decide whether a hypothesis follows from, is unrelated to, or contradicts one or more premises.
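The task format can be sketched as follows; the French sentence pair below is invented for illustration (it is not taken from the test suite), and the 0/1/2 label scheme matches the `label` field of this dataset:

```python
# A minimal sketch of the NLI/RTE classification task, assuming the 0/1/2
# label scheme used by this dataset's `label` field. The example pair is
# invented for illustration.
LABELS = {0: "entailment", 1: "neutral", 2: "contradiction"}

example = {
    "premises": "Tous les chanteurs sont arrivés.",    # "All the singers arrived."
    "hypothesis": "Certains chanteurs sont arrivés.",  # "Some singers arrived."
    "label": 0,  # the hypothesis follows from the premise
}

print(LABELS[example["label"]])  # -> entailment
```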

## Dataset Structure

### Data Fields

- `id`: Index number of the problem.
- `premises`: All premises of the problem in the target language (French), concatenated.
- `hypothesis`: The translated hypothesis in the target language (French).
- `label`: The classification label, with possible values 0 (`entailment`), 1 (`neutral`), 2 (`contradiction`), or `undef` (undefined).
- `question`: The hypothesis rephrased as a question.
- `answer`: The answer to the question, with possible values `yes` (0), `unknown` (1), `no` (2), `undef`, or another, more detailed answer.
- `premises_original`: The premises in the original language (English), concatenated.
- `premise1` … `premise5`: The individual premises in the target language (French); empty when the problem has fewer premises.
- `premise1_original` … `premise5_original`: The corresponding individual premises in the original language (English).
- `hypothesis_original`: The hypothesis in the original language (English).
- `question_original`: The question in the original language (English).
- `note`: Additional notes on the problem.
- `topic`: The FraCaS section the problem belongs to (e.g. generalized quantifiers, plurals, anaphora).
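As a sketch of how these fields fit together, the snippet below parses a hypothetical TSV fragment (invented row content, and only a subset of the columns; the exact layout of the released files is an assumption) and maps the numeric label to its name:

```python
import csv
import io

# Hypothetical TSV fragment illustrating a subset of the fields above; the
# row content is invented and the column layout is an assumption, not the
# released files' exact schema.
tsv = (
    "id\tpremises\thypothesis\tlabel\tquestion\tanswer\n"
    "001\tUn Italien est devenu le plus grand ténor du monde.\t"
    "Il y a un Italien qui est devenu le plus grand ténor du monde.\t0\t"
    "Y a-t-il un Italien qui est devenu le plus grand ténor du monde ?\tyes\n"
)

# Map the numeric `label` codes to their names; anything else
# (e.g. "undef") is treated as an undefined problem.
label_names = {"0": "entailment", "1": "neutral", "2": "contradiction"}

for row in csv.DictReader(io.StringIO(tsv), delimiter="\t"):
    print(row["id"], label_names.get(row["label"], "undef"))  # -> 001 entailment
```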

### Data Splits

| name | entailment | neutral | contradiction |
|------|-----------:|--------:|--------------:|
| dev  |        412 |     299 |            89 |
| test |        410 |     318 |            72 |

| name | short | long |
|------|------:|-----:|
| dev  |   665 |  135 |
| test |   683 |  117 |

| name |  IE |  IR |  QA | SUM |
|------|----:|----:|----:|----:|
| dev  | 200 | 200 | 200 | 200 |
| test | 200 | 200 | 200 | 200 |

## Additional Information

### Citation Information

**BibTeX:**

````bibtex
@inproceedings{amblard-etal-2020-french,
    title = "A {F}rench Version of the {F}ra{C}a{S} Test Suite",
    author = "Amblard, Maxime  and
      Beysson, Cl{\'e}ment  and
      de Groote, Philippe  and
      Guillaume, Bruno  and
      Pogodalla, Sylvain",
    editor = "Calzolari, Nicoletta  and
      B{\'e}chet, Fr{\'e}d{\'e}ric  and
      Blache, Philippe  and
      Choukri, Khalid  and
      Cieri, Christopher  and
      Declerck, Thierry  and
      Goggi, Sara  and
      Isahara, Hitoshi  and
      Maegaard, Bente  and
      Mariani, Joseph  and
      Mazo, H{\'e}l{\`e}ne  and
      Moreno, Asuncion  and
      Odijk, Jan  and
      Piperidis, Stelios",
    booktitle = "Proceedings of the Twelfth Language Resources and Evaluation Conference",
    month = may,
    year = "2020",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://aclanthology.org/2020.lrec-1.721",
    pages = "5887--5895",
    abstract = "This paper presents a French version of the FraCaS test suite. This test suite, originally written in English, contains problems illustrating semantic inference in natural language. We describe linguistic choices we had to make when translating the FraCaS test suite in French, and discuss some of the issues that were raised by the translation. We also report an experiment we ran in order to test both the translation and the logical semantics underlying the problems of the test suite. This provides a way of checking formal semanticists{'} hypotheses against actual semantic capacity of speakers (in the present case, French speakers), and allow us to compare the results we obtained with the ones of similar experiments that have been conducted for other languages.",
    language = "English",
    ISBN = "979-10-95546-34-4",
}
````

**ACL:**

Maxime Amblard, Clément Beysson, Philippe de Groote, Bruno Guillaume, and Sylvain Pogodalla. 2020. [A French Version of the FraCaS Test Suite](https://aclanthology.org/2020.lrec-1.721/). In *Proceedings of the Twelfth Language Resources and Evaluation Conference*, pages 5887–5895, Marseille, France. European Language Resources Association.