---
license: cc-by-nc-sa-4.0
task_categories:
  - text-classification
  - question-answering
task_ids:
  - natural-language-inference
  - multi-input-text-classification
language:
  - fr
  - en
size_categories:
  - n<1K
---

# Dataset Card for FraCaS

## Dataset Description

### Dataset Summary

This repository contains the French version of the FraCaS test suite introduced in [Amblard et al. (2020)](https://aclanthology.org/2020.lrec-1.721), together with the original English version, in TSV format (as opposed to the XML format provided with the original paper).

FraCaS stands for "Framework for Computational Semantics".
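
For convenience, a minimal loading sketch with the Hugging Face `datasets` library is shown below. The `maximoss/fracas` repository id, the local file name, and the `train` split name are assumptions here; adjust them to match where and how you actually obtain the data.

```python
from datasets import load_dataset

# Option 1: load from the Hugging Face Hub.
# NOTE: the repository id below is an assumption; replace it if needed.
fracas = load_dataset("maximoss/fracas")

# Option 2: load a local copy of the TSV file with the generic CSV builder.
# NOTE: the file name is an assumption.
# fracas = load_dataset("csv", data_files="fracas.tsv", delimiter="\t")

print(fracas)              # available splits and columns
print(fracas["train"][0])  # first problem (premises, hypothesis, label, ...)
```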

### Supported Tasks and Leaderboards

This dataset can be used for the task of Natural Language Inference (NLI), also known as Recognizing Textual Entailment (RTE), which is a sentence-pair classification task.

It can also be used for the task of Question Answering (QA), by using the `question` and `answer` columns instead of `hypothesis` and `label`, respectively.
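
As a sketch of how these two framings map onto the columns documented in this card (the helper functions below are purely illustrative, not part of the dataset):

```python
def as_nli_example(row):
    """NLI view: classify the (premises, hypothesis) pair with the integer label."""
    return {
        "text": row["premises"],         # concatenated premises (French)
        "text_pair": row["hypothesis"],  # hypothesis (French)
        "label": row["label"],           # 0 / 1 / 2, or "undef"
    }


def as_qa_example(row):
    """QA view: answer the question from the premises instead."""
    return {
        "context": row["premises"],
        "question": row["question"],
        "answer": row["answer"],         # "Yes", "No", "Don't know", "undef", ...
    }
```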

## Dataset Structure

### Data Fields

- `id`: Index number.
- `premises`: All premises provided for this particular example, concatenated, in French.
- `hypothesis`: The hypothesis, translated into the target language (French).
- `label`: The classification label, with possible values `0` (entailment), `1` (neutral), `2` (contradiction), or `undef` (undefined); see the filtering sketch below.
- `question`: The hypothesis rephrased as a question, in French.
- `answer`: The answer to the question, with possible values `Yes` (0), `Don't know` / `Unknown` (1), `No` (2), `undef`, or a longer phrase containing qualifications or elaborations such as "Yes, on one reading".
- `premises_original`: All premises provided for this particular example, concatenated, in their language of origin (English).
- `premise1`: The first premise in French.
- `premise1_original`: The first premise in English.
- `premise2`: When available, the second premise in French.
- `premise2_original`: When available, the second premise in English.
- `premise3`: When available, the third premise in French.
- `premise3_original`: When available, the third premise in English.
- `premise4`: When available, the fourth premise in French.
- `premise4_original`: When available, the fourth premise in English.
- `premise5`: When available, the fifth premise in French.
- `premise5_original`: When available, the fifth premise in English.
- `hypothesis_original`: The hypothesis in English.
- `question_original`: The hypothesis rephrased as a question, in English.
- `note`: Text from the source document explaining or justifying the answer, or notes added to some problems to document issues that arose during translation.
- `topic`: The problem set (topic) the example belongs to.
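
Because some problems carry an `undef` label, a common preprocessing step is to keep only the three well-defined classes before training a 3-way classifier. A small sketch, assuming the `fracas` object from the loading example above (the label column may be stored as strings, hence the `str()` conversion):

```python
# Human-readable names for the integer labels listed above.
LABEL_NAMES = {0: "entailment", 1: "neutral", 2: "contradiction"}

def has_defined_label(row):
    """Keep problems labelled 0, 1 or 2; drop the 'undef' ones."""
    return str(row["label"]) in {"0", "1", "2"}

# With a Hugging Face datasets object (see the loading sketch above):
# fracas_defined = fracas.filter(has_defined_label)
```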

### Data Splits

The number of premises per problem is distributed as follows:

| # premises | # problems | % problems |
|---|---|---|
| 1 | 192 | 55.5 % |
| 2 | 122 | 35.3 % |
| 3 | 29 | 8.4 % |
| 4 | 2 | 0.6 % |
| 5 | 1 | 0.3 % |

The answer distribution is roughly as follows:

| # problems | Percentage | Answer |
|---|---|---|
| 180 | 52 % | Yes |
| 94 | 27 % | Don't know |
| 31 | 9 % | No |
| 41 | 12 % | other / complex |

Here's the breakdown by topic:

| Section | Topic | First problem | # problems | % problems | # single-premise |
|---|---|---|---|---|---|
| 1 | Quantifiers | 1 | 80 | 23 % | 50 |
| 2 | Plurals | 81 | 33 | 10 % | 24 |
| 3 | Anaphora | 114 | 28 | 8 % | 6 |
| 4 | Ellipsis | 142 | 55 | 16 % | 25 |
| 5 | Adjectives | 197 | 23 | 7 % | 15 |
| 6 | Comparatives | 220 | 31 | 9 % | 16 |
| 7 | Temporal | 251 | 75 | 22 % | 39 |
| 8 | Verbs | 326 | 8 | 2 % | 8 |
| 9 | Attitudes | 334 | 13 | 4 % | 9 |
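
These figures can be re-derived from the data itself. A rough sketch, again assuming the `fracas` object and a `train` split from the loading example above, and using the column names documented in this card:

```python
from collections import Counter

PREMISE_COLUMNS = ("premise1", "premise2", "premise3", "premise4", "premise5")

def premise_count(row):
    """Number of non-empty premise columns for one problem."""
    return sum(1 for col in PREMISE_COLUMNS if row.get(col))

# premise_dist = Counter(premise_count(row) for row in fracas["train"])
# answer_dist = Counter(fracas["train"]["answer"])
# print(premise_dist)
# print(answer_dist)
```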

## Additional Information

### Citation Information

BibTeX:

```bibtex
@inproceedings{amblard-etal-2020-french,
    title = "A {F}rench Version of the {F}ra{C}a{S} Test Suite",
    author = "Amblard, Maxime  and
      Beysson, Cl{\'e}ment  and
      de Groote, Philippe  and
      Guillaume, Bruno  and
      Pogodalla, Sylvain",
    editor = "Calzolari, Nicoletta  and
      B{\'e}chet, Fr{\'e}d{\'e}ric  and
      Blache, Philippe  and
      Choukri, Khalid  and
      Cieri, Christopher  and
      Declerck, Thierry  and
      Goggi, Sara  and
      Isahara, Hitoshi  and
      Maegaard, Bente  and
      Mariani, Joseph  and
      Mazo, H{\'e}l{\`e}ne  and
      Moreno, Asuncion  and
      Odijk, Jan  and
      Piperidis, Stelios",
    booktitle = "Proceedings of the Twelfth Language Resources and Evaluation Conference",
    month = may,
    year = "2020",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://aclanthology.org/2020.lrec-1.721",
    pages = "5887--5895",
    abstract = "This paper presents a French version of the FraCaS test suite. This test suite, originally written in English, contains problems illustrating semantic inference in natural language. We describe linguistic choices we had to make when translating the FraCaS test suite in French, and discuss some of the issues that were raised by the translation. We also report an experiment we ran in order to test both the translation and the logical semantics underlying the problems of the test suite. This provides a way of checking formal semanticists{'} hypotheses against actual semantic capacity of speakers (in the present case, French speakers), and allow us to compare the results we obtained with the ones of similar experiments that have been conducted for other languages.",
    language = "English",
    ISBN = "979-10-95546-34-4",
}
```

ACL:

Maxime Amblard, Clément Beysson, Philippe de Groote, Bruno Guillaume, and Sylvain Pogodalla. 2020. A French Version of the FraCaS Test Suite. In Proceedings of the Twelfth Language Resources and Evaluation Conference, pages 5887–5895, Marseille, France. European Language Resources Association.