---
license: mit
language:
  - fr
size_categories:
  - 100K<n<1M
task_categories:
  - text-classification
tags:
  - textual-entailment
  - DFP
  - french prompts
annotations_creators:
  - found
language_creators:
  - found
multilinguality:
  - monolingual
source_datasets:
  - multilingual-NLI-26lang-2mil7
---

ling_fr_prompt_textual_entailment

Summary

ling_fr_prompt_textual_entailment is a subset of the Dataset of French Prompts (DFP).
It contains 110,000 rows that can be used for a textual-entailment task.
The original data (without prompts) comes from the dataset multilingual-NLI-26lang-2mil7 by Laurer et al., of which only the French ling part (fr_ling) has been kept.
A list of prompts (see below) was then applied to build the input and target columns, yielding the same format as the xP3 dataset by Muennighoff et al.

Prompts used

List

22 prompts were created for this dataset. Each formulation is proposed in three registers: the indicative, the informal tutoiement, and the formal vouvoiement.

"""Prendre l'énoncé suivant comme vrai : "  """+premise+"""  "\n Alors l'énoncé suivant : "  """+hypothesis+"""  " est "vrai", "faux", ou "incertain" ?""", 
"""Prends l'énoncé suivant comme vrai : "  """+premise+"""  "\n Alors l'énoncé suivant : "  """+hypothesis+"""  " est "vrai", "faux", ou "incertain" ?""",   
"""Prenez l'énoncé suivant comme vrai : "  """+premise+"""  "\n Alors l'énoncé suivant : "  """+hypothesis+"""  " est "vrai", "faux", ou "incertain" ?""",   
'"'+premise+'"\nQuestion : Cela implique-t-il que "'+hypothesis+'" ? "vrai", "faux", ou "incertain" ?',   
'"'+premise+'"\nQuestion : "'+hypothesis+'" est "vrai", "faux", ou "peut-être" ?',   
""" "  """+premise+"""  "\n D'après le passage précédent, est-il vrai que "  """+hypothesis+"""  " ? "vrai", "faux", ou "incertain" ?""",   
""" "  """+premise+"""  "\nSur la base de ces informations, l'énoncé est-il : "  """+hypothesis+"""  " ? "vrai", "faux", ou "incertain" ?""",   
""" "  """+premise+"""  "\nEn gardant à l'esprit le texte ci-dessus, considérez : "  """+hypothesis+"""  "\n Est-ce que c'est "vrai", "faux", ou "incertain" ?""",   
""" "  """+premise+"""  "\nEn gardant à l'esprit le texte ci-dessus, considére : "  """+hypothesis+"""  "\n Est-ce que c'est "vrai", "faux", ou "peut-être" ?""",   
""" "  """+premise+"""  "\nEn utilisant uniquement la description ci-dessus et ce que vous savez du monde, "  """+hypothesis+"""  " est-ce "vrai", "faux", ou "incertain" ?""",   
""" "  """+premise+"""  "\nEn utilisant uniquement la description ci-dessus et ce que tu sais du monde, "  """+hypothesis+"""  " est-ce "vrai", "faux", ou "incertain" ?""",   
"""Étant donné que "  """+premise+"""  ", s'ensuit-il que "  """+hypothesis+"""  " ? "vrai", "faux", ou "incertain" ?""",   
"""Étant donné que "  """+premise+"""  ", est-il garanti que "  """+hypothesis+"""  " ? "vrai", "faux", ou "incertain" ?""",   
'Étant donné '+premise+', doit-on supposer que '+hypothesis+' est "vrai", "faux", ou "incertain" ?',   
'Étant donné '+premise+', dois-je supposer que '+hypothesis+' est "vrai", "faux", ou "incertain" ?',   
'Sachant que '+premise+', doit-on supposer que '+hypothesis+' est "vrai", "faux", ou "incertain" ?',   
'Sachant que '+premise+', dois-je supposer que '+hypothesis+' est "vrai", "faux", ou "incertain" ?',   
'Étant donné que '+premise+', il doit donc être vrai que '+hypothesis+' ? "vrai", "faux", ou "incertain" ?',   
"""Supposons que "  """+premise+"""  ", pouvons-nous déduire que "  """+hypothesis+"""  " ? "vrai", "faux", ou "incertain" ?""",   
"""Supposons que "  """+premise+"""  ", puis-je déduire que "  """+hypothesis+"""  " ? "vrai", "faux", ou "incertain" ?""",   
"""Supposons qu'il est vrai que "  """+premise+"""  ". Alors, est-ce que "  """+hypothesis+"""  " ? "vrai", "faux", ou "incertain" ?""",   
"""Supposons qu'il soit vrai que "  """+premise+"""  ",\n Donc, "  """+hypothesis+"""  " ? "vrai", "faux", ou "incertain" ?"""

Features used in the prompts

In the prompt list above, premise, hypothesis and targets were constructed as follows:

from datasets import load_dataset

# Normalise spacing around punctuation in the French text
def clean(text):
    return (str(text).replace(' . ', '. ').replace(' .', '. ')
            .replace('( ', '(').replace(' )', ')')
            .replace(' , ', ', ').replace("' ", "'"))

moritz = load_dataset('MoritzLaurer/multilingual-NLI-26lang-2mil7')
ling = moritz['fr_ling']
premises = [clean(p) for p in ling['premise']]
hypotheses = [clean(h) for h in ling['hypothesis']]
# NLI labels: 0 = entailment ("vrai"), 1 = neutral ("incertain"), 2 = contradiction ("faux")
targets = [str(l).replace('0', 'vrai').replace('1', 'incertain').replace('2', 'faux') for l in ling['label']]
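A self-contained, offline check of the two transformations (the helper name, the mapping dict, and the sample sentence are illustrative, not from the card):

```python
# Same chained replaces used to normalise the French text columns
def clean(text):
    return (str(text).replace(' . ', '. ').replace(' .', '. ')
            .replace('( ', '(').replace(' )', ')')
            .replace(' , ', ', ').replace("' ", "'"))

# Label mapping used for the targets column:
# 0 = entailment, 1 = neutral, 2 = contradiction
label_to_target = {0: 'vrai', 1: 'incertain', 2: 'faux'}

print(clean("C' est vrai , non ( oui ) ."))
print(label_to_target[0])
```

Note that the replacements are order-sensitive: spaces before punctuation are stripped before the parenthesis and apostrophe fixes are applied.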

Splits

  • train with 110,000 samples
  • no validation split
  • no test split

How to use?

from datasets import load_dataset
dataset = load_dataset("CATIE-AQ/ling_fr_prompt_textual_entailment")

Citation

Original data

@article{laurer_less_2022,
  title = {Less {Annotating}, {More} {Classifying} – {Addressing} the {Data} {Scarcity} {Issue} of {Supervised} {Machine} {Learning} with {Deep} {Transfer} {Learning} and {BERT} - {NLI}},
  url = {https://osf.io/74b8k},
  language = {en-us},
  urldate = {2022-07-28},
  journal = {Preprint},
  author = {Laurer, Moritz and Atteveldt, Wouter van and Casas, Andreu Salleras and Welbers, Kasper},
  month = jun,
  year = {2022},
  note = {Publisher: Open Science Framework},
}

This Dataset

@misc{centre_aquitain_des_technologies_de_l'information_et_electroniques_2023,
  author = {{Centre Aquitain des Technologies de l'Information et Electroniques}},
  title = {DFP (Revision 1d24c09)},
  year = 2023,
  url = {https://huggingface.co/datasets/CATIE-AQ/DFP},
  doi = {10.57967/hf/1200},
  publisher = {Hugging Face}
}

License

MIT