Languages: French
Multilinguality: monolingual
Size Categories: 1K<n<10K
Language Creators: crowdsourced, found
Annotations Creators: crowdsourced
Source Datasets: original
Commit d54179d (parent: bb49125), committed by system (HF staff)

Update files from the datasets library (from 1.18.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.18.0
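This commit was generated against release 1.18.0 of the `datasets` library. As a quick local sanity check (a minimal sketch, not part of the commit itself), you can confirm which version of the library is installed:

```python
# Minimal sketch: print the locally installed version of the Hugging Face
# `datasets` library; the card update below was produced with 1.18.0.
import datasets

print(datasets.__version__)  # e.g. "1.18.0" or later
```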

Files changed (1): README.md (+6 -5)
@@ -23,9 +23,10 @@ task_ids:
 - extractive-qa
 - closed-domain-qa
 paperswithcode_id: fquad
+pretty_name: "FQuAD: French Question Answering Dataset"
 ---
 
-# Dataset Card for "fquad"
+# Dataset Card for FQuAD
 
 ## Table of Contents
 - [Dataset Description](#dataset-description)
@@ -63,10 +64,10 @@ paperswithcode_id: fquad
 ### Dataset Summary
 
 FQuAD: French Question Answering Dataset
-We introduce FQuAD, a native French Question Answering Dataset.
+We introduce FQuAD, a native French Question Answering Dataset.
 
 FQuAD contains 25,000+ question and answer pairs.
-Finetuning CamemBERT on FQuAD yields a F1 score of 88% and an exact match of 77.9%.
+Finetuning CamemBERT on FQuAD yields a F1 score of 88% and an exact match of 77.9%.
 Developped to provide a SQuAD equivalent in the French language. Questions are original and based on high quality Wikipedia articles.
 
 ### Supported Tasks and Leaderboards
@@ -116,7 +117,7 @@ The data fields are the same among all splits.
 
 ### Data Splits
 
-The FQuAD dataset has 3 splits: _train_, _validation_, and _test_. The _test_ split is however not released publicly at the moment. The splits contain disjoint sets of articles. The following table contains stats about each split.
+The FQuAD dataset has 3 splits: _train_, _validation_, and _test_. The _test_ split is however not released publicly at the moment. The splits contain disjoint sets of articles. The following table contains stats about each split.
 
 Dataset Split | Number of Articles in Split | Number of paragraphs in split | Number of questions in split
 --------------|------------------------------|--------------------------|-------------------------
@@ -134,7 +135,7 @@ Test | 10 | 532 | 2189
 The text used for the contexts are from the curated list of French High-Quality Wikipedia [articles](https://fr.wikipedia.org/wiki/Cat%C3%A9gorie:Article_de_qualit%C3%A9).
 ### Annotations
 
-Annotations (spans and questions) are written by students of the CentraleSupélec school of engineering.
+Annotations (spans and questions) are written by students of the CentraleSupélec school of engineering.
 Wikipedia articles were scraped and Illuin used an internally-developped tool to help annotators ask questions and indicate the answer spans.
 Annotators were given paragraph sized contexts and asked to generate 4/5 non-trivial questions about information in the context.
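For context on how the card's contents map to actual usage, here is a hedged sketch of loading the dataset with the `datasets` library and inspecting one record. The hub id `fquad`, the split names, and the SQuAD-style field layout (`context`, `question`, `answers` with `text` and `answer_start`) are assumptions based on the extractive-qa tag and the splits described above, not something stated by this commit; the loader may also require the data files to be downloaded manually.

```python
# Hedged sketch: load FQuAD with the Hugging Face `datasets` library and
# look at one training example. The dataset id "fquad", the split names,
# and the SQuAD-style fields below are assumptions; depending on the
# loading script, the data files may need to be downloaded manually first.
from datasets import load_dataset

dataset = load_dataset("fquad")  # assumed hub id
train = dataset["train"]         # splits: train / validation (test is not public)

example = train[0]
print(example["context"])   # paragraph from a French Wikipedia article
print(example["question"])  # crowd-written question about that paragraph
print(example["answers"])   # assumed SQuAD-style: {"text": [...], "answer_start": [...]}
```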