Languages: French
Multilinguality: monolingual
Size Categories: 1K<n<10K
Language Creators: crowdsourced, found
Annotations Creators: crowdsourced
Source Datasets: original
lhoestq committed 23c403c (parent: 4329230)

add dataset_info in dataset metadata

Files changed (1): README.md (+23 −2)
README.md CHANGED

@@ -21,7 +21,28 @@ task_ids:
 - extractive-qa
 - closed-domain-qa
 paperswithcode_id: fquad
-pretty_name: "FQuAD: French Question Answering Dataset"
+pretty_name: 'FQuAD: French Question Answering Dataset'
+dataset_info:
+  features:
+  - name: context
+    dtype: string
+  - name: questions
+    sequence: string
+  - name: answers
+    sequence:
+    - name: texts
+      dtype: string
+    - name: answers_starts
+      dtype: int32
+  splits:
+  - name: train
+    num_bytes: 5910248
+    num_examples: 4921
+  - name: validation
+    num_bytes: 1033253
+    num_examples: 768
+  download_size: 3292236
+  dataset_size: 6943501
 ---
 
 # Dataset Card for FQuAD
@@ -191,4 +212,4 @@ archivePrefix = {arXiv},
 ### Contributions
 
 Thanks to [@thomwolf](https://github.com/thomwolf), [@mariamabarham](https://github.com/mariamabarham), [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun), [@albertvillanova](https://github.com/albertvillanova) for adding this dataset.
-Thanks to [@ManuelFay](https://github.com/manuelfay) for providing information on the dataset creation process.
+Thanks to [@ManuelFay](https://github.com/manuelfay) for providing information on the dataset creation process.
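The added `dataset_info` block is internally consistent: the per-split byte counts sum to the reported `dataset_size`. A minimal sketch verifying this, with the split names and figures copied from the diff above (the dict layout here is just an illustration, not the `datasets` library's own representation):

```python
# Split metadata as declared in the new dataset_info YAML block.
splits = {
    "train": {"num_bytes": 5910248, "num_examples": 4921},
    "validation": {"num_bytes": 1033253, "num_examples": 768},
}

# dataset_size should equal the sum of all splits' num_bytes.
dataset_size = sum(s["num_bytes"] for s in splits.values())
print(dataset_size)  # expected to match the dataset_size field: 6943501
```

Note that `download_size` (3292236) is smaller than `dataset_size` because it measures the compressed download, not the on-disk Arrow data.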