Dataset Card for FQuAD

Dataset Summary

We introduce FQuAD, a native French Question Answering Dataset.

FQuAD contains 25,000+ question and answer pairs. Fine-tuning CamemBERT on FQuAD yields an F1 score of 88% and an exact match of 77.9%. The dataset was developed to provide a SQuAD equivalent in the French language. Questions are original and based on high-quality Wikipedia articles.

Supported Tasks and Leaderboards

  • closed-domain-qa, text-retrieval: This dataset is intended for closed-domain question answering, but can also be used for information-retrieval tasks.


Languages

This dataset is exclusively in French (fr), with context data from Wikipedia and questions from French university students.

Dataset Structure

Data Instances


  • Size of downloaded dataset files: 3.14 MB
  • Size of the generated dataset: 6.62 MB
  • Total amount of disk used: 9.76 MB

An example of 'validation' looks as follows.

This example was too long and was cropped:

    {
        "answers": {
            "answers_starts": [161, 46, 204],
            "texts": ["La Vierge aux rochers", "documents contemporains", "objets de spéculations"]
        },
        "context": "\"Les deux tableaux sont certes décrits par des documents contemporains à leur création mais ceux-ci ne le font qu'indirectement ...",
        "questions": ["Que concerne principalement les documents ?", "Par quoi sont décrit les deux tableaux ?", "Quels types d'objets sont les deux tableaux aux yeux des chercheurs ?"]
    }
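The answers_starts values are character offsets into context, so each answer span can be recovered by slicing. A minimal sketch with a toy record (the text and offsets below are illustrative stand-ins, not actual dataset content):

```python
# Toy FQuAD-style record: same field layout as the dataset, invented content.
record = {
    "context": "La Joconde est un portrait peint par Léonard de Vinci.",
    "questions": ["Qui a peint La Joconde ?"],
    "answers": {
        "texts": ["Léonard de Vinci"],
        "answers_starts": [37],  # character offset of the answer in `context`
    },
}

# Each answer span is recoverable by slicing the context at its offset.
for text, start in zip(record["answers"]["texts"],
                       record["answers"]["answers_starts"]):
    assert record["context"][start:start + len(text)] == text

print("answer spans verified")
```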

Data Fields

The data fields are the same among all splits.


  • context: a string feature.
  • questions: a list of string features.
  • answers: a dictionary feature containing:
    • texts: a string feature.
    • answers_starts: an int32 feature.
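The field types above can be checked with a small dependency-free validator (a sketch; `validate_record` is a hypothetical helper, not part of the `datasets` library):

```python
def validate_record(record):
    """Check that a record matches the FQuAD field schema described above."""
    assert isinstance(record["context"], str)
    assert isinstance(record["questions"], list)
    assert all(isinstance(q, str) for q in record["questions"])
    answers = record["answers"]
    assert all(isinstance(t, str) for t in answers["texts"])
    assert all(isinstance(s, int) for s in answers["answers_starts"])
    # Every answer text must have a matching start offset.
    assert len(answers["texts"]) == len(answers["answers_starts"])
    return True

# Toy record with the same layout as the example shown earlier.
example = {
    "context": "Les deux tableaux sont certes décrits par des documents contemporains.",
    "questions": ["Par quoi sont décrits les deux tableaux ?"],
    "answers": {"texts": ["documents contemporains"], "answers_starts": [46]},
}
print(validate_record(example))  # True
```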

Data Splits

The FQuAD dataset has 3 splits: train, validation, and test. The test split is however not released publicly at the moment. The splits contain disjoint sets of articles. The following table contains stats about each split.

Dataset Split  Number of Articles in Split  Number of Paragraphs in Split  Number of Questions in Split
Train          117                          4921                           20731
Validation     18                           768                            3188
Test           10                           532                            2189
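As a rough consistency check, the paragraph and question counts in the table imply just over four questions per paragraph in every split, which lines up with the 4 to 5 questions annotators were asked to write per context (a quick sketch using the counts from the table):

```python
# Questions per paragraph for each split, using the counts from the table.
counts = {
    "train":      {"paragraphs": 4921, "questions": 20731},
    "validation": {"paragraphs": 768,  "questions": 3188},
    "test":       {"paragraphs": 532,  "questions": 2189},
}
for split, c in counts.items():
    ratio = c["questions"] / c["paragraphs"]
    print(f"{split}: {ratio:.2f} questions per paragraph")
```

This prints roughly 4.21 for train, 4.15 for validation, and 4.11 for test.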

Dataset Creation

Curation Rationale

The FQuAD dataset was created by Illuin Technology. It was developed to provide a SQuAD equivalent in the French language. Questions are original and based on high-quality Wikipedia articles.

Source Data

The text used for the contexts is drawn from the curated list of French high-quality Wikipedia articles.


Annotations

Annotations (spans and questions) were written by students of the CentraleSupélec school of engineering. Wikipedia articles were scraped, and Illuin used an internally developed tool to help annotators ask questions and mark the answer spans. Annotators were given paragraph-sized contexts and asked to generate 4 to 5 non-trivial questions about information in the context.

Personal and Sensitive Information

No personal or sensitive information is included in this dataset. This has been manually verified by the dataset curators.

Considerations for Using the Data

Users should consider that this dataset is sampled from Wikipedia, which might not be representative of all QA use cases.

Social Impact of Dataset

The social biases of this dataset have not yet been investigated.

Discussion of Biases

The social biases of this dataset have not yet been investigated, though articles have been selected by their quality and objectivity.

Other Known Limitations

The limitations of the FQuAD dataset have not yet been investigated.

Additional Information

Dataset Curators

Illuin Technology:

Licensing Information

The FQuAD dataset is licensed under the CC BY-NC-SA 3.0 license.

It allows personal and academic research uses of the dataset, but not commercial uses. Concretely, the dataset cannot be used to train a model that is then put into production within a business or a company. For this type of commercial use, we invite FQuAD users to contact the authors to discuss possible partnerships.

Citation Information

@ARTICLE{2020arXiv200206071D,
       author = {Martin, d'Hoffschmidt and Maxime, Vidal and
         Wacim, Belblidia and Tom, Brendlé},
        title = "{FQuAD: French Question Answering Dataset}",
      journal = {arXiv e-prints},
     keywords = {Computer Science - Computation and Language},
         year = "2020",
        month = "Feb",
          eid = {arXiv:2002.06071},
        pages = {arXiv:2002.06071},
archivePrefix = {arXiv},
       eprint = {2002.06071},
 primaryClass = {cs.CL}
}


Contributions

Thanks to @thomwolf, @mariamabarham, @patrickvonplaten, @lewtun, @albertvillanova for adding this dataset. Thanks to @ManuelFay for providing information on the dataset creation process.
