Dataset Card for FQuAD

Dataset Summary

FQuAD (French Question Answering Dataset) is a native French question answering dataset.

FQuAD contains 25,000+ question and answer pairs. Fine-tuning CamemBERT on FQuAD yields an F1 score of 88% and an exact match score of 77.9%. The dataset was developed to provide a SQuAD equivalent in the French language. Questions are original and based on high-quality Wikipedia articles.

Please note that this dataset is licensed for non-commercial purposes and that users must agree to the following terms and conditions:

  1. Use FQuAD only for internal research purposes.
  2. Not make any copy except a safety one.
  3. Not redistribute it (or part of it) in any way, even for free.
  4. Not sell it or use it for any commercial purpose. Contact us for a possible commercial licence.
  5. Mention the corpus origin and Illuin Technology in all publications about experiments using FQuAD.
  6. Redistribute to Illuin Technology any improved or enriched version you could make of that corpus.

The data must be downloaded manually; request access and download it from: https://fquad.illuin.tech/
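Because the files are distributed only on request, they cannot be streamed from the Hub; the directory containing the downloaded files has to be passed to the loader. A minimal loading sketch in Python, assuming the downloaded files have been unpacked into ./data/fquad (the path is illustrative):

from datasets import load_dataset

# Assumes the FQuAD files requested from https://fquad.illuin.tech/
# have been placed in ./data/fquad (illustrative path).
dataset = load_dataset("fquad", data_dir="./data/fquad")

print(dataset)              # DatasetDict with "train" and "validation" splits
print(dataset["train"][0])  # first paragraph-level example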

Supported Tasks and Leaderboards

  • closed-domain-qa, text-retrieval: This dataset is intended to be used for closed-domain-qa, but can also be used for information retrieval tasks.

Languages

This dataset is exclusively in French, with context data from Wikipedia and questions from French university students (fr).

Dataset Structure

Data Instances

default

  • Size of downloaded dataset files: 3.29 MB
  • Size of the generated dataset: 6.94 MB
  • Total amount of disk used: 10.23 MB

An example of 'validation' looks as follows.

This example was too long and was cropped:

{
    "answers": {
        "answers_starts": [161, 46, 204],
        "texts": ["La Vierge aux rochers", "documents contemporains", "objets de spéculations"]
    },
    "context": "\"Les deux tableaux sont certes décrits par des documents contemporains à leur création mais ceux-ci ne le font qu'indirectement ...",
    "questions": ["Que concerne principalement les documents ?", "Par quoi sont décrit les deux tableaux ?", "Quels types d'objets sont les deux tableaux aux yeux des chercheurs ?"]
}

Data Fields

The data fields are the same among all splits.

default

  • context: a string feature.
  • questions: a list of string features.
  • answers: a dictionary feature containing:
    • texts: a list of string features (one answer text per question).
    • answers_starts: a list of int32 features (the start offset of each answer within the context).
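The answer start offsets appear to be character offsets into the context string (as in SQuAD-style datasets), so each annotated span can be recovered by slicing. A short sketch, assuming a dataset loaded as in the earlier example and the field names documented above:

example = dataset["validation"][0]

context = example["context"]
questions = example["questions"]
answer_texts = example["answers"]["texts"]
answer_starts = example["answers"]["answers_starts"]

# Each question pairs with one answer text and one start offset.
for question, text, start in zip(questions, answer_texts, answer_starts):
    span = context[start:start + len(text)]  # should recover the annotated answer
    print(question)
    print(f"  answer: {text!r} / recovered span: {span!r}")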

Data Splits

The FQuAD dataset has 3 splits: train, validation, and test. However, the test split is not released publicly at the moment. The splits contain disjoint sets of articles. The following table contains statistics about each split.

Dataset Split   Number of Articles in Split   Number of Paragraphs in Split   Number of Questions in Split
Train           117                           4921                            20731
Validation      -                             768                             3188
Test            10                            532                             2189
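Note that each row in the loaded dataset corresponds to a paragraph (one context with several questions), not to a single question, so the row count per split matches the paragraph column above. A small sketch, assuming the dataset was loaded as in the earlier example, that recomputes the per-split counts:

for split_name, split in dataset.items():
    n_paragraphs = len(split)
    n_questions = sum(len(example["questions"]) for example in split)
    print(f"{split_name}: {n_paragraphs} paragraphs, {n_questions} questions")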

Dataset Creation

Curation Rationale

The FQuAD dataset was created by Illuin Technology. It was developed to provide a SQuAD equivalent in the French language. Questions are original and based on high-quality Wikipedia articles.

Source Data

The texts used for the contexts come from the curated list of high-quality French Wikipedia articles.

Annotations

Annotations (spans and questions) were written by students of the CentraleSupélec school of engineering. Wikipedia articles were scraped, and Illuin used an internally developed tool to help annotators write questions and mark the answer spans. Annotators were given paragraph-sized contexts and asked to generate 4 to 5 non-trivial questions about information in the context.

Personal and Sensitive Information

No personal or sensitive information is included in this dataset. This has been manually verified by the dataset curators.

Considerations for Using the Data

Users should consider that this dataset is sampled from Wikipedia, which might not be representative of all QA use cases.

Social Impact of Dataset

The social biases of this dataset have not yet been investigated.

Discussion of Biases

The social biases of this dataset have not yet been investigated, though articles have been selected by their quality and objectivity.

Other Known Limitations

The limitations of the FQuAD dataset have not yet been investigated.

Additional Information

Dataset Curators

Illuin Technology: https://fquad.illuin.tech/

Licensing Information

The FQuAD dataset is licensed under the CC BY-NC-SA 3.0 license.

It allows personal and academic research use of the dataset, but not commercial use. Concretely, this means the dataset cannot be used to train a model that is then put into production within a business or a company. For this type of commercial use, we invite FQuAD users to contact the authors to discuss possible partnerships.

Citation Information

@ARTICLE{2020arXiv200206071,
       author = {d'Hoffschmidt, Martin and Vidal, Maxime and
         Belblidia, Wacim and Brendlé, Tom},
        title = "{FQuAD: French Question Answering Dataset}",
      journal = {arXiv e-prints},
     keywords = {Computer Science - Computation and Language},
         year = "2020",
        month = "Feb",
          eid = {arXiv:2002.06071},
        pages = {arXiv:2002.06071},
archivePrefix = {arXiv},
       eprint = {2002.06071},
 primaryClass = {cs.CL}
}

Contributions

Thanks to @thomwolf, @mariamabarham, @patrickvonplaten, @lewtun, @albertvillanova for adding this dataset. Thanks to @ManuelFay for providing information on the dataset creation process.
