Datasets:
head_qa

Task Categories: question-answering
Languages: en es
Multilinguality: monolingual
Size Categories: 1K<n<10K
Licenses: mit
Language Creators: expert-generated
Annotations Creators: no-annotation
Source Datasets: original

Dataset Card for HEAD-QA

Dataset Summary

HEAD-QA is a multi-choice HEAlthcare Dataset. The questions come from exams to access a specialized position in the Spanish healthcare system, and are challenging even for highly specialized humans. They are designed by the Ministerio de Sanidad, Consumo y Bienestar Social, which also provides direct access to the exams of the last 5 years (in Spanish).

Date of the last update of the documents subject to reuse: January 14th, 2019.

HEAD-QA tries to make these questions accessible to the Natural Language Processing community. We hope it is a useful resource towards achieving better QA systems. The dataset contains questions about the following topics:

  • Medicine
  • Nursing
  • Psychology
  • Chemistry
  • Pharmacology
  • Biology

Supported Tasks and Leaderboards

  • multiple-choice-qa: HEAD-QA is a multi-choice question answering testbed to encourage research on complex reasoning.
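To give a feel for the task format, here is a minimal sketch of a random-guess baseline over the Spanish test split. This is an illustrative assumption, not an official baseline from the paper; the loading calls follow the Languages section below, and the casts and the isinstance check hedge against the identifiers and the answers field being returned in slightly different shapes depending on the datasets version.

import random

from datasets import load_dataset

# load the Spanish configuration (the default one)
data = load_dataset('head_qa', 'es')

random.seed(0)
correct = 0
for ex in data['test']:
    answers = ex['answers']
    # depending on the `datasets` version, `answers` may come back as a
    # list of {'aid', 'atext'} dicts (as in the example below) or as a
    # dict of lists; handle both
    aids = [a['aid'] for a in answers] if isinstance(answers, list) else answers['aid']
    guess = random.choice(aids)
    # `ra` may be stored as a string or an int depending on the version
    correct += int(guess) == int(ex['ra'])

print(f"random baseline accuracy: {correct / len(data['test']):.3f}")

With 4-5 answer options per question, such a baseline should land around 20-25% accuracy, which is the floor any QA system on this testbed needs to beat.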

Languages

The questions and answers are available in both Spanish (BCP-47 code: 'es-ES') and English (BCP-47 code: 'en').

The default language is Spanish:

from datasets import load_dataset

# Spanish ('es') is the default configuration
data_es = load_dataset('head_qa')

# the English configuration must be requested explicitly
data_en = load_dataset('head_qa', 'en')

Dataset Structure

Data Instances

A typical data point comprises a question qtext, a list of possible answers atext, and the identifier of the right answer ra.

An example from the HEAD-QA dataset looks as follows:

{
'qid': '1', 
'category': 'biology', 
'qtext': 'Los potenciales postsinápticos excitadores:',
'answers': [
    {
        'aid': 1, 
        'atext': 'Son de tipo todo o nada.'
    }, 
    {
        'aid': 2, 
        'atext': 'Son hiperpolarizantes.'
    },
    {
        'aid': 3, 
        'atext': 'Se pueden sumar.'
    },
    {
        'aid': 4, 
        'atext': 'Se propagan a largas distancias.'
    },
    {
        'aid': 5, 
        'atext': 'Presentan un periodo refractario.'
    }],
'ra': '3',
'image': '',
'name': 'Cuaderno_2013_1_B',
'year': '2013'
}

Data Fields

  • qid: question identifier (int)
  • category: category of the question: "medicine", "nursing", "psychology", "chemistry", "pharmacology", "biology"
  • qtext: question text
  • answers: list of possible answers. Each element of the list is a dictionary with 2 keys:
    • aid: answer identifier (int)
    • atext: answer text
  • ra: aid of the right answer (int)
  • image: optional, some of the questions refer to an image
  • name: name of the exam from which the question was extracted
  • year: year in which the exam took place
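To make these field descriptions concrete, here is a minimal sketch that looks up the text of the right answer in an instance shaped like the example above. The field names come from this card; the int() casts hedge against the identifiers being stored as strings (as they appear in the example) rather than ints (as listed above).

def right_answer_text(example):
    # `ra` holds the `aid` of the right answer; cast both sides to int
    # since the identifiers appear as strings in the example above
    ra = int(example['ra'])
    for answer in example['answers']:
        if int(answer['aid']) == ra:
            return answer['atext']
    return None

# for the example instance above, this returns 'Se pueden sumar.'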

Data Splits

The data is split into train, validation and test sets for each of the two languages. The split sizes are as follows:

          Train  Validation  Test
Spanish    2657        1366  2742
English    2657        1366  2742
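A quick sanity check of these counts, assuming the loading calls shown in the Languages section:

from datasets import load_dataset

for config in ('es', 'en'):
    data = load_dataset('head_qa', config)
    sizes = {split: len(data[split]) for split in ('train', 'validation', 'test')}
    print(config, sizes)  # expected: {'train': 2657, 'validation': 1366, 'test': 2742}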

Dataset Creation

Curation Rationale

As motivation for the creation of this dataset, here is the abstract of the paper:

"We present HEAD-QA, a multi-choice question answering testbed to encourage research on complex reasoning. The questions come from exams to access a specialized position in the Spanish healthcare system, and are challenging even for highly specialized humans. We then consider monolingual (Spanish) and cross-lingual (to English) experiments with information retrieval and neural techniques. We show that: (i) HEAD-QA challenges current methods, and (ii) the results lag well behind human performance, demonstrating its usefulness as a benchmark for future work."

Source Data

Initial Data Collection and Normalization

The questions come from exams to access a specialized position in the Spanish healthcare system, and are designed by the Ministerio de Sanidad, Consumo y Bienestar Social, which also provides direct access to the exams of the last 5 years (in Spanish).

Who are the source language producers?

The dataset was created by David Vilares and Carlos Gómez-Rodríguez.

Annotations

The dataset does not contain any additional annotations.

Annotation process

[N/A]

Who are the annotators?

[N/A]

Personal and Sensitive Information

[More Information Needed]

Considerations for Using the Data

Social Impact of Dataset

[More Information Needed]

Discussion of Biases

[More Information Needed]

Other Known Limitations

[More Information Needed]

Additional Information

Dataset Curators

The dataset was created by David Vilares and Carlos Gómez-Rodríguez.

Licensing Information

According to the HEAD-QA homepage:

The Ministerio de Sanidad, Consumo y Bienestar Social allows the redistribution of the exams and their content under certain conditions:

  • The denaturalization of the content of the information is prohibited under any circumstances.
  • The user is obliged to cite the source of the documents subject to reuse.
  • The user is obliged to indicate the date of the last update of the documents subject to reuse.

According to the HEAD-QA repository:

The dataset is licensed under the MIT License.

Citation Information

@inproceedings{vilares-gomez-rodriguez-2019-head,
    title = "{HEAD}-{QA}: A Healthcare Dataset for Complex Reasoning",
    author = "Vilares, David  and
      G{\'o}mez-Rodr{\'i}guez, Carlos",
    booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
    month = jul,
    year = "2019",
    address = "Florence, Italy",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/P19-1092",
    doi = "10.18653/v1/P19-1092",
    pages = "960--966",
    abstract = "We present HEAD-QA, a multi-choice question answering testbed to encourage research on complex reasoning. The questions come from exams to access a specialized position in the Spanish healthcare system, and are challenging even for highly specialized humans. We then consider monolingual (Spanish) and cross-lingual (to English) experiments with information retrieval and neural techniques. We show that: (i) HEAD-QA challenges current methods, and (ii) the results lag well behind human performance, demonstrating its usefulness as a benchmark for future work.",
}

Contributions

Thanks to @mariagrandury for adding this dataset.
