---
annotations_creators:
  - crowdsourced
language_creators:
  - crowdsourced
language:
  - ar
license:
  - mit
multilinguality:
  - monolingual
size_categories:
  - 1K<n<10K
source_datasets:
  - original
task_categories:
  - question-answering
task_ids:
  - extractive-qa
paperswithcode_id: arcd
pretty_name: ARCD
language_bcp47:
  - ar-SA
dataset_info:
  config_name: plain_text
  features:
    - name: id
      dtype: string
    - name: title
      dtype: string
    - name: context
      dtype: string
    - name: question
      dtype: string
    - name: answers
      sequence:
        - name: text
          dtype: string
        - name: answer_start
          dtype: int32
  splits:
    - name: train
      num_bytes: 811036
      num_examples: 693
    - name: validation
      num_bytes: 885620
      num_examples: 702
  download_size: 365858
  dataset_size: 1696656
configs:
  - config_name: plain_text
    data_files:
      - split: train
        path: plain_text/train-*
      - split: validation
        path: plain_text/validation-*
    default: true
---

# Dataset Card for "arcd"

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

### Dataset Summary

The Arabic Reading Comprehension Dataset (ARCD) is composed of 1,395 questions posed by crowdworkers on Wikipedia articles.
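
A minimal sketch of loading the dataset with the Hugging Face `datasets` library, assuming the `arcd` dataset id and the `plain_text` config named in the metadata above:

```python
from datasets import load_dataset

# Load the default (and only) config; this returns a DatasetDict
# with "train" and "validation" splits.
ds = load_dataset("arcd", "plain_text")

print({split: ds[split].num_rows for split in ds})
# Expected per the metadata above: {'train': 693, 'validation': 702}
```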

### Supported Tasks and Leaderboards

More Information Needed

### Languages

The dataset is in Arabic (`ar`), per the `language` field in the metadata above.

## Dataset Structure

### Data Instances

#### plain_text

- **Size of downloaded dataset files:** 1.94 MB
- **Size of the generated dataset:** 1.70 MB
- **Total amount of disk used:** 3.64 MB

An example of 'train' looks as follows.

This example was too long and was cropped:

```
{
    "answers": "{\"answer_start\": [34], \"text\": [\"صحابي من صحابة رسول الإسلام محمد، وعمُّه وأخوه من الرضاعة وأحد وزرائه الأربعة عشر،\"]}...",
    "context": "\"حمزة بن عبد المطلب الهاشمي القرشي صحابي من صحابة رسول الإسلام محمد، وعمُّه وأخوه من الرضاعة وأحد وزرائه الأربعة عشر، وهو خير أع...",
    "id": "621723207492",
    "question": "من هو حمزة بن عبد المطلب؟",
    "title": "حمزة بن عبد المطلب"
}
```

### Data Fields

The data fields are the same among all splits.

#### plain_text

- `id`: a string feature.
- `title`: a string feature.
- `context`: a string feature.
- `question`: a string feature.
- `answers`: a dictionary feature containing:
  - `text`: a string feature.
  - `answer_start`: an int32 feature.
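
As a hedged illustration of how these fields fit together, the sketch below reconstructs an answer from `context` using `answer_start` (in `datasets`, the `answers` sequence is exposed as a dict of parallel lists):

```python
from datasets import load_dataset

ds = load_dataset("arcd", "plain_text")
example = ds["train"][0]

# answers["text"][i] is the answer string that starts at character
# offset answers["answer_start"][i] of example["context"].
for text, start in zip(example["answers"]["text"],
                       example["answers"]["answer_start"]):
    span = example["context"][start:start + len(text)]
    print(text, span == text)  # in SQuAD-style data the span should match
```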

### Data Splits

| name       | train | validation |
|------------|------:|-----------:|
| plain_text |   693 |        702 |

## Dataset Creation

### Curation Rationale

More Information Needed

### Source Data

#### Initial Data Collection and Normalization

More Information Needed

#### Who are the source language producers?

More Information Needed

### Annotations

#### Annotation process

More Information Needed

#### Who are the annotators?

More Information Needed

### Personal and Sensitive Information

More Information Needed

## Considerations for Using the Data

### Social Impact of Dataset

More Information Needed

### Discussion of Biases

More Information Needed

### Other Known Limitations

More Information Needed

## Additional Information

### Dataset Curators

More Information Needed

### Licensing Information

The dataset is released under the MIT License, per the `license` field in the metadata above.

### Citation Information

```bibtex
@inproceedings{mozannar-etal-2019-neural,
    title = "Neural {A}rabic Question Answering",
    author = "Mozannar, Hussein  and
      Maamary, Elie  and
      El Hajal, Karl  and
      Hajj, Hazem",
    booktitle = "Proceedings of the Fourth Arabic Natural Language Processing Workshop",
    month = aug,
    year = "2019",
    address = "Florence, Italy",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/W19-4612",
    doi = "10.18653/v1/W19-4612",
    pages = "108--118",
    abstract = "This paper tackles the problem of open domain factual Arabic question answering (QA) using Wikipedia as our knowledge source. This constrains the answer of any question to be a span of text in Wikipedia. Open domain QA for Arabic entails three challenges: annotated QA datasets in Arabic, large scale efficient information retrieval and machine reading comprehension. To deal with the lack of Arabic QA datasets we present the Arabic Reading Comprehension Dataset (ARCD) composed of 1,395 questions posed by crowdworkers on Wikipedia articles, and a machine translation of the Stanford Question Answering Dataset (Arabic-SQuAD). Our system for open domain question answering in Arabic (SOQAL) is based on two components: (1) a document retriever using a hierarchical TF-IDF approach and (2) a neural reading comprehension model using the pre-trained bi-directional transformer BERT. Our experiments on ARCD indicate the effectiveness of our approach with our BERT-based reader achieving a 61.3 F1 score, and our open domain system SOQAL achieving a 27.6 F1 score.",
}
```

### Contributions

Thanks to @albertvillanova, @lewtun, @mariamabarham, @thomwolf, @tayciryahmed for adding this dataset.