Datasets:
license: cc-by-nc-sa-4.0
task_categories:
- question-answering
language:
- ar
- asm
- bn
- en
- hi
- ne
- tr
tags:
- question-answering
- cultural-aligned
pretty_name: MultiNativQA -- Multilingual Native and Culturally Aligned QA
size_categories:
- 10K<n<100K
dataset_info:
- config_name: Arabic
splits:
- name: train
num_examples: 3649
- name: dev
num_examples: 492
- name: test
num_examples: 988
- config_name: Assamese
splits:
- name: train
num_examples: 1131
- name: dev
num_examples: 157
- name: test
num_examples: 545
- config_name: Bangla-BD
splits:
- name: train
num_examples: 7018
- name: dev
num_examples: 953
- name: test
num_examples: 1521
- config_name: Bangla-IN
splits:
- name: train
num_examples: 6891
- name: dev
num_examples: 930
- name: test
num_examples: 2146
- config_name: English-BD
splits:
- name: train
num_examples: 4761
- name: dev
num_examples: 656
- name: test
num_examples: 1113
- config_name: English-QA
splits:
- name: train
num_examples: 8212
- name: dev
num_examples: 1164
- name: test
num_examples: 2322
- config_name: Hindi
splits:
- name: train
num_examples: 9288
- name: dev
num_examples: 1286
- name: test
num_examples: 2745
- config_name: Nepali
splits:
- name: test
num_examples: 561
- config_name: Turkish
splits:
- name: train
num_examples: 3527
- name: dev
num_examples: 483
- name: test
num_examples: 1218
configs:
- config_name: arabic_qa
data_files:
- split: train
path: arabic_qa/NativQA_ar_msa_qa_train.json
- split: dev
path: arabic_qa/NativQA_ar_msa_qa_dev.json
- split: test
path: arabic_qa/NativQA_ar_msa_qa_test.json
- config_name: assamese_in
data_files:
- split: train
path: assamese_in/NativQA_asm_NA_in_train.json
- split: dev
path: assamese_in/NativQA_asm_NA_in_dev.json
- split: test
path: assamese_in/NativQA_asm_NA_in_test.json
- config_name: bangla_bd
data_files:
- split: train
path: bangla_bd/NativQA_bn_scb_bd_train.json
- split: dev
path: bangla_bd/NativQA_bn_scb_bd_dev.json
- split: test
path: bangla_bd/NativQA_bn_scb_bd_test.json
- config_name: bangla_in
data_files:
- split: train
path: bangla_in/NativQA_bn_scb_in_train.json
- split: dev
path: bangla_in/NativQA_bn_scb_in_dev.json
- split: test
path: bangla_in/NativQA_bn_scb_in_test.json
- config_name: english_bd
data_files:
- split: train
path: english_bd/NativQA_en_NA_bd_train.json
- split: dev
path: english_bd/NativQA_en_NA_bd_dev.json
- split: test
path: english_bd/NativQA_en_NA_bd_test.json
- config_name: english_qa
data_files:
- split: train
path: english_qa/NativQA_en_NA_qa_train.json
- split: dev
path: english_qa/NativQA_en_NA_qa_dev.json
- split: test
path: english_qa/NativQA_en_NA_qa_test.json
- config_name: hindi_in
data_files:
- split: train
path: hindi_in/NativQA_hi_NA_in_train.json
- split: dev
path: hindi_in/NativQA_hi_NA_in_dev.json
- split: test
path: hindi_in/NativQA_hi_NA_in_test.json
- config_name: nepali_np
data_files:
- split: test
path: nepali_np/NativQA_ne_NA_np_test.json
- config_name: turkish_tr
data_files:
- split: train
path: turkish_tr/NativQA_tr_NA_tr_train.json
- split: dev
path: turkish_tr/NativQA_tr_NA_tr_dev.json
- split: test
path: turkish_tr/NativQA_tr_NA_tr_test.json
MultiNativQA: Multilingual Culturally-Aligned Natural Queries For LLMs
Overview
The MultiNativQA dataset is a multilingual, native, and culturally aligned question-answering resource. It spans 7 languages, ranging from high- to extremely low-resource, and covers 9 different locations/cities. To capture linguistic diversity, the dataset includes several dialects for dialect-rich languages like Arabic. In addition to Modern Standard Arabic (MSA), MultiNativQA features six Arabic dialects — Egyptian, Jordanian, Khaliji, Sudanese, Tunisian, and Yemeni.
The dataset also provides two linguistic variations of Bangla, reflecting differences between speakers in Bangladesh and West Bengal, India. Additionally, MultiNativQA includes English queries from Dhaka and Doha, where English is commonly used as a second language, as well as from New York, USA.
The QA pairs in this dataset cover 18 diverse topics, including: Animals, Business, Clothing, Education, Events, Food & Drinks, General, Geography, Immigration, Language, Literature, Names & Persons, Plants, Religion, Sports & Games, Tradition, Travel, and Weather.
MultiNativQA is designed to evaluate and fine-tune large language models (LLMs) for long-form question answering while assessing their cultural adaptability and understanding.
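With the Hugging Face datasets library, the available language/location configurations can be listed programmatically. A minimal sketch, assuming the hub ID QCRI/MultiNativQA used in the download section below:

from datasets import get_dataset_config_names

# List the per-language/location configurations of the dataset
# (e.g., arabic_qa, bangla_bd, nepali_np, ...).
configs = get_dataset_config_names("QCRI/MultiNativQA")
print(configs)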
Directory Structure (JSON files only)
The dataset is organized into directories based on language and region. Each directory contains JSON files for the train, development, and test sets, with the exception of Nepali, which consists of only a test set.
arabic_qa/
NativQA_ar_msa_qa_dev.json
NativQA_ar_msa_qa_test.json
NativQA_ar_msa_qa_train.json
assamese_in/
NativQA_asm_NA_in_dev.json
NativQA_asm_NA_in_test.json
NativQA_asm_NA_in_train.json
bangla_bd/
NativQA_bn_scb_bd_dev.json
NativQA_bn_scb_bd_test.json
NativQA_bn_scb_bd_train.json
bangla_in/
NativQA_bn_scb_in_dev.json
NativQA_bn_scb_in_test.json
NativQA_bn_scb_in_train.json
english_bd/
NativQA_en_NA_bd_dev.json
NativQA_en_NA_bd_test.json
NativQA_en_NA_bd_train.json
english_qa/
NativQA_en_NA_qa_dev.json
NativQA_en_NA_qa_test.json
NativQA_en_NA_qa_train.json
hindi_in/
NativQA_hi_NA_in_dev.json
NativQA_hi_NA_in_test.json
NativQA_hi_NA_in_train.json
nepali_np/
NativQA_ne_NA_np_test.json
turkish_tr/
NativQA_tr_NA_tr_dev.json
NativQA_tr_NA_tr_test.json
NativQA_tr_NA_tr_train.json
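These JSON files can also be read directly, without the datasets library. A minimal sketch using Python's standard json module, assuming the files listed above are available locally; the exact top-level structure (list of records vs. mapping) is an assumption here, so inspect the file if it differs:

import json

# Read one split straight from its JSON file; the path is taken from the listing above.
with open("nepali_np/NativQA_ne_NA_np_test.json", encoding="utf-8") as f:
    data = json.load(f)

# The top-level structure (list of records vs. mapping) is an assumption; inspect if needed.
print(type(data), len(data))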
Example data entry
{
"data_id": "cf92ec1e52b4b3071d263a1063b43928",
"category": "immigration",
"input_query": "How long can you stay in Qatar on a visitors visa?",
"question": "Can I extend my tourist visa in Qatar?",
"is_reliable": "very_reliable",
"answer": "If you would like to extend your visa, you will need to proceed to immigration headquarters in Doha prior to the expiry of your visa and apply there for an extension.",
"source_answer_url": "https://hayya.qa/en/web/hayya/faq"
}
Field Descriptions:
- data_id: Unique identifier for each data entry.
- category: General topic or category of the query (e.g., "health", "religion").
- input_query: The original user-submitted query.
- question: The formalized question derived from the input query.
- is_reliable: Indicates the reliability of the provided answer ("very_reliable", "somewhat_reliable", "unreliable").
- answer: The system-provided answer to the query.
- source_answer_url: URL of the source from which the answer was derived.
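The is_reliable label can, for example, be used to keep only the most trustworthy QA pairs. A minimal sketch with the datasets library; the config name arabic_qa is taken from the configuration list above:

from datasets import load_dataset

# Load one configuration and keep only answers labeled "very_reliable".
dataset = load_dataset("QCRI/MultiNativQA", name="arabic_qa", split="test")
reliable = dataset.filter(lambda example: example["is_reliable"] == "very_reliable")

print(f"{len(reliable)} of {len(dataset)} test QA pairs are marked very_reliable")
print(reliable[0]["question"], "->", reliable[0]["answer"])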
Statistics
Figure: Distribution of the MultiNativQA dataset across different languages.
The dataset consists of two types of data: annotated and unannotated. We consider the unannotated data as additional data. The statistics are given below.
Statistics of the MultiNativQA dataset, showing the final annotated QA pairs per language and location.
| Language | City | Train | Dev | Test | Total |
|---|---|---|---|---|---|
| Arabic | Doha | 3,649 | 492 | 988 | 5,129 |
| Assamese | Assam | 1,131 | 157 | 545 | 1,833 |
| Bangla | Dhaka | 7,018 | 953 | 1,521 | 9,492 |
| Bangla | Kolkata | 6,891 | 930 | 2,146 | 9,967 |
| English | Dhaka | 4,761 | 656 | 1,113 | 6,530 |
| English | Doha | 8,212 | 1,164 | 2,322 | 11,698 |
| Hindi | Delhi | 9,288 | 1,286 | 2,745 | 13,319 |
| Nepali | Kathmandu | -- | -- | 561 | 561 |
| Turkish | Istanbul | 3,527 | 483 | 1,218 | 5,228 |
| Total | | 44,477 | 6,121 | 13,159 | 63,757 |
The statistics of the unannotated additional data are provided below:
| Language-Location | # of QA pairs |
|---|---|
| Arabic-Egypt | 7,956 |
| Arabic-Palestine | 5,679 |
| Arabic-Sudan | 4,718 |
| Arabic-Syria | 11,288 |
| Arabic-Tunisia | 14,789 |
| Arabic-Yemen | 4,818 |
| English-New York | 6,454 |
| Total | 55,702 |
How to download the data
import os
import json

from datasets import load_dataset

dataset_names = ['arabic_qa', 'assamese_in', 'bangla_bd', 'bangla_in', 'english_bd',
                 'english_qa', 'hindi_in', 'nepali_np', 'turkish_tr']
base_dir = "./MNQA/"

for dname in dataset_names:
    output_dir = os.path.join(base_dir, dname)

    # Load every available split for this language/location configuration.
    dataset = load_dataset("QCRI/MultiNativQA", name=dname)

    # Save all splits to the output directory (Arrow format).
    dataset.save_to_disk(output_dir)

    # Additionally export each split as a JSON file.
    for split in ['train', 'dev', 'test']:
        if split not in dataset:
            continue  # e.g., nepali_np only provides a test split
        data = [item for item in dataset[split]]
        output_file = os.path.join(output_dir, f"{split}.json")
        with open(output_file, 'w', encoding='utf-8') as f:
            json.dump(data, f, ensure_ascii=False, indent=4)
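If only a single language and split is needed, saving to disk can be skipped and the split loaded directly. For example, Nepali, which has only a test set:

from datasets import load_dataset

# Load only the Nepali test split (Nepali has no train/dev split).
nepali_test = load_dataset("QCRI/MultiNativQA", name="nepali_np", split="test")
print(nepali_test[0]["question"])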
License
The dataset is distributed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0). The full license text can be found in the accompanying licenses_by-nc-sa_4.0_legalcode.txt file.
Contact & Additional Information
For more details, please visit our official website.
Citation
The full paper is available at https://arxiv.org/abs/2407.09823.
@article{hasan2024nativqa,
  title={NativQA: Multilingual Culturally-Aligned Natural Query for LLMs},
  author={Hasan, Md Arid and Hasanain, Maram and Ahmad, Fatema and Laskar, Sahinur Rahman and Upadhyay, Sunaya and Sukhadia, Vrunda N and Kutlu, Mucahid and Chowdhury, Shammur Absar and Alam, Firoj},
  journal={arXiv preprint arXiv:2407.09823},
  year={2024},
  publisher={arXiv:2407.09823},
  url={https://arxiv.org/abs/2407.09823}
}