---
dataset_info:
  features:
  - name: id
    dtype: string
  - name: language
    dtype: string
  - name: question
    dtype: string
  - name: choices
    struct:
    - name: text
      sequence: string
    - name: label
      sequence: string
  - name: answerKey
    dtype: string
  splits:
  - name: train
    num_bytes: 357825
    num_examples: 1119
  - name: validation
    num_bytes: 98118
    num_examples: 299
  - name: test
    num_bytes: 382265
    num_examples: 1172
  download_size: 433794
  dataset_size: 838208
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
license: apache-2.0
language:
- sw
---

# Dataset Card for ARC_Challenge_Swahili

## Dataset Summary

ARC_Challenge_Swahili is a Swahili translation of the original English ARC (AI2 Reasoning Challenge) dataset. The dataset evaluates the ability of AI systems to answer grade-school-level multiple-choice science questions. The Swahili version was created using a combination of machine translation and human annotation to ensure high-quality, accurate translations.
## Translation Methodology

The translation process for the ARC_Challenge_Swahili dataset involved two main stages:

### 1. Machine Translation

The initial translation from English to Swahili was performed using the SeamlessM4T translation model (`SeamlessM4TModel`).

* The following parameters were used for the translation:

```python
# Assumes `model` (a SeamlessM4T checkpoint), `tokenizer`, `device`, and `dest_lang`
# (the target-language code for Swahili) are already initialised; `text` is the
# English source string (or batch of strings) to translate.
inputs = tokenizer(text, return_tensors="pt", padding=True, truncation=True, max_length=1024).to(device)
outputs = model.generate(**inputs, tgt_lang=dest_lang)
translation = tokenizer.batch_decode(outputs, skip_special_tokens=True)
```
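
The snippet above leaves `model`, `tokenizer`, `device`, `dest_lang`, and `text` undefined, and the card does not state which SeamlessM4T checkpoint or language code was used. Purely as an illustration, a minimal setup under those assumptions might look like this (the checkpoint name and the `swh` code are assumptions, not details confirmed by the card):

```python
# Illustrative setup only -- the checkpoint name and language code are assumptions.
import torch
from transformers import AutoTokenizer, SeamlessM4TForTextToText

device = "cuda" if torch.cuda.is_available() else "cpu"
checkpoint = "facebook/hf-seamless-m4t-medium"           # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint, src_lang="eng")
model = SeamlessM4TForTextToText.from_pretrained(checkpoint).to(device)
dest_lang = "swh"                                        # SeamlessM4T code for Swahili
text = ["Which of the following is a part of a plant?"]  # English source batch
```

With these definitions in place, the translation snippet above can be run as-is on each English question and answer option.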
|
|
|
### 2. Human Verification and Annotation

* After the initial machine translation, the translations were passed through GPT-3.5 for verification. This step checked the quality of the translations and identified any that were not up to standard.
* Human translators then reviewed and annotated the translations flagged by GPT-3.5 as problematic, to ensure accuracy and naturalness in Swahili; an illustrative sketch of the verification step follows this list.
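
The card does not specify how GPT-3.5 was prompted or what counted as a failed translation. Purely as an illustration of such a verification pass (the prompt wording, the `gpt-3.5-turbo` model string, and the flagging criterion below are assumptions, not the authors' pipeline), it could look roughly like this:

```python
# Hypothetical verification pass -- prompt and flagging criterion are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def is_flagged(english: str, swahili: str) -> bool:
    """Ask GPT-3.5 whether the Swahili translation of `english` looks problematic."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        temperature=0,
        messages=[
            {"role": "system",
             "content": "You review English-to-Swahili translations. Reply with only OK or FLAG."},
            {"role": "user", "content": f"English: {english}\nSwahili: {swahili}"},
        ],
    )
    return response.choices[0].message.content.strip().upper().startswith("FLAG")
```

Items flagged by such a pass would then be handed to the human translators described above.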
|
|
|
## Supported Tasks and Leaderboards

* multiple-choice: The dataset supports multiple-choice question-answering tasks; a minimal formatting sketch is shown below.
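
As a concrete illustration of the multiple-choice setup, the sketch below turns one record into a prompt and reads off the gold label. The record mirrors the example under Data Instances, and the prompt layout is an assumption, not something prescribed by the dataset.

```python
# Illustrative prompt formatting for one record; the layout is an assumption.
example = {
    "question": "Ni gani kati ya zifuatazo ni sehemu ya mmea?",
    "choices": {"text": ["Majani", "Jiwe", "Ubao", "Nondo"], "label": ["A", "B", "C", "D"]},
    "answerKey": "A",
}

options = "\n".join(
    f"{label}. {text}"
    for label, text in zip(example["choices"]["label"], example["choices"]["text"])
)
prompt = f"{example['question']}\n{options}\nJibu:"  # "Jibu" means "Answer"
gold = example["answerKey"]
print(prompt)
print("Gold label:", gold)
```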
|
|
|
## Languages

The dataset is in Swahili.

## Dataset Structure

### Data Instances

* An example of a data instance:

```json
{
  "id": "example-id",
  "language": "sw",
  "question": "Ni gani kati ya zifuatazo ni sehemu ya mmea?",
  "choices": {
    "text": ["Majani", "Jiwe", "Ubao", "Nondo"],
    "label": ["A", "B", "C", "D"]
  },
  "answerKey": "A"
}
```
|
|
|
### Data Fields

* id: Unique identifier for each question.
* language: The language of the question, always `sw` (Swahili).
* question: The science question in Swahili.
* choices: The multiple-choice options, given as parallel `text` and `label` lists.
* answerKey: The label of the correct answer for each question.
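
The dataset can be loaded with the `datasets` library; the repository id below is a placeholder for wherever this dataset is hosted on the Hugging Face Hub, not a confirmed path.

```python
# Placeholder repository id -- substitute the actual Hub path of this dataset.
from datasets import load_dataset

dataset = load_dataset("<hf-namespace>/ARC_Challenge_Swahili")

sample = dataset["train"][0]
print(sample["question"])          # the Swahili question
print(sample["choices"]["text"])   # answer options
print(sample["choices"]["label"])  # option labels, e.g. ["A", "B", "C", "D"]
print(sample["answerKey"])         # label of the correct option
```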
|
|
|
## Data Splits

| Split      | Num Rows |
|------------|----------|
| train      | 1119     |
| validation | 299      |
| test       | 1172     |