---
dataset_info:
  features:
  - name: id
    dtype: string
  - name: language
    dtype: string
  - name: question
    dtype: string
  - name: choices
    struct:
    - name: text
      sequence: string
    - name: label
      sequence: string
  - name: answerKey
    dtype: string
  splits:
  - name: train
    num_bytes: 357825
    num_examples: 1119
  - name: validation
    num_bytes: 98118
    num_examples: 299
  - name: test
    num_bytes: 382265
    num_examples: 1172
  download_size: 433794
  dataset_size: 838208
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
license: apache-2.0
language:
- sw
---
# Dataset Card for ARC_Challenge_Swahili
## Dataset Summary
ARC_Challenge_Swahili is a Swahili translation of the original English ARC (AI2 Reasoning Challenge) dataset. This dataset evaluates the ability of AI systems to answer grade-school level multiple-choice science questions. The Swahili version was created using a combination of machine translation and human annotation to ensure high-quality and accurate translations.
## Translation Methodology
The translation process for the ARC_Challenge_Swahili dataset involved two main stages:
### 1. Machine Translation
* The initial translation from English to Swahili was performed with the SeamlessM4T translation model.
* The following parameters were used for the translation:
```python
# `tokenizer`, `model`, `text`, `dest_lang`, and `device` are defined earlier in the
# translation script; the model is a SeamlessM4T checkpoint loaded via transformers.
inputs = tokenizer(text, return_tensors="pt", padding=True, truncation=True, max_length=1024).to(device)
outputs = model.generate(**inputs, tgt_lang=dest_lang)
translation = tokenizer.batch_decode(outputs, skip_special_tokens=True)
```
### 2. Human Verification and Annotation
* After the initial machine translation, the outputs were passed through GPT-3.5 for verification. This step checked translation quality and flagged any translations that were not up to standard.
* Human translators then reviewed and corrected the translations flagged by GPT-3.5, ensuring accuracy and naturalness in Swahili.
## Supported Tasks and Leaderboards
* multiple-choice: The dataset supports multiple-choice question-answering tasks.
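For this task, accuracy against `answerKey` is the natural metric; a minimal sketch (the helper name is illustrative, not part of the dataset):

```python
def accuracy(predicted_labels, answer_keys):
    """Fraction of questions whose predicted label matches the gold answerKey."""
    correct = sum(p == a for p, a in zip(predicted_labels, answer_keys))
    return correct / len(answer_keys)

# Three questions, two answered correctly -> 2/3.
print(accuracy(["A", "C", "B"], ["A", "B", "B"]))
```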
## Languages
The dataset is in Swahili.
## Dataset Structure
### Data Instances
* An example of a data instance:
```json
{
  "id": "example-id",
  "language": "sw",
  "question": "Ni gani kati ya zifuatazo ni sehemu ya mmea?",
  "choices": {
    "text": ["Majani", "Jiwe", "Ubao", "Nondo"],
    "label": ["A", "B", "C", "D"]
  },
  "answerKey": "A"
}
```
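Because `choices` stores parallel `label` and `text` lists, recovering the correct answer string means indexing the `text` list by the position of `answerKey` in `label`. A minimal sketch using values from the example instance (the helper name is illustrative):

```python
# A sample record shaped like the instance above (values copied from the example).
instance = {
    "question": "Ni gani kati ya zifuatazo ni sehemu ya mmea?",
    "choices": {
        "text": ["Majani", "Jiwe", "Ubao", "Nondo"],
        "label": ["A", "B", "C", "D"],
    },
    "answerKey": "A",
}

def answer_text(example):
    """Return the text of the correct choice by matching answerKey to its label."""
    idx = example["choices"]["label"].index(example["answerKey"])
    return example["choices"]["text"][idx]

print(answer_text(instance))  # Majani
```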
### Data Fields
* `id`: Unique identifier for each question.
* `language`: Language code of the question; always `sw` (Swahili).
* `question`: The science question in Swahili.
* `choices`: The multiple-choice options, stored as parallel `text` and `label` lists.
* `answerKey`: The label of the correct choice.
## Data Splits
| Split      | Num Rows |
|------------|----------|
| train      | 1119     |
| validation | 299      |
| test       | 1172     |
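The per-split `num_bytes` values in the card metadata are consistent with the reported `dataset_size`; a quick arithmetic check:

```python
# num_bytes per split, copied from the dataset_info metadata above.
split_bytes = {"train": 357825, "validation": 98118, "test": 382265}

total = sum(split_bytes.values())
print(total)  # 838208, matching dataset_size in the metadata
```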