---
language:
- ar
license: apache-2.0
size_categories:
- n<1K
task_categories:
- multiple-choice
pretty_name: CIDAR-MCQ-100
dataset_info:
  features:
  - name: Question
    dtype: string
  - name: A
    dtype: string
  - name: B
    dtype: string
  - name: C
    dtype: string
  - name: D
    dtype: string
  - name: answer
    dtype: string
  splits:
  - name: test
    num_bytes: 18899
    num_examples: 100
  download_size: 13287
  dataset_size: 18899
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test-*
---

# CIDAR-MCQ-100

CIDAR-MCQ-100 contains **100** multiple-choice questions and answers about Arabic culture.

## 📚 Datasets Summary

<table>
  <tr>
<th>Name</th>
<th>Explanation</th>
</tr>
<tr>
<td><a href="https://huggingface.co/datasets/arbml/cidar">CIDAR</a></td>
<td>10,000 instructions and responses in Arabic</td>
</tr>
<tr>
<td><a href="https://huggingface.co/datasets/arbml/cidar-eval-100">CIDAR-EVAL-100</a></td>
<td>100 instructions to evaluate LLMs on cultural relevance</td>
</tr>
<tr>
<td><a href="https://huggingface.co/datasets/arbml/cidar-mcq-100"><b>CIDAR-MCQ-100</b></a></td>
<td>100 multiple-choice questions and answers to evaluate LLMs on cultural relevance</td>
</tr>
</table>


<div width="30px" align="center">

| Category   |      CIDAR-EVAL-100     |  <a href=https://huggingface.co/datasets/arbml/cidar-mcq-100><b>CIDAR-MCQ-100</b></a>|
|----------|:-------------:|:------:|
|Food&Drinks   | 14        | 8     |
|Names         | 14        | 8     |
|Animals       | 2         | 4     |
|Language      | 10        | 20    |
|Jokes&Puzzles | 3         | 7     |
|Religion      | 5         | 10    |
|Business      | 6         | 7     |
|Cloths        | 4         | 5     |
|Science       | 3         | 4     |
|Sports&Games  | 4         | 2     |
|Tradition     | 4         | 10    |
|Weather       | 4         | 2     |   
|Geography     | 7         | 8     |
|General       | 4         | 3     |
|Fonts         | 5         | 2     |
|Literature    | 10        | 2     |
|Plants        | 3         | 0     |
|<i>Total</i>  | 100       | 100   |

</div>

## 📋 Dataset Structure
- `Question(str)`: A question about Arabic culture.
- `A(str)`: First choice.
- `B(str)`: Second choice.
- `C(str)`: Third choice.
- `D(str)`: Fourth choice.
- `answer(str)`: The correct choice, one of A, B, C, or D (see the prompt-formatting sketch below).
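
As a concrete illustration of these fields, the sketch below formats one row as a plain multiple-choice prompt. The `format_mcq` helper and its template are illustrative only and are not part of the dataset:

```python
def format_mcq(row: dict) -> str:
    """Render one CIDAR-MCQ-100 row as a plain multiple-choice prompt.

    The template is only an illustration; the dataset ships the raw
    fields and does not prescribe any particular prompt format.
    """
    return (
        f"{row['Question']}\n"
        f"A. {row['A']}\n"
        f"B. {row['B']}\n"
        f"C. {row['C']}\n"
        f"D. {row['D']}\n"
        "Answer:"
    )

# row['answer'] holds the gold letter ('A', 'B', 'C', or 'D') used for scoring.
```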

## 📁 Loading The Dataset
You can download the dataset directly from Hugging Face or load it with the following code:

```python
from datasets import load_dataset
cidar = load_dataset('arbml/CIDAR-MCQ-100')
```
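
The card defines a single `test` split with 100 examples, and rows can also be handled as a pandas DataFrame via the `datasets` integration. A quick inspection might look like this:

```python
from datasets import load_dataset

cidar = load_dataset('arbml/CIDAR-MCQ-100')

# Single "test" split with 100 rows, per the dataset card.
test = cidar['test']
print(test.num_rows)        # 100
print(test.column_names)    # ['Question', 'A', 'B', 'C', 'D', 'answer']
print(test[0])              # first row: question, four choices, gold answer

# Optional: work with the rows as a pandas DataFrame.
df = test.to_pandas()
print(df['answer'].value_counts())
```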

## 📄 Sample From The Dataset

 **Question**: حدد حيوان مشهور في المنطقة (Identify a famous animal in the region)

 **A**: الجمل (the camel)
 **B**: اللاما (the llama)
 **C**: الكانغرو (the kangaroo)
 **D**: الدب القطبي (the polar bear)
 **answer**: A
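
Because every row carries the gold letter in `answer`, evaluating a model reduces to exact match over the letters A–D. A minimal scoring sketch, where `predict_letter` is a hypothetical placeholder for an actual model call:

```python
from datasets import load_dataset

def predict_letter(row: dict) -> str:
    """Hypothetical stand-in for a real model call: given one row
    (Question, A, B, C, D), return a predicted letter 'A'-'D'."""
    return "A"  # trivial constant baseline

test = load_dataset('arbml/CIDAR-MCQ-100')['test']
correct = sum(predict_letter(row).strip() == row['answer'] for row in test)
print(f"Accuracy: {correct / test.num_rows:.2%}")
```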

## 🔑 License
The dataset is licensed under [Apache-2.0](https://www.apache.org/licenses/LICENSE-2.0).

## Citation

```bibtex
@misc{alyafeai2024cidar,
      title={{CIDAR: Culturally Relevant Instruction Dataset For Arabic}}, 
      author={Zaid Alyafeai and Khalid Almubarak and Ahmed Ashraf and Deema Alnuhait and Saied Alshahrani and Gubran A. Q. Abdulrahman and Gamil Ahmed and Qais Gawah and Zead Saleh and Mustafa Ghaleb and Yousef Ali and Maged S. Al-Shaibani},
      year={2024},
      eprint={2402.03177},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```