configs:
  data_files:
  - split: train
    path: data/train-*
task_categories:
- text-generation
language:
- ar
size_categories:
- 1K<n<10K
---

# SadeedDiac-25: A Benchmark for Arabic Diacritization

**SadeedDiac-25** is a comprehensive and linguistically diverse benchmark specifically designed for evaluating Arabic diacritization models. It unifies Modern Standard Arabic (MSA) and Classical Arabic (CA) in a single dataset, addressing key limitations in existing benchmarks.

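For readers new to the task: Arabic diacritics are combining marks that sit in a single Unicode range, so a diacritized reference paragraph can be reduced to the bare input a model receives by filtering that range. A minimal, illustrative sketch (not part of the benchmark tooling; the sample word is just an example):

```python
# Illustrative sketch only (not part of the benchmark tooling).
# Arabic diacritics are combining marks in the range U+064B..U+0652;
# stripping them from a diacritized reference yields the bare text a
# model is asked to diacritize.
DIACRITICS = {chr(c) for c in range(0x064B, 0x0653)}  # fathatan .. sukun

def strip_diacritics(text: str) -> str:
    """Remove short-vowel, nunation, shadda, and sukun marks."""
    return "".join(ch for ch in text if ch not in DIACRITICS)

print(strip_diacritics("كَتَبَ"))  # -> كتب ("wrote", marks removed)
```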
## Overview

Existing Arabic diacritization benchmarks tend to focus on either Classical Arabic (e.g., Fadel, Abbad) or Modern Standard Arabic (e.g., CATT, WikiNews), with limited domain diversity and quality inconsistencies. SadeedDiac-25 addresses these issues by:

- Combining MSA and CA in one dataset
- Covering diverse domains (e.g., news, religion, politics, sports, culinary arts)
- Ensuring high annotation quality through a multi-stage expert review process
- Avoiding contamination from large-scale pretraining corpora

## Dataset Composition

SadeedDiac-25 consists of 1,200 paragraphs:

- **📘 50% Modern Standard Arabic (MSA)**
  - 454 paragraphs of curated original MSA content
  - 146 paragraphs from WikiNews
  - Length: 40–50 words per paragraph
- **📗 50% Classical Arabic (CA)**
  - 📖 600 paragraphs from the Fadel test set

## Evaluation Results

We evaluated several models on SadeedDiac-25, including proprietary LLMs and open-source Arabic models. Evaluation metrics include Diacritic Error Rate (DER), Word Error Rate (WER), and hallucination rates.
The evaluation code for this dataset is available at: https://github.com/misraj-ai/Sadeed

### Evaluation Table

| Model | DER (w/ CE) | WER (w/ CE) | DER (w/o CE) | WER (w/o CE) | Hallucinations |
| ------------------------ | ---------- | ---------- | ------------ | ------------ | -------------- |
| Claude-3-7-Sonnet-Latest | **1.3941** | **4.6718** | **0.7693** | **2.3098** | **0.821** |
| GPT-4 | 3.8645 | 5.2719 | 3.8645 | 10.9274 | 1.0242 |
| Gemini-Flash-2.0 | 3.1926 | 7.9942 | 2.3783 | 5.5044 | 1.1713 |
| *Sadeed* | *7.2915* | *13.7425* | *5.2625* | *9.9245* | *7.1946* |
| Aya-23-8B | 25.6274 | 47.4908 | 19.7584 | 40.2478 | 5.7793 |
| ALLaM-7B-Instruct | 50.3586 | 70.3369 | 39.4100 | 67.0920 | 36.5092 |
| Yehia-7B | 50.8801 | 70.2323 | 39.7677 | 67.1520 | 43.1113 |
| Jais-13B | 78.6820 | 99.7541 | 60.7271 | 99.5702 | 61.0803 |
| Gemma-2-9B | 78.8560 | 99.7928 | 60.9188 | 99.5895 | 86.8771 |
| SILMA-9B-Instruct-v1.0 | 78.6567 | 99.7367 | 60.7106 | 99.5586 | 93.6515 |

> **Note**: CE = Case Ending; "w/ CE" scores include case-ending diacritics, "w/o CE" scores exclude them. Lower is better for all columns.
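The official scoring script lives in the repository linked above; as a rough, unofficial illustration of the two error rates, DER can be computed as the fraction of base letters whose attached diacritics differ from the reference, and WER as the fraction of words containing at least one such error. A sketch assuming prediction and reference share the same base letters and word count (both example sentences are hypothetical):

```python
# Unofficial sketch of DER/WER, assuming prediction and reference have
# identical base letters and word counts (real scorers also handle
# alignment and the case-ending distinction).
DIACRITICS = {chr(c) for c in range(0x064B, 0x0653)}  # U+064B..U+0652

def per_letter_marks(word: str) -> list[str]:
    """For each base letter, collect the diacritic marks that follow it."""
    marks: list[str] = []
    for ch in word:
        if ch in DIACRITICS:
            if marks:                # attach mark to the preceding letter
                marks[-1] += ch
        else:
            marks.append("")
    return marks

def der_wer(reference: str, prediction: str) -> tuple[float, float]:
    """DER over base letters, WER over whitespace-separated words."""
    letter_total = letter_err = word_err = 0
    ref_words, pred_words = reference.split(), prediction.split()
    for ref_w, pred_w in zip(ref_words, pred_words):
        ref_m, pred_m = per_letter_marks(ref_w), per_letter_marks(pred_w)
        errs = sum(r != p for r, p in zip(ref_m, pred_m))
        letter_err += errs
        letter_total += len(ref_m)
        word_err += errs > 0
    return letter_err / letter_total, word_err / len(ref_words)

ref  = "كَتَبَ الوَلَدُ"
pred = "كَتِبَ الوَلَدُ"   # one wrong mark on the first word
der, wer = der_wer(ref, pred)
print(f"DER={der:.3f}  WER={wer:.3f}")  # DER=0.125  WER=0.500
```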

## Citation

If you use SadeedDiac-25 in your work, please cite:

```bibtex
@misc{aldallal2025sadeedadvancingarabicdiacritization,
      title={Sadeed: Advancing Arabic Diacritization Through Small Language Model},
      author={Zeina Aldallal and Sara Chrouf and Khalil Hennara and Mohamed Motaism Hamed and Muhammad Hreden and Safwan AlModhayan},
      year={2025},
      eprint={2504.21635},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2504.21635},
}
```

## License

📄 This dataset is released under the CC BY-NC-SA 4.0 License.

## Contact

📬 For questions, contact [Misraj-AI](https://misraj.ai/) on Hugging Face.