---
license: mit
pipeline_tag: token-classification
language:
- ar
widget:
- text: 'انه يحب اكل الطعام بكثره'
---

# CAMeLBERT-MSA ZAEBUC GED-13 Model

## Model description
**CAMeLBERT-MSA ZAEBUC GED-13 Model** is a Modern Standard Arabic (MSA) grammatical error detection (GED) model that was built by fine-tuning the [CAMeLBERT-MSA](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-msa/) model.
For the fine-tuning, we used a combination of the [QALB-2014](https://aclanthology.org/W14-3605.pdf), [QALB-2015](https://aclanthology.org/W15-3204.pdf), and [ZAEBUC](https://aclanthology.org/2022.lrec-1.9.pdf) datasets. Please note that this model was fine-tuned on morphologically preprocessed text.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[Advancements in Arabic Grammatical Error Detection and Correction: An Empirical Investigation](https://arxiv.org/abs/2305.14734)."* Our fine-tuning code and data can be found [here](https://github.com/CAMeL-Lab/arabic-gec).
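
Because the model expects morphologically preprocessed input, raw text should at least be pre-tokenized before tagging. Below is a minimal sketch (our illustration, not the authors' pipeline) using CAMeL Tools' `simple_word_tokenize`, which only separates punctuation from words; the full morphological preprocessing used during fine-tuning is available in the repository linked above.
```python
>>> # Minimal pre-tokenization sketch (requires `pip install camel-tools`).
>>> # This only splits punctuation off words; it is NOT the full morphological
>>> # preprocessing from the paper -- see the arabic-gec repository for that.
>>> from camel_tools.tokenizers.word import simple_word_tokenize
>>> simple_word_tokenize('وقال له: انه يحب اكل الطعام بكثره.')
['وقال', 'له', ':', 'انه', 'يحب', 'اكل', 'الطعام', 'بكثره', '.']
```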

## Intended uses
You can use the CAMeLBERT-MSA ZAEBUC GED-13 model as part of the transformers pipeline.

#### How to use
To use the model with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> ged = pipeline('token-classification', model='CAMeL-Lab/camelbert-msa-zaebuc-ged-13')
>>> text = 'و قال له انه يحب اكل الطعام بكثره'
>>> ged(text)
[{'entity': 'MERGE-B', 'score': 0.99943775, 'index': 1, 'word': 'و', 'start': 0, 'end': 1}, {'entity': 'MERGE-I', 'score': 0.99959165, 'index': 2, 'word': 'قال', 'start': 2, 'end': 5}, {'entity': 'UC', 'score': 0.9985884, 'index': 3, 'word': 'له', 'start': 6, 'end': 8}, {'entity': 'REPLACE_O', 'score': 0.8346316, 'index': 4, 'word': 'انه', 'start': 9, 'end': 12}, {'entity': 'UC', 'score': 0.99985325, 'index': 5, 'word': 'يحب', 'start': 13, 'end': 16}, {'entity': 'REPLACE_O', 'score': 0.6836415, 'index': 6, 'word': 'اكل', 'start': 17, 'end': 20}, {'entity': 'UC', 'score': 0.99763715, 'index': 7, 'word': 'الطعام', 'start': 21, 'end': 27}, {'entity': 'REPLACE_O', 'score': 0.993848, 'index': 8, 'word': 'بكثره', 'start': 28, 'end': 33}]
```
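
Alternatively, the model can be loaded directly. The following is a minimal sketch (ours, not part of the original model card) that runs the model with `AutoModelForTokenClassification` and maps the predicted label IDs to the 13 GED tags via the config's `id2label`; note that predictions here are per subword token, including the special tokens:
```python
>>> import torch
>>> from transformers import AutoTokenizer, AutoModelForTokenClassification
>>> model_id = 'CAMeL-Lab/camelbert-msa-zaebuc-ged-13'
>>> tokenizer = AutoTokenizer.from_pretrained(model_id)
>>> model = AutoModelForTokenClassification.from_pretrained(model_id)
>>> inputs = tokenizer('و قال له انه يحب اكل الطعام بكثره', return_tensors='pt')
>>> with torch.no_grad():
...     logits = model(**inputs).logits
>>> # Argmax over the label dimension gives one GED tag ID per subword token.
>>> pred_ids = logits.argmax(dim=-1)[0].tolist()
>>> tokens = tokenizer.convert_ids_to_tokens(inputs['input_ids'][0])
>>> [(tok, model.config.id2label[i]) for tok, i in zip(tokens, pred_ids)]
```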

## Citation
```bibtex
@inproceedings{alhafni-etal-2023-advancements,
    title = "Advancements in Arabic Grammatical Error Detection and Correction: An Empirical Investigation",
    author = "Alhafni, Bashar and
      Inoue, Go and
      Khairallah, Christian and
      Habash, Nizar",
    booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
    month = dec,
    year = "2023",
    address = "Singapore, Singapore",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/2305.14734",
    abstract = "Grammatical error correction (GEC) is a well-explored problem in English with many existing models and datasets. However, research on GEC in morphologically rich languages has been limited due to challenges such as data scarcity and language complexity. In this paper, we present the first results on Arabic GEC using two newly developed Transformer-based pretrained sequence-to-sequence models. We also define the task of multi-class Arabic grammatical error detection (GED) and present the first results on multi-class Arabic GED. We show that using GED information as auxiliary input in GEC models improves GEC performance across three datasets spanning different genres. Moreover, we also investigate the use of contextual morphological preprocessing in aiding GEC systems. Our models achieve SOTA results on two Arabic GEC shared task datasets and establish a strong benchmark on a recently created dataset. We make our code, data, and pretrained models publicly available.",
}
```