---
license: mit
language:
- ar
---

# AraBART+Morph+GEC<sup>13</sup> QALB-2014 Model

## Model description
**AraBART+Morph+GEC<sup>13</sup>** is a Modern Standard Arabic (MSA) grammatical error correction (GEC) model built by fine-tuning the [AraBART](https://huggingface.co/moussaKam/AraBART) model on the [QALB-2014](https://aclanthology.org/W14-3605.pdf) dataset.
Our fine-tuning procedure and hyperparameters are described in our paper *"[Advancements in Arabic Grammatical Error Detection and Correction: An Empirical Investigation](https://arxiv.org/abs/2305.14734)."* Our fine-tuning code and data are available [here](https://github.com/CAMeL-Lab/arabic-gec).

## Intended uses
You can use the AraBART+Morph+GEC<sup>13</sup> model as part of an extended version of the [transformers](https://github.com/CAMeL-Lab/arabic-gec) library that we make publicly available.
The GEC model is intended to be used with this [GED](https://huggingface.co/CAMeL-Lab/camelbert-msa-qalb14-ged-13) model, as outlined in the example below.

#### How to use
To use the model with our extended version of transformers:

```python
from transformers import AutoTokenizer, BertForTokenClassification, MBartForConditionalGeneration
from camel_tools.disambig.bert import BERTUnfactoredDisambiguator
from camel_tools.utils.dediac import dediac_ar
import torch.nn.functional as F
import torch

# Load the morphological disambiguator, the GED tagger, and the GEC model
bert_disambig = BERTUnfactoredDisambiguator.pretrained()

ged_tokenizer = AutoTokenizer.from_pretrained('CAMeL-Lab/camelbert-msa-qalb14-ged-13')
ged_model = BertForTokenClassification.from_pretrained('CAMeL-Lab/camelbert-msa-qalb14-ged-13')

gec_tokenizer = AutoTokenizer.from_pretrained('CAMeL-Lab/arabart-qalb14-gec-ged-13')
gec_model = MBartForConditionalGeneration.from_pretrained('CAMeL-Lab/arabart-qalb14-gec-ged-13')

text = 'و قال له انه يحب اكل الطعام بكثره .'

# Morphologically preprocess the input text
text_disambig = bert_disambig.disambiguate(text.split())
morph_pp_text = [dediac_ar(w_disambig.analyses[0].analysis['diac']) for w_disambig in text_disambig]
morph_pp_text = ' '.join(morph_pp_text)

# GED tagging: predict a word-level error label for each token
inputs = ged_tokenizer([morph_pp_text], return_tensors='pt')
logits = ged_model(**inputs).logits
preds = F.softmax(logits, dim=-1).squeeze()[1:-1]  # drop [CLS] and [SEP]
pred_ged_labels = [ged_model.config.id2label[p.item()] for p in torch.argmax(preds, -1)]

# Extending the GED labels to the GEC-tokenized input
ged_label2ids = gec_model.config.ged_label2id
tokens, ged_labels = [], []

for word, label in zip(morph_pp_text.split(), pred_ged_labels):
    word_tokens = gec_tokenizer.tokenize(word)
    if len(word_tokens) > 0:
        tokens.extend(word_tokens)
        ged_labels.extend([label for _ in range(len(word_tokens))])

input_ids = gec_tokenizer.convert_tokens_to_ids(tokens)
input_ids = [gec_tokenizer.bos_token_id] + input_ids + [gec_tokenizer.eos_token_id]

label_ids = [ged_label2ids.get(label, ged_label2ids['<pad>']) for label in ged_labels]
label_ids = [ged_label2ids['UC']] + label_ids + [ged_label2ids['UC']]
attention_mask = [1 for _ in range(len(input_ids))]

gen_kwargs = {'num_beams': 5, 'max_length': 100,
              'num_return_sequences': 1,
              'no_repeat_ngram_size': 0, 'early_stopping': False,
              'ged_tags': torch.tensor([label_ids]),
              'attention_mask': torch.tensor([attention_mask])
              }

# GEC generation
generated = gec_model.generate(torch.tensor([input_ids]), **gen_kwargs)

generated_text = gec_tokenizer.batch_decode(generated,
                                            skip_special_tokens=True,
                                            clean_up_tokenization_spaces=False
                                            )[0]

print(generated_text)  # وقال له أنه يحب أكل الطعام بكثرة .
```
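
For repeated use, the morphological preprocessing, GED tagging, and GEC generation steps above can be packaged into a single helper. The sketch below is a minimal illustration rather than part of the released API: it reuses the models and tokenizers loaded in the snippet above, and the function name `correct` is our own.

```python
def correct(text: str) -> str:
    """Run the morph preprocessing -> GED -> GEC pipeline on one sentence.

    Illustrative helper; assumes the disambiguator, tokenizers, and models
    from the snippet above are already loaded.
    """
    # Morphological preprocessing
    disambig = bert_disambig.disambiguate(text.split())
    morph_text = ' '.join(dediac_ar(d.analyses[0].analysis['diac']) for d in disambig)

    # Word-level GED tagging
    logits = ged_model(**ged_tokenizer([morph_text], return_tensors='pt')).logits
    preds = F.softmax(logits, dim=-1).squeeze()[1:-1]
    labels = [ged_model.config.id2label[p.item()] for p in torch.argmax(preds, -1)]

    # Project the word-level GED labels onto the GEC subword tokens
    label2id = gec_model.config.ged_label2id
    tokens, tok_labels = [], []
    for word, label in zip(morph_text.split(), labels):
        word_tokens = gec_tokenizer.tokenize(word)
        tokens.extend(word_tokens)
        tok_labels.extend([label] * len(word_tokens))

    input_ids = ([gec_tokenizer.bos_token_id]
                 + gec_tokenizer.convert_tokens_to_ids(tokens)
                 + [gec_tokenizer.eos_token_id])
    label_ids = ([label2id['UC']]
                 + [label2id.get(l, label2id['<pad>']) for l in tok_labels]
                 + [label2id['UC']])

    # GEC generation with the GED tags as auxiliary input
    generated = gec_model.generate(torch.tensor([input_ids]),
                                   attention_mask=torch.tensor([[1] * len(input_ids)]),
                                   ged_tags=torch.tensor([label_ids]),
                                   num_beams=5, max_length=100)
    return gec_tokenizer.batch_decode(generated, skip_special_tokens=True,
                                      clean_up_tokenization_spaces=False)[0]

print(correct('و قال له انه يحب اكل الطعام بكثره .'))  # وقال له أنه يحب أكل الطعام بكثرة .
```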

## Citation
```bibtex
@inproceedings{alhafni-etal-2023-advancements,
    title = "Advancements in Arabic Grammatical Error Detection and Correction: An Empirical Investigation",
    author = "Alhafni, Bashar and
      Inoue, Go and
      Khairallah, Christian and
      Habash, Nizar",
    booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
    month = dec,
    year = "2023",
    address = "Singapore",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/2305.14734",
    abstract = "Grammatical error correction (GEC) is a well-explored problem in English with many existing models and datasets. However, research on GEC in morphologically rich languages has been limited due to challenges such as data scarcity and language complexity. In this paper, we present the first results on Arabic GEC using two newly developed Transformer-based pretrained sequence-to-sequence models. We also define the task of multi-class Arabic grammatical error detection (GED) and present the first results on multi-class Arabic GED. We show that using GED information as auxiliary input in GEC models improves GEC performance across three datasets spanning different genres. Moreover, we also investigate the use of contextual morphological preprocessing in aiding GEC systems. Our models achieve SOTA results on two Arabic GEC shared task datasets and establish a strong benchmark on a recently created dataset. We make our code, data, and pretrained models publicly available.",
}
```