---
language:
- ar
license: apache-2.0
widget:
- text: "إمارة أبوظبي هي إحدى إمارات دولة الإمارات العربية المتحدة السبع"
---
# CAMeLBERT MSA NER Model
## Model description
**CAMeLBERT MSA NER Model** is a Named Entity Recognition (NER) model that was built by fine-tuning the [CAMeLBERT Modern Standard Arabic (MSA)](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-msa/) model. For the fine-tuning, we used the [ANERcorp](https://camel.abudhabi.nyu.edu/anercorp/) dataset. Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."*

## Intended uses
You can use the released model to recognize named entities in Arabic text, as shown in the sketch below. The base CAMeLBERT models can also be used for masked language modeling or next sentence prediction, and they are mostly intended to be fine-tuned on NLP tasks such as NER, POS tagging, sentiment analysis, dialect identification, and poetry classification.
We release our fine-tuning code [here](https://github.com/CAMeL-Lab/CAMeLBERT).
#### How to use
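Since this is a token classification model, it can be queried directly through the transformers NER pipeline. Below is a minimal sketch; the model ID `CAMeL-Lab/bert-base-arabic-camelbert-msa-ner` is our assumption based on the base model's naming and is not stated in this README.
```python
>>> from transformers import pipeline
>>> # the model ID below is an assumption; use this repository's actual ID
>>> ner = pipeline('ner', model='CAMeL-Lab/bert-base-arabic-camelbert-msa-ner')
>>> ner("إمارة أبوظبي هي إحدى إمارات دولة الإمارات العربية المتحدة السبع")
```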
The base CAMeLBERT MSA model can also be used directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='CAMeL-Lab/bert-base-arabic-camelbert-msa')
>>> unmasker("الهدف من الحياة هو [MASK] .")
[{'sequence': '[CLS] الهدف من الحياة هو العمل. [SEP]',
  'score': 0.08507660031318665,
  'token': 2854,
  'token_str': 'العمل'},
 {'sequence': '[CLS] الهدف من الحياة هو الحياة. [SEP]',
  'score': 0.058905381709337234,
  'token': 3696,
  'token_str': 'الحياة'},
 {'sequence': '[CLS] الهدف من الحياة هو النجاح. [SEP]',
  'score': 0.04660581797361374,
  'token': 6232,
  'token_str': 'النجاح'},
 {'sequence': '[CLS] الهدف من الحياة هو الربح. [SEP]',
  'score': 0.04156001657247543,
  'token': 12413,
  'token_str': 'الربح'},
 {'sequence': '[CLS] الهدف من الحياة هو الحب. [SEP]',
  'score': 0.03534102067351341,
  'token': 3088,
  'token_str': 'الحب'}]
```
*Note*: to download our models, you need `transformers>=3.5.0`. Otherwise, you can download the models manually.
Here is how to use the base model to get the features of a given text in PyTorch:
```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-msa')
model = AutoModel.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-msa')

text = "مرحبا يا عالم."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import AutoTokenizer, TFAutoModel

tokenizer = AutoTokenizer.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-msa')
model = TFAutoModel.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-msa')

text = "مرحبا يا عالم."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Training data
- MSA (Modern Standard Arabic)
  - [The Arabic Gigaword Fifth Edition](https://catalog.ldc.upenn.edu/LDC2011T11)
  - [Abu El-Khair Corpus](http://www.abuelkhair.net/index.php/en/arabic/abu-el-khair-corpus)
  - [OSIAN corpus](https://vlo.clarin.eu/search;jsessionid=31066390B2C9E8C6304845BA79869AC1?1&q=osian)
  - [Arabic Wikipedia](https://archive.org/details/arwiki-20190201)
  - The unshuffled version of the Arabic [OSCAR corpus](https://oscar-corpus.com/)
## Training procedure
We use [the original implementation](https://github.com/google-research/bert) released by Google for pre-training.
We follow the original English BERT model's hyperparameters for pre-training, unless otherwise specified.
### Preprocessing
After extracting the raw text from each corpus, we apply the following pre-processing steps:
- We first remove invalid characters and normalize white spaces using the utilities provided by [the original BERT implementation](https://github.com/google-research/bert/blob/eedf5716ce1268e56f0a50264a88cafad334ac61/tokenization.py#L286-L297).
- We also remove lines without any Arabic characters.
- We then remove diacritics and kashida using [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) (see the sketch after this list).
- Finally, we split each line into sentences with a heuristics-based sentence segmenter.
- We train a WordPiece tokenizer on the entire dataset (167 GB of text) with a vocabulary size of 30,000 using [HuggingFace's tokenizers](https://github.com/huggingface/tokenizers).
- We do not lowercase letters nor strip accents.
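As an illustration of the diacritic and kashida removal step, here is a minimal sketch using CAMeL Tools. `dediac_ar` is CAMeL Tools' Arabic dediacritization utility; removing the kashida (tatweel, U+0640) with a plain string replace is our illustrative stand-in, not necessarily the exact utility used in our pipeline.
```python
from camel_tools.utils.dediac import dediac_ar

text = "العَرَبِيَّةُ لُغَـــةٌ جَمِيلَةٌ"
text = dediac_ar(text)             # remove Arabic diacritics
text = text.replace("\u0640", "")  # remove kashida (tatweel); illustrative stand-in
print(text)                        # العربية لغة جميلة
```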
### Pre-training
- The model was trained on a single cloud TPU (`v3-8`) for one million steps in total.
- The first 90,000 steps were trained with a batch size of 1,024, and the rest were trained with a batch size of 256.
- The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%.
- We use whole word masking and a duplicate factor of 10.
- We set max predictions per sequence to 20 for the dataset with a max sequence length of 128 tokens and to 80 for the dataset with a max sequence length of 512 tokens.
- We use a random seed of 12345, a masked language model probability of 0.15, and a short sequence probability of 0.1.
- The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps, and linear decay of the learning rate afterwards (a sketch of this schedule follows the list).
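For concreteness, the learning rate schedule described in the last item (linear warmup for 10,000 steps, then linear decay) can be written as a small helper. This is a sketch of the stated schedule, not code taken from the training run:
```python
def learning_rate(step, base_lr=1e-4, warmup_steps=10_000, total_steps=1_000_000):
    """Linear warmup to base_lr over warmup_steps, then linear decay to zero."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * (total_steps - step) / (total_steps - warmup_steps)
```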
## Evaluation results
- We evaluate our pre-trained language models on five NLP tasks: NER, POS tagging, sentiment analysis, dialect identification, and poetry classification.
- We fine-tune and evaluate the models using 12 datasets.
- We used Hugging Face's transformers to fine-tune our CAMeLBERT models.
- We used transformers `v3.1.0` along with PyTorch `v1.5.1`.
- The fine-tuning was done by adding a fully connected linear layer to the last hidden state (see the sketch after this list).
- We use \\(F_{1}\\) score as a metric for all tasks.
- Code used for fine-tuning is available [here](https://github.com/CAMeL-Lab/CAMeLBERT).
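The fine-tuning setup described above (a fully connected linear layer over the last hidden state) corresponds to a standard token classification head. A minimal PyTorch sketch follows; the label count of 9 is our assumption (BIO tags for ANERcorp's four entity types plus `O`), chosen for illustration:
```python
import torch.nn as nn
from transformers import AutoModel

class NERClassifier(nn.Module):
    """A BERT encoder with a linear layer over the last hidden state."""

    def __init__(self, model_name='CAMeL-Lab/bert-base-arabic-camelbert-msa', num_labels=9):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        self.classifier = nn.Linear(self.encoder.config.hidden_size, num_labels)

    def forward(self, input_ids, attention_mask=None):
        # last hidden state: (batch, seq_len, hidden_size)
        hidden = self.encoder(input_ids, attention_mask=attention_mask,
                              return_dict=True).last_hidden_state
        return self.classifier(hidden)  # per-token logits: (batch, seq_len, num_labels)
```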
### Results

| Task   | Dataset         | Variant | Mix   | CA    | DA    | MSA   | MSA-1/2 | MSA-1/4 | MSA-1/8 | MSA-1/16 |
| ------ | --------------- | ------- | ----- | ----- | ----- | ----- | ------- | ------- | ------- | -------- |
| NER    | ANERcorp        | MSA     | 80.8% | 67.9% | 74.1% | 82.4% | 82.0%   | 82.1%   | 82.6%   | 80.8%    |
| POS    | PATB (MSA)      | MSA     | 98.1% | 97.8% | 97.7% | 98.3% | 98.2%   | 98.3%   | 98.2%   | 98.2%    |
|        | ARZTB (EGY)     | DA      | 93.6% | 92.3% | 92.7% | 93.6% | 93.6%   | 93.7%   | 93.6%   | 93.6%    |
|        | Gumar (GLF)     | DA      | 97.3% | 97.7% | 97.9% | 97.9% | 97.9%   | 97.9%   | 97.9%   | 97.9%    |
| SA     | ASTD            | MSA     | 76.3% | 69.4% | 74.6% | 76.9% | 76.0%   | 76.8%   | 76.7%   | 75.3%    |
|        | ArSAS           | MSA     | 92.7% | 89.4% | 91.8% | 93.0% | 92.6%   | 92.5%   | 92.5%   | 92.3%    |
|        | SemEval         | MSA     | 69.0% | 58.5% | 68.4% | 72.1% | 70.7%   | 72.8%   | 71.6%   | 71.2%    |
| DID    | MADAR-26        | DA      | 62.9% | 61.9% | 61.8% | 62.6% | 62.0%   | 62.8%   | 62.0%   | 62.2%    |
|        | MADAR-6         | DA      | 92.5% | 91.5% | 92.2% | 91.9% | 91.8%   | 92.2%   | 92.1%   | 92.0%    |
|        | MADAR-Twitter-5 | MSA     | 75.7% | 71.4% | 74.2% | 77.6% | 78.5%   | 77.3%   | 77.7%   | 76.2%    |
|        | NADI            | DA      | 24.7% | 17.3% | 20.1% | 24.9% | 24.6%   | 24.6%   | 24.9%   | 23.8%    |
| Poetry | APCD            | CA      | 79.8% | 80.9% | 79.6% | 79.7% | 79.9%   | 80.0%   | 79.7%   | 79.8%    |
### Results (Average)

|                                                   | Variant | Mix   | CA    | DA    | MSA   | MSA-1/2 | MSA-1/4 | MSA-1/8 | MSA-1/16 |
| ------------------------------------------------- | ------- | ----- | ----- | ----- | ----- | ------- | ------- | ------- | -------- |
| Variant-wise-average<sup>[[1]](#footnote-1)</sup> | MSA     | 82.1% | 75.7% | 80.1% | 83.4% | 83.0%   | 83.3%   | 83.2%   | 82.3%    |
|                                                   | DA      | 74.4% | 72.1% | 72.9% | 74.2% | 74.0%   | 74.3%   | 74.1%   | 73.9%    |
|                                                   | CA      | 79.8% | 80.9% | 79.6% | 79.7% | 79.9%   | 80.0%   | 79.7%   | 79.8%    |
| Macro-Average                                     | ALL     | 78.7% | 74.7% | 77.1% | 79.2% | 79.0%   | 79.2%   | 79.1%   | 78.6%    |

<a name="footnote-1">[1]</a>: Variant-wise-average refers to the average over a group of tasks in the same language variant.
## Acknowledgements
This research was supported with Cloud TPUs from Google’s TensorFlow Research Cloud (TFRC).
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
    title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
    author = "Inoue, Go and
      Alhafni, Bashar and
      Baimukan, Nurpeiis and
      Bouamor, Houda and
      Habash, Nizar",
    booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
    month = apr,
    year = "2021",
    address = "Kyiv, Ukraine (Online)",
    publisher = "Association for Computational Linguistics",
    abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
```