Go Inoue committed
Commit 4279ed8
1 Parent(s): 53a8c29

Add README.md

Files changed (1):
  1. README.md +169 -0

README.md ADDED

---
language:
- ar
license: apache-2.0
widget:
- text: "الهدف من الحياة هو [MASK] ."
---

# CAMeLBERT-MSA-sixteenth

## Model description

**CAMeLBERT** is a collection of BERT models pre-trained on Arabic texts with different sizes and variants.
The details are described in the paper *"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models."*
We release eight models with different sizes and variants as follows:

||Model|Variant|Size|#Word|
|-|-|:-:|-:|-:|
||`bert-base-camelbert-mix`|CA,DA,MSA|167GB|17.3B|
||`bert-base-camelbert-ca`|CA|6GB|847M|
||`bert-base-camelbert-da`|DA|54GB|5.8B|
||`bert-base-camelbert-msa`|MSA|107GB|12.6B|
||`bert-base-camelbert-msa-half`|MSA|53GB|6.3B|
||`bert-base-camelbert-msa-quarter`|MSA|27GB|3.1B|
||`bert-base-camelbert-msa-eighth`|MSA|14GB|1.6B|
|✔|`bert-base-camelbert-msa-sixteenth`|MSA|6GB|746M|

This model card describes `bert-base-camelbert-msa-sixteenth`, a model pre-trained on a sixteenth of the full MSA dataset.

## Intended uses
You can use the released model for either masked language modeling or next sentence prediction.
However, it is mostly intended to be fine-tuned on an NLP task, such as NER, POS tagging, sentiment analysis, dialect identification, and poetry classification.
We release our fine-tuning code [here](https://github.com/CAMeL-Lab/CAMeLBERT).

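As an illustration only (not the released fine-tuning code linked above), a sentiment analysis fine-tuning run with Hugging Face `transformers` could look roughly like the sketch below; the toy sentences, labels, and hyperparameters are assumptions made for the example.

```python
# Hedged sketch: fine-tuning CAMeLBERT-MSA-sixteenth for sentiment analysis.
# The toy data and hyperparameters are illustrative, not the authors' setup.
import torch
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

model_name = 'CAMeL-Lab/bert-base-camelbert-msa-sixteenth'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Hypothetical toy data; replace with a real dataset (e.g. ASTD or ArSAS).
texts = ["أحببت هذا الفيلم كثيرا.", "كانت الخدمة سيئة للغاية."]
labels = [1, 0]  # 1 = positive, 0 = negative

class ToyDataset(torch.utils.data.Dataset):
    def __init__(self, texts, labels):
        self.encodings = tokenizer(texts, truncation=True, padding=True, max_length=128)
        self.labels = labels
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, idx):
        item = {k: torch.tensor(v[idx]) for k, v in self.encodings.items()}
        item['labels'] = torch.tensor(self.labels[idx])
        return item

args = TrainingArguments(output_dir='camelbert-sa', num_train_epochs=3,
                         per_device_train_batch_size=32, learning_rate=3e-5)
trainer = Trainer(model=model, args=args, train_dataset=ToyDataset(texts, labels))
trainer.train()
```
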
#### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='CAMeL-Lab/bert-base-camelbert-msa-sixteenth')
>>> unmasker("الهدف من الحياة هو [MASK] .")
[{'sequence': '[CLS] الهدف من الحياة هو التغيير. [SEP]',
  'score': 0.08320745080709457,
  'token': 7946,
  'token_str': 'التغيير'},
 {'sequence': '[CLS] الهدف من الحياة هو التعلم. [SEP]',
  'score': 0.04305094853043556,
  'token': 12554,
  'token_str': 'التعلم'},
 {'sequence': '[CLS] الهدف من الحياة هو العمل. [SEP]',
  'score': 0.0417640283703804,
  'token': 2854,
  'token_str': 'العمل'},
 {'sequence': '[CLS] الهدف من الحياة هو الحياة. [SEP]',
  'score': 0.041371218860149384,
  'token': 3696,
  'token_str': 'الحياة'},
 {'sequence': '[CLS] الهدف من الحياة هو المعرفة. [SEP]',
  'score': 0.039794355630874634,
  'token': 7344,
  'token_str': 'المعرفة'}]
```

Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained('CAMeL-Lab/bert-base-camelbert-msa-sixteenth')
model = AutoModel.from_pretrained('CAMeL-Lab/bert-base-camelbert-msa-sixteenth')
text = "مرحبا يا عالم."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```

and in TensorFlow:
```python
from transformers import AutoTokenizer, TFAutoModel
tokenizer = AutoTokenizer.from_pretrained('CAMeL-Lab/bert-base-camelbert-msa-sixteenth')
model = TFAutoModel.from_pretrained('CAMeL-Lab/bert-base-camelbert-msa-sixteenth')
text = "مرحبا يا عالم."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```

## Training data
- MSA
  - [The Arabic Gigaword Fifth Edition](https://catalog.ldc.upenn.edu/LDC2011T11)
  - [Abu El-Khair Corpus](http://www.abuelkhair.net/index.php/en/arabic/abu-el-khair-corpus)
  - [OSIAN corpus](https://vlo.clarin.eu/search;jsessionid=31066390B2C9E8C6304845BA79869AC1?1&q=osian)
  - [Arabic Wikipedia](https://archive.org/details/arwiki-20190201)
  - The unshuffled version of the Arabic [OSCAR corpus](https://oscar-corpus.com/)

## Training procedure
We use [the original implementation](https://github.com/google-research/bert) released by Google for pre-training.
We follow the original English BERT model's hyperparameters for pre-training, unless otherwise specified.

### Preprocessing
- After extracting the raw text from each corpus, we apply the following pre-processing.
- We first remove invalid characters and normalize white spaces using the utilities provided by [the original BERT implementation](https://github.com/google-research/bert/blob/eedf5716ce1268e56f0a50264a88cafad334ac61/tokenization.py#L286-L297).
- We also remove lines without any Arabic characters.
- We then remove diacritics and kashida using [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools).
- Finally, we split each line into sentences with a heuristics-based sentence segmenter.
- We train a WordPiece tokenizer on the entire dataset (167 GB text) with a vocabulary size of 30,000 using [HuggingFace's tokenizers](https://github.com/huggingface/tokenizers) (see the sketch after this list).
- We do not lowercase letters or strip accents.

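As a rough illustration of the dediacritization, kashida removal, and tokenizer-training steps above (not the authors' actual scripts), the sketch below uses `dediac_ar` from CAMeL Tools and `BertWordPieceTokenizer` from HuggingFace's tokenizers; the corpus file name is hypothetical and kashida is stripped with a plain string replacement.

```python
# Hedged sketch of the pre-processing and WordPiece training described above.
from camel_tools.utils.dediac import dediac_ar
from tokenizers import BertWordPieceTokenizer

def clean_line(line: str) -> str:
    line = dediac_ar(line)             # remove Arabic diacritics
    line = line.replace('\u0640', '')  # remove kashida (tatweel)
    return line

# clean_line would be applied to every line of the raw corpora before training.
# 'msa_corpus_clean.txt' is a hypothetical cleaned, sentence-per-line file.
files = ['msa_corpus_clean.txt']

# WordPiece tokenizer with a 30,000-token vocabulary,
# without lowercasing or accent stripping.
tokenizer = BertWordPieceTokenizer(lowercase=False, strip_accents=False)
tokenizer.train(files=files, vocab_size=30000)
tokenizer.save_model('.')  # writes vocab.txt to the current directory
```
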
### Pre-training
- The model was trained on a single cloud TPU (`v3-8`) for one million steps in total.
- The first 90,000 steps were trained with a batch size of 1,024, and the remaining steps with a batch size of 256.
- The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%.
- We use whole word masking and a duplicate factor of 10.
- We set max predictions per sequence to 20 for the dataset with a max sequence length of 128 tokens and 80 for the dataset with a max sequence length of 512 tokens.
- We use a random seed of 12345, a masked language model probability of 0.15, and a short sequence probability of 0.1.
- The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps, and linear decay of the learning rate after (see the sketch after this list).

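As an illustration of the learning rate schedule implied by these hyperparameters (linear warmup for 10,000 steps, then linear decay toward zero over the one million steps), here is a small sketch in plain Python rather than the TensorFlow pre-training code actually used.

```python
# Hedged sketch: the warmup-then-linear-decay schedule described above.
PEAK_LR = 1e-4
WARMUP_STEPS = 10_000
TOTAL_STEPS = 1_000_000

def learning_rate(step: int) -> float:
    if step < WARMUP_STEPS:
        # Linear warmup from 0 to the peak learning rate.
        return PEAK_LR * step / WARMUP_STEPS
    # Linear decay from the peak to 0 at the final step.
    return PEAK_LR * (TOTAL_STEPS - step) / (TOTAL_STEPS - WARMUP_STEPS)

print(learning_rate(5_000))    # halfway through warmup: 5e-05
print(learning_rate(10_000))   # at the peak: 0.0001
print(learning_rate(505_000))  # halfway through decay: 5e-05
```
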
## Evaluation results
- We evaluate our pre-trained language models on five NLP tasks: NER, POS tagging, sentiment analysis, dialect identification, and poetry classification.
- We fine-tune and evaluate the models using 12 datasets.
- We used Hugging Face's transformers to fine-tune our CAMeLBERT models.
- We used transformers `v3.1.0` along with PyTorch `v1.5.1`.
- The fine-tuning was done by adding a fully connected linear layer on top of the last hidden state (see the sketch after this list).
- We use \\(F_{1}\\) score as a metric for all tasks.
- Code used for fine-tuning is available [here](https://github.com/CAMeL-Lab/CAMeLBERT).

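As a rough sketch of this setup (not the released fine-tuning code), the snippet below puts a single fully connected layer on top of the last hidden state for a token-level task such as NER or POS tagging; the number of labels is an illustrative assumption.

```python
# Hedged sketch: a linear classification head on top of CAMeLBERT's last hidden state.
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class BertTokenClassifier(nn.Module):
    def __init__(self, model_name: str, num_labels: int):
        super().__init__()
        self.bert = AutoModel.from_pretrained(model_name)
        self.classifier = nn.Linear(self.bert.config.hidden_size, num_labels)

    def forward(self, input_ids, attention_mask=None):
        outputs = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        last_hidden_state = outputs[0]             # (batch, seq_len, hidden_size)
        return self.classifier(last_hidden_state)  # (batch, seq_len, num_labels)

name = 'CAMeL-Lab/bert-base-camelbert-msa-sixteenth'
tokenizer = AutoTokenizer.from_pretrained(name)
model = BertTokenClassifier(name, num_labels=9)  # 9 is a hypothetical tag-set size
enc = tokenizer("مرحبا يا عالم.", return_tensors='pt')
logits = model(enc['input_ids'], enc['attention_mask'])  # per-token label scores
```
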
### Results

| Task   | Dataset         | Variant | Mix   | CA    | DA    | MSA   | MSA-1/2 | MSA-1/4 | MSA-1/8 | MSA-1/16 |
| ------ | --------------- | ------- | ----- | ----- | ----- | ----- | ------- | ------- | ------- | -------- |
| NER    | ANERcorp        | MSA     | 80.2% | 66.2% | 74.2% | 82.4% | 82.3%   | 82.0%   | 82.3%   | 80.5%    |
| POS    | PATB (MSA)      | MSA     | 97.3% | 96.6% | 96.5% | 97.4% | 97.4%   | 97.4%   | 97.4%   | 97.4%    |
|        | ARZTB (EGY)     | DA      | 90.1% | 88.6% | 89.4% | 90.8% | 90.3%   | 90.5%   | 90.5%   | 90.4%    |
|        | Gumar (GLF)     | DA      | 97.3% | 96.5% | 97.0% | 97.1% | 97.0%   | 97.0%   | 97.1%   | 97.0%    |
| SA     | ASTD            | MSA     | 76.3% | 69.4% | 74.6% | 76.9% | 76.0%   | 76.8%   | 76.7%   | 75.3%    |
|        | ArSAS           | MSA     | 92.7% | 89.4% | 91.8% | 93.0% | 92.6%   | 92.5%   | 92.5%   | 92.3%    |
|        | SemEval         | MSA     | 69.0% | 58.5% | 68.4% | 72.1% | 70.7%   | 72.8%   | 71.6%   | 71.2%    |
| DID    | MADAR-26        | DA      | 62.9% | 61.9% | 61.8% | 62.6% | 62.0%   | 62.8%   | 62.0%   | 62.2%    |
|        | MADAR-6         | DA      | 92.5% | 91.5% | 92.2% | 91.9% | 91.8%   | 92.2%   | 92.1%   | 92.0%    |
|        | MADAR-Twitter-5 | MSA     | 75.7% | 71.4% | 74.2% | 77.6% | 78.5%   | 77.3%   | 77.7%   | 76.2%    |
|        | NADI            | DA      | 24.7% | 17.3% | 20.1% | 24.9% | 24.6%   | 24.6%   | 24.9%   | 23.8%    |
| Poetry | APCD            | CA      | 79.8% | 80.9% | 79.6% | 79.7% | 79.9%   | 80.0%   | 79.7%   | 79.8%    |

### Results (Average)
|                                                   | Variant | Mix   | CA    | DA    | MSA   | MSA-1/2 | MSA-1/4 | MSA-1/8 | MSA-1/16 |
| ------------------------------------------------- | ------- | ----- | ----- | ----- | ----- | ------- | ------- | ------- | -------- |
| Variant-wise-average<sup>[[1]](#footnote-1)</sup> | MSA     | 81.9% | 75.3% | 79.9% | 83.2% | 82.9%   | 83.1%   | 83.0%   | 82.1%    |
|                                                   | DA      | 73.5% | 71.1% | 72.1% | 73.5% | 73.1%   | 73.4%   | 73.3%   | 73.1%    |
|                                                   | CA      | 79.8% | 80.9% | 79.6% | 79.7% | 79.9%   | 80.0%   | 79.7%   | 79.8%    |
| Macro-Average                                     | ALL     | 78.2% | 74.0% | 76.6% | 78.9% | 78.6%   | 78.8%   | 78.7%   | 78.2%    |

<a name="footnote-1">[1]</a>: Variant-wise-average refers to average over a group of tasks in the same language variant.

## Acknowledgements
This research was supported with Cloud TPUs from Google’s TensorFlow Research Cloud (TFRC).

## Citation

```bibtex
@inproceedings{inoue-etal-2021-interplay,
    title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
    author = "Inoue, Go  and
      Alhafni, Bashar  and
      Baimukan, Nurpeiis  and
      Bouamor, Houda  and
      Habash, Nizar",
    booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
    month = apr,
    year = "2021",
    address = "Kyiv, Ukraine (Online)",
    publisher = "Association for Computational Linguistics",
    abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
```