---
language: ar
datasets:
- wikipedia
- OSIAN
- 1.5B Arabic Corpus
- OSCAR Arabic Unshuffled
- Twitter
widget:
- text: " عاصمة لبنان هي [MASK] ."
---

<img src="https://raw.githubusercontent.com/aub-mind/arabert/master/arabert_logo.png" width="100" align="center"/>

# AraBERTv0.2-Twitter

AraBERTv0.2-Twitter-base/large are two new models for Arabic dialects and tweets, trained by continuing the pre-training with the MLM task on ~60M Arabic tweets (filtered from a collection of 100M).

The two new models have emojis added to their vocabulary, in addition to common words that were not initially present. Pre-training was done with a maximum sequence length of 64 for a single epoch; a minimal sketch of this continued pre-training setup is given below.

**AraBERT** is an Arabic pretrained language model based on [Google's BERT architecture](https://github.com/google-research/bert). AraBERT uses the same BERT-Base configuration. More details are available in the [AraBERT paper](https://arxiv.org/abs/2003.00104) and in the [AraBERT Meetup](https://github.com/WissamAntoun/pydata_khobar_meetup).

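
As an illustration only (not the authors' training script), continued MLM pre-training along these lines could look roughly like the following sketch with the Hugging Face `Trainer`; the starting checkpoint, the tweets file, and the batch size are assumptions.

```python
# Minimal sketch of continued MLM pre-training on tweets.
# The tweets file, batch size, and starting checkpoint are hypothetical.
from datasets import load_dataset
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("aubmindlab/bert-base-arabertv02")
model = AutoModelForMaskedLM.from_pretrained("aubmindlab/bert-base-arabertv02")

# Hypothetical file of already-preprocessed tweets, one tweet per line
dataset = load_dataset("text", data_files={"train": "tweets.txt"})["train"]

def tokenize(batch):
    # The Twitter models were pre-trained with a max sequence length of 64
    return tokenizer(batch["text"], truncation=True, max_length=64)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

# Standard BERT-style dynamic masking for the MLM objective
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

args = TrainingArguments(
    output_dir="arabert-twitter-continued",
    num_train_epochs=1,               # the card reports a single epoch
    per_device_train_batch_size=128,  # hypothetical batch size
)

Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=collator,
).train()
```
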
## Other Models

Model | HuggingFace Model Name | Size (MB/Params) | Pre-Segmentation | DataSet (Sentences/Size/nWords) |
---|:---:|:---:|:---:|:---:
AraBERTv0.2-base | [bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) | 543MB / 136M | No | 200M / 77GB / 8.6B |
AraBERTv0.2-large | [bert-large-arabertv02](https://huggingface.co/aubmindlab/bert-large-arabertv02) | 1.38G / 371M | No | 200M / 77GB / 8.6B |
AraBERTv2-base | [bert-base-arabertv2](https://huggingface.co/aubmindlab/bert-base-arabertv2) | 543MB / 136M | Yes | 200M / 77GB / 8.6B |
AraBERTv2-large | [bert-large-arabertv2](https://huggingface.co/aubmindlab/bert-large-arabertv2) | 1.38G / 371M | Yes | 200M / 77GB / 8.6B |
AraBERTv0.1-base | [bert-base-arabertv01](https://huggingface.co/aubmindlab/bert-base-arabertv01) | 543MB / 136M | No | 77M / 23GB / 2.7B |
AraBERTv1-base | [bert-base-arabert](https://huggingface.co/aubmindlab/bert-base-arabert) | 543MB / 136M | Yes | 77M / 23GB / 2.7B |
AraBERTv0.2-Twitter-base | [bert-base-arabertv02-twitter](https://huggingface.co/aubmindlab/bert-base-arabertv02-twitter) | 543MB / 136M | No | Same as v02 + 60M Multi-Dialect Tweets |
AraBERTv0.2-Twitter-large | [bert-large-arabertv02-twitter](https://huggingface.co/aubmindlab/bert-large-arabertv02-twitter) | 1.38G / 371M | No | Same as v02 + 60M Multi-Dialect Tweets |

# Preprocessing

**The model was trained on a sequence length of 64; using a max length beyond 64 might result in degraded performance.**

It is recommended to apply our preprocessing function before training/testing on any dataset.
The preprocessor keeps and spaces out emojis when used with a "twitter" model.

```python
from arabert.preprocess import ArabertPreprocessor
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_name = "aubmindlab/bert-base-arabertv02-twitter"

# Clean the text; with a "twitter" model the preprocessor keeps and spaces out emojis
arabert_prep = ArabertPreprocessor(model_name=model_name)
text = "ولن نبالغ إذا قلنا إن هاتف أو كمبيوتر المكتب في زمننا هذا ضروري"
text_preprocessed = arabert_prep.preprocess(text)

# Load the tokenizer and the masked-LM model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)
```
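
As a quick usage check (a sketch, not part of the original card), the masked-LM head can be queried through the `fill-mask` pipeline, here with the same sentence used in the widget above:

```python
# Minimal fill-mask sketch (assumed usage, not from the original card)
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="aubmindlab/bert-base-arabertv02-twitter")

# "The capital of Lebanon is [MASK]."
for pred in fill_mask(" عاصمة لبنان هي [MASK] ."):
    print(pred["token_str"], pred["score"])
```

When encoding longer tweets yourself, truncate to the 64-token training length (e.g. `tokenizer(..., truncation=True, max_length=64)`), in line with the warning above.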
# If you used this model please cite us as:
Google Scholar has our BibTeX wrong (missing name), use this instead:
```
@inproceedings{antoun2020arabert,
  title={AraBERT: Transformer-based Model for Arabic Language Understanding},
  author={Antoun, Wissam and Baly, Fady and Hajj, Hazem},
  booktitle={LREC 2020 Workshop Language Resources and Evaluation Conference 11--16 May 2020},
  pages={9}
}
```
# Acknowledgments
Thanks to TensorFlow Research Cloud (TFRC) for the free access to Cloud TPUs (we couldn't have done it without this program), and to the [AUB MIND Lab](https://sites.aub.edu.lb/mindlab/) members for the continuous support. Also thanks to [Yakshof](https://www.yakshof.com/#/) and Assafir for data and storage access. Another thanks to [Habib Rahal](https://www.behance.net/rahalhabib) for putting a face to AraBERT.

# Contacts
**Wissam Antoun**: [Linkedin](https://www.linkedin.com/in/wissam-antoun-622142b4/) | [Twitter](https://twitter.com/wissam_antoun) | [Github](https://github.com/WissamAntoun) | <wfa07@mail.aub.edu> | <wissam.antoun@gmail.com>

**Fady Baly**: [Linkedin](https://www.linkedin.com/in/fadybaly/) | [Twitter](https://twitter.com/fadybaly) | [Github](https://github.com/fadybaly) | <fgb06@mail.aub.edu> | <baly.fady@gmail.com>