aubmindlab committed on
Commit
e4ea8a9
1 Parent(s): fdba830

Added model files

README.md ADDED
@@ -0,0 +1,110 @@
+ ---
+ language: ar
+ datasets:
+ - wikipedia
+ - OSIAN
+ - 1.5B Arabic Corpus
+ - OSCAR Arabic Unshuffled
+ ---
+
+ # AraELECTRA
+
+ **ELECTRA** is a method for self-supervised language representation learning. It can be used to pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish "real" input tokens from "fake" input tokens generated by another neural network, similar to the discriminator of a [GAN](https://arxiv.org/pdf/1406.2661.pdf). AraELECTRA achieves state-of-the-art results on Arabic QA datasets.
+
+ For a detailed description, please refer to the AraELECTRA paper: [AraELECTRA: Pre-Training Text Discriminators for Arabic Language Understanding](https://arxiv.org/abs/2012.15516).
+
+ ## How to use the discriminator in `transformers`
+
+ ```python
+ from transformers import ElectraForPreTraining, ElectraTokenizerFast
+ import torch
+
+ discriminator = ElectraForPreTraining.from_pretrained("aubmindlab/araelectra-base-discriminator")
+ tokenizer = ElectraTokenizerFast.from_pretrained("aubmindlab/araelectra-base-discriminator")
+
+ # Illustrative sentences (ours; substitute your own). The fake sentence swaps
+ # one word of the original, which the discriminator should flag as replaced.
+ sentence = "ولن نبالغ إذا قلنا إن هاتف أو كمبيوتر المكتب في زمننا هذا ضروري"
+ fake_sentence = "ولن نبالغ إذا قلنا إن قطار أو كمبيوتر المكتب في زمننا هذا ضروري"
+
+ fake_tokens = tokenizer.tokenize(fake_sentence)
+ fake_inputs = tokenizer.encode(fake_sentence, return_tensors="pt")
+ discriminator_outputs = discriminator(fake_inputs)
+ # Positive logits mean "replaced"; map them to 0/1 predictions
+ predictions = torch.round((torch.sign(discriminator_outputs[0]) + 1) / 2)
+
+ for token in fake_tokens:
+     print("%7s" % token, end="")
+ print()
+ for prediction in predictions.squeeze().tolist():
+     print("%7s" % int(prediction), end="")
+ ```
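+
+ A prediction of `1` marks a token the discriminator judges to have been replaced, and `0` an original token (positive logits map to `1` after the sign-and-round step above). Note that `predictions` also covers the `[CLS]` and `[SEP]` tokens that `encode` adds, so it has two more entries than `fake_tokens`.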
+
+ # Model
+
+ Model | HuggingFace Model Name | Size (MB / Params)
+ ---|:---:|:---:
+ AraELECTRA-base-generator | [araelectra-base-generator](https://huggingface.co/aubmindlab/araelectra-base-generator) | 227MB / 60M
+ AraELECTRA-base-discriminator | [araelectra-base-discriminator](https://huggingface.co/aubmindlab/araelectra-base-discriminator) | 516MB / 135M
+
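+ The generator checkpoint can also be loaded for masked-token prediction. Below is a minimal sketch of ours using the `transformers` `fill-mask` pipeline (the example sentence is hypothetical; `[MASK]` is the mask token declared in `special_tokens_map.json`):
+
+ ```python
+ from transformers import pipeline
+
+ # Load the generator as a fill-mask model (a sketch, not part of the original card)
+ fill_mask = pipeline(
+     "fill-mask",
+     model="aubmindlab/araelectra-base-generator",
+ )
+
+ # Print the top predictions for the masked position
+ print(fill_mask("عاصمة لبنان هي [MASK]"))
+ ```
+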
+ # Compute
+
+ Model | Hardware | Num. of Examples (seq len = 512) | Batch Size | Num. of Steps | Time (days)
+ ---|:---:|:---:|:---:|:---:|:---:
+ AraELECTRA-base | TPUv3-8 | - | 256 | 2M | 24
+
+ # Dataset
+
+ The pretraining data used for the new **AraELECTRA** model is also the data used for **AraGPT2** and **AraBERTv2**.
+
+ The dataset consists of 77GB of text: 200,095,961 lines, 8,655,948,860 words, or 82,232,988,358 characters (before applying Farasa segmentation).
+
+ For the new dataset, we added the unshuffled OSCAR corpus (after thoroughly filtering it) to the dataset used in AraBERTv1, minus the websites that we previously crawled:
+ - OSCAR unshuffled and filtered
+ - [Arabic Wikipedia dump](https://archive.org/details/arwiki-20190201) from 2020/09/01
+ - [The 1.5B words Arabic Corpus](https://www.semanticscholar.org/paper/1.5-billion-words-Arabic-Corpus-El-Khair/f3eeef4afb81223df96575adadf808fe7fe440b4)
+ - [The OSIAN Corpus](https://www.aclweb.org/anthology/W19-4619)
+ - Assafir news articles (a huge thank you to Assafir for providing the data)
+
+ # Preprocessing
+
+ It is recommended to apply our preprocessing function before training/testing on any dataset.
+ **Install `farasapy` to segment text for AraBERT v1 & v2: `pip install farasapy`**
+
+ ```python
+ from arabert.preprocess import ArabertPreprocessor
+
+ model_name = "araelectra-base"
+ arabert_prep = ArabertPreprocessor(model_name=model_name, keep_emojis=True)
+
+ text = "ولن نبالغ إذا قلنا إن هاتف أو كمبيوتر المكتب في زمننا هذا ضروري"
+ print(arabert_prep.preprocess(text))
+ ```
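+
+ Among other normalizations, the preprocessor replaces URLs, emails, and user mentions with the special tokens `[رابط]`, `[بريد]`, and `[مستخدم]`, which the tokenizer in this repo is configured to never split (see `never_split` in `tokenizer_config.json` below). A small, self-contained illustration with a hypothetical input:
+
+ ```python
+ from arabert.preprocess import ArabertPreprocessor
+
+ arabert_prep = ArabertPreprocessor(model_name="araelectra-base")
+
+ # The URL should be replaced by the [رابط] special token
+ text_with_url = "تابعونا على https://example.com"
+ print(arabert_prep.preprocess(text_with_url))
+ ```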
+
+ # TensorFlow 1.x models
+
+ The TF1.x models are available in the HuggingFace models repo.
+ You can download them as follows:
+ ```bash
+ wget https://s3.amazonaws.com/models.huggingface.co/bert/aubmindlab/MODEL_NAME/tf1_model.tar.gz
+ tar -xzf tf1_model.tar.gz  # extract the downloaded checkpoint
+ ```
+ where `MODEL_NAME` is any model under the `aubmindlab` name.
+
+ # If you used this model, please cite us as:
+
+ ```
+ @misc{antoun2020araelectra,
+     title={AraELECTRA: Pre-Training Text Discriminators for Arabic Language Understanding},
+     author={Wissam Antoun and Fady Baly and Hazem Hajj},
+     year={2020},
+     eprint={2012.15516},
+     archivePrefix={arXiv},
+     primaryClass={cs.CL}
+ }
+ ```
+
+ # Acknowledgments
+ Thanks to TensorFlow Research Cloud (TFRC) for free access to Cloud TPUs; we couldn't have done it without this program. Thanks also to the [AUB MIND Lab](https://sites.aub.edu.lb/mindlab/) members for their continuous support, and to [Yakshof](https://www.yakshof.com/#/) and Assafir for data and storage access. Another thanks to [Habib Rahal](https://www.behance.net/rahalhabib) for putting a face to AraBERT.
+
+ # Contacts
+ **Wissam Antoun**: [Linkedin](https://www.linkedin.com/in/wissam-antoun-622142b4/) | [Twitter](https://twitter.com/wissam_antoun) | [Github](https://github.com/WissamAntoun) | <wfa07@mail.aub.edu> | <wissam.antoun@gmail.com>
+
+ **Fady Baly**: [Linkedin](https://www.linkedin.com/in/fadybaly/) | [Twitter](https://twitter.com/fadybaly) | [Github](https://github.com/fadybaly) | <fgb06@mail.aub.edu> | <baly.fady@gmail.com>
+
config.json ADDED
@@ -0,0 +1,21 @@
+ {
+   "architectures": [
+     "ElectraForPreTraining"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "embedding_size": 768,
+   "generator_hidden_size": 0.33333,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 768,
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "layer_norm_eps": 1e-12,
+   "max_position_embeddings": 512,
+   "model_type": "electra",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 12,
+   "pad_token_id": 0,
+   "type_vocab_size": 2,
+   "vocab_size": 64000
+ }
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:19c88dcaa12e6dce01cca46080ca1c31fdfbd4334324421eee7b860490371267
+ size 540862295
special_tokens_map.json ADDED
@@ -0,0 +1 @@
+ {"unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]"}
tf1_model.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cbca4d0cedf32683a99a235494e946ba11a373095f4040260e005948c88f2af1
+ size 538931319
tf_model.h5 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7c474d54db219b2253b961f579e44bf463582c562059cd429fde6392ea1ca480
+ size 541048320
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
tokenizer_config.json ADDED
@@ -0,0 +1 @@
+ {"do_lower_case": false, "do_basic_tokenize": true, "never_split": ["[بريد]", "[مستخدم]", "[رابط]"], "unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]", "tokenize_chinese_chars": true, "strip_accents": null, "special_tokens_map_file": null, "tokenizer_file": null, "name_or_path": "./torch_model_noseg"}
vocab.txt ADDED
The diff for this file is too large to render. See raw diff