wissamantoun committed
Commit 2dff82d (1 parent: 3783d1b)

Update README.md

Files changed (1)
  1. README.md +112 -108
README.md CHANGED

---
language: ar
datasets:
- wikipedia
- Osian
- 1.5B-Arabic-Corpus
- oscar-arabic-unshuffled
- Assafir(private)
widget:
- text: " عاصمة لبنان هي [MASK] ."
---

# AraELECTRA

<img src="https://raw.githubusercontent.com/aub-mind/arabert/master/AraELECTRA.png" width="100" align="left"/>

**ELECTRA** is a method for self-supervised language representation learning. It can be used to pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish "real" input tokens from "fake" input tokens generated by another neural network, similar to the discriminator of a [GAN](https://arxiv.org/pdf/1406.2661.pdf). AraELECTRA achieves state-of-the-art results on Arabic QA datasets.

For a detailed description, please refer to the AraELECTRA paper [AraELECTRA: Pre-Training Text Discriminators for Arabic Language Understanding](https://arxiv.org/abs/2012.15516).

## How to use the generator in `transformers`

```python
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="aubmindlab/araelectra-base-generator",
    tokenizer="aubmindlab/araelectra-base-generator"
)

print(
    fill_mask(" عاصمة لبنان هي [MASK] .")
)
```
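
The discriminator checkpoint can be queried directly to score which tokens look original and which look replaced, as described in the introduction above. This is a minimal sketch, assuming the `ElectraForPreTraining` and `ElectraTokenizerFast` classes from `transformers`, PyTorch, and the `aubmindlab/araelectra-base-discriminator` checkpoint listed in the Model table below:

```python
import torch
from transformers import ElectraForPreTraining, ElectraTokenizerFast

# Assumed checkpoint name; see the Model table below
model_name = "aubmindlab/araelectra-base-discriminator"
discriminator = ElectraForPreTraining.from_pretrained(model_name)
tokenizer = ElectraTokenizerFast.from_pretrained(model_name)

sentence = "عاصمة لبنان هي بيروت ."
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    logits = discriminator(**inputs).logits[0]

# A positive logit means the discriminator considers the token replaced ("fake")
for token, score in zip(tokenizer.convert_ids_to_tokens(inputs["input_ids"][0]), logits.tolist()):
    print(f"{token}\t{'replaced' if score > 0 else 'original'}")
```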

# Preprocessing

It is recommended to apply our preprocessing function before training/testing on any dataset.

**Install the arabert python package to segment text for AraBERT v1 & v2 or to clean your data: `pip install arabert`**

```python
from arabert.preprocess import ArabertPreprocessor

model_name = "aubmindlab/araelectra-base"
arabert_prep = ArabertPreprocessor(model_name=model_name)

text = "ولن نبالغ إذا قلنا إن هاتف أو كمبيوتر المكتب في زمننا هذا ضروري"
arabert_prep.preprocess(text)

>>> output: ولن نبالغ إذا قلنا : إن هاتف أو كمبيوتر المكتب في زمننا هذا ضروري
```
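
The cleaned text can then be tokenized exactly as you would before fine-tuning or evaluation. A minimal sketch, assuming `AutoTokenizer` from `transformers` and the discriminator checkpoint listed in the Model table below (any of the `aubmindlab` AraELECTRA checkpoints works the same way):

```python
from transformers import AutoTokenizer
from arabert.preprocess import ArabertPreprocessor

arabert_prep = ArabertPreprocessor(model_name="aubmindlab/araelectra-base")
tokenizer = AutoTokenizer.from_pretrained("aubmindlab/araelectra-base-discriminator")

text = "ولن نبالغ إذا قلنا إن هاتف أو كمبيوتر المكتب في زمننا هذا ضروري"
clean_text = arabert_prep.preprocess(text)

# Tokenize the preprocessed text before feeding it to the model
encoded = tokenizer(clean_text, truncation=True, max_length=512)
print(encoded["input_ids"])
```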

# Model

Model | HuggingFace Model Name | Size (MB/Params)|
---|:---:|:---:
AraELECTRA-base-generator | [araelectra-base-generator](https://huggingface.co/aubmindlab/araelectra-base-generator) | 227MB/60M |
AraELECTRA-base-discriminator | [araelectra-base-discriminator](https://huggingface.co/aubmindlab/araelectra-base-discriminator) | 516MB/135M |

# Compute

Model | Hardware | Num of examples (seq len = 512) | Batch Size | Num of Steps | Time (in days)
---|:---:|:---:|:---:|:---:|:---:
AraELECTRA-base | TPUv3-8 | - | 256 | 2M | 24

# Dataset

The pretraining data used for the new AraELECTRA model is also used for **AraGPT2 and AraBERTv2**.

The dataset consists of 77GB, or 200,095,961 lines, 8,655,948,860 words, and 82,232,988,358 characters (before applying Farasa segmentation).

For the new dataset, we added the unshuffled OSCAR corpus, after thoroughly filtering it, to the dataset previously used in AraBERTv1, but without the websites that we previously crawled:
- OSCAR unshuffled and filtered.
- [Arabic Wikipedia dump](https://archive.org/details/arwiki-20190201) from 2020/09/01
- [The 1.5B words Arabic Corpus](https://www.semanticscholar.org/paper/1.5-billion-words-Arabic-Corpus-El-Khair/f3eeef4afb81223df96575adadf808fe7fe440b4)
- [The OSIAN Corpus](https://www.aclweb.org/anthology/W19-4619)
- Assafir news articles. A huge thank you to Assafir for providing us with the data.

# TensorFlow 1.x models

**You can find the PyTorch, TF2 and TF1 models in Hugging Face's Transformers library under the `aubmindlab` username**

- `wget https://huggingface.co/aubmindlab/MODEL_NAME/resolve/main/tf1_model.tar.gz` where `MODEL_NAME` is any model under the `aubmindlab` name
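
As an alternative to `wget`, the same archive can be fetched programmatically. A minimal sketch, assuming the `huggingface_hub` package, with the generator repository used as an example (replace it with any other `aubmindlab` model):

```python
import tarfile
from huggingface_hub import hf_hub_download

# Download the TF1 checkpoint archive (generator checkpoint used as an example)
archive_path = hf_hub_download(
    repo_id="aubmindlab/araelectra-base-generator",
    filename="tf1_model.tar.gz",
)

# Extract the TensorFlow 1.x checkpoint files
with tarfile.open(archive_path, "r:gz") as tar:
    tar.extractall("araelectra-base-generator-tf1")
```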

# If you used this model please cite us as:

```
@inproceedings{antoun-etal-2021-araelectra,
    title = "{A}ra{ELECTRA}: Pre-Training Text Discriminators for {A}rabic Language Understanding",
    author = "Antoun, Wissam and
      Baly, Fady and
      Hajj, Hazem",
    booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
    month = apr,
    year = "2021",
    address = "Kyiv, Ukraine (Virtual)",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/2021.wanlp-1.20",
    pages = "191--195",
}
```

# Acknowledgments
Thanks to TensorFlow Research Cloud (TFRC) for the free access to Cloud TPUs; we couldn't have done it without this program. Thanks as well to the [AUB MIND Lab](https://sites.aub.edu.lb/mindlab/) members for the continuous support, and to [Yakshof](https://www.yakshof.com/#/) and Assafir for data and storage access. Another thanks to [Habib Rahal](https://www.behance.net/rahalhabib) for putting a face to AraBERT.

# Contacts
**Wissam Antoun**: [Linkedin](https://www.linkedin.com/in/wissam-antoun-622142b4/) | [Twitter](https://twitter.com/wissam_antoun) | [Github](https://github.com/WissamAntoun) | <wfa07@mail.aub.edu> | <wissam.antoun@gmail.com>

**Fady Baly**: [Linkedin](https://www.linkedin.com/in/fadybaly/) | [Twitter](https://twitter.com/fadybaly) | [Github](https://github.com/fadybaly) | <fgb06@mail.aub.edu> | <baly.fady@gmail.com>