sultan committed
Commit 3ce8f96
1 parent: 92d5a91

Update README.md

Files changed (1)
  1. README.md +17 -1
README.md CHANGED
@@ -10,4 +10,20 @@ Pre-training Transformer-based models such as BERT and ELECTRA on a collection o
 
 This model was pre-trained on 44GB of Arabic corpora using [Funnel Transformer with ELECTRA objective](https://arxiv.org/abs/2006.03236). This model is faster than ELECTRA-base architecture while having the same number of parameters. The model was pre-trained with significantly less resources than state-of-the-art models. We will update you with more details about the model and our accepted paper later at EMNLP21.
 
- Check our GitHub page for the latest updates and examples : https://github.com/salrowili/ArabicTransformer
+ Check our GitHub page for the latest updates and examples : https://github.com/salrowili/ArabicTransformer
+
+ ```bibtex
+ @inproceedings{alrowili-shanker-2021-arabictransformer-efficient,
+     title = "{A}rabic{T}ransformer: Efficient Large {A}rabic Language Model with Funnel Transformer and {ELECTRA} Objective",
+     author = "Alrowili, Sultan and
+       Shanker, Vijay",
+     booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2021",
+     month = nov,
+     year = "2021",
+     address = "Punta Cana, Dominican Republic",
+     publisher = "Association for Computational Linguistics",
+     url = "https://aclanthology.org/2021.findings-emnlp.108",
+     pages = "1255--1261",
+     abstract = "Pre-training Transformer-based models such as BERT and ELECTRA on a collection of Arabic corpora, demonstrated by both AraBERT and AraELECTRA, shows an impressive result on downstream tasks. However, pre-training Transformer-based language models is computationally expensive, especially for large-scale models. Recently, Funnel Transformer has addressed the sequential redundancy inside Transformer architecture by compressing the sequence of hidden states, leading to a significant reduction in the pre-training cost. This paper empirically studies the performance and efficiency of building an Arabic language model with Funnel Transformer and ELECTRA objective. We find that our model achieves state-of-the-art results on several Arabic downstream tasks despite using less computational resources compared to other BERT-based models.",
+ }
+ ```
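The README this commit updates describes a Funnel Transformer checkpoint trained with the ELECTRA objective, which means it can be loaded through the generic `transformers` Auto classes. The snippet below is a minimal usage sketch, not part of the commit itself; the model ID is a placeholder (it is not stated on this page) and should be replaced with the actual Hub repository name.

```python
# Minimal sketch: load the ArabicTransformer checkpoint with Hugging Face transformers.
# The model ID below is a placeholder, not taken from this commit.
from transformers import AutoTokenizer, AutoModel

model_id = "sultan/ArabicTransformer-base"  # hypothetical ID, replace with the real Hub repo

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

# Encode an Arabic sentence and inspect the contextual embeddings.
inputs = tokenizer("النموذج سريع وفعال", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, hidden_size)
```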