sultan committed
Commit ec45c1a
1 Parent(s): c6ca6be

Update README.md

Files changed (1)
  README.md +17 -1
README.md CHANGED
@@ -8,4 +8,20 @@ Pre-training Transformer-based models such as BERT and ELECTRA on a collection o

 <b>Description</b>

- This model was pre-trained on 44GB of Arabic corpora using [Funnel Transformer with ELECTRA objective](https://arxiv.org/abs/2006.03236). We will update you with more details about the model and our accepted paper later at EMNLP21. Check our GitHub page for the latest updates and examples: https://github.com/salrowili/ArabicTransformer
+ This model was pre-trained on 44GB of Arabic corpora using [Funnel Transformer with the ELECTRA objective](https://arxiv.org/abs/2006.03236). We will share more details about the model and our paper accepted at EMNLP 2021. Check our GitHub page for the latest updates and examples: https://github.com/salrowili/ArabicTransformer
+
+ ```bibtex
+ @inproceedings{alrowili-shanker-2021-arabictransformer-efficient,
+     title = "{A}rabic{T}ransformer: Efficient Large {A}rabic Language Model with Funnel Transformer and {ELECTRA} Objective",
+     author = "Alrowili, Sultan and
+       Shanker, Vijay",
+     booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2021",
+     month = nov,
+     year = "2021",
+     address = "Punta Cana, Dominican Republic",
+     publisher = "Association for Computational Linguistics",
+     url = "https://aclanthology.org/2021.findings-emnlp.108",
+     pages = "1255--1261",
+     abstract = "Pre-training Transformer-based models such as BERT and ELECTRA on a collection of Arabic corpora, demonstrated by both AraBERT and AraELECTRA, shows an impressive result on downstream tasks. However, pre-training Transformer-based language models is computationally expensive, especially for large-scale models. Recently, Funnel Transformer has addressed the sequential redundancy inside Transformer architecture by compressing the sequence of hidden states, leading to a significant reduction in the pre-training cost. This paper empirically studies the performance and efficiency of building an Arabic language model with Funnel Transformer and ELECTRA objective. We find that our model achieves state-of-the-art results on several Arabic downstream tasks despite using less computational resources compared to other BERT-based models.",
+ }
+ ```
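
Since the committed README defers usage examples to the GitHub page, here is a minimal loading sketch with the Hugging Face `transformers` library. It is not part of the commit itself, and the repository id `sultan/ArabicTransformer` used below is a placeholder assumption; substitute the actual model id published on the Hub.

```python
# Minimal sketch, not from the committed README.
# Assumption: the checkpoint is published on the Hugging Face Hub; the repo id
# below is a placeholder -- replace it with the real one.
from transformers import AutoTokenizer, AutoModel

model_id = "sultan/ArabicTransformer"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)  # Funnel Transformer encoder

# Encode an Arabic sentence and inspect the contextual hidden states.
inputs = tokenizer("اللغة العربية لغة جميلة", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch, sequence_length, hidden_size)
```

For downstream fine-tuning, the same id can be passed to a task head such as `AutoModelForSequenceClassification`, following the standard `transformers` workflow.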