
Arabic BERT Large Model

Pretrained BERT Large language model for Arabic

If you use this model in your work, please cite this paper:

@misc{safaya2020kuisail,
    title={KUISAIL at SemEval-2020 Task 12: BERT-CNN for Offensive Speech Identification in Social Media},
    author={Ali Safaya and Moutasem Abdullatif and Deniz Yuret},
    year={2020},
    eprint={2007.13184},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}

Pretraining Corpus

The arabic-bert-large model was pretrained on ~8.2 billion words of Arabic text, drawn from large corpora and other Arabic resources that sum up to ~95 GB of text.

Notes on training data:

  • Our final version of the corpus contains some non-Arabic words inline, which we did not remove from sentences, since doing so would affect some tasks like NER.
  • Although non-Arabic characters were lowercased as a preprocessing step, Arabic characters have no upper or lower case, so there are no separate cased and uncased versions of the model.
  • The corpus and vocabulary set are not restricted to Modern Standard Arabic; they also contain some dialectal Arabic.

Pretraining details

  • This model was trained using Google BERT's GitHub repository on a single TPU v3-8 provided free of charge by TFRC.
  • Our pretraining procedure follows the original BERT training settings with one change: we trained for 3M steps with a batch size of 128, instead of 1M steps with a batch size of 256.
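As a rough sketch, a pretraining run with these settings would be launched through `run_pretraining.py` from Google BERT's repository along the following lines (paths, TPU name, and sequence-length/learning-rate values here are illustrative placeholders, not the authors' exact invocation):

```shell
python run_pretraining.py \
  --input_file=gs://my-bucket/pretraining_data/*.tfrecord \
  --output_dir=gs://my-bucket/bert-large-arabic \
  --bert_config_file=bert_large_config.json \
  --do_train=True \
  --train_batch_size=128 \
  --num_train_steps=3000000 \
  --max_seq_length=512 \
  --learning_rate=1e-4 \
  --use_tpu=True \
  --tpu_name=my-tpu-v3-8
```

The batch size (128) and step count (3M) match the settings described above; together they expose the model to roughly the same number of tokens as BERT's original 1M-step, batch-size-256 schedule, spread over more optimizer updates.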

Load Pretrained Model

You can use this model by installing PyTorch or TensorFlow and the Hugging Face transformers library, then loading it directly like this:

from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("asafaya/bert-large-arabic")
model = AutoModel.from_pretrained("asafaya/bert-large-arabic")
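Once loaded with `AutoModelForMaskedLM` instead of `AutoModel`, the model's logits can be used to rank candidate tokens for a masked position. The sketch below shows that mechanism using a tiny randomly initialized BERT (so it runs without downloading the ~1.3 GB checkpoint); with the real model you would pass tokenized Arabic text containing `[MASK]` instead of arbitrary ids:

```python
import torch
from transformers import BertConfig, BertForMaskedLM

# Tiny randomly initialized BERT as a stand-in for the real checkpoint,
# so this sketch runs offline. Replace with
# BertForMaskedLM.from_pretrained("asafaya/bert-large-arabic") in practice.
config = BertConfig(vocab_size=100, hidden_size=32, num_hidden_layers=2,
                    num_attention_heads=2, intermediate_size=64)
model = BertForMaskedLM(config)
model.eval()

# Pretend token ids; position 3 plays the role of the [MASK] token.
input_ids = torch.tensor([[2, 5, 7, 4, 9, 3]])
mask_pos = 3

with torch.no_grad():
    logits = model(input_ids).logits  # shape: (batch, seq_len, vocab_size)

# Rank the vocabulary at the masked position and take the top 5 candidates.
top5 = torch.topk(logits[0, mask_pos], k=5).indices.tolist()
print(logits.shape)  # torch.Size([1, 6, 100])
print(len(top5))     # 5
```

With the real tokenizer, the masked position is found via `tokenizer.mask_token_id`, and the top ids are mapped back to words with `tokenizer.convert_ids_to_tokens`.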

Results

For further details on the model's performance, or for any other queries, please refer to Arabic-BERT.

Acknowledgement

Thanks to Google for providing a free TPU for the training process, and to Hugging Face for hosting this model on their servers 😊