How to use this model directly from the 🤗/transformers library:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("akhooli/gpt2-small-arabic")
model = AutoModelWithLMHead.from_pretrained("akhooli/gpt2-small-arabic")
```

GPT2-Small-Arabic

Model description

A GPT-2 model trained on the Arabic Wikipedia dataset, based on the gpt2-small architecture (using Fastai2).

Intended uses & limitations

How to use

An example is provided in this Colab notebook. Both text generation and poetry generation (with a fine-tuned model) are included; a minimal generation sketch is shown below.
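
As a minimal sketch (not taken from the original card), text can be generated with the transformers text-generation pipeline; the Arabic prompt and generation settings below are only illustrations:

```python
from transformers import pipeline

# Load the model into a text-generation pipeline.
generator = pipeline("text-generation", model="akhooli/gpt2-small-arabic")

# Example Arabic prompt ("Artificial intelligence is"); purely illustrative.
outputs = generator("الذكاء الاصطناعي هو",
                    max_length=40,
                    do_sample=True,
                    num_return_sequences=2)
for out in outputs:
    print(out["generated_text"])
```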

Limitations and bias

GPT2-small-arabic (trained on Arabic Wikipedia) has several limitations in terms of coverage (Arabic Wikipedia quality, no diacritics) and training performance. Use it for demonstrations or as a proof of concept, not as production code.

Training data

This pretrained model used the Arabic Wikipedia dump (around 900 MB).
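
As a rough, hypothetical illustration only (the card does not say how the dump was obtained or preprocessed), Arabic Wikipedia text could be pulled from the Hub with the datasets library; the snapshot name and output file below are assumptions:

```python
from datasets import load_dataset

# Assumption: the "wikimedia/wikipedia" dataset exposes per-language dumps;
# the snapshot date below is illustrative, not the one used for this model.
arwiki = load_dataset("wikimedia/wikipedia", "20231101.ar", split="train")

# Write plain article text to a file for language-model training.
with open("arwiki_train.txt", "w", encoding="utf-8") as f:
    for article in arwiki:
        f.write(article["text"] + "\n")
```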

Training procedure

Training was done using the Fastai2 library on Kaggle, using a free GPU.
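
The original Fastai2 training code is not shown in this card. As a rough equivalent only (not the author's pipeline), the sketch below fine-tunes a GPT-2 model with the transformers Trainer; the file path, hyperparameters, and the choice to start from the English gpt2-small checkpoint are all assumptions:

```python
from transformers import (AutoTokenizer, AutoModelWithLMHead,
                          DataCollatorForLanguageModeling, TextDataset,
                          Trainer, TrainingArguments)

# Assumption: start from the English gpt2-small checkpoint; the card does not
# state whether a new Arabic tokenizer was trained.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelWithLMHead.from_pretrained("gpt2")

# Plain-text file produced from the Arabic Wikipedia dump (hypothetical path).
train_dataset = TextDataset(tokenizer=tokenizer,
                            file_path="arwiki_train.txt",
                            block_size=128)

# Causal LM objective (no masking), matching GPT-2-style training.
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

training_args = TrainingArguments(
    output_dir="gpt2-small-arabic",
    num_train_epochs=1,
    per_device_train_batch_size=8,
)

trainer = Trainer(model=model,
                  args=training_args,
                  data_collator=data_collator,
                  train_dataset=train_dataset)
trainer.train()
```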

Eval results

The final perplexity reached was 72.19 (loss: 4.28, accuracy: 0.307).
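
As a quick consistency check (not from the original card): for a causal language model, perplexity is the exponential of the cross-entropy loss, and exp(4.28) ≈ 72.2, which matches the reported 72.19 up to rounding:

```python
import math

# Perplexity = exp(cross-entropy loss) for a causal LM.
print(math.exp(4.28))  # ≈ 72.24, consistent with the reported 72.19
```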

BibTeX entry and citation info

@inproceedings{AbedKhooli2020,
  year={2020}
}