---
language: ar
---
# Dialect-AR-GPT-2021
## Finetuned AraGPT-2 demo
This model is based on [AraGPT2-Medium](https://huggingface.co/aubmindlab/aragpt2-medium) from the AUB MIND Lab.
It was then finetuned for 10 epochs on dialect datasets from Qatar University, the University of British Columbia NLP group,
and Johns Hopkins University / LREC.
You can use special tokens to prompt five dialects: `[EGYPTIAN]`, `[GULF]`, `[LEVANTINE]`, `[MAGHREBI]`, or `[MSA]`, followed by a space.
```
from simpletransformers.language_generation import LanguageGenerationModel

model = LanguageGenerationModel("gpt2", "monsoon-nlp/dialect-ar-gpt-2021")

# Prefix the prompt with a dialect token and a space; "مدينتي هي" = "My city is"
model.generate('[GULF] ' + "مدينتي هي", { 'max_length': 100 })
```
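If you prefer to use the Hugging Face `transformers` library directly, a minimal sketch (assuming the standard GPT-2 tokenizer and causal-LM generation API) would look like this:

```
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the finetuned model and its tokenizer (which includes the dialect tokens)
tokenizer = AutoTokenizer.from_pretrained("monsoon-nlp/dialect-ar-gpt-2021")
model = AutoModelForCausalLM.from_pretrained("monsoon-nlp/dialect-ar-gpt-2021")

# "مدينتي هي" = "My city is"
inputs = tokenizer('[GULF] ' + "مدينتي هي", return_tensors="pt")
outputs = model.generate(**inputs, max_length=100, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```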
There is NO content filtering in the current version; do not use for public-facing
text generation!
## Training and Finetuning details
Original model: https://huggingface.co/aubmindlab/aragpt2-medium
I inserted new tokens into the tokenizer, finetuned the model on the dialect samples, and exported the new model.
Notebook: https://colab.research.google.com/drive/19C0zbkSCt5ncVCa4kY-ik9hSEiJcjI-F
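The notebook has the exact steps; as a rough, illustrative sketch of the token-insertion and export steps (assuming the standard `transformers` `add_tokens` / `resize_token_embeddings` APIs rather than the notebook's exact code):

```
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("aubmindlab/aragpt2-medium")
model = AutoModelForCausalLM.from_pretrained("aubmindlab/aragpt2-medium")

# Add one token per dialect so prompts can select a dialect
dialect_tokens = ["[EGYPTIAN]", "[GULF]", "[LEVANTINE]", "[MAGHREBI]", "[MSA]"]
tokenizer.add_tokens(dialect_tokens)
model.resize_token_embeddings(len(tokenizer))

# ... finetune on the dialect-labeled samples for 10 epochs ...

# Export the finetuned model and tokenizer together
model.save_pretrained("dialect-ar-gpt-2021")
tokenizer.save_pretrained("dialect-ar-gpt-2021")
```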
## Citations
AraGPT2 model:
```
@misc{antoun2020aragpt2,
  title={AraGPT2: Pre-Trained Transformer for Arabic Language Generation},
  author={Wissam Antoun and Fady Baly and Hazem Hajj},
  year={2020},
  eprint={2012.15520},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
Dialect data sources:
- https://qspace.qu.edu.qa/handle/10576/15265
- https://github.com/UBC-NLP/aoc_id
- https://github.com/ryancotterell/arabic_dialect_annotation