Sanaa

Arabic GPT-2 demo

This is a small GPT-2 model retrained on Arabic Wikipedia circa September 2020 (due to memory limits, only the first 600,000 lines of the Wiki dump were used).

There is NO content filtering in the current version; do not use for public-facing text generation.
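
For a quick test outside SimpleTransformers, the model should also load through the standard transformers text-generation pipeline. This is a minimal sketch; the decoding settings are illustrative assumptions, not from this card:

# minimal sketch using the transformers pipeline; settings are illustrative
from transformers import pipeline

generator = pipeline("text-generation", model="monsoon-nlp/sanaa")
# the prompt "مدرستي" means "my school"
print(generator("مدرستي", max_length=40, num_return_sequences=1))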

Training

Training notebook: https://colab.research.google.com/drive/1Z_935vTuZvbseOsExCjSprrqn1MsQT57

After training (see the notebook above), the TensorFlow checkpoint is exported as a PyTorch model:

from transformers import AutoModel
am = AutoModel.from_pretrained('./argpt', from_tf=True)  # load the TF checkpoint and convert to PyTorch
am.save_pretrained("./")
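
As a quick sanity check (an assumed workflow step, not from the original card), the exported directory can be reloaded as a causal language model before uploading:

# assumption: verify that the exported PyTorch weights load cleanly
from transformers import AutoModelForCausalLM
reloaded = AutoModelForCausalLM.from_pretrained("./")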

Generating text in SimpleTransformers

Finetuning notebook: https://colab.research.google.com/drive/1fXFH7g4nfbxBo42icI4ZMy-0TAGAxc2i

from simpletransformers.language_generation import LanguageGenerationModel

model = LanguageGenerationModel("gpt2", "monsoon-nlp/sanaa")
model.generate("مدرستي")  # prompt means "my school"
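
Decoding behavior can be adjusted through the model's args; a sketch with standard LanguageGenerationModel argument names and illustrative values (the values are assumptions, not from the card):

# illustrative decoding settings; the values are assumptions
model = LanguageGenerationModel(
    "gpt2",
    "monsoon-nlp/sanaa",
    args={"max_length": 50, "num_return_sequences": 3, "top_k": 50, "top_p": 0.95},
)
model.generate("مدرستي")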

Finetuning dialects in SimpleTransformers

I finetuned this model on different Arabic dialects to generate a new model (monsoon-nlp/sanaa-dialect on HuggingFace) with some additional control tokens.

Finetuning notebook: https://colab.research.google.com/drive/1fXFH7g4nfbxBo42icI4ZMy-0TAGAxc2i
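
train_args is referenced below but not defined in this card; a minimal sketch of what it might contain, using standard SimpleTransformers language-modeling arguments with illustrative values:

# assumption: illustrative training arguments; tune for your own data and hardware
train_args = {
    "mlm": False,                 # GPT-2 is a causal LM, not a masked LM
    "num_train_epochs": 1,
    "train_batch_size": 16,
    "overwrite_output_dir": True,
    "output_dir": "./dialects",   # assumed to match the directory loaded below
}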

from simpletransformers.language_modeling import LanguageModelingModel

# train_args: SimpleTransformers training arguments (see the sketch above)
ft_model = LanguageModelingModel('gpt2', 'monsoon-nlp/sanaa', args=train_args)
# add dialect control tokens and resize the embeddings to match
ft_model.tokenizer.add_tokens(["[EGYPTIAN]", "[MSA]", "[LEVANTINE]", "[GULF]"])
ft_model.model.resize_token_embeddings(len(ft_model.tokenizer))
ft_model.train_model("./train.txt", eval_file="./test.txt")

# load the exported dialect model and prompt it with a dialect control token
from simpletransformers.language_generation import LanguageGenerationModel
model = LanguageGenerationModel("gpt2", "./dialects")
model.generate('[EGYPTIAN]' + "مدرستي")
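
The same pattern should work with the other control tokens added during finetuning, for example:

# assumption: swap the control token to steer toward a different dialect
model.generate('[MSA]' + "مدرستي")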