How to use this model directly from the 🤗/transformers library:

from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("monsoon-nlp/sanaa")
model = AutoModelWithLMHead.from_pretrained("monsoon-nlp/sanaa")
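
For quick experiments, the same checkpoint also works through the transformers pipeline API. A minimal sketch (the prompt and sampling settings are arbitrary, not recommendations):

from transformers import pipeline

# build a text-generation pipeline on top of the published checkpoint
generator = pipeline("text-generation", model="monsoon-nlp/sanaa")
print(generator("مدرستي", max_length=30, do_sample=True))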

Sanaa

Arabic GPT-2 demo

This is a small GPT-2 model retrained on Arabic Wikipedia circa September 2020. Due to memory limits, only the first 600,000 lines of the Wiki dump were used.
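
For reference, trimming an extracted text dump to a fixed number of lines is a one-pass filter; a minimal sketch (file names are placeholders, not the actual preprocessing from the notebook):

# keep only the first 600,000 lines of an extracted Arabic Wikipedia text dump
N_LINES = 600_000
with open("arwiki_extracted.txt", encoding="utf-8") as src, open("arwiki_train.txt", "w", encoding="utf-8") as dst:
    for i, line in enumerate(src):
        if i >= N_LINES:
            break
        dst.write(line)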

There is NO content filtering in the current version; do not use for public-facing text generation.

Training

Training notebook: https://colab.research.google.com/drive/1Z_935vTuZvbseOsExCjSprrqn1MsQT57

Steps after training, to convert the TensorFlow checkpoint to PyTorch:

from transformers import AutoModel
am = AutoModel.from_pretrained('./argpt', from_tf=True)
am.save_pretrained("./")
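
The conversion above writes only the model weights and config; the tokenizer files also have to end up in the same directory before upload. A hedged sketch, assuming the tokenizer was saved next to the TensorFlow checkpoint in ./argpt (the card does not say where it lives):

from transformers import AutoTokenizer

# assumption: tokenizer files sit alongside the TF checkpoint in ./argpt
tokenizer = AutoTokenizer.from_pretrained('./argpt')
tokenizer.save_pretrained("./")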

Generating text in SimpleTransformers

Finetuning notebook: https://colab.research.google.com/drive/1fXFH7g4nfbxBo42icI4ZMy-0TAGAxc2i

from simpletransformers.language_generation import LanguageGenerationModel
model = LanguageGenerationModel("gpt2", "monsoon-nlp/sanaa")
model.generate("مدرستي")  # prompt: "my school"

Finetuning dialects in SimpleTransformers

I finetuned this model on different Arabic dialects to generate a new model (monsoon-nlp/sanaa-dialect on HuggingFace) with some additional control tokens.

Finetuning notebook: https://colab.research.google.com/drive/1fXFH7g4nfbxBo42icI4ZMy-0TAGAxc2i

from simpletransformers.language_modeling import LanguageModelingModel

# example training args (illustrative values, not necessarily those used; output_dir matches the export path below)
train_args = {"output_dir": "./dialects", "overwrite_output_dir": True, "num_train_epochs": 1}

ft_model = LanguageModelingModel('gpt2', 'monsoon-nlp/sanaa', args=train_args)
ft_model.tokenizer.add_tokens(["[EGYPTIAN]", "[MSA]", "[LEVANTINE]", "[GULF]"])
ft_model.model.resize_token_embeddings(len(ft_model.tokenizer))
ft_model.train_model("./train.txt", eval_file="./test.txt")
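
The card does not spell out how the dialect tags enter the training text; one plausible layout, consistent with the tag-prefixed prompting shown below, is to prepend each training line with its dialect token. A hedged sketch (the helper and file names are hypothetical):

# hypothetical preprocessing: prefix each non-empty line with its dialect control token
def tag_lines(in_path, out_path, tag):
    with open(in_path, encoding="utf-8") as src, open(out_path, "a", encoding="utf-8") as dst:
        for line in src:
            line = line.strip()
            if line:
                dst.write(f"{tag} {line}\n")

tag_lines("egyptian_raw.txt", "./train.txt", "[EGYPTIAN]")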

# load the exported dialect model and generate with a control token
from simpletransformers.language_generation import LanguageGenerationModel
model = LanguageGenerationModel("gpt2", "./dialects")
model.generate('[EGYPTIAN]' + "مدرستي")  # Egyptian-dialect continuation of "my school"
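
Since the finetuned model was uploaded as monsoon-nlp/sanaa-dialect, it can also be loaded straight from the Hub instead of the local export; a minimal sketch (the [MSA] prompt is simply another of the control tokens added above):

from simpletransformers.language_generation import LanguageGenerationModel

# same control-token prompting, but loading the published dialect model from the Hub
hub_model = LanguageGenerationModel("gpt2", "monsoon-nlp/sanaa-dialect")
hub_model.generate('[MSA]' + "مدرستي")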