
🇰🇿 Kazakh mGPT 1.3B

Language model for Kazakh. The model has 1.3B parameters, as you can guess from its name.

Kazakh belongs to the Turkic language family. It is a vibrant language with approximately 18 million speakers. Here are some facts about it:

  1. It is a major language spoken in Kazakhstan.
  2. Kazakh has its own version of the Cyrillic script but is transitioning to the Latin script.
  3. It has a rich tradition of oral literature, including epic poetry.

Technical details

It is one of the models derived from the base mGPT-XL (1.3B) model (see the list below), which was originally trained on 61 languages from 25 language families using the Wikipedia and C4 corpora.

We found additional data for 23 languages, most of which are considered low-resource, and decided to further tune the base model. Kazakh mGPT 1.3B was trained for another 150,000 steps with batch_size=4 and a context window of 2048 tokens on a single A100.
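For reference, here is a minimal sketch of loading the model and sampling a continuation with the Transformers library. The repository id and the generation settings are assumptions for illustration, not details taken from this card; substitute the actual Hugging Face model id if it differs.

```python
# Minimal generation sketch using the Transformers library.
# The model id below is an assumption -- replace it with the actual
# Hugging Face repository id for Kazakh mGPT 1.3B if it differs.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ai-forever/mGPT-1.3B-kazakh"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Қазақстан"  # a short Kazakh prompt
inputs = tokenizer(prompt, return_tensors="pt")

# Sample a short continuation; these decoding settings are illustrative only.
outputs = model.generate(
    **inputs,
    max_new_tokens=50,
    do_sample=True,
    top_p=0.95,
    temperature=0.8,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```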

The final validation perplexity for this model is 3.38.
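As a quick sanity check on what that number means, perplexity is the exponential of the per-token cross-entropy loss, so a validation perplexity of 3.38 corresponds to a loss of roughly 1.22 nats per token. The snippet below only illustrates this relationship; it is not part of the evaluation code.

```python
# Relationship between perplexity and cross-entropy loss: perplexity = exp(loss).
import math

val_perplexity = 3.38
val_loss = math.log(val_perplexity)  # ~1.218 nats per token
print(f"validation loss ≈ {val_loss:.3f} nats/token")
print(f"perplexity check ≈ {math.exp(val_loss):.2f}")
```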

Chart of the training loss and perplexity:

Other mGPT-1.3B models

Feedback

If you find a bug or have additional data to train the model on your language, please give us feedback.

The model will be improved over time. Stay tuned!
