A newer version of this model is available: XeTute/HamzahLMV1-1B
Hamzah Language Model; our tiny implementation of an eastern-culture Language Model.

We're not a company, just a small group of students. Training these models until something high-quality comes out takes huge compute; you can help us fund our studies here.

We introduce HamzahLMV0, the 'zeroth' version of the Hamzah Language Model, a series of upcoming models designed to have a bit of personality, be smart for their size, and remain promptable beyond what they were trained on (i.e., strong instruction following).
Quick model metadata:

  • Model Series: HamzahLM
  • Model Version: V0
  • Model Parameters: 1.2B
  • Context Length: 128k tokens
  • Recommended Max Generation Length: 2k–8k tokens
  • Other Notes: large context length; this model stays coherent while processing long inputs, which you can exploit with RAG or similar retrieval setups.
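The long-context note above lends itself to retrieval-augmented prompting: pack as much relevant reference text as fits into the window, then ask the question. A minimal sketch, assuming a plain-text corpus; the word-based chunking, keyword-overlap scoring, and word budget below are illustrative stand-ins (real code should count tokens with the model's own tokenizer):

```python
def chunk(text, size=400):
    """Split text into word-based chunks of roughly `size` words."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def score(chunk_text, query):
    """Naive relevance score: how many query words appear in the chunk."""
    chunk_words = set(chunk_text.lower().split())
    return sum(1 for w in query.lower().split() if w in chunk_words)

def build_prompt(docs, query, budget_words=90_000):
    """Pack the highest-scoring chunks into the context, then append the question.

    budget_words is a rough proxy for the 128k-token window; swap in a real
    token count from the model's tokenizer for production use.
    """
    chunks = [c for d in docs for c in chunk(d)]
    chunks.sort(key=lambda c: score(c, query), reverse=True)
    context, used = [], 0
    for c in chunks:
        n = len(c.split())
        if used + n > budget_words:
            break
        context.append(c)
        used += n
    return "\n\n".join(context) + f"\n\nQuestion: {query}\nAnswer:"
```

Feed the returned string to the model as a single user turn; because the model is claimed to stay coherent over long inputs, a generous budget is reasonable here.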

A 3B version is out here! You can access the 3B version for free through our endpoint, which serves the full 128k context with acceptable prompt-processing and excellent generation performance.


Our Apps & Socials

Chat with our Assistant | Support us Financially | Visit our GitHub

Long live the Islamic Republic of Pakistan; Glory to the Islamic Republic of Pakistan 🇵🇰
The Flag of the Islamic Republic of Pakistan
