Hamzah Language Model: our tiny implementation of an Eastern-culture language model.

We're not a company, just a small group of students. Training these models until something high-quality comes out takes huge amounts of compute; you can help us pay for our studies here.

We introduce HamzahLMV0, the 'zeroth' version of the Hamzah Language Model: a series of upcoming models designed to have a tiny bit of personality, be smart (for their size), and be promptable beyond what they've been trained on (i.e., strong instruction following).
Quick model metadata:

  • Model Series: HamzahLM
  • Model Version: V0
  • Model Parameters: 3.2B
  • Context Length: 128k tokens
  • Recommended Max Generation Length: 2k - 8k tokens
  • Other Notes: Large context length; the model stays coherent while processing long inputs, which you can exploit with RAG or similar techniques (see the sketch after this list).
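
Below is a minimal local-inference sketch in Python. It assumes the weights are published at the Hugging Face repo XeTute/HamzahLMV0-3B and that the tokenizer ships a chat template; both are assumptions to verify against the repo files, not confirmed details of this release.

```python
# Minimal local-inference sketch. Assumptions: the weights live at the
# Hugging Face repo "XeTute/HamzahLMV0-3B" and the tokenizer ships a chat
# template; verify both against the repo before relying on this.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "XeTute/HamzahLMV0-3B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the published tensors are BF16
    device_map="auto",
)

# Long-context use: retrieved documents (RAG) can be packed into the prompt,
# up to the advertised 128k-token window.
messages = [{"role": "user", "content": "Introduce yourself in two sentences."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Stay within the recommended 2k-8k token generation range.
output = model.generate(inputs, max_new_tokens=2048)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```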

A 1B version is out here! You can access this 3B version for free through our endpoint, which serves the full 128k context with acceptable processing and perfect generation performance.
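
If the endpoint is OpenAI-compatible, a call could look like the sketch below. That compatibility is an assumption, and the base URL, API key, and model name are placeholders (the card doesn't publish them); check the links under "Our Apps & Socials" for the real values.

```python
# Hypothetical endpoint call. The base URL, API key, and model name are
# placeholders, not real values from this card.
from openai import OpenAI

client = OpenAI(
    base_url="https://example.invalid/v1",  # placeholder, not the real endpoint
    api_key="YOUR_KEY",                     # placeholder
)

response = client.chat.completions.create(
    model="HamzahLMV0-3B",  # placeholder model name
    messages=[{"role": "user", "content": "Hello!"}],
    max_tokens=2048,        # within the recommended 2k-8k token range
)
print(response.choices[0].message.content)
```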


Our Apps & Socials

Chat with our Assistant | Support us Financially | Visit our GitHub

Long live the Islamic Republic of Pakistan; Glory to the Islamic Republic of Pakistan 🇵🇰
