
We're not a company, just a small group of students. Training these models until something high-quality comes out takes enormous compute; you can help us pay for our studies here.
We introduce HamzahLMV0, the 'zeroth' version of the Hamzah Language Model, a series of upcoming models designed to have a bit of personality, be smart for their size, and follow instructions well enough to be prompted beyond what they were trained on.
Quick model metadata:
- Model Series: HamzahLM
- Model Version: V0
- Model Parameters: 3.2B
- Context Length: 128k tokens
- Recommended Max Generation Length: 2k-8k tokens
- Other Notes: Long context window; the model stays coherent while processing large contexts, which you can exploit with RAG or similar retrieval setups.
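As a minimal sketch of the RAG idea above: with a 128k-token window you can simply pack retrieved documents into the prompt until a token budget is hit. Everything here is hypothetical (the function name and the whitespace token count are illustrative assumptions, not part of any HamzahLM API); a real setup would count tokens with the model's own tokenizer.

```python
def build_long_context_prompt(question, chunks, max_context_tokens=120_000):
    """Pack retrieved chunks (most relevant first) into the prompt until the budget is hit.

    Hypothetical helper for illustration only; token counting is a crude
    whitespace approximation standing in for a real tokenizer.
    """
    def approx_tokens(text):
        return len(text.split())  # rough stand-in for a real tokenizer

    budget = max_context_tokens - approx_tokens(question)
    selected = []
    for chunk in chunks:
        cost = approx_tokens(chunk)
        if cost > budget:
            break  # stop before overflowing the context window
        selected.append(chunk)
        budget -= cost
    context = "\n\n".join(selected)
    return f"Context:\n{context}\n\nQuestion: {question}"
```

The resulting string would then be sent to the model as a single prompt, leaving headroom (here 8k of the 128k window) for generation.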
A 1B version is out here! You can access this 3B version for free through our endpoint, which serves the full 128k context with acceptable prompt-processing and perfect generation performance.
Our Apps & Socials
Chat with our Assistant | Support us Financially | Visit our GitHub
Long live the Islamic Republic of Pakistan; Glory to the Islamic Republic of Pakistan 🇵🇰