ALLaM-Thinking-GGUF

Description

ALLaM-Thinking-GGUF is an Arabic language model optimized for step-by-step reasoning and mathematical problem-solving. The model has been quantized to the GGUF format for efficient inference on consumer hardware.

Model Details

  • Model Name: ALLaM-Thinking-GGUF
  • Author: almaghrabima
  • Languages: Arabic (primary)
  • Format: GGUF (optimized for efficient CPU/GPU inference)
  • Parameters: 7B
  • Architecture: Llama
  • Quantization: q4_k_m
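
To verify the quantization and architecture metadata locally, you can inspect the GGUF header. A minimal sketch, assuming the gguf Python package (not a requirement of this model) and the file name used in the Usage section below:

# Optional: inspect GGUF metadata (assumes Python and pip are available)
pip install gguf
gguf-dump ./ALLaM-Thinking-q4_k_m.gguf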

Features

  • Specialized in step-by-step reasoning for mathematical problems
  • Optimized for Arabic language comprehension and generation
  • Efficient inference through GGUF quantization
  • Suitable for educational applications and mathematical assistance

Installation

# Make sure Git LFS is available (the GGUF weights are stored via Git LFS)
git lfs install

# Clone or download the repository
git clone https://huggingface.co/almaghrabima/ALLaM-Thinking-GGUF

# Navigate to the downloaded directory
cd ALLaM-Thinking-GGUF
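
The usage examples below assume a local build of llama.cpp. If you do not already have one, the steps below sketch the standard CMake workflow (see the llama.cpp repository for platform-specific options such as CUDA or Metal support):

# Build llama.cpp (assumes git, cmake, and a C/C++ toolchain are installed)
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build
cmake --build build --config Release

After building, the llama-cli binary referenced below is located at ./build/bin/llama-cli.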

Usage

The model can be used with llama.cpp for local inference; the -cnv flag runs llama-cli in interactive conversation mode:

./build/bin/llama-cli -m ./ALLaM-Thinking-q4_k_m.gguf -cnv -p "Your prompt in Arabic"

Example

The prompt asks (in Arabic): "In a team of 15 players, 40% of them score goals. If each goal-scoring player scores an average of 5 goals during the season, how many goals in total did the goal scorers score?"

./build/bin/llama-cli -m ./ALLaM-Thinking-q4_k_m.gguf -cnv -p "ููŠ ูุฑูŠู‚ ู…ูƒูˆู† ู…ู† 15 ู„ุงุนุจุงู‹ุŒ 40% ู…ู†ู‡ู… ูŠุณุฌู„ูˆู† ุงู„ุฃู‡ุฏุงู. ุฅุฐุง ุณุฌู„ ูƒู„ ู„ุงุนุจ ู…ู† ุงู„ู„ุงุนุจูŠู† ุงู„ุฐูŠู† ูŠุณุฌู„ูˆู† ุงู„ุฃู‡ุฏุงู ููŠ ุงู„ู…ุชูˆุณุท 5 ุฃู‡ุฏุงู ุฎู„ุงู„ ุงู„ู…ูˆุณู…ุŒ ููƒู… ุนุฏุฏ ุงู„ุฃู‡ุฏุงู ุงู„ูƒู„ูŠ ุงู„ุชูŠ ุณุฌู„ู‡ุง ุงู„ู„ุงุนุจูˆู† ุงู„ุฐูŠู† ูŠุณุฌู„ูˆู† ุงู„ุฃู‡ุฏุงูุŸ"

Sample Output

[INST] ููŠ ูุฑูŠู‚ ู…ูƒูˆู† ู…ู† 15 ู„ุงุนุจุงู‹ุŒ 40 % ู…ู†ู‡ู… ูŠุณุฌู„ูˆู† ุงู„ุฃู‡ุฏุงู. ุฅุฐุง ุณุฌู„ ูƒู„ ู„ุงุนุจ ู…ู† ุงู„ู„ุงุนุจูŠู† ุงู„ุฐูŠู† ูŠุณุฌู„ูˆู† ุงู„ุฃู‡ุฏุงู ููŠ ุงู„ู…ุชูˆุณุท 5 ุฃู‡ุฏุงู ุฎู„ุงู„ ุงู„ู…ูˆุณู…ุŒ ููƒู… ุนุฏุฏ ุงู„ุฃู‡ุฏุงู ุงู„ูƒู„ูŠ ุงู„ุชูŠ ุณุฌู„ู‡ุง ุงู„ู„ุงุนุจูˆู† ุงู„ุฐูŠู† ูŠุณุฌู„ูˆู† ุงู„ุฃู‡ุฏุงูุŸ [/INST] 

ู„ุญุณุงุจ ุนุฏุฏ ุงู„ุฃู‡ุฏุงู ุงู„ูƒู„ูŠ ุงู„ุชูŠ ุณุฌู„ู‡ุง ุงู„ู„ุงุนุจูˆู† ุงู„ุฐูŠู† ูŠุณุฌู„ูˆู† ุงู„ุฃู‡ุฏุงู ููŠ ุงู„ูุฑูŠู‚ ุงู„ู…ูƒูˆู† ู…ู† 15 ู„ุงุนุจุงู‹ุŒ ู†ุจุฏุฃ ุจุญุณุงุจ ุนุฏุฏ ุงู„ู„ุงุนุจูŠู† ุงู„ุฐูŠู† ูŠุณุฌู„ูˆู† ุงู„ุฃู‡ุฏุงู.

ุนุฏุฏ ุงู„ู„ุงุนุจูŠู† ุงู„ุฐูŠู† ูŠุณุฌู„ูˆู† ุงู„ุฃู‡ุฏุงู = ุฅุฌู…ุงู„ูŠ ุนุฏุฏ ุงู„ู„ุงุนุจูŠู† * ู†ุณุจุฉ ุงู„ู„ุงุนุจูŠู† ุงู„ุฐูŠู† ูŠุณุฌู„ูˆู† ุงู„ุฃู‡ุฏุงู = 15 * 0.40 = 6 ู„ุงุนุจูŠู†

ุซู… ู†ุถุฑุจ ุนุฏุฏ ุงู„ู„ุงุนุจูŠู† ุงู„ุฐูŠู† ูŠุณุฌู„ูˆู† ุงู„ุฃู‡ุฏุงู ููŠ ู…ุชูˆุณุท ุนุฏุฏ ุงู„ุฃู‡ุฏุงู ุงู„ุชูŠ ูŠุณุฌู„ู‡ุง ูƒู„ ู„ุงุนุจ ู…ู†ู‡ู… ุฎู„ุงู„ ุงู„ู…ูˆุณู….

ุงู„ุฃู‡ุฏุงู ุงู„ูƒู„ูŠ ุงู„ู…ุณุฌู„ุฉ = ุนุฏุฏ ุงู„ู„ุงุนุจูŠู† ุงู„ุฐูŠู† ูŠุณุฌู„ูˆู† ุงู„ุฃู‡ุฏุงู * ู…ุชูˆุณุท ุนุฏุฏ ุงู„ุฃู‡ุฏุงู ู„ูƒู„ ู„ุงุนุจ = 6 * 5 = 30 ู‡ุฏูุงู‹

ู„ุฐุงุŒ ุณุฌู„ ุงู„ู„ุงุนุจูˆู† ุงู„ุฐูŠู† ูŠุณุฌู„ูˆู† ุงู„ุฃู‡ุฏุงู ุฅุฌู…ุงู„ูŠ 30 ู‡ุฏูุงู‹ ุฎู„ุงู„ ุงู„ู…ูˆุณู….

(In English: 40% of 15 players is 6 goal scorers, and 6 × 5 = 30 goals in total over the season.)

Advanced Options

You can customize inference with additional options such as the context window size (--ctx-size), sampling temperature (--temp), nucleus sampling threshold (--top-p), and repetition penalty (--repeat-penalty):

./build/bin/llama-cli -m ./ALLaM-Thinking-q4_k_m.gguf -cnv -p "Your prompt" \
  --ctx-size 2048 \
  --temp 0.7 \
  --top-p 0.9 \
  --repeat-penalty 1.1
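
For longer sessions or integration with other tools, the same build also provides an HTTP server. A minimal sketch, assuming the llama-server binary from the llama.cpp build above and an illustrative port:

# Serve the model over an OpenAI-compatible HTTP API
./build/bin/llama-server -m ./ALLaM-Thinking-q4_k_m.gguf --ctx-size 2048 --port 8080

# Query the chat completions endpoint (illustrative request)
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Your prompt in Arabic"}]}'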

Hardware Requirements

  • Minimum: 8 GB RAM
  • Recommended: 16 GB RAM and a modern multi-core CPU, or a GPU with at least 8 GB VRAM (see the GPU offload example below)
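
If a supported GPU is available, llama.cpp can offload model layers to it with the -ngl (--n-gpu-layers) flag. A minimal sketch: the layer count shown is illustrative, depends on available VRAM, and requires a llama.cpp build with GPU support (e.g. CUDA or Metal):

# Offload model layers to the GPU
./build/bin/llama-cli -m ./ALLaM-Thinking-q4_k_m.gguf -cnv -p "Your prompt in Arabic" -ngl 32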

License

This model is released under the Apache 2.0 License.

Citations

If you use this model in your research or applications, please cite:

@misc{almaghrabima2025allam,
  author = {Mohammed Al-Maghrabi Research},
  title = {ALLaM-Thinking: Arabic Large Language Model with Enhanced Reasoning Capabilities},
  year = {2025},
  publisher = {Hugging Face},
  howpublished = {\url{https://huggingface.co/almaghrabima/ALLaM-Thinking}}
}

Acknowledgements

  • This model utilizes the GGUF format developed by the llama.cpp team
  • Special thanks to contributors and the Arabic NLP community