---

base_model: jan-hq/AlphaMaze-v0.2-1.5B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en

---

[![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory)


# QuantFactory/AlphaMaze-v0.2-1.5B-GGUF
This is a quantized version of [homebrewltd/AlphaMaze-v0.2-1.5B](https://huggingface.co/homebrewltd/AlphaMaze-v0.2-1.5B), created using llama.cpp.
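
The GGUF files in this repo can be loaded with the `llama-cpp-python` bindings. Below is a minimal sketch; the quantization filename is an assumption, so substitute one actually listed in this repository's files:

```python
# Minimal sketch: run a GGUF quant of AlphaMaze with llama-cpp-python.
# The filename below is an assumption; pick any quant listed in this repo.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="QuantFactory/AlphaMaze-v0.2-1.5B-GGUF",
    filename="AlphaMaze-v0.2-1.5B.Q4_K_M.gguf",  # assumed quant name
    n_ctx=4096,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "MAZE: ..."}],  # use the maze prompt shown in "Run Locally" below
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```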

# Original Model Card


[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)

<div align="center">

# AlphaMaze: Teaching LLMs to Think Visually
<!---
<a href='https://homebrew.ltd/blog/alpha-maze'><img src='https://img.shields.io/badge/Project-Blog-Green'></a>
<a href='https://huggingface.co/homebrewltd/AlphaMaze-v0.2-1.5B'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Models-blue'></a>
<a href='https://huggingface.co/datasets/homebrewltd/Maze-Reasoning-v0.1'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Data-green'></a>
<a href='https://alphamaze.menlo.ai/'><img src='https://img.shields.io/badge/Project-Demo-violet'></a>
<a href=''><img src='https://img.shields.io/badge/Paper-Arxiv-red'></a>
-->

[**About**](#About) | [**Demo**](#Demo) | [**Models and Datasets**](#Models-and-Datasets) | [**Benchmarks**](#Benchmarks) | [**How to Run Locally**](#Run-Locally)

<img src="./alphamaze.gif" width="400"/>
</div>

## About
Developed by **Menlo Research**, **AlphaMaze** is a novel model for evaluating and enhancing visual reasoning in LLMs. AlphaMaze challenges models with a deceptively simple task: solving mazes presented entirely in text. We further enhance AlphaMaze's capabilities using GRPO (Group Relative Policy Optimization).

Prior research, like [Microsoft's "Multimodal Visualization-of-Thought (MVoT)"](https://arxiv.org/abs/2501.07542), explored visual reasoning through image generation.  But AlphaMaze takes a different, more focused path.  We believe that if a model can internally reconstruct a maze from a text description and use that *mental map* to plan its moves, it demonstrates a genuine capacity for visual reasoning – even without generating a single image.  AlphaMaze moves beyond the limitations of multiple-choice evaluations, providing a richer, more nuanced assessment of a model's spatial understanding.  We're not just testing if a model *can* solve a maze; we're revealing *how* it thinks about space.

## Demo

Watch AlphaMaze tackle a text-based maze! See how it interprets the maze, plans its moves, and strategically resets when it encounters a dead end.

[![Watch the AlphaMaze Demo](https://img.youtube.com/vi/dUS9wR03on8/0.jpg)](https://www.youtube.com/watch?v=dUS9wR03on8)

Alternatively, you can explore it on our [demo website](https://alphamaze.menlo.ai/).

## Models and Datasets

### Models

You can find our AlphaMaze models on Hugging Face 🤗! We're committed to open-source and easy access for the research community.

| Model        | Backbone                                                                 | Size  | Link                                                                      |
|--------------|--------------------------------------------------------------------------|-------|----------------------------------------------------------------------------|
| AlphaMaze-v0.1 | [DeepSeek-R1-Distill-Qwen-1.5B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) | 1.5B  | [🤗 AlphaMaze-v0.1](https://huggingface.co/homebrewltd/AlphaMaze-v0.2-1.5B) |

### Datasets

We've released our datasets on Hugging Face 🤗 to support reproducibility and further research.

| Dataset                             | Description                                         | Size  | Link                                                                                    |
|--------------------------------------|-----------------------------------------------------|-------|-----------------------------------------------------------------------------------------|
| Maze-Reasoning-v0.1                  | Training set used for Supervised Fine-Tuning (SFT) | 420k  | [🤗 Maze-Reasoning-v0.1](https://huggingface.co/datasets/homebrewltd/Maze-Reasoning-v0.1) |
| Maze-Reasoning-Reset-v0.1          | Training set for SFT, including reset actions        | 50k   | [🤗 Maze-Reasoning-Reset-v0.1](https://huggingface.co/datasets/homebrewltd/Maze-Reasoning-Reset-v0.1) |
| Maze-Reasoning-GRPO-v0.1             | Training set used for GRPO model                    | 180k  | [🤗 Maze-Reasoning-GRPO-v0.1](https://huggingface.co/datasets/homebrewltd/Maze-Reasoning-GRPO-v0.1) |
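
As a quick sketch, any of these can be pulled with the 🤗 `datasets` library (the `train` split name here is an assumption; check each dataset card):

```python
# Minimal sketch: load the SFT training set from the Hugging Face Hub.
from datasets import load_dataset

sft_data = load_dataset("homebrewltd/Maze-Reasoning-v0.1", split="train")  # split name assumed
print(sft_data[0])  # inspect one maze/solution example
```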

## Benchmarks

### Supervised Fine-Tuning (SFT)

We used [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory) for Supervised Fine-Tuning (SFT) of our AlphaMaze model.  Here's a summary of our key training runs:

| Run ID | Model Config                                                                             | Dataset                                                                  | Steps | Final Loss | Hardware                | Key Findings                                                                                                                                                               |
|--------|-------------------------------------------------------------------------------------------|--------------------------------------------------------------------------|-------|------------|-------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| exp-1  | [Full Finetune (Qwen-1.5B)](https://github.com/janhq/visual-thinker/blob/main/training/Llama-factory-config/Qwen2.5_1.5B_distil.yaml) | [Maze Reasoning](https://huggingface.co/datasets/jan-hq/Maze-Reasoning) | 3125  | 0.01       | ~1.5 hours on 6xA6000  | Initial run with new maze tokens.  Observed lower performance.                                                                                                      |
| exp-2  | [Full Finetune (Qwen-1.5B)](https://github.com/janhq/visual-thinker/blob/main/training/Llama-factory-config/Qwen2.5_1.5B_distil.yaml) | [Maze Reasoning](https://huggingface.co/datasets/jan-hq/Maze-Reasoning) | 3125  | 0.01       | ~1.5 hours on 6xA6000  | Trained using pure text descriptions (no new tokens).  Surprisingly strong performance.                                                                        |
| exp-3  | [Full Finetune (Qwen-7B)](https://github.com/janhq/visual-thinker/blob/main/training/Llama-factory-config/Qwen2.5_7B_distil.yaml)   | [Maze Reasoning](https://huggingface.co/datasets/jan-hq/Maze-Reasoning) | 2344  | 0.0077     | ~12 hours on 6xA6000 | Extended training with pure text descriptions (larger model).                                                                                            |
| exp-4  | [Full Finetune (Qwen-1.5B)](https://github.com/janhq/visual-thinker/blob/main/training/Llama-factory-config/Qwen2.5_1.5B_distil.yaml) | [Maze Reasoning](https://huggingface.co/datasets/jan-hq/Maze-Reasoning) | 2344  | ~0         | ~1.5 hours on 6xA6000  | Further extended training with pure text descriptions.  Near-zero loss.                                                                                           |
| exp-5  | [Full Finetune (Qwen-1.5B)](https://github.com/janhq/visual-thinker/blob/main/training/Llama-factory-config/Qwen2.5_1.5B_distil.yaml) | [Maze Reasoning](https://huggingface.co/datasets/jan-hq/Maze-Reasoning) | 3681  | ~0.02      | ~1 hour on 8xH200   | Experiment with new maze tokens and different hardware.                                                                                                   |

**Key Observations from SFT:**

*   Adding new maze-specific tokens did *not* improve performance and, in some cases, led to worse results.
*   Surprisingly, the model performed well even with *pure text descriptions*, suggesting a strong ability to learn spatial relationships from text alone.
*   A training loss of (near) zero is concerning, as it may indicate memorization rather than genuine generalization.

**Note:** These results suggest that reducing token complexity can lead to improved performance in translating spatial information into language.

### Group Relative Policy Optimization (GRPO)

We employed [Unsloth](https://unsloth.ai/) for Group Relative Policy Optimization (GRPO) to further refine the model's maze-solving policy.

The plot below shows the MazeBench scores (blue crosses) achieved during GRPO training, along with a linear regression trendline (red dashed line).  The upward trend demonstrates that GRPO effectively guides the model towards improved maze-solving strategies.

![GRPO Training Progress](./grpo_progress.png)
_GRPO training progress, showing MazeBench scores over training steps._
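
The exact reward design and Unsloth configuration are not reproduced here, but the following is a rough sketch of what GRPO fine-tuning looks like with TRL's `GRPOTrainer` (which Unsloth builds on). The reward function, split name, and hyperparameters are illustrative assumptions, not the setup used for AlphaMaze:

```python
# Rough sketch of GRPO fine-tuning with TRL's GRPOTrainer.
# The reward function and dataset/split details are illustrative assumptions.
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

VALID_MOVES = {"<|up|>", "<|down|>", "<|left|>", "<|right|>"}

def format_reward(completions, **kwargs):
    """Toy reward: fraction of output tokens that are valid movement tokens."""
    rewards = []
    for completion in completions:
        tokens = completion.split()
        valid = sum(tok in VALID_MOVES for tok in tokens)
        rewards.append(valid / max(len(tokens), 1))
    return rewards

train_dataset = load_dataset("homebrewltd/Maze-Reasoning-GRPO-v0.1", split="train")  # split assumed

trainer = GRPOTrainer(
    model="homebrewltd/AlphaMaze-v0.2-1.5B",
    reward_funcs=format_reward,
    args=GRPOConfig(output_dir="alphamaze-grpo", per_device_train_batch_size=2),
    train_dataset=train_dataset,  # expects a "prompt" column (assumption)
)
trainer.train()
```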


## Run Locally

For an example of using AlphaMaze with HuggingFace Transformers:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
# Note: the flash-attn package must be installed for attn_implementation="flash_attention_2" below.

model_path = "homebrewltd/AlphaMaze-v0.2-1.5B"

tokenizer = AutoTokenizer.from_pretrained(model_path)

model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,
    device_map="auto",
    attn_implementation="flash_attention_2",
)

maze =  """You are a helpful assistant that solves mazes. You will be given a maze represented by a series of tokens. The tokens represent: - Coordinates: <|row-col|> (e.g., <|0-0|>, <|2-4|>) - Walls: <|no_wall|>, <|up_wall|>, <|down_wall|>, <|left_wall|>, <|right_wall|>, <|up_down_wall|>, etc. - Origin: <|origin|> - Target: <|target|> - Movement: <|up|>, <|down|>, <|left|>, <|right|>, <|blank|> Your task is to output the sequence of movements (<|up|>, <|down|>, <|left|>, <|right|>) required to navigate from the origin to the target, based on the provided maze representation. Think step by step. At each step, predict only the next movement token. Output only the move tokens, separated by spaces. MAZE: <|0-0|><|up_left_wall|><|blank|><|0-1|><|up_down_wall|><|blank|><|0-2|><|up_down_wall|><|blank|><|0-3|><|up_right_wall|><|blank|><|0-4|><|up_left_right_wall|><|blank|> <|1-0|><|down_left_wall|><|blank|><|1-1|><|up_right_wall|><|blank|><|1-2|><|up_left_wall|><|blank|><|1-3|><|down_right_wall|><|blank|><|1-4|><|left_right_wall|><|blank|> <|2-0|><|up_left_wall|><|blank|><|2-1|><|down_right_wall|><|blank|><|2-2|><|down_left_wall|><|blank|><|2-3|><|up_down_wall|><|blank|><|2-4|><|down_right_wall|><|target|> <|3-0|><|left_right_wall|><|blank|><|3-1|><|up_left_wall|><|origin|><|3-2|><|up_right_wall|><|blank|><|3-3|><|up_down_left_wall|><|blank|><|3-4|><|up_right_wall|><|blank|> <|4-0|><|down_left_wall|><|blank|><|4-1|><|down_right_wall|><|blank|><|4-2|><|down_left_wall|><|blank|><|4-3|><|up_down_wall|><|blank|><|4-4|><|down_right_wall|><|blank|>"""

messages = [
    {
        "role": "user",
        "content": maze
    }
]

input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors='pt').to(model.device)
generated_ids = model.generate(input_ids, max_new_tokens=2500, temperature=0.8, repetition_penalty=1.1, do_sample=True, eos_token_id=tokenizer.eos_token_id)
response = tokenizer.decode(generated_ids[0], skip_special_tokens=True, clean_up_tokenization_spaces=True)
print(f"Solving maze: {response}")
```
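
If you only want the movement sequence, a small helper like the one below can pull it out of the decoded text (a sketch; it assumes the moves appear verbatim as `<|up|>`, `<|down|>`, `<|left|>`, `<|right|>` in the output):

```python
import re

def extract_moves(text: str) -> list[str]:
    """Return the movement tokens from the model's decoded output, in order.
    Assumes the moves appear verbatim as <|up|>, <|down|>, <|left|>, <|right|>."""
    return re.findall(r"<\|(?:up|down|left|right)\|>", text)

moves = extract_moves(response)
print(moves)  # e.g. ['<|up|>', '<|right|>', ...]
```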

## Next Steps
We are exploring further GRPO enhancements to boost maze-solving capabilities. Stay tuned for more updates on how GRPO is paving the way for improved spatial reasoning in LLMs!

## Join Us

We're looking for collaborators and plan to expand the model's capabilities to include additional spatial tasks in the future.

## References

```bibtex
@misc{dao2025alphamazeenhancinglargelanguage,
      title={AlphaMaze: Enhancing Large Language Models' Spatial Intelligence via GRPO}, 
      author={Alan Dao and Dinh Bach Vu},
      year={2025},
      eprint={2502.14669},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2502.14669}, 
}
```

## Acknowledgement

- [llama-factory](https://github.com/hiyouga/LLaMA-Factory)
- [unsloth](https://unsloth.ai/)
- [Deepseek](https://github.com/deepseek-ai/DeepSeek-R1)
- [Multimodal Visualization-of-Thought (MVoT)](https://arxiv.org/abs/2501.07542)