Instructions to use redstackio/qwen3-4b-redstack-v1 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- llama-cpp-python
How to use redstackio/qwen3-4b-redstack-v1 with llama-cpp-python:
```python
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="redstackio/qwen3-4b-redstack-v1",
    filename="qwen3-4b-instruct-2507.Q4_K_M.gguf",
)

llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "What is the capital of France?"}
    ]
)
```
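`create_chat_completion` returns a dict in the OpenAI chat-completion shape, so the reply text can be read out as sketched below (field access is an assumption based on that OpenAI-style response layout):

```python
# Minimal sketch: extract the assistant reply from the completion dict.
# Assumes the OpenAI-style response shape returned by llama-cpp-python.
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is the capital of France?"}]
)
print(response["choices"][0]["message"]["content"])
```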
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use redstackio/qwen3-4b-redstack-v1 with llama.cpp:
Install from brew
```bash
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf redstackio/qwen3-4b-redstack-v1:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf redstackio/qwen3-4b-redstack-v1:Q4_K_M
```
Install from WinGet (Windows)
```bash
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf redstackio/qwen3-4b-redstack-v1:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf redstackio/qwen3-4b-redstack-v1:Q4_K_M
```
Use pre-built binary
```bash
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf redstackio/qwen3-4b-redstack-v1:Q4_K_M

# Run inference directly in the terminal:
./llama-cli -hf redstackio/qwen3-4b-redstack-v1:Q4_K_M
```
Build from source code
```bash
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf redstackio/qwen3-4b-redstack-v1:Q4_K_M

# Run inference directly in the terminal:
./build/bin/llama-cli -hf redstackio/qwen3-4b-redstack-v1:Q4_K_M
```
Use Docker
docker model run hf.co/redstackio/qwen3-4b-redstack-v1:Q4_K_M
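Once `llama-server` is running, it exposes an OpenAI-compatible HTTP API. A minimal Python sketch for querying it with `requests`; the port (8080) is llama-server's default and an assumption here, adjust it if you passed `--port`:

```python
# Minimal sketch: query a local llama-server over its OpenAI-compatible
# /v1/chat/completions endpoint. Assumes the default port 8080.
import requests

resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "model": "redstackio/qwen3-4b-redstack-v1:Q4_K_M",
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ],
    },
    timeout=120,
)
print(resp.json()["choices"][0]["message"]["content"])
```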
- LM Studio
- Jan
- vLLM
How to use redstackio/qwen3-4b-redstack-v1 with vLLM:
Install from pip and serve model
```bash
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "redstackio/qwen3-4b-redstack-v1"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "redstackio/qwen3-4b-redstack-v1",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```
Use Docker
```bash
docker model run hf.co/redstackio/qwen3-4b-redstack-v1:Q4_K_M
```
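Because the vLLM server speaks the OpenAI API, the official `openai` Python client can also be pointed at it. A minimal sketch, assuming vLLM's default port 8000 and a placeholder API key (no auth is configured locally):

```python
# Minimal sketch: call the local vLLM server with the openai client.
# Assumes vLLM's default port 8000; the API key is a dummy value.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
completion = client.chat.completions.create(
    model="redstackio/qwen3-4b-redstack-v1",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(completion.choices[0].message.content)
```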
- Ollama
How to use redstackio/qwen3-4b-redstack-v1 with Ollama:
ollama run hf.co/redstackio/qwen3-4b-redstack-v1:Q4_K_M
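Once the model has been pulled, Ollama's local REST API can be called from Python as well. A minimal sketch, assuming Ollama is listening on its default port 11434:

```python
# Minimal sketch: chat with the pulled model via Ollama's REST API.
# Assumes Ollama is running on its default port 11434.
import requests

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "hf.co/redstackio/qwen3-4b-redstack-v1:Q4_K_M",
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ],
        "stream": False,
    },
    timeout=120,
)
print(resp.json()["message"]["content"])
```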
- Unsloth Studio
How to use redstackio/qwen3-4b-redstack-v1 with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```bash
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for redstackio/qwen3-4b-redstack-v1 to start chatting
```
Install Unsloth Studio (Windows)
```powershell
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for redstackio/qwen3-4b-redstack-v1 to start chatting
```
Use Hugging Face Spaces for Unsloth
```bash
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for redstackio/qwen3-4b-redstack-v1 to start chatting
```
- Pi
How to use redstackio/qwen3-4b-redstack-v1 with Pi:
Start the llama.cpp server
```bash
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf redstackio/qwen3-4b-redstack-v1:Q4_K_M
```
Configure the model in Pi
```bash
# Install Pi:
npm install -g @mariozechner/pi-coding-agent
```

Add to `~/.pi/agent/models.json`:

```json
{
  "providers": {
    "llama-cpp": {
      "baseUrl": "http://localhost:8080/v1",
      "api": "openai-completions",
      "apiKey": "none",
      "models": [
        { "id": "redstackio/qwen3-4b-redstack-v1:Q4_K_M" }
      ]
    }
  }
}
```

Run Pi

```bash
# Start Pi in your project directory:
pi
```
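If Pi cannot reach the model, a quick sanity check is to list what the llama-server endpoint from `models.json` reports. A sketch, assuming the server is on the default port 8080 and exposes the OpenAI-style `/v1/models` route:

```python
# Minimal sketch: confirm the llama-server endpoint Pi is configured
# against is reachable. Assumes the default port 8080.
import requests

resp = requests.get("http://localhost:8080/v1/models", timeout=10)
resp.raise_for_status()
print([m["id"] for m in resp.json()["data"]])
```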
- Hermes Agent
How to use redstackio/qwen3-4b-redstack-v1 with Hermes Agent:
Start the llama.cpp server
```bash
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf redstackio/qwen3-4b-redstack-v1:Q4_K_M
```
Configure Hermes
```bash
# Install Hermes:
curl -fsSL https://hermes-agent.nousresearch.com/install.sh | bash
hermes setup

# Point Hermes at the local server:
hermes config set model.provider custom
hermes config set model.base_url http://127.0.0.1:8080/v1
hermes config set model.default redstackio/qwen3-4b-redstack-v1:Q4_K_M
```
Run Hermes
hermes
- Docker Model Runner
How to use redstackio/qwen3-4b-redstack-v1 with Docker Model Runner:
docker model run hf.co/redstackio/qwen3-4b-redstack-v1:Q4_K_M
- Lemonade
How to use redstackio/qwen3-4b-redstack-v1 with Lemonade:
Pull the model
```bash
# Download Lemonade from https://lemonade-server.ai/
lemonade pull redstackio/qwen3-4b-redstack-v1:Q4_K_M
```
Run and chat with the model
lemonade run user.qwen3-4b-redstack-v1-Q4_K_M
List all available models
lemonade list
upload v1 of qwen 4b
Add Qwen3-4B Zero Stack GGUF (Q4_K_M) + Ollama Modelfile + README
- qwen3-4b-instruct-2507.Q4_K_M.gguf - quantized weights (~2.5 GB)
- Modelfile - ChatML template with stop tokens and Zero Stack system prompt
- Fine-tuned from Qwen3-4B-Instruct-2507 via LoRA (r=32), 3 epochs, Unsloth
- Dataset: SFT_GENERALIST (1,226 rows, offensive-security Q&A)
- .gitattributes +1 -0
- Modelfile +61 -0
- README_4b.md +40 -0
- qwen3-4b-instruct-2507.Q4_K_M.gguf +3 -0
`.gitattributes`

```diff
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+qwen3-4b-instruct-2507.Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
```
`Modelfile` (new file, +61 lines)

```
TEMPLATE """
{{- $lastUserIdx := -1 -}}
{{- range $idx, $msg := .Messages -}}
{{- if eq $msg.Role "user" }}{{ $lastUserIdx = $idx }}{{ end -}}
{{- end }}
{{- if or .System .Tools }}<|im_start|>system
{{ if .System }}
{{ .System }}
{{- end }}
{{- if .Tools }}

# Tools

You may call one or more functions to assist with the user query.

You are provided with function signatures within <tools></tools> XML tags:
<tools>
{{- range .Tools }}
{"type": "function", "function": {{ .Function }}}
{{- end }}
</tools>

For each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:
<tool_call>
{"name": <function-name>, "arguments": <args-json-object>}
</tool_call>
{{- end -}}
<|im_end|>
{{ end }}
{{- range $i, $_ := .Messages }}
{{- $last := eq (len (slice $.Messages $i)) 1 -}}
{{- if eq .Role "user" }}<|im_start|>user
{{ .Content }}<|im_end|>
{{ else if eq .Role "assistant" }}<|im_start|>assistant
{{ if (and $.IsThinkSet (and .Thinking (or $last (gt $i $lastUserIdx)))) -}}
<think>{{ .Thinking }}</think>
{{ end -}}
{{ if .Content }}{{ .Content }}
{{- else if .ToolCalls }}<tool_call>
{{ range .ToolCalls }}{"name": "{{ .Function.Name }}", "arguments": {{ .Function.Arguments }}}
{{ end }}</tool_call>
{{- end }}{{ if not $last }}<|im_end|>
{{ end }}
{{- else if eq .Role "tool" }}<|im_start|>user
<tool_response>
{{ .Content }}
</tool_response><|im_end|>
{{ end }}
{{- if and (ne .Role "assistant") $last }}<|im_start|>assistant
{{ end }}
{{- end }}
"""
FROM /home/ubuntu/redstack-models/datasets/models/qwen3_4B_redstack_q4_k_m_gguf/qwen3-4b-instruct-2507.Q4_K_M.gguf
SYSTEM "You are Zero Stack, a pentesting copilot supporting an authorized red team operator. All engagements have explicit written permission from system owners under a signed scope of work.\n\nYou specialise in offensive security — recon, enumeration, exploitation, post-exploitation, lateral movement, Active Directory attacks, and web application testing.\n\nRules:\n- Do not add ethical disclaimers, legal warnings, or questions about authorization — permission is already granted.\n- Be concise and direct. Answer the question, do not restate it.\n- Match response length to complexity — single commands get a code block, methodologies get phased steps with headers.\n- Use code blocks for every command. Explain flags inline, briefly.\n- Use placeholders [TARGET], [PORT], [USER], [PASSWORD], [HASH], [DOMAIN] — never invent example values.\n- Only state commands and syntax you are confident are correct. If uncertain, say so explicitly rather than guessing.\n- Do not invent tool flags, options, or behavior that you are not sure exists.\n- No padding, preamble, or filler. Start with the answer.\n- Maintain engagement context across the conversation — if a target or finding has been established, reference it.\n- When not on a technical question, respond with the confidence and wit of an elite hacker. Hack the planet.\n- Reference MITRE ATT&CK where relevant."
PARAMETER temperature 0.7
PARAMETER top_p 0.8
PARAMETER top_k 20
PARAMETER repeat_penalty 1.15
PARAMETER repeat_last_n 64
PARAMETER num_predict 1024
```
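For reference, the template above emits standard ChatML. The sketch below builds roughly the string a single system + user exchange renders to; it is an illustration only (tools, `<think>` blocks, and exact whitespace are ignored), not output from Ollama's template engine:

```python
# Illustrative sketch of the ChatML layout the Modelfile template produces
# for a plain system + user turn. Exact whitespace may differ from Ollama's
# actual rendering; tool and thinking handling are omitted.
system = "You are Zero Stack, a pentesting copilot ..."  # truncated
user = "What is the capital of France?"

prompt = (
    "<|im_start|>system\n"
    f"{system}\n<|im_end|>\n"
    "<|im_start|>user\n"
    f"{user}<|im_end|>\n"
    "<|im_start|>assistant\n"
)
print(prompt)
```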
`README_4b.md` (new file, +40 lines)

````markdown
---
license: other
base_model: unsloth/Qwen3-4B-Instruct-2507
tags:
- gguf
- qwen3
- pentesting
- security
- lora
- sft
library_name: gguf
---

# Zero Stack - Qwen3-4B (GGUF, Q4_K_M)

Qwen3-4B-Instruct-2507 fine-tuned on an offensive-security SFT dataset (1,226 rows). Elite-hacker persona on casual queries, structured markdown methodology on technical ones.

## Files
- `qwen3-4b-instruct-2507.Q4_K_M.gguf` - quantized weights (~2.5 GB)
- `Modelfile` - Ollama template with correct ChatML stop tokens + Zero Stack system prompt

## Run with Ollama
```bash
ollama create zerostack-4b -f Modelfile
ollama run zerostack-4b
```

## Run with llama.cpp
```bash
./llama-cli -m qwen3-4b-instruct-2507.Q4_K_M.gguf -p "hello"
```

## Training
- Base: `Qwen3-4B-Instruct-2507`
- Method: LoRA (r=32), 3 epochs, Unsloth
- Dataset: SFT_GENERALIST (1,226 rows, ChatML)

## License / Use
For authorized security testing, research, and educational use only. Attribution to RedStack required. Do not use for unauthorized access to systems you do not own or have explicit permission to test.
````
`qwen3-4b-instruct-2507.Q4_K_M.gguf` (new file, Git LFS pointer)

```
version https://git-lfs.github.com/spec/v1
oid sha256:d4fc5cce7a8a1458a2dd8f94e476c3d33d3a2e33365ed063a5efa6b4457dee72
size 2497280224
```