S4MPL3BI4S committed · verified
Commit 659ee84 · 1 Parent(s): 5b0b3bc

Add README

Files changed (1): README.md (+25 −10)
README.md CHANGED
@@ -1,16 +1,31 @@
 ---
-library_name: peft
-base_model: unsloth/gemma-4-E4B-it
 tags:
+- gguf
+- llama.cpp
 - unsloth
-- trl
-- coding-agent
+- vision-language-model
 ---
-# gemma4-coding-agent
 
-This model is a fine-tuned version of `unsloth/gemma-4-E4B-it` for Pythonic function calling and coding tasks.
-It was trained using [Unsloth](https://github.com/unslothai/unsloth).
+# gemma4-coding-agent : GGUF
 
-## Formats
-- **LoRA Adapters:** Available in the root directory.
-- **GGUF:** Available for standard local inference (e.g., LM Studio, Ollama).
+This model was finetuned and converted to GGUF format using [Unsloth](https://github.com/unslothai/unsloth).
+
+**Example usage**:
+- For text-only LLMs: `llama-cli -hf S4MPL3BI4S/gemma4-coding-agent --jinja`
+- For multimodal models: `llama-mtmd-cli -hf S4MPL3BI4S/gemma4-coding-agent --jinja`
+
+## Available model files:
+- `gemma-4-E4B-it.Q4_K_M.gguf`
+- `gemma-4-E4B-it.BF16-mmproj.gguf`
+
+## ⚠️ Ollama Note for Vision Models
+**Important:** Ollama currently does not support separate mmproj files for vision models.
+
+To create an Ollama model from this vision model:
+1. Place the `Modelfile` in the same directory as the finetuned bf16 merged model.
+2. Run: `ollama create model_name -f ./Modelfile`
+   (Replace `model_name` with your desired name.)
+
+This will create a unified bf16 model that Ollama can use.
+This model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth).
+[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
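
The two Ollama steps in the README above can be scripted. A minimal sketch, assuming a merged bf16 GGUF named `gemma-4-E4B-it.BF16.gguf` (that filename is an assumption — the merged export is not among the files listed in this commit) and a model name of `gemma4-coding-agent`:

```python
import subprocess
from pathlib import Path


def write_modelfile(gguf_path: str, out: str = "Modelfile") -> str:
    """Write a minimal Ollama Modelfile pointing at a merged GGUF.

    Ollama cannot load a separate mmproj file, so `gguf_path` must be the
    unified bf16 export, not the Q4_K_M + BF16-mmproj pair from this repo.
    """
    text = f"FROM {gguf_path}\n"
    Path(out).write_text(text)
    return text


# Step 1: place the Modelfile next to the merged model
# (the GGUF filename here is an assumption, adjust to your export)
modelfile = write_modelfile("./gemma-4-E4B-it.BF16.gguf")

# Step 2: register it with Ollama (requires a local Ollama install,
# so this call is left commented out in the sketch):
# subprocess.run(
#     ["ollama", "create", "gemma4-coding-agent", "-f", "./Modelfile"],
#     check=True,
# )
```

After `ollama create` succeeds, the model runs with `ollama run gemma4-coding-agent` like any other local model.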