---
tags:
- gguf
- llama.cpp
- unsloth
- vision-language-model
---

# Model - GGUF
This model was finetuned and converted to GGUF format using Unsloth.
## Example usage

- For text-only LLMs: `llama-cli --hf repo_id/model_name -p "why is the sky blue?"`
- For multimodal models: `llama-mtmd-cli -m model_name.gguf --mmproj mmproj_file.gguf`
## Available model files

- `gemma-3-4b-it.Q4_K_M.gguf`
- `gemma-3-4b-it.F16.gguf`
- `gemma-3-4b-it.BF16-mmproj.gguf`
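As a concrete starting point, the sketch below runs the Q4_K_M quant together with the vision projector on a local image. The image path and prompt are placeholders, and the `--image`/`-p` flags are assumed from standard `llama-mtmd-cli` usage; adjust them to match your llama.cpp build.

```bash
# Sketch: text + image inference with the files listed above.
# ./example.jpg and the prompt are placeholders; swap in your own.
llama-mtmd-cli \
  -m gemma-3-4b-it.Q4_K_M.gguf \
  --mmproj gemma-3-4b-it.BF16-mmproj.gguf \
  --image ./example.jpg \
  -p "Describe this image."
```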
## ⚠️ Ollama Note for Vision Models
Important: Ollama currently does not support separate mmproj files for vision models.
To create an Ollama model from this vision model:
- Place the `Modelfile` in the same directory as the finetuned bf16 merged model.
- Run: `ollama create model_name -f ./Modelfile` (replace `model_name` with your desired name).
This will create a unified bf16 model that Ollama can use.
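If it helps, the whole flow can be scripted. The sketch below assumes the merged bf16 model lives at `./gemma-3-4b-it-bf16` (a placeholder path; this repo may use a different layout) and uses an illustrative sampling parameter rather than tuned defaults.

```bash
# Sketch: write a minimal Modelfile pointing at the merged bf16 model,
# then build and run an Ollama model from it.
cat > Modelfile <<'EOF'
FROM ./gemma-3-4b-it-bf16
PARAMETER temperature 0.7
EOF

ollama create model_name -f ./Modelfile
ollama run model_name
```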