# Mirror Dolly (GGUF) Model Card

## Summary

Mirror Dolly is a fine-tuned, assistant-style language model built on top of `dipeshmajithia/MirrorCode`. It was fine-tuned for 1000 iterations on the Dolly 15k dataset using LoRA, then merged and converted to GGUF for local inference.

Mirror Dolly is designed for structured and emotionally aware assistant conversations and supports lightweight deployment with `llama.cpp`, `ollama`, or `text-generation-webui`.
## Model Overview

- Base model: `dipeshmajithia/MirrorCode`
- LoRA fine-tuning:
  - Dataset: Dolly 15k
  - Iterations: 1000
  - Layers: 4
  - Rank: 8
- Merged and converted: to GGUF via `transformers` + `convert_hf_to_gguf.py`
- Quantization options: `f16`, `q8_0`, `q4_0`
- Use cases:
  - Personal assistant
  - Structured explanations
  - Lightweight offline inference
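The merge-and-convert step described above can be sketched as follows. This is an illustrative workflow, not the exact commands used for this release: the local paths, output filenames, and the assumption that `llama.cpp` is cloned alongside the merged checkpoint are all hypothetical.

```shell
# Convert the merged Hugging Face checkpoint to GGUF at f16 precision.
# Assumes the merged (LoRA-applied) model lives in ./mirror_dolly_merged
# and llama.cpp is cloned in ./llama.cpp.
python llama.cpp/convert_hf_to_gguf.py ./mirror_dolly_merged \
  --outfile mirror_dolly-f16.gguf \
  --outtype f16

# Optionally quantize the f16 file to q8_0 or q4_0 for smaller
# downloads and faster CPU inference.
./llama.cpp/build/bin/llama-quantize mirror_dolly-f16.gguf mirror_dolly-q4_0.gguf q4_0
```

The `q4_0` file trades some quality for roughly a quarter of the f16 size; `q8_0` is a middle ground that stays close to f16 quality.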
## How to Use

### With llama.cpp

```bash
./main -m mirror_dolly.gguf -p "Who are you?"
```

(Recent llama.cpp builds name this binary `llama-cli` instead of `main`.)
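Since ollama is listed as a supported runtime, the GGUF file can also be imported there via a Modelfile. This is a minimal sketch under assumptions: the GGUF path, the model name `mirror-dolly`, and the temperature value are all illustrative, and the base model's actual chat template should be checked before relying on defaults.

```shell
# Write a minimal Modelfile pointing at the local GGUF (path is an assumption).
cat > Modelfile <<'EOF'
FROM ./mirror_dolly.gguf
PARAMETER temperature 0.7
EOF

# Register the model with ollama under a local name, then chat with it.
ollama create mirror-dolly -f Modelfile
ollama run mirror-dolly "Who are you?"
```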