πŸͺž Mirror Dolly (GGUF) – Model Card

🧠 Summary

Mirror Dolly is an assistant-style language model built on top of dipeshmajithia/MirrorCode. It was fine-tuned with LoRA for 1000 iterations on the Dolly 15k dataset, then merged and converted to GGUF for local inference.

Mirror Dolly is designed for structured and emotionally aware assistant conversations and supports lightweight deployment with llama.cpp, ollama, or text-generation-webui.


πŸ“¦ Model Overview

  • Base model: dipeshmajithia/MirrorCode
  • LoRA fine-tuning:
    • Dataset: Dolly 15k
    • Iterations: 1000
    • Layers: 4
    • Rank: 8
  • Merged and Converted: To GGUF via transformers + convert_hf_to_gguf.py
  • Quantization options: f16, q8_0, q4_0
  • Use cases:
    • Personal assistant
    • Structured explanations
    • Lightweight offline inference

πŸ›  How to Use

▢️ With llama.cpp

./main -m mirror_dolly.gguf -p "Who are you?"
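
(On newer llama.cpp builds the CLI binary is named llama-cli rather than main.)

▢️ With ollama

A quick way to run the GGUF file in ollama is a minimal Modelfile pointing at the local weights; the model name mirror-dolly below is just an example:

# Create a one-line Modelfile referencing the local GGUF weights
echo 'FROM ./mirror_dolly.gguf' > Modelfile

# Register the model with ollama, then chat with it
ollama create mirror-dolly -f Modelfile
ollama run mirror-dolly "Who are you?"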