
Uploaded model

  • Developed by: pacozaa
  • License: apache-2.0
  • Finetuned from model: unsloth/mistral-7b-bnb-4bit
  • This is a LoRA adapter trained on liyucheng/ShareGPT90K. The training step count grows over time, since fine-tuning runs in Colab; it is currently at step 550.

Ollama

This Mistral model was trained 2x faster with Unsloth and Hugging Face's TRL library.
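Since the section above is titled Ollama and the page references a GGUF export, here is a minimal sketch of running such an export locally with Ollama. The GGUF file name and the model tag below are hypothetical placeholders, not files shipped with this repository.

```
# Modelfile — a sketch, assuming the LoRA adapter has been merged into the
# base model and exported to a local GGUF file (hypothetical file name):
FROM ./mistral-sharegpt90k.Q4_K_M.gguf

# Register and run it with Ollama:
#   ollama create mistral-sharegpt90k -f Modelfile
#   ollama run mistral-sharegpt90k
```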

