# mistral-sharegpt90k
---
language:
  - en
license: apache-2.0
tags:
  - text-generation-inference
  - transformers
  - unsloth
  - mistral
  - trl
  - LoRA
  - LoRA Adapter
  - PEFT
  - ollama
base_model: unsloth/mistral-7b-bnb-4bit
datasets:
  - liyucheng/ShareGPT90K
---

# Uploaded model

- **Developed by:** pacozaa
- **License:** apache-2.0
- **Finetuned from model:** unsloth/mistral-7b-bnb-4bit
- **LoRA Adapter:** trained on liyucheng/ShareGPT90K. The step count grows over time as I continue fine-tuning in Colab; training is currently at step 550.

# Ollama
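One way to try this adapter locally is through an Ollama Modelfile, which can apply a LoRA adapter on top of a base model via the `ADAPTER` instruction. The sketch below is a minimal example, not a tested recipe for this repository: the adapter path, base model tag, and prompt template are assumptions you should adjust to your setup.

```
# Hypothetical Modelfile sketch — paths and tags are placeholders, not part of this repo
FROM mistral:7b

# Path to a local copy of this LoRA adapter (assumed location)
ADAPTER ./mistral-sharegpt90k-adapter

# Mistral instruct-style template (an assumption; match it to the fine-tune's prompt format)
TEMPLATE """[INST] {{ .Prompt }} [/INST]"""
```

You would then build and run the model with `ollama create mistral-sharegpt90k -f Modelfile` followed by `ollama run mistral-sharegpt90k`.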

This Mistral model was trained 2x faster with Unsloth and Hugging Face's TRL library.