---
language:
  - en
license: apache-2.0
tags:
  - text-generation-inference
  - transformers
  - unsloth
  - mistral
  - trl
  - LORA
  - Lora Adapter
  - PEFT
base_model: unsloth/mistral-7b-bnb-4bit
datasets:
  - liyucheng/ShareGPT90K
---

# Uploaded model

- **Developed by:** pacozaa
- **License:** apache-2.0
- **Finetuned from model:** unsloth/mistral-7b-bnb-4bit
- **LoRA adapter:** trained on liyucheng/ShareGPT90K. The number of training steps grows over time, since I am fine-tuning incrementally in Colab.

This Mistral model was trained 2x faster with Unsloth and Hugging Face's TRL library.
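Since this repository contains a LoRA adapter rather than full model weights, it needs to be applied on top of the base model at load time. A minimal sketch with 🤗 Transformers and PEFT is below; the adapter repo ID `pacozaa/mistral-sharegpt90k` is an assumption based on this card's name, so substitute the actual repository path.

```python
# Minimal usage sketch: load the 4-bit base model, then attach the LoRA
# adapter with PEFT. Requires the transformers, peft, and bitsandbytes
# packages and a CUDA-capable GPU for the 4-bit base weights.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/mistral-7b-bnb-4bit"
adapter_id = "pacozaa/mistral-sharegpt90k"  # hypothetical repo ID -- replace with the real one

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")

# PeftModel.from_pretrained merges the adapter config on top of the base model.
model = PeftModel.from_pretrained(model, adapter_id)

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Keeping the adapter separate (instead of calling `model.merge_and_unload()`) lets you swap adapters over the same base weights without re-downloading the 7B model.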