---
license: apache-2.0
datasets:
- timdettmers/openassistant-guanaco
pipeline_tag: text-generation
---
This is a Llama-2-7b model fine-tuned with QLoRA (4-bit precision) on the mlabonne/guanaco-llama2-1k dataset.
It was trained in a Google Colab notebook with a T4 GPU and high RAM. It is mainly intended for educational purposes rather than production inference.
You can easily load it using the `AutoModelForCausalLM` class from `transformers`:

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("mlabonne/llama-2-7b-miniguanaco")
```
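Since the guanaco-llama2-1k dataset reformats the original Guanaco data into the Llama 2 instruction template (an assumption based on the dataset name; `build_prompt` below is a hypothetical helper, not part of `transformers`), a minimal sketch of preparing a prompt for generation:

```python
# Hypothetical helper: wrap a user instruction in the Llama 2 [INST] template,
# which the guanaco-llama2-1k dataset is assumed to follow.
def build_prompt(instruction: str) -> str:
    return f"<s>[INST] {instruction} [/INST]"

prompt = build_prompt("What is a large language model?")
print(prompt)  # <s>[INST] What is a large language model? [/INST]
```

The resulting string can then be tokenized and passed to `model.generate`, or fed to a `transformers` text-generation `pipeline`.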