---
license: apache-2.0
datasets:
- mlabonne/guanaco-llama2-1k
pipeline_tag: text-generation
---
# Miniguanaco
This is a Llama 2-7b model, QLoRA fine-tuned in 4-bit precision on the [mlabonne/guanaco-llama2-1k](https://huggingface.co/datasets/mlabonne/guanaco-llama2-1k) dataset, a subset of [timdettmers/openassistant-guanaco](https://huggingface.co/datasets/timdettmers/openassistant-guanaco).
It was trained in a Google Colab notebook with a T4 GPU and high RAM. It is mainly designed for educational purposes, not for inference.
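
For context, QLoRA fine-tuning keeps the base model frozen in 4-bit precision and trains small LoRA adapters on top of it. The sketch below shows how a Llama 2 7B base model can be loaded in 4-bit with `bitsandbytes`; the quantization settings (NF4, float16 compute) and the base checkpoint name are common defaults assumed here, not the exact training configuration.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Illustrative 4-bit (QLoRA-style) quantization config -- assumed settings,
# not necessarily the ones used for this model
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

# Load the base model in 4-bit precision (requires the bitsandbytes package and a GPU)
base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",  # assumed base checkpoint
    quantization_config=bnb_config,
    device_map="auto",
)
```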
You can easily import it using the `AutoModelForCausalLM` class from `transformers`:
```python
from transformers import AutoModelForCausalLM

# AutoModelForCausalLM is loaded via from_pretrained(), not called directly
model = AutoModelForCausalLM.from_pretrained("mlabonne/llama-2-7b-miniguanaco")
```
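
As a minimal usage sketch, the model can be run through a `transformers` text-generation pipeline. The `[INST]` wrapper below follows the Llama 2 chat format used by the guanaco-llama2-1k dataset; the prompt text and generation parameters are illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_name = "mlabonne/llama-2-7b-miniguanaco"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Wrap the prompt in the Llama 2 instruction format used during fine-tuning
prompt = "What is a large language model?"
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, max_length=200)
result = pipe(f"<s>[INST] {prompt} [/INST]")
print(result[0]["generated_text"])
```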