
This model is fine-tuned in 4-bit precision using QLoRA on timdettmers/openassistant-guanaco and databricks/databricks-dolly-15k. The checkpoint is sharded so it can be loaded on a free Google Colab instance.

The model can be loaded with the `AutoModelForCausalLM` class from `transformers` (the 4-bit loading argument assumes `bitsandbytes` is installed):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "guardrail/llama-2-7b-guanaco-dolly-8bit-sharded"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    load_in_4bit=True,   # load the weights in 4-bit precision (requires bitsandbytes)
    device_map="auto",   # place layers on the available device(s) automatically
)
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
```
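Once loaded, generation follows the standard `generate` API. A minimal sketch of that call pattern, using a tiny randomly initialized GPT-2 as a stand-in so it runs without downloading the checkpoint (swap in the model and tokenizer loaded above for real use):

```python
import torch
from transformers import GPT2Config, GPT2LMHeadModel

# Stand-in model: a tiny randomly initialized GPT-2, used only to
# illustrate the generate() call without fetching the real checkpoint.
config = GPT2Config(n_layer=2, n_head=2, n_embd=64, vocab_size=100)
model = GPT2LMHeadModel(config)
model.eval()

# Stand-in for tokenizer(prompt, return_tensors="pt").input_ids
input_ids = torch.tensor([[1, 2, 3]])

with torch.no_grad():
    # Greedy decoding; with the real model you would decode the result
    # with tokenizer.decode(output_ids[0], skip_special_tokens=True)
    output_ids = model.generate(input_ids, max_new_tokens=5, do_sample=False)

print(output_ids.shape)  # (1, 8): 3 prompt tokens plus 5 generated tokens
```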

Datasets used to train guardrail/llama-2-7b-guanaco-dolly-8bit-sharded:

- timdettmers/openassistant-guanaco
- databricks/databricks-dolly-15k