
Quantization made by Richard Erkhov.


llama-2-7b-guanaco-instruct-sharded - GGUF

Original model description:

license: apache-2.0
datasets:
- timdettmers/openassistant-guanaco
pipeline_tag: text-generation

This model was fine-tuned in 4-bit precision with QLoRA on the timdettmers/openassistant-guanaco dataset and sharded into small checkpoint files so it can be loaded in 4-bit on a free Google Colab instance.

It can be loaded with the AutoModelForCausalLM class from transformers:

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "guardrail/llama-2-7b-guanaco-instruct-sharded"
model = AutoModelForCausalLM.from_pretrained(model_name, load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
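Once the model and tokenizer are loaded, prompts should follow the conversation format of the fine-tuning data. A minimal sketch, assuming the openassistant-guanaco dataset's "### Human:" / "### Assistant:" turn markers (verify against the dataset card); the generation call at the bottom is a hypothetical usage of the model loaded above:

```python
# Prompt helper for the Guanaco instruction format.
# Assumption: timdettmers/openassistant-guanaco marks turns with
# "### Human:" and "### Assistant:"; check the dataset card to confirm.

def build_guanaco_prompt(user_message: str) -> str:
    """Wrap a single user message in the assumed Guanaco chat template."""
    return f"### Human: {user_message}\n### Assistant:"

# Hypothetical usage with the model/tokenizer loaded above:
# inputs = tokenizer(build_guanaco_prompt("What is QLoRA?"), return_tensors="pt")
# output_ids = model.generate(**inputs, max_new_tokens=128)
# print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```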
GGUF
Model size: 6.74B params
Architecture: llama
Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
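To pick a quantization that fits your hardware, a rough file-size estimate from the parameter count helps. This is a back-of-envelope sketch only: it assumes size ≈ params × bits / 8 and ignores GGUF metadata and the mixed-precision layers real quantization schemes use, so actual files will differ somewhat:

```python
# Rough GGUF file-size estimate from the 6.74B parameter count.
# Assumption: size ≈ params * bits_per_weight / 8 bytes; real GGUF
# files include metadata and mixed-precision tensors, so this is only
# a ballpark figure for choosing a quantization level.

def estimate_gguf_size_gb(params: float, bits: int) -> float:
    """Approximate quantized file size in decimal gigabytes."""
    return params * bits / 8 / 1e9

for bits in (2, 3, 4, 5, 6, 8):
    print(f"{bits}-bit: ~{estimate_gguf_size_gb(6.74e9, bits):.2f} GB")
```

For example, the 4-bit variant comes out around 3.4 GB by this estimate, which is why it fits a free Colab instance.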
