
Alpaca LoRA 7B

This repository contains a LLaMA-7B model fine-tuned on the cleaned version of the Stanford Alpaca dataset.

I used LLaMA-7B-hf as the base model.

Usage

Using the model

import torch
from transformers import LlamaTokenizer, LlamaForCausalLM

tokenizer = LlamaTokenizer.from_pretrained("chainyo/alpaca-lora-7b")
model = LlamaForCausalLM.from_pretrained(
    "chainyo/alpaca-lora-7b",
    load_in_8bit=True,       # requires the bitsandbytes package
    torch_dtype=torch.float16,
    device_map="auto",       # requires the accelerate package
)
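Once the model is loaded, prompts should follow the instruction format used by the Stanford Alpaca project, since that is the format the model was fine-tuned on. The template and helper below are a minimal sketch assuming the standard Alpaca format (without an input field); the function name `build_prompt` is illustrative, not part of this repository.

```python
# Standard Stanford Alpaca prompt template (instruction-only variant).
PROMPT_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

def build_prompt(instruction: str) -> str:
    """Wrap a user instruction in the Alpaca prompt format."""
    return PROMPT_TEMPLATE.format(instruction=instruction)
```

The resulting string can then be tokenized with `tokenizer(prompt, return_tensors="pt")` and passed to `model.generate`; the model's answer is the text it produces after the `### Response:` marker.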