# Alpaca LoRA 7B

This repository contains a LLaMA-7B model fine-tuned on the cleaned version of the [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca) dataset.

⚠️ **I used [LLaMA-7B-hf](https://huggingface.co/decapoda-research/llama-7b-hf) as the base model, so this model is for research purposes only (see the [license](https://huggingface.co/decapoda-research/llama-7b-hf/blob/main/LICENSE)).**

# Usage

## Using the model

```python
import torch
from transformers import LlamaTokenizer, LlamaForCausalLM

tokenizer = LlamaTokenizer.from_pretrained("chainyo/alpaca-lora-7b")
model = LlamaForCausalLM.from_pretrained(
    "chainyo/alpaca-lora-7b",
    load_in_8bit=True,
    torch_dtype=torch.float16,
    device_map="auto",
)
model.eval()

# torch.compile is only available in PyTorch 2.x
if torch.__version__ >= "2":
    model = torch.compile(model)
```
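Models fine-tuned on the Alpaca dataset expect prompts in the Stanford Alpaca instruction template. A minimal sketch of a prompt builder (the template wording follows the Stanford Alpaca repository; the function name `generate_prompt` is illustrative, not part of this repo):

```python
def generate_prompt(instruction: str, input_text: str = "") -> str:
    """Build an Alpaca-style prompt, with or without an input context."""
    if input_text:
        return (
            "Below is an instruction that describes a task, paired with an input "
            "that provides further context. Write a response that appropriately "
            "completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{input_text}\n\n"
            "### Response:\n"
        )
    return (
        "Below is an instruction that describes a task. Write a response that "
        "appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )

# The resulting string can then be tokenized and passed to model.generate;
# the model's answer follows the "### Response:" marker.
prompt = generate_prompt("List three fruits.")
```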