# Alpaca LoRa 7B

This repository contains a LLaMA-7B model fine-tuned on the cleaned version of the [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca) dataset.

⚠️ **I used [LLaMA-7B-hf](https://huggingface.co/decapoda-research/llama-7b-hf) as the base model, so this model is for research purposes only (see the [license](https://huggingface.co/decapoda-research/llama-7b-hf/blob/main/LICENSE)).**

# Usage

## Using the model

```python
import torch
from transformers import LlamaTokenizer, LlamaForCausalLM

tokenizer = LlamaTokenizer.from_pretrained("chainyo/alpaca-lora-7b")
model = LlamaForCausalLM.from_pretrained(
    "chainyo/alpaca-lora-7b",
    load_in_8bit=True,
    torch_dtype=torch.float16,
    device_map="auto",
)

model.eval()
if torch.__version__ >= "2":
    model = torch.compile(model)
```
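Alpaca fine-tunes expect inputs formatted with the Stanford Alpaca prompt template. A minimal sketch of a prompt-building helper, assuming the standard template from the Stanford Alpaca repository (the `generate_prompt` name is illustrative, not part of this repo):

```python
def generate_prompt(instruction: str, input_text: str = "") -> str:
    """Build a prompt in the Stanford Alpaca template.

    Uses the with-input variant when ``input_text`` is provided,
    otherwise the instruction-only variant.
    """
    if input_text:
        return (
            "Below is an instruction that describes a task, paired with an input "
            "that provides further context. Write a response that appropriately "
            "completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{input_text}\n\n"
            "### Response:\n"
        )
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )
```

The resulting string can then be tokenized and passed to `model.generate(...)`; the model's answer is the text that follows the final `### Response:` marker.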