---
license: apache-2.0
datasets:
- shibing624/alpaca-zh
- yahma/alpaca-cleaned
language:
- zh
tags:
- LoRA
- LLaMA
- Alpaca
- PEFT
- int8
---

# Model Card for llama-7b-alpaca-zh-20k

A LoRA adapter for LLaMA-7B, fine-tuned with PEFT on Chinese Alpaca instruction data (shibing624/alpaca-zh, yahma/alpaca-cleaned) and intended to be loaded on top of the int8-quantized base model.

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

```python
import torch
from peft import PeftModel
from transformers import GenerationConfig, LlamaForCausalLM, LlamaTokenizer

tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-7b-hf")

# Load the base LLaMA-7B weights in 8-bit (requires the bitsandbytes package).
model = LlamaForCausalLM.from_pretrained(
    "decapoda-research/llama-7b-hf",
    load_in_8bit=True,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Apply the LoRA adapter weights on top of the base model.
model = PeftModel.from_pretrained(
    model,
    "DataAgent/llama-7b-alpaca-zh-120k",
    torch_dtype=torch.float16,
)
```
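
Once the adapter is loaded, inference goes through the usual `generate` API. The sketch below is a minimal example, not part of the original card: it assumes the standard Stanford Alpaca prompt template (the adapter was trained on Alpaca-format data, but the exact template is not documented here) and uses illustrative decoding parameters.

```python
# Example instruction in Chinese ("Please introduce China's Four Great Inventions.").
instruction = "请介绍一下中国的四大发明。"

# Standard Alpaca prompt template -- an assumption, since the card does not
# document the template the adapter was trained with.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    f"### Instruction:\n{instruction}\n\n### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Illustrative decoding settings; tune for your use case.
generation_config = GenerationConfig(num_beams=4, max_new_tokens=256)

with torch.no_grad():
    output_ids = model.generate(
        input_ids=inputs["input_ids"],
        generation_config=generation_config,
    )

# Decode and strip the prompt so only the model's response remains.
text = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(text.split("### Response:")[-1].strip())
```

Beam search gives stable instruction-following output; for more varied responses, pass `do_sample=True` together with `temperature` and `top_p` in the `GenerationConfig` instead.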