# Model Card for chinese-llama-alpaca-plus-lora-7b

This repo contains the tokenizer, merged Chinese-Alpaca weights, and configs for Chinese-LLaMA-Alpaca.
The merged weights for chinese-llama-alpaca-plus-lora-7b can be loaded directly:

```python
from transformers import LlamaConfig, LlamaForCausalLM, LlamaTokenizer

model_name_or_path = 'we1kkk/chinese-llama-alpaca-plus-lora-7b'

# Load the config and tokenizer shipped with the merged checkpoint.
# trust_remote_code is not needed for the native Llama classes.
config = LlamaConfig.from_pretrained(model_name_or_path)
tokenizer = LlamaTokenizer.from_pretrained(model_name_or_path)

# Load the merged weights in fp16 and move the model to GPU
model = LlamaForCausalLM.from_pretrained(
    model_name_or_path,
    config=config,
).half().cuda()
```
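Once the model and tokenizer are loaded, inference goes through the standard `generate` API. The sketch below is a minimal example: the instruction template is assumed to follow the Stanford Alpaca format used by the Chinese-LLaMA-Alpaca project, and the sample instruction and generation parameters are illustrative only; check the upstream repository for the recommended prompt and settings.

```python
import torch

# Assumed Alpaca-style instruction template; verify against the upstream repo.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
).format(instruction="请介绍一下北京的著名景点。")  # "Please introduce famous sights in Beijing."

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=256,
        do_sample=True,
        temperature=0.7,
        top_p=0.9,
    )

# Decode only the newly generated tokens, skipping the prompt
response = tokenizer.decode(
    output_ids[0][inputs["input_ids"].shape[1]:],
    skip_special_tokens=True,
)
print(response)
```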

 

## Citation

This model is based on the Chinese-LLaMA-Alpaca project: https://github.com/ymcui/Chinese-LLaMA-Alpaca.
Instructions for using the weights can be found in that repository.