---
license: gpl-3.0
datasets:
- BelleGroup/generated_train_0.5M_CN
- JosephusCheung/GuanacoDataset
language:
- zh
tags:
- alpaca
- Chinese-Vicuna
- llama
---

This is a Chinese instruction-tuning LoRA checkpoint based on llama-7B from [this repo's](https://github.com/Facico/Chinese-Vicuna) work. Specifically, this is the 4-bit version trained with QLoRA.

You can use it like this:

```python
import torch
from transformers import LlamaForCausalLM
from peft import PeftModel

# Load the base llama-7B model in 8-bit to reduce memory usage
model = LlamaForCausalLM.from_pretrained(
    "decapoda-research/llama-7b-hf",
    load_in_8bit=True,
    torch_dtype=torch.float16,
    device_map="auto",
)
# Attach the LoRA adapter weights on top of the base model
model = PeftModel.from_pretrained(
    model,
    LORA_PATH,  # specific checkpoint path from "Chinese-Vicuna/Chinese-Vicuna-lora-7b-belle-and-guanaco"
    torch_dtype=torch.float16,
    device_map={'': 0},
)
```
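
Since this is an instruction-tuned checkpoint, prompts should be wrapped in an instruction template before being passed to the tokenizer and `model.generate`. The helper below is a sketch assuming the standard alpaca-style template; the exact wording used for training should be taken from the Chinese-Vicuna repo.

```python
# Hypothetical helper assuming the standard alpaca-style template;
# verify the exact template against the Chinese-Vicuna repo.
def generate_prompt(instruction, input_text=None):
    if input_text:
        return (
            "Below is an instruction that describes a task, paired with an input "
            "that provides further context. Write a response that appropriately "
            "completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{input_text}\n\n"
            "### Response:"
        )
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:"
    )

prompt = generate_prompt("用中文介绍一下LoRA")
```

The resulting `prompt` string is then tokenized and fed to `model.generate`; the model's answer follows the final `### Response:` marker.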