flemingxu committed on
Commit
a49577c
1 Parent(s): e7078fb

update llama-13b-plus-pth

README.md CHANGED
@@ -1,3 +1,158 @@
  ---
- license: bigcode-openrail-m
  ---
  ---
+ title: chinese-alpaca-plus-7b
+ emoji: 📚
+ colorFrom: gray
+ colorTo: red
+ language:
+ - zh
+ tags:
+ - chatglm
+ - pytorch
+ - zh
+ - Text2Text-Generation
+ license: "other"
+ widget:
+ - text: "为什么天空是蓝色的?"
  ---
+
+ # Chinese Alpaca Plus 7B Model
+
+ **Releasing the Chinese LLaMA & Alpaca Plus (7B) models**
+
+
+ Compared with the base versions, the Chinese LLaMA & Alpaca Plus (7B) release improves on the following points:
+
+ - Training data was further expanded: the LLaMA corpus grew to 120 GB of general-domain text, and the Alpaca instruction data grew to 4M samples (with a particular emphasis on added STEM-related data)
+ - Alpaca was trained with a larger LoRA rank, giving a lower validation loss than the original version
+ - Evaluations show Alpaca-Plus-7B outperforms the base Alpaca-7B and approaches or exceeds the 13B version on some tasks
+ - In this round of comparison, 7B scored 65.3, 13B scored 70.9, and Plus-7B scored 75.3; see the [evaluation results](https://github.com/ymcui/Chinese-LLaMA-Alpaca/blob/main/examples/README.md) for details
+
+ This model is the weight merge of the original `LLaMA-7B` with the `Chinese LLaMA LoRA` and `Chinese Alpaca LoRA`; it can be used directly or trained further.
+
+
+ Test case:
+
+ |input_text|predict|
+ |:-- |:--- |
+ |为什么天空是蓝色的?|天空是蓝色的,是因为大气层中的气体分子会散射太阳光中的蓝色光,使得我们看到的天空是蓝色的。|
+
+
+ ## Usage
+
+ This model is released through the [textgen](https://github.com/shibing624/textgen) project, which supports LLaMA models; it can be called as follows:
+
+ Install package:
+ ```shell
+ pip install -U textgen
+ ```
+
+ ```python
+ from textgen import LlamaModel
+ model = LlamaModel("llama", "shibing624/chinese-alpaca-plus-7b-hf")
+ r = model.predict(["用一句话描述地球为什么是独一无二的。"])
+ print(r)  # ['地球是独一无二的,因为它拥有独特的大气层、水循环、生物多样性以及其他自然资源,这些都使它成为一个独特的生命支持系统。']
+ ```
+
+ ## Usage (HuggingFace Transformers)
+ Without [textgen](https://github.com/shibing624/textgen), you can use the model like this:
+
+ First, pass your input through the transformer model, then collect the generated sentence.
+
+ Install packages (the version constraint must be quoted so the shell does not treat `>` as a redirection):
+ ```shell
+ pip install sentencepiece
+ pip install "transformers>=4.28.0"
+ ```
+
+ ```python
+ import torch
+ from transformers import LlamaTokenizer, LlamaForCausalLM
+
+
+ def generate_prompt(text):
+     return f"""Below is an instruction that describes a task. Write a response that appropriately completes the request.
+
+ ### Instruction:
+ {text}
+
+ ### Response:"""
+
+
+ tokenizer = LlamaTokenizer.from_pretrained('shibing624/chinese-alpaca-plus-7b-hf')
+ model = LlamaForCausalLM.from_pretrained('shibing624/chinese-alpaca-plus-7b-hf').half().cuda()
+ model.eval()
+
+ text = '为什么天空是蓝色的?'
+ prompt = generate_prompt(text)
+ input_ids = tokenizer.encode(prompt, return_tensors='pt').to('cuda')
+
+ with torch.no_grad():
+     output_ids = model.generate(
+         input_ids=input_ids,
+         max_new_tokens=128,
+         temperature=1,
+         top_k=40,
+         top_p=0.9,
+         repetition_penalty=1.15,
+     )
+ output = tokenizer.decode(output_ids[0], skip_special_tokens=True)
+ print(output.replace(text, '').strip())
+ ```
+
+
+ Output:
+ ```shell
+ 为什么天空是蓝色的?
+ 天空是蓝色的,是因为大气层中的气体分子会散射太阳光中的蓝色光,使得我们看到的天空是蓝色的。
+ ```
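The `output.replace(text, '').strip()` step above only works when the model echoes the instruction verbatim. A more robust sketch splits on the prompt template's `### Response:` marker instead; the `extract_response` helper here is ours for illustration, not part of textgen or transformers:

```python
def extract_response(decoded: str) -> str:
    """Return only the text after the prompt template's '### Response:' marker."""
    marker = "### Response:"
    # Everything after the first marker is the model's answer;
    # falls back to the whole string if the marker is missing.
    return decoded.split(marker, 1)[-1].strip()

decoded = """Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
为什么天空是蓝色的?

### Response:天空是蓝色的,是因为大气层中的气体分子会散射太阳光中的蓝色光。"""
print(extract_response(decoded))  # 天空是蓝色的,是因为大气层中的气体分子会散射太阳光中的蓝色光。
```

Unlike the `replace` approach, this stays correct even if the decoded text rephrases or repeats parts of the instruction.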
+
+ ## Model Provenance
+
+ This model was produced by manually merging multiple LoRA weights following [Multi-LoRA weight merging (for Chinese-Alpaca-Plus)](https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki/%E6%89%8B%E5%8A%A8%E6%A8%A1%E5%9E%8B%E5%90%88%E5%B9%B6%E4%B8%8E%E8%BD%AC%E6%8D%A2#%E5%A4%9Alora%E6%9D%83%E9%87%8D%E5%90%88%E5%B9%B6%E9%80%82%E7%94%A8%E4%BA%8Echinese-alpaca-plus-). Specifically, the [decapoda-research/llama-7b-hf](https://huggingface.co/decapoda-research/llama-7b-hf) base model was merged with the Chinese-LLaMA-Plus-LoRA and Chinese-Alpaca-Plus-LoRA weights, and the result was converted to HuggingFace-format weights (.bin files).
+
+ The merged weights are released here so the model can be used directly, saving the electricity and carbon footprint of re-merging.
+
+
+ Model files:
+ ```
+ chinese-alpaca-plus-7b-hf
+     config.json
+     generation_config.json
+     pytorch_model-00001-of-00002.bin
+     pytorch_model-00002-of-00002.bin
+     pytorch_model.bin.index.json
+     special_tokens_map.json
+     tokenizer.json
+     tokenizer.model
+     tokenizer_config.json
+ ```
+
+ Hardware requirement: 14 GB of GPU memory
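The 14 GB figure matches a back-of-the-envelope estimate for holding the fp16 weights alone, assuming roughly 7B parameters (an assumption: the true count is slightly higher because of the expanded Chinese vocabulary, and inference needs extra room for activations and the KV cache):

```python
# Rough VRAM needed just to hold the weights of a ~7B-parameter model in float16.
params = 7_000_000_000          # approximate parameter count (assumption)
bytes_per_param = 2             # float16 = 2 bytes
total_bytes = params * bytes_per_param

print(f"{total_bytes / 1e9:.1f} GB")     # 14.0 GB (decimal gigabytes)
print(f"{total_bytes / 2**30:.1f} GiB")  # 13.0 GiB
```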
+
+ ### Training Datasets
+
+ 1. 500k Chinese ChatGPT-style instruction Belle dataset: [BelleGroup/train_0.5M_CN](https://huggingface.co/datasets/BelleGroup/train_0.5M_CN)
+ 2. 1M Chinese ChatGPT-style instruction Belle dataset: [BelleGroup/train_1M_CN](https://huggingface.co/datasets/BelleGroup/train_1M_CN)
+ 3. 50k English ChatGPT-style instruction Alpaca dataset: [50k English Stanford Alpaca dataset](https://github.com/tatsu-lab/stanford_alpaca#data-release)
+ 4. 20k Chinese ChatGPT-style instruction Alpaca dataset: [shibing624/alpaca-zh](https://huggingface.co/datasets/shibing624/alpaca-zh)
+ 5. 690k Chinese instruction Guanaco dataset (500k Belle + 190k Guanaco): [Chinese-Vicuna/guanaco_belle_merge_v1.0](https://huggingface.co/datasets/Chinese-Vicuna/guanaco_belle_merge_v1.0)
+
+
+ To train a LLaMA model yourself, see [https://github.com/shibing624/textgen](https://github.com/shibing624/textgen)
+
+
+ ## Citation
+
+ ```bibtex
+ @software{textgen,
+   author = {Xu Ming},
+   title = {textgen: Implementation of language model finetune},
+   year = {2023},
+   url = {https://github.com/shibing624/textgen},
+ }
+ ```
+
+
+ ## Reference
+ - https://github.com/ymcui/Chinese-LLaMA-Alpaca
consolidated.00.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f23822d41a85b18b21cce3b6dfe3097c1e41476c1ab12530296d08a53d107eb9
+ size 13200250915
consolidated.01.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a401805d62663f33eac48865078fa5f00432c2de973be632dfe3169e40a9b2de
+ size 13200250915
params.json ADDED
@@ -0,0 +1 @@
+ {"dim": 5120, "multiple_of": 256, "n_heads": 40, "n_layers": 40, "norm_eps": 1e-06, "vocab_size": -1}
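As a sanity check, the dimensions in this params.json correspond to the 13B model named in the commit message rather than the 7B model of the card above. Assuming LLaMA's standard feed-forward sizing rule (hidden size 2/3 · 4d, rounded up to `multiple_of`), the transformer blocks alone come to about 12.7B parameters, which with embeddings lands at the 13B scale and is consistent with the two ~13.2 GB fp16 shards added here:

```python
# Estimate parameter count from params.json (LLaMA-style architecture).
dim, n_layers, multiple_of = 5120, 40, 256

# LLaMA FFN sizing rule: hidden = 2/3 * 4 * dim, rounded up to a multiple of multiple_of.
hidden = int(2 * (4 * dim) / 3)
hidden = multiple_of * ((hidden + multiple_of - 1) // multiple_of)

attn_params = 4 * dim * dim      # q, k, v, o projections per layer
ffn_params = 3 * dim * hidden    # gate, up, down projections per layer
total = n_layers * (attn_params + ffn_params)

print(hidden)                    # 13824
print(f"{total / 1e9:.1f}B")     # 12.7B transformer-block parameters
```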
special_tokens_map.json ADDED
@@ -0,0 +1,6 @@
+ {
+   "bos_token": "<s>",
+   "eos_token": "</s>",
+   "pad_token": "[PAD]",
+   "unk_token": "<unk>"
+ }
tokenizer.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2d967e855b1213a439df6c8ce2791f869c84b4f3b6cfacf22b86440b8192a2f8
+ size 757972
tokenizer_config.json ADDED
@@ -0,0 +1,33 @@
+ {
+   "add_bos_token": true,
+   "add_eos_token": false,
+   "bos_token": {
+     "__type": "AddedToken",
+     "content": "<s>",
+     "lstrip": false,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   },
+   "clean_up_tokenization_spaces": false,
+   "eos_token": {
+     "__type": "AddedToken",
+     "content": "</s>",
+     "lstrip": false,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   },
+   "model_max_length": 1000000000000000019884624838656,
+   "pad_token": null,
+   "sp_model_kwargs": {},
+   "tokenizer_class": "LlamaTokenizer",
+   "unk_token": {
+     "__type": "AddedToken",
+     "content": "<unk>",
+     "lstrip": false,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   }
+ }