---
license: bigscience-bloom-rail-1.0
language:
- zh
---
# Demo Link
1. 🔗[http://101.68.79.42:7861/](http://101.68.79.42:7861/)

## 🚀 Updates
| Model link                                                                                                                   | Training data size                | Version | Notes                                        |
|------------------------------------------------------------------------------------------------------------------------------|-----------------------------------|---------|----------------------------------------------|
| [https://huggingface.co/yuanzhoulvpi/chinese_bloom_7b_chat](https://huggingface.co/yuanzhoulvpi/chinese_bloom_7b_chat)       | 150k Chinese instruction examples | v1      |                                              |
| [https://huggingface.co/yuanzhoulvpi/chinese_bloom_7b_chat_v2](https://huggingface.co/yuanzhoulvpi/chinese_bloom_7b_chat_v2) | 1.5M Chinese instruction examples | v2      | Evaluated: noticeably better than v1         |
| [https://huggingface.co/yuanzhoulvpi/chinese_bloom_7b_chat_v3](https://huggingface.co/yuanzhoulvpi/chinese_bloom_7b_chat_v3) | 4.2M Chinese instruction examples | v3      | Not yet evaluated; feedback welcome          |

## Introduction
1. ✅ This model is a supervised fine-tune (SFT) of `bloom-7b`. This is the V2 release, trained on 1.5M supervised instruction examples, and it performs noticeably better than V1.
2. 🚀 The full training and inference code is available at [https://github.com/yuanzhoulvpi2017/zero_nlp/tree/main/chinese_bloom](https://github.com/yuanzhoulvpi2017/zero_nlp/tree/main/chinese_bloom)


## How to Use

```python
from typing import Optional

from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "yuanzhoulvpi/chinese_bloom_7b_chat_v2"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# Load in fp16 and move to the GPU (the 7B model needs roughly 14 GB of VRAM in fp16).
model = AutoModelForCausalLM.from_pretrained(checkpoint).half().cuda()

# Alpaca-style prompt templates used during SFT.
PROMPT_DICT = {
    "prompt_input": (
        "Below is an instruction that describes a task, paired with an input that provides further context. "
        "Write a response that appropriately completes the request.\n\n"
        "### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:"
    ),
    "prompt_no_input": (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        "### Instruction:\n{instruction}\n\n### Response:"
    ),
}

def generate_input(instruction: Optional[str] = None, input_str: Optional[str] = None) -> str:
    """Build the full prompt, picking the template based on whether extra input is provided."""
    if input_str is None:
        return PROMPT_DICT['prompt_no_input'].format_map({'instruction': instruction})
    return PROMPT_DICT['prompt_input'].format_map({'instruction': instruction, 'input': input_str})


for i in range(5):
    print("*" * 80)

    # Move the prompt tensor to the same device as the model.
    inputs = tokenizer.encode(generate_input(instruction="你是谁"), return_tensors="pt").to(model.device)
    # Beam search decoding; with do_sample=False, top_k and temperature are ignored,
    # and penalty_alpha only takes effect in contrastive search (num_beams=1).
    outputs = model.generate(inputs,
                             num_beams=3,
                             max_new_tokens=512,
                             do_sample=False,
                             top_k=10,
                             penalty_alpha=0.6,
                             temperature=0.8,
                             repetition_penalty=1.2)
    print(tokenizer.decode(outputs[0]))
```
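
For instructions that come with supporting context, `generate_input` falls back to the `prompt_input` template. A minimal sketch reusing the `model`, `tokenizer`, and `generate_input` defined above (the instruction and input strings here are only illustrative):

```python
# Instruction plus supporting input: generate_input picks the "prompt_input" template.
prompt = generate_input(instruction="把下面的句子翻译成英文", input_str="今天天气很好")
inputs = tokenizer.encode(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, num_beams=3, max_new_tokens=512, repetition_penalty=1.2)
# Decode only the newly generated tokens, skipping the echoed prompt and special tokens.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Slicing the output at `inputs.shape[-1]` strips the prompt from the decoded text, which is convenient when you only want the model's answer.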