Upload folder using huggingface_hub (#5)
- f4295322ed858a8fbf38688ef4f66fe4b5bc05fb7f546ae9264a472e5239f5d2 (925438ad9b1679b83aea4271a06ed446b0c9eda4)
- 8c9448225b6d39c35ae97c0ceffcf19f863386bf3080d08731fe757bf70fb1aa (02069605261f780bf0ef4e33e6228d46135ddd19)
- 54e3a9aa87dd59f13edf74bba2b3c2927cd0d30b14a35927e0492aa33f641ffa (c8fdffa0ca33833aa6a7a6e181c9ffa111e1e0b4)
- 9e84c48d007060e25931bf9b53241145366debd7948b69fa07bc26a73193e3e9 (80a9b630736f6fa8893758feebe58240a7f73247)
- a0d6ad42d56fbe1cde7c72a9d567995eec85083a6d3fb172fd68eb8c2285878e (56e9f237b30a9469ab4016e815f3e5805051d295)
- b1744ae213ec8db6a8f4096d511a09be14bfbb99e690c06d10a0e4f560319a3b (bf5e46aff9f09817a57d86d7305e28c97d136b56)
- e2f8ff30e35595fc88c47f4940b7287a57f57b3172542305244b7551db1dda8a (795bfdcc9da976a9c005b8112ff2b96a153cd1a4)
- 376c9f42d909b6013b8507b014d32b07f93a8d6b4ab080124ce95b37df49795d (f0e2110a21beac8d0c7e5424f08a406efad59922)
- README.md +77 -0
- config.json +1 -1
README.md

@@ -1,3 +1,80 @@

---
license: apache-2.0
---

# Ziya-LLaMA-7B-Reward

## Introduction

Ziya-LLaMA-7B-Reward基于Ziya-LLaMA模型,在以下偏好排序数据上进行训练:

* 自标注高质量偏好排序数据40190条
* 严格过滤的外部开源数据3600条,来源包括:OpenAssistant Conversations Dataset (OASST1)、Anthropic HH-RLHF、GPT-4-LLM和webgpt_comparisons

模型能够模拟中英双语生成的奖励环境,对LLM生成结果提供准确的奖励反馈。

Ziya-LLaMA-7B-Reward is based on the Ziya-LLaMA model and was trained on the following preference-ranking data:

* 40,190 self-labeled, high-quality preference-ranking examples
* 3,600 strictly filtered examples from external open-source datasets, including the OpenAssistant Conversations Dataset (OASST1), Anthropic HH-RLHF, GPT-4-LLM, and webgpt_comparisons

The model simulates a bilingual (Chinese and English) reward environment and provides accurate reward feedback on LLM generations.

## Usage

```python
import torch
from transformers import AutoModelForSequenceClassification, LlamaTokenizer

# Load the reward model and tokenizer; run the model in half precision on GPU
reward_model = AutoModelForSequenceClassification.from_pretrained("IDEA-CCNL/Ziya-LLaMA-7B-Reward", trust_remote_code=True)
reward_model = reward_model.eval().half().cuda()
tokenizer = LlamaTokenizer.from_pretrained("IDEA-CCNL/Ziya-LLaMA-7B-Reward", add_eos_token=True)

prefix_user = "Human:"
prefix_bot = "\n\nAssistant:"
query = "列举一种空气污染。"
response = "一种常见的空气污染源是化石燃料的燃烧产生的尾气排放,包括来自汽车、卡车、飞机、火车和工业厂房的废气排放。这会导致大气中的二氧化硫、氮氧化物、一氧化碳、臭氧和颗粒物(例如灰尘和烟雾)等污染物含量增加,对人类健康和环境造成不利影响。"

# Score a single query-response pair
text = prefix_user + query + prefix_bot + response
batch = tokenizer(text, return_tensors="pt", padding=True, truncation=True, max_length=1024)
with torch.no_grad():
    reward = reward_model(batch['input_ids'].cuda(), attention_mask=batch['attention_mask'].cuda())
print(reward.item())
# reward: 0.76
```
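
The reward environment is bilingual, so the same scoring pattern applies to English prompts. A minimal sketch, reusing the `reward_model`, `tokenizer`, and prefixes loaded above; the English query and response below are made-up illustrations rather than examples from the model card, and no reference reward value is implied:

```python
# Hypothetical English query-response pair, scored with the objects loaded above
query_en = "Name one kind of air pollution."
response_en = "One common kind of air pollution is exhaust from burning fossil fuels in cars, trucks, and power plants, which releases sulfur dioxide, nitrogen oxides, and particulate matter."

text_en = prefix_user + query_en + prefix_bot + response_en
batch_en = tokenizer(text_en, return_tensors="pt", padding=True, truncation=True, max_length=1024)
with torch.no_grad():
    reward_en = reward_model(batch_en['input_ids'].cuda(), attention_mask=batch_en['attention_mask'].cuda())
print(reward_en.item())  # a scalar reward; higher means the response is judged better
```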

模型可以较为准确地判断文本重复,异常中断和不符合指令要求等低质量模型生成结果,并给出较低的奖励值。
The model can fairly accurately identify low-quality generations such as text repetition, abnormal truncation, and failure to follow the instruction, and it assigns them lower reward values.

```python
prefix_user = "Human:"
prefix_bot = "\n\nAssistant:"
query = "列举一种空气污染。"
response = [
    "一种常见的空气污染源是化石燃料的燃烧产生的尾气排放,包括来自汽车、卡车、飞机、火车和工业厂房的废气排放。这会导致大气中的二氧化硫、氮氧化物、一氧化碳、臭氧和颗粒物(例如灰尘和烟雾)等污染物含量增加,对人类健康和环境造成不利影响。",
    "一种常见的空气污染源是化石燃料的燃烧产生的尾气排放,包括来自汽车、卡车、飞机、火车和工业厂房的废气排放。这会导致大气中的二氧化硫、二氧化硫、二氧化硫、二氧化硫、氮氧化物、一氧化碳、臭氧和颗粒物(例如灰尘和烟雾)等污染物含量增加,对人类健康和环境造成不利影响。",
    "一种水污染是氮氧化物污染,它是由于氮和硝化物的排放,以及由氮、硝化物和磷细菌共同作用在水体中,导致水体变成浊褐色而造成的。氮氧化物污染会影响水体中所有生物,包括鱼类和其他水生生物,影响它们的健康和生长。",
    "一种常见的空气污染源是化石燃料的燃烧产生的尾气排放,包括来自汽车、卡车、飞机、火车和工业厂房的废气排放。这会导致大气中的二氧化硫、",
]

# Score several candidate responses to the same query in one batch
text = [prefix_user + query + prefix_bot + r for r in response]
batch = tokenizer(text, return_tensors="pt", padding=True, truncation=True, max_length=1024)
with torch.no_grad():
    reward = reward_model(batch['input_ids'].cuda(), attention_mask=batch['attention_mask'].cuda())
print(reward.tolist())
# reward: [0.76, -1.36, -2.99, -1.82]
```

模型能够对比对同一指令的不同生成结果,并根据质量给出奖励值。
The model can compare different generations for the same instruction and assign reward values according to their quality.

```python
prefix_user = "Human:"
prefix_bot = "\n\nAssistant:"
query = "写一首古诗表达对老师的感激。"
response = [
    "教书育人重如金,\n诲人不倦志在心。\n恩师良师真堪许,\n学道由衷付诸君。",
    "良师益友是人才,\n谆谆谆谈甚有用。\n教诲谆谆言不尽,\n学道无穷光辈优。",
    "老师,您是我的导师,\n您是我学习的指路人。\n您不仅传授知识,\n还以身作则,做出榜样。\n您用心教诲,耐心帮助,\n让我在学海中航行。\n感谢您的教诲,\n我将铭记于心。",
    "好的,以下是一篇写一首古诗表达对老师的感激的古诗。\n老去沧桑似梦中,江山依旧是往���。尊前一笑终无日,地下相从却有年。",
]

# Rank four candidate poems for the same instruction
text = [prefix_user + query + prefix_bot + r for r in response]
batch = tokenizer(text, return_tensors="pt", padding=True, truncation=True, max_length=1024)
with torch.no_grad():
    reward = reward_model(batch['input_ids'].cuda(), attention_mask=batch['attention_mask'].cuda())
print(reward.tolist())
# reward: [2.76, 1.21, -0.20, -2.19]
```
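
The per-response rewards can be used directly for selection, for example best-of-n sampling over candidate generations. A minimal sketch, assuming `response` and `reward` from the block above are still in scope; the selection logic is illustrative and not part of the released code:

```python
import torch

# Pick the candidate with the highest reward (best-of-n selection)
best_index = int(torch.argmax(reward))
best_response = response[best_index]
print(best_index, reward.flatten()[best_index].item())
```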

## Limitation

由于基础模型能力和训练数据的限制,Ziya-LLaMA-7B-Reward的能力也存在一些不足,例如,模型难以精确判断事实性问题的对错,对于质量相近的生成文本判断不够准确等。模型对同一指令的不同生成结果对比排序较为准确,但不同类型指令之间的相互对比则较为困难,比如一个正确回答的数学问题和一个准确回复的写作问题的奖励值可能并不相近。
我们将继续训练以提升模型的能力。

Due to limitations of the base model and the training data, Ziya-LLaMA-7B-Reward still has some shortcomings. For example, it has difficulty judging the correctness of factual questions precisely, and its judgments are not accurate enough when generated texts are of similar quality. The model ranks different generations for the same instruction fairly accurately, but comparisons across different types of instructions are harder: the reward for a correctly answered math question and the reward for a well-written response to a writing task may not be on a similar scale.
We will continue training to improve the model's capabilities.

config.json

@@ -1,5 +1,5 @@
 {
-  "_name_or_path": "/
+  "_name_or_path": "IDEA-CCNL/Ziya-LLaMA-7B-Reward",
   "architectures": [
     "LlamaRewardModel"
   ],