pharaouk hanbin committed on
Commit fee4304
0 Parent(s):

Duplicate from openbmb/Eurus-RM-7b


Co-authored-by: wanghanbin <hanbin@users.noreply.huggingface.co>

.gitattributes ADDED
@@ -0,0 +1,35 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tar filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,64 @@
+ ---
+ license: apache-2.0
+ datasets:
+ - openbmb/UltraInteract
+ - openbmb/UltraFeedback
+ - openbmb/UltraSafety
+ tags:
+ - reward_model
+ pipeline_tag: text-classification
+ ---
+
+
+ # Links
+
+ - 📜 [Paper]()
+ - 🤗 [Eurus Collection](https://huggingface.co/collections/openbmb/eurus-660bc40bec5376b3adc9d1c5)
+ - 🤗 [UltraInteract](https://huggingface.co/datasets/openbmb/UltraInteract)
+
+ # Introduction
+
+ Eurus-RM-7B is trained on a mixture of [UltraInteract](https://huggingface.co/datasets/openbmb/UltraInteract), [UltraFeedback](https://huggingface.co/datasets/openbmb/UltraFeedback), and [UltraSafety](https://huggingface.co/datasets/openbmb/UltraSafety), with a reward modeling objective specifically designed for reasoning that directly increases the reward of chosen responses and decreases that of rejected ones.
+
+ - Eurus-RM-7B stands out as the best 7B reward model overall and achieves similar or better performance than much larger baselines. In particular, it outperforms GPT-4 on certain tasks.
+ - Our training objective is beneficial in improving RM performance on hard problems and reasoning.
+ - UltraInteract is compatible with other datasets such as UltraFeedback and UltraSafety, and mixing these datasets can balance different RM abilities.
+ - Eurus-RM-7B improves LLMs' reasoning performance by a large margin through reranking.
+
+
+ ## Usage
+ ```python
+ from transformers import AutoTokenizer, AutoModel
+
+
+ def test(model_path):
+     dataset = [  # cases from WebGPT; we use the same template as Mistral-Instruct-v0.2
+         {"chosen":"[INST] \"Who orders martinis \"\"shaken, not stirred\"\"?\" [\INST] Sean Connery's character, fictional British Secret Service agent James Bond, in the movie Goldfinger, stated that he preferred his martini to be \"shaken, not stirred\". [1] Some believe that Bond ordered his martini shaken because of the vodka it contained, as vodka was, for the most part, refined from potatoes (cheaper brands) which made the vodka oily. To disperse the oil, Bond ordered his martinis shaken. [2]","rejected":"[INST] \"Who orders martinis \"\"shaken, not stirred\"\"?\" [\INST] Fleming's fictional British Secret Service agent James Bond orders his martini cocktail shaken, not stirred [1]. Bond's preferences for his martini are carried over to the films, where his orders are seen in both the 1961 film Dr. No and the 2006 film Casino Royale [1, 2]. In both films, Bond's subordinates copy his order, telling the bartender to keep the fruit with their drinks [2]. However, in the 2006 film, Bond appears irritated when the bartender asks if he would like his drink shaken or stirred [2]."},
+         {"chosen":"[INST] Sural relates to which part of the body? [\INST] The sural region is the muscular swelling of the back of the leg below the knee, formed chiefly by the bellies of the gastrocnemius and soleus muscles [1,2].","rejected":"[INST] Sural relates to which part of the body? [\INST] The Sural nerve runs down the side of the leg near the small saphenous vein, then passes forward below the lateral malleolus and continues on the outside of the foot as the lateral dorsal cutaneous nerve, which then communicates with the intermediate dorsal cutaneous nerve, which branches off to the side of the foot. [1]"}
+     ]
+
+
+     tokenizer = AutoTokenizer.from_pretrained(model_path)
+     model = AutoModel.from_pretrained(model_path, trust_remote_code=True)
+
+     for example in dataset:
+         inputs = tokenizer(example["chosen"], return_tensors="pt")
+         chosen_reward = model(**inputs).item()
+         inputs = tokenizer(example["rejected"], return_tensors="pt")
+         rejected_reward = model(**inputs).item()
+         print(chosen_reward - rejected_reward)
+
+ test("openbmb/Eurus-RM-7b")
+ # Output 1: 0.14470714330673218
+ # Output 2: 0.7317184507846832
+ ```
+
+ ## Citation
+ ```
+ @misc{yuan2024advancing,
+       title={Advancing LLM Reasoning Generalists with Preference Trees},
+       author={Lifan Yuan and Ganqu Cui and Hanbin Wang and Ning Ding and Xingyao Wang and Jia Deng and Boji Shan and Huimin Chen and Ruobing Xie and Yankai Lin and Zhenghao Liu and Bowen Zhou and Hao Peng and Zhiyuan Liu and Maosong Sun},
+       year={2024},
+       primaryClass={cs.CL}
+ }
+ ```
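
The model card above notes that Eurus-RM-7B improves reasoning through reranking. As a minimal illustration only (not part of this commit, with placeholder prompts and candidates), the sketch below scores several candidate answers with the reward model and keeps the highest-scoring one, i.e. best-of-n reranking, reusing the same prompt template as the usage example.

```python
import torch
from transformers import AutoTokenizer, AutoModel


def rerank(model_path, prompt, candidates):
    """Best-of-n sketch: return the candidate with the highest reward."""
    tokenizer = AutoTokenizer.from_pretrained(model_path)
    model = AutoModel.from_pretrained(model_path, trust_remote_code=True)

    scores = []
    for response in candidates:
        # Same template string as the usage example in the card above.
        text = "[INST] " + prompt + " [\\INST] " + response
        inputs = tokenizer(text, return_tensors="pt")
        with torch.no_grad():
            scores.append(model(**inputs).item())

    best = max(range(len(candidates)), key=lambda i: scores[i])
    return candidates[best], scores


# Hypothetical usage with candidates sampled from a generator model:
# best, scores = rerank("openbmb/Eurus-RM-7b", "What is 17 * 24?", ["408", "398"])
```

With n candidates drawn from a policy model, picking the argmax-reward response is the reranking setup the card refers to.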
config.json ADDED
@@ -0,0 +1,27 @@
+ {
+   "architectures": [
+     "EurusRewardModel"
+   ],
+   "auto_map": {
+     "AutoModel": "modeling_eurus_rm.EurusRewardModel"
+   },
+   "bos_token_id": 1,
+   "eos_token_id": 2,
+   "hidden_act": "silu",
+   "hidden_size": 4096,
+   "initializer_range": 0.02,
+   "intermediate_size": 14336,
+   "max_position_embeddings": 32768,
+   "model_type": "mistral",
+   "num_attention_heads": 32,
+   "num_hidden_layers": 32,
+   "num_key_value_heads": 8,
+   "rms_norm_eps": 1e-05,
+   "rope_theta": 10000.0,
+   "sliding_window": 4096,
+   "tie_word_embeddings": false,
+   "torch_dtype": "bfloat16",
+   "transformers_version": "4.34.0.dev0",
+   "use_cache": true,
+   "vocab_size": 32000
+ }
generation_config.json ADDED
@@ -0,0 +1,6 @@
+ {
+   "_from_model_config": true,
+   "bos_token_id": 1,
+   "eos_token_id": 2,
+   "transformers_version": "4.34.0.dev0"
+ }
model-00001-of-00003.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:713905e32a4ba1b2500773443f3be6e68995147a64a42ad1521f27933c7eee28
+ size 4943162240
model-00002-of-00003.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:19f9bca444bdd26f0751800ca47e5d0a4e2d63412c9d1edc04c6d5d2ebd5b388
+ size 4999819232
model-00003-of-00003.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f6e34136df618545f3afd720842c488b34aea653a0c1e6693741b49af6a3021c
+ size 4278380432
model.safetensors.index.json ADDED
@@ -0,0 +1,298 @@
+ {
+   "metadata": {
+     "total_size": 14221328384
+   },
+   "weight_map": {
+     "model.embed_tokens.weight": "model-00001-of-00003.safetensors",
+     "model.layers.0.input_layernorm.weight": "model-00001-of-00003.safetensors",
+     "model.layers.0.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.0.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.0.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.0.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
+     "model.layers.0.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.0.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.0.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.0.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.1.input_layernorm.weight": "model-00001-of-00003.safetensors",
+     "model.layers.1.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.1.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.1.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.1.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
+     "model.layers.1.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.1.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.1.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.1.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.10.input_layernorm.weight": "model-00002-of-00003.safetensors",
+     "model.layers.10.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.10.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.10.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.10.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
+     "model.layers.10.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.10.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.10.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.10.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.11.input_layernorm.weight": "model-00002-of-00003.safetensors",
+     "model.layers.11.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.11.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.11.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.11.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
+     "model.layers.11.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.11.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.11.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.11.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.12.input_layernorm.weight": "model-00002-of-00003.safetensors",
+     "model.layers.12.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.12.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.12.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.12.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
+     "model.layers.12.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.12.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.12.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.12.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.13.input_layernorm.weight": "model-00002-of-00003.safetensors",
+     "model.layers.13.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.13.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.13.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.13.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
+     "model.layers.13.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.13.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.13.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.13.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.14.input_layernorm.weight": "model-00002-of-00003.safetensors",
+     "model.layers.14.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.14.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.14.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.14.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
+     "model.layers.14.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.14.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.14.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.14.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.15.input_layernorm.weight": "model-00002-of-00003.safetensors",
+     "model.layers.15.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.15.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.15.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.15.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
+     "model.layers.15.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.15.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.15.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.15.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.16.input_layernorm.weight": "model-00002-of-00003.safetensors",
+     "model.layers.16.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.16.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.16.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.16.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
+     "model.layers.16.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.16.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.16.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.16.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.17.input_layernorm.weight": "model-00002-of-00003.safetensors",
+     "model.layers.17.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.17.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.17.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.17.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
+     "model.layers.17.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.17.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.17.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.17.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.18.input_layernorm.weight": "model-00002-of-00003.safetensors",
+     "model.layers.18.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.18.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.18.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.18.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
+     "model.layers.18.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.18.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.18.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.18.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.19.input_layernorm.weight": "model-00002-of-00003.safetensors",
+     "model.layers.19.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.19.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.19.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.19.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
+     "model.layers.19.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.19.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.19.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.19.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.2.input_layernorm.weight": "model-00001-of-00003.safetensors",
+     "model.layers.2.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.2.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.2.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.2.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
+     "model.layers.2.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.2.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.2.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.2.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.20.input_layernorm.weight": "model-00002-of-00003.safetensors",
+     "model.layers.20.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.20.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.20.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.20.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
+     "model.layers.20.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.20.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.20.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.20.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.21.input_layernorm.weight": "model-00002-of-00003.safetensors",
+     "model.layers.21.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.21.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.21.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.21.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
+     "model.layers.21.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.21.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.21.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.21.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.22.input_layernorm.weight": "model-00003-of-00003.safetensors",
+     "model.layers.22.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.22.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.22.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.22.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
+     "model.layers.22.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.22.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.22.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.22.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.23.input_layernorm.weight": "model-00003-of-00003.safetensors",
+     "model.layers.23.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.23.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.23.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.23.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
+     "model.layers.23.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.23.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.23.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.23.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.24.input_layernorm.weight": "model-00003-of-00003.safetensors",
+     "model.layers.24.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.24.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.24.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.24.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
+     "model.layers.24.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.24.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.24.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.24.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.25.input_layernorm.weight": "model-00003-of-00003.safetensors",
+     "model.layers.25.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.25.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.25.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.25.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
+     "model.layers.25.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.25.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.25.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.25.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.26.input_layernorm.weight": "model-00003-of-00003.safetensors",
+     "model.layers.26.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.26.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.26.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.26.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
+     "model.layers.26.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.26.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.26.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.26.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.27.input_layernorm.weight": "model-00003-of-00003.safetensors",
+     "model.layers.27.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.27.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.27.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.27.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
+     "model.layers.27.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.27.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.27.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.27.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.28.input_layernorm.weight": "model-00003-of-00003.safetensors",
+     "model.layers.28.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.28.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.28.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.28.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
+     "model.layers.28.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.28.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.28.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.28.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.29.input_layernorm.weight": "model-00003-of-00003.safetensors",
+     "model.layers.29.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.29.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.29.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.29.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
+     "model.layers.29.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.29.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.29.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.29.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.3.input_layernorm.weight": "model-00001-of-00003.safetensors",
+     "model.layers.3.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.3.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.3.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.3.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
+     "model.layers.3.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.3.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.3.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.3.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.30.input_layernorm.weight": "model-00003-of-00003.safetensors",
+     "model.layers.30.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.30.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.30.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.30.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
+     "model.layers.30.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.30.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.30.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.30.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.31.input_layernorm.weight": "model-00003-of-00003.safetensors",
+     "model.layers.31.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.31.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.31.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.31.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
+     "model.layers.31.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.31.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.31.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.31.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.4.input_layernorm.weight": "model-00001-of-00003.safetensors",
+     "model.layers.4.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.4.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.4.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.4.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
+     "model.layers.4.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.4.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.4.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.4.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.5.input_layernorm.weight": "model-00001-of-00003.safetensors",
+     "model.layers.5.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.5.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.5.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.5.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
+     "model.layers.5.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.5.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.5.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.5.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.6.input_layernorm.weight": "model-00001-of-00003.safetensors",
+     "model.layers.6.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.6.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.6.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.6.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
+     "model.layers.6.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.6.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.6.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.6.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.7.input_layernorm.weight": "model-00001-of-00003.safetensors",
+     "model.layers.7.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.7.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.7.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.7.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
+     "model.layers.7.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.7.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.7.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.7.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.8.input_layernorm.weight": "model-00001-of-00003.safetensors",
+     "model.layers.8.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.8.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.8.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.8.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
+     "model.layers.8.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.8.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.8.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.8.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.9.input_layernorm.weight": "model-00001-of-00003.safetensors",
+     "model.layers.9.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.9.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.9.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.9.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
+     "model.layers.9.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.9.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.9.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.9.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
+     "model.norm.weight": "model-00003-of-00003.safetensors",
+     "regression_head.weight": "model-00003-of-00003.safetensors"
+   }
+ }
modeling_eurus_rm.py ADDED
@@ -0,0 +1,41 @@
+ from transformers import PreTrainedModel, MistralConfig, MistralModel
+ import torch.nn as nn
+ import torch
+ from typing import Optional, List
+
+ class EurusRewardModel(PreTrainedModel):
+     config_class = MistralConfig
+     def __init__(self, config):
+         super().__init__(config)
+         self.model = MistralModel(config)
+         self.regression_head = nn.Linear(self.config.hidden_size, 1, bias=False)
+
+     def forward(  # args are the same as LlamaForCausalLM
+         self,
+         input_ids: torch.LongTensor = None,
+         attention_mask: Optional[torch.Tensor] = None,
+         position_ids: Optional[torch.LongTensor] = None,
+         past_key_values: Optional[List[torch.FloatTensor]] = None,
+         inputs_embeds: Optional[torch.FloatTensor] = None,
+         labels: Optional[torch.LongTensor] = None,
+         use_cache: Optional[bool] = None,
+         output_attentions: Optional[bool] = None,
+         output_hidden_states: Optional[bool] = None,
+         return_dict: Optional[bool] = None,
+     ):
+
+         transformer_outputs = self.model(
+             input_ids,
+             attention_mask=attention_mask,
+             position_ids=position_ids,
+             past_key_values=past_key_values,
+             inputs_embeds=inputs_embeds,
+         )
+
+         hidden_states = transformer_outputs[0]
+         rewards = self.regression_head(hidden_states).squeeze(-1)
+
+         ends = attention_mask.cumsum(dim=1).argmax(dim=1).view(-1, 1)
+         rewards = torch.gather(rewards, 1, ends)
+
+         return rewards
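
For reference, the `ends` computation in `forward` above picks out the last attended token of each sequence: the cumulative sum of the attention mask reaches its maximum at the final non-padding position, and `argmax` returns the first index of that maximum. Below is a small self-contained check of that behavior (an illustration only, not part of this commit):

```python
import torch

# Toy padded batch: row 0 has 3 real tokens, row 1 has 5.
attention_mask = torch.tensor([
    [1, 1, 1, 0, 0],
    [1, 1, 1, 1, 1],
])

# cumsum peaks at the last attended token and stays flat over padding;
# argmax returns the first index of that peak.
ends = attention_mask.cumsum(dim=1).argmax(dim=1).view(-1, 1)
print(ends)  # tensor([[2], [4]])

# Gathering per-token rewards at these indices yields one scalar per sequence,
# mirroring the torch.gather call in EurusRewardModel.forward.
per_token_rewards = torch.arange(10, dtype=torch.float32).view(2, 5)
print(torch.gather(per_token_rewards, 1, ends))  # tensor([[2.], [9.]])
```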
special_tokens_map.json ADDED
@@ -0,0 +1,5 @@
+ {
+   "bos_token": "<s>",
+   "eos_token": "</s>",
+   "unk_token": "<unk>"
+ }
tokenizer.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:dadfd56d766715c61d2ef780a525ab43b8e6da4de6865bda3d95fdef5e134055
+ size 493443
tokenizer_config.json ADDED
@@ -0,0 +1,42 @@
+ {
+   "add_bos_token": true,
+   "add_eos_token": false,
+   "added_tokens_decoder": {
+     "0": {
+       "content": "<unk>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "1": {
+       "content": "<s>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "2": {
+       "content": "</s>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "additional_special_tokens": [],
+   "bos_token": "<s>",
+   "clean_up_tokenization_spaces": false,
+   "eos_token": "</s>",
+   "legacy": true,
+   "model_max_length": 1000000000000000019884624838656,
+   "pad_token": null,
+   "sp_model_kwargs": {},
+   "spaces_between_special_tokens": false,
+   "tokenizer_class": "LlamaTokenizer",
+   "unk_token": "<unk>",
+   "use_default_system_prompt": true
+ }