Locutusque committed on
Commit
0bc80e5
1 Parent(s): 79ab6be

Upload 12 files

README.md CHANGED
@@ -1,3 +1,202 @@
  ---
- license: apache-2.0
+ library_name: peft
+ base_model: Locutusque/OpenCerebrum-2.0-Mistral-7B-v0.2-beta
  ---
+
+ # Model Card for Model ID
+
+ <!-- Provide a quick summary of what the model is/does. -->
+
+
+
+ ## Model Details
+
+ ### Model Description
+
+ <!-- Provide a longer summary of what this model is. -->
+
+
+
+ - **Developed by:** [More Information Needed]
+ - **Funded by [optional]:** [More Information Needed]
+ - **Shared by [optional]:** [More Information Needed]
+ - **Model type:** [More Information Needed]
+ - **Language(s) (NLP):** [More Information Needed]
+ - **License:** [More Information Needed]
+ - **Finetuned from model [optional]:** [More Information Needed]
+
+ ### Model Sources [optional]
+
+ <!-- Provide the basic links for the model. -->
+
+ - **Repository:** [More Information Needed]
+ - **Paper [optional]:** [More Information Needed]
+ - **Demo [optional]:** [More Information Needed]
+
+ ## Uses
+
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+
+ ### Direct Use
+
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
+
+ [More Information Needed]
+
+ ### Downstream Use [optional]
+
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
+
+ [More Information Needed]
+
+ ### Out-of-Scope Use
+
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
+
+ [More Information Needed]
+
+ ## Bias, Risks, and Limitations
+
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
+
+ [More Information Needed]
+
+ ### Recommendations
+
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
+
+ Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
+
+ ## How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ [More Information Needed]
+
+ ## Training Details
+
+ ### Training Data
+
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+
+ [More Information Needed]
+
+ ### Training Procedure
+
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
+
+ #### Preprocessing [optional]
+
+ [More Information Needed]
+
+
+ #### Training Hyperparameters
+
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
+
+ #### Speeds, Sizes, Times [optional]
+
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
+
+ [More Information Needed]
+
+ ## Evaluation
+
+ <!-- This section describes the evaluation protocols and provides the results. -->
+
+ ### Testing Data, Factors & Metrics
+
+ #### Testing Data
+
+ <!-- This should link to a Dataset Card if possible. -->
+
+ [More Information Needed]
+
+ #### Factors
+
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
+
+ [More Information Needed]
+
+ #### Metrics
+
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
+
+ [More Information Needed]
+
+ ### Results
+
+ [More Information Needed]
+
+ #### Summary
+
+
+
+ ## Model Examination [optional]
+
+ <!-- Relevant interpretability work for the model goes here -->
+
+ [More Information Needed]
+
+ ## Environmental Impact
+
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
+
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+ - **Hardware Type:** [More Information Needed]
+ - **Hours used:** [More Information Needed]
+ - **Cloud Provider:** [More Information Needed]
+ - **Compute Region:** [More Information Needed]
+ - **Carbon Emitted:** [More Information Needed]
+
+ ## Technical Specifications [optional]
+
+ ### Model Architecture and Objective
+
+ [More Information Needed]
+
+ ### Compute Infrastructure
+
+ [More Information Needed]
+
+ #### Hardware
+
+ [More Information Needed]
+
+ #### Software
+
+ [More Information Needed]
+
+ ## Citation [optional]
+
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
+
+ **BibTeX:**
+
+ [More Information Needed]
+
+ **APA:**
+
+ [More Information Needed]
+
+ ## Glossary [optional]
+
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
+
+ [More Information Needed]
+
+ ## More Information [optional]
+
+ [More Information Needed]
+
+ ## Model Card Authors [optional]
+
+ [More Information Needed]
+
+ ## Model Card Contact
+
+ [More Information Needed]
+ ### Framework versions
+
+ - PEFT 0.7.1
adapter_config.json ADDED
@@ -0,0 +1,31 @@
+ {
+   "alpha_pattern": {},
+   "auto_mapping": null,
+   "base_model_name_or_path": "Locutusque/OpenCerebrum-2.0-Mistral-7B-v0.2-beta",
+   "bias": "none",
+   "fan_in_fan_out": false,
+   "inference_mode": true,
+   "init_lora_weights": true,
+   "layers_pattern": null,
+   "layers_to_transform": null,
+   "loftq_config": {},
+   "lora_alpha": 16,
+   "lora_dropout": 0,
+   "megatron_config": null,
+   "megatron_core": "megatron.core",
+   "modules_to_save": null,
+   "peft_type": "LORA",
+   "r": 128,
+   "rank_pattern": {},
+   "revision": "unsloth",
+   "target_modules": [
+     "gate_proj",
+     "down_proj",
+     "o_proj",
+     "v_proj",
+     "k_proj",
+     "q_proj",
+     "up_proj"
+   ],
+   "task_type": "CAUSAL_LM"
+ }
adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ba7c6f78cc545420c3b5aa8309ee97f97d4caf0cd39d24655cab3dbd78022cd2
+ size 1342238560
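A quick consistency check on the adapter above: with `r = 128` and all seven projection matrices targeted, the LoRA parameter count can be estimated from the base model's layer shapes. The hidden size (4096), MLP intermediate size (14336), grouped-query KV width (1024), and layer count (32) are assumed Mistral-7B-v0.2 dimensions, not stated in this commit; the sketch below shows the estimate lines up with the ~1.34 GB `adapter_model.safetensors` if the adapter is stored in fp32.

```python
# Back-of-the-envelope LoRA parameter count for this adapter.
# Assumed Mistral-7B-v0.2 shapes (not part of this commit):
HIDDEN = 4096          # hidden size
INTERMEDIATE = 14336   # MLP intermediate size
KV_DIM = 1024          # 8 KV heads x 128 head dim (grouped-query attention)
LAYERS = 32
R = 128                # "r" from adapter_config.json

# (in_features, out_features) of each target module, per layer
shapes = {
    "q_proj": (HIDDEN, HIDDEN),
    "k_proj": (HIDDEN, KV_DIM),
    "v_proj": (HIDDEN, KV_DIM),
    "o_proj": (HIDDEN, HIDDEN),
    "gate_proj": (HIDDEN, INTERMEDIATE),
    "up_proj": (HIDDEN, INTERMEDIATE),
    "down_proj": (INTERMEDIATE, HIDDEN),
}

# Each LoRA pair adds A (r x in) plus B (out x r) parameters.
per_layer = sum(R * (fin + fout) for fin, fout in shapes.values())
total = per_layer * LAYERS
print(total)      # 335544320 adapter parameters
print(total * 4)  # 1342177280 bytes at fp32 -- within ~60 KB of the
                  # 1342238560-byte safetensors file (header overhead)
```

The near-exact match at 4 bytes/parameter suggests the adapter weights were serialized in fp32 rather than bf16.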
optimizer.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4fa0d28352a8a3076d80225673010a43d1c09f9d276e6ec6fe76cc1c69f747c5
+ size 2684631802
rng_state.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d3d52a26a6e279718fbf408925098306fc0e3db336035ee8f24e69c40b52a0c9
+ size 14244
scheduler.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3db311ab19e9ed7e43e75481b4efaa2e30daff64fe44e30be3821e7c67b34ecb
+ size 1064
special_tokens_map.json ADDED
@@ -0,0 +1,24 @@
+ {
+   "bos_token": {
+     "content": "<s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "eos_token": {
+     "content": "</s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": "</s>",
+   "unk_token": {
+     "content": "<unk>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:dadfd56d766715c61d2ef780a525ab43b8e6da4de6865bda3d95fdef5e134055
+ size 493443
tokenizer_config.json ADDED
@@ -0,0 +1,44 @@
+ {
+   "add_bos_token": true,
+   "add_eos_token": false,
+   "add_prefix_space": true,
+   "added_tokens_decoder": {
+     "0": {
+       "content": "<unk>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "1": {
+       "content": "<s>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "2": {
+       "content": "</s>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "bos_token": "<s>",
+   "chat_template": "{% for message in messages %}{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant\n' }}{% endif %}",
+   "clean_up_tokenization_spaces": false,
+   "eos_token": "</s>",
+   "legacy": true,
+   "model_max_length": 32768,
+   "pad_token": "</s>",
+   "padding_side": "right",
+   "sp_model_kwargs": {},
+   "spaces_between_special_tokens": false,
+   "tokenizer_class": "LlamaTokenizer",
+   "unk_token": "<unk>",
+   "use_default_system_prompt": false
+ }
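The `chat_template` in this config is the ChatML format rendered by a Jinja loop. As a sanity check on what prompt strings it produces, here is a minimal pure-Python re-implementation (the function name is illustrative, not part of the repo). Note that the template closes each turn with `<|im_end|>` even though the configured `eos_token` remains `</s>`.

```python
def apply_chatml(messages, add_generation_prompt=False):
    """Mirror of the Jinja chat_template above:
    one <|im_start|>role\\n content <|im_end|>\\n block per turn,
    plus an open assistant header when add_generation_prompt is set."""
    out = ""
    for message in messages:
        out += "<|im_start|>" + message["role"] + "\n" + message["content"] + "<|im_end|>" + "\n"
    if add_generation_prompt:
        out += "<|im_start|>assistant\n"
    return out

prompt = apply_chatml(
    [{"role": "user", "content": "Hello"}],
    add_generation_prompt=True,
)
# prompt == "<|im_start|>user\nHello<|im_end|>\n<|im_start|>assistant\n"
```

The same string is what `tokenizer.apply_chat_template(..., tokenize=False, add_generation_prompt=True)` should return for this config.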
trainer_state.json ADDED
@@ -0,0 +1,1373 @@
+ {
+   "best_metric": null,
+   "best_model_checkpoint": null,
+   "epoch": 0.9915492957746479,
+   "eval_steps": 30,
+   "global_step": 88,
+   "is_hyper_param_search": false,
+   "is_local_process_zero": true,
+   "is_world_process_zero": true,
+   "log_history": [
+     {
+       "epoch": 0.01,
+       "grad_norm": NaN,
+       "learning_rate": 0.0001,
+       "logits/chosen": -3.188204765319824,
+       "logits/rejected": -2.849832534790039,
+       "logps/chosen": -220.16908264160156,
+       "logps/rejected": -186.17868041992188,
+       "loss": 0.6931,
+       "rewards/accuracies": 0.0,
+       "rewards/chosen": 0.0,
+       "rewards/margins": 0.0,
+       "rewards/rejected": 0.0,
+       "step": 1
+     },
+     {
+       "epoch": 0.02,
+       "grad_norm": NaN,
+       "learning_rate": 0.0001,
+       "logits/chosen": -3.417233943939209,
+       "logits/rejected": -3.385444164276123,
+       "logps/chosen": -511.65631103515625,
+       "logps/rejected": -303.28558349609375,
+       "loss": 0.6931,
+       "rewards/accuracies": 0.0,
+       "rewards/chosen": 0.0,
+       "rewards/margins": 0.0,
+       "rewards/rejected": 0.0,
+       "step": 2
+     },
+     {
+       "epoch": 0.03,
+       "grad_norm": NaN,
+       "learning_rate": 0.0001,
+       "logits/chosen": -2.9685182571411133,
+       "logits/rejected": -2.9099667072296143,
+       "logps/chosen": -131.90631103515625,
+       "logps/rejected": -127.09970092773438,
+       "loss": 0.6931,
+       "rewards/accuracies": 0.0,
+       "rewards/chosen": 0.0,
+       "rewards/margins": 0.0,
+       "rewards/rejected": 0.0,
+       "step": 3
+     },
+     {
+       "epoch": 0.05,
+       "grad_norm": 13.127852439880371,
+       "learning_rate": 9.999645980833454e-05,
+       "logits/chosen": -3.3438305854797363,
+       "logits/rejected": -2.7247352600097656,
+       "logps/chosen": -212.61917114257812,
+       "logps/rejected": -160.17083740234375,
+       "loss": 0.6931,
+       "rewards/accuracies": 0.0,
+       "rewards/chosen": 0.0,
+       "rewards/margins": 0.0,
+       "rewards/rejected": 0.0,
+       "step": 4
+     },
+     {
+       "epoch": 0.06,
+       "grad_norm": 9.166139602661133,
+       "learning_rate": 9.998583973465646e-05,
+       "logits/chosen": -3.382075786590576,
+       "logits/rejected": -2.574824094772339,
+       "logps/chosen": -284.8162841796875,
+       "logps/rejected": -199.35745239257812,
+       "loss": 0.5279,
+       "rewards/accuracies": 0.625,
+       "rewards/chosen": -0.21959751844406128,
+       "rewards/margins": 0.6385258436203003,
+       "rewards/rejected": -0.8581234216690063,
+       "step": 5
+     },
+     {
+       "epoch": 0.07,
+       "grad_norm": 7.322300910949707,
+       "learning_rate": 9.99681412828496e-05,
+       "logits/chosen": -3.271949291229248,
+       "logits/rejected": -2.513960838317871,
+       "logps/chosen": -372.6782531738281,
+       "logps/rejected": -174.07479858398438,
+       "loss": 0.2913,
+       "rewards/accuracies": 0.875,
+       "rewards/chosen": 0.6500003933906555,
+       "rewards/margins": 2.8001651763916016,
+       "rewards/rejected": -2.1501646041870117,
+       "step": 6
+     },
+     {
+       "epoch": 0.08,
+       "grad_norm": 8.41703987121582,
+       "learning_rate": 9.99433669591504e-05,
+       "logits/chosen": -3.0961129665374756,
+       "logits/rejected": -2.8720593452453613,
+       "logps/chosen": -316.4073486328125,
+       "logps/rejected": -167.79397583007812,
+       "loss": 0.4464,
+       "rewards/accuracies": 0.625,
+       "rewards/chosen": -1.1310131549835205,
+       "rewards/margins": 1.2248451709747314,
+       "rewards/rejected": -2.355858087539673,
+       "step": 7
+     },
+     {
+       "epoch": 0.09,
+       "grad_norm": 3.9780449867248535,
+       "learning_rate": 9.991152027179307e-05,
+       "logits/chosen": -2.9302220344543457,
+       "logits/rejected": -2.5785322189331055,
+       "logps/chosen": -165.38961791992188,
+       "logps/rejected": -118.81493377685547,
+       "loss": 0.1908,
+       "rewards/accuracies": 1.0,
+       "rewards/chosen": 0.9433582425117493,
+       "rewards/margins": 3.344132661819458,
+       "rewards/rejected": -2.4007744789123535,
+       "step": 8
+     },
+     {
+       "epoch": 0.1,
+       "grad_norm": 7.300680637359619,
+       "learning_rate": 9.987260573051269e-05,
+       "logits/chosen": -3.1814284324645996,
+       "logits/rejected": -3.0421230792999268,
+       "logps/chosen": -258.6530456542969,
+       "logps/rejected": -209.901611328125,
+       "loss": 0.3466,
+       "rewards/accuracies": 0.875,
+       "rewards/chosen": -0.742733895778656,
+       "rewards/margins": 1.9031461477279663,
+       "rewards/rejected": -2.6458799839019775,
+       "step": 9
+     },
+     {
+       "epoch": 0.11,
+       "grad_norm": 7.8778839111328125,
+       "learning_rate": 9.982662884590662e-05,
+       "logits/chosen": -2.9834322929382324,
+       "logits/rejected": -2.546109199523926,
+       "logps/chosen": -357.8366394042969,
+       "logps/rejected": -137.680908203125,
+       "loss": 0.4102,
+       "rewards/accuracies": 0.75,
+       "rewards/chosen": -0.5062696933746338,
+       "rewards/margins": 4.4324493408203125,
+       "rewards/rejected": -4.938718795776367,
+       "step": 10
+     },
+     {
+       "epoch": 0.12,
+       "grad_norm": 11.32342529296875,
+       "learning_rate": 9.977359612865423e-05,
+       "logits/chosen": -2.992344379425049,
+       "logits/rejected": -2.4726979732513428,
+       "logps/chosen": -248.93341064453125,
+       "logps/rejected": -167.0366668701172,
+       "loss": 0.5224,
+       "rewards/accuracies": 0.625,
+       "rewards/chosen": 0.211262047290802,
+       "rewards/margins": 4.505336284637451,
+       "rewards/rejected": -4.294074058532715,
+       "step": 11
+     },
+     {
+       "epoch": 0.14,
+       "grad_norm": 9.8779878616333,
+       "learning_rate": 9.971351508859488e-05,
+       "logits/chosen": -3.1347732543945312,
+       "logits/rejected": -2.930368423461914,
+       "logps/chosen": -286.450439453125,
+       "logps/rejected": -202.72720336914062,
+       "loss": 0.6846,
+       "rewards/accuracies": 0.625,
+       "rewards/chosen": -1.4156379699707031,
+       "rewards/margins": 3.582124710083008,
+       "rewards/rejected": -4.997762680053711,
+       "step": 12
+     },
+     {
+       "epoch": 0.15,
+       "grad_norm": 8.25694465637207,
+       "learning_rate": 9.964639423366442e-05,
+       "logits/chosen": -3.121607780456543,
+       "logits/rejected": -2.9769201278686523,
+       "logps/chosen": -291.4040832519531,
+       "logps/rejected": -219.06231689453125,
+       "loss": 0.27,
+       "rewards/accuracies": 0.875,
+       "rewards/chosen": 0.20223967730998993,
+       "rewards/margins": 4.1018805503845215,
+       "rewards/rejected": -3.8996407985687256,
+       "step": 13
+     },
+     {
+       "epoch": 0.16,
+       "grad_norm": 3.865050792694092,
+       "learning_rate": 9.957224306869053e-05,
+       "logits/chosen": -3.265730381011963,
+       "logits/rejected": -2.9264979362487793,
+       "logps/chosen": -269.4535827636719,
+       "logps/rejected": -205.49166870117188,
+       "loss": 0.1639,
+       "rewards/accuracies": 1.0,
+       "rewards/chosen": 0.9554134607315063,
+       "rewards/margins": 5.716032028198242,
+       "rewards/rejected": -4.760618686676025,
+       "step": 14
+     },
+     {
+       "epoch": 0.17,
+       "grad_norm": 3.5297553539276123,
+       "learning_rate": 9.949107209404665e-05,
+       "logits/chosen": -2.461251735687256,
+       "logits/rejected": -2.371189832687378,
+       "logps/chosen": -259.4444885253906,
+       "logps/rejected": -134.9102783203125,
+       "loss": 0.2108,
+       "rewards/accuracies": 0.875,
+       "rewards/chosen": 0.03092677891254425,
+       "rewards/margins": 3.4555163383483887,
+       "rewards/rejected": -3.4245896339416504,
+       "step": 15
+     },
+     {
+       "epoch": 0.18,
+       "grad_norm": 5.433193206787109,
+       "learning_rate": 9.940289280416508e-05,
+       "logits/chosen": -3.2052440643310547,
+       "logits/rejected": -2.8054556846618652,
+       "logps/chosen": -265.2724304199219,
+       "logps/rejected": -225.47068786621094,
+       "loss": 0.1771,
+       "rewards/accuracies": 0.875,
+       "rewards/chosen": 0.08068099617958069,
+       "rewards/margins": 4.355477333068848,
+       "rewards/rejected": -4.274796485900879,
+       "step": 16
+     },
+     {
+       "epoch": 0.19,
+       "grad_norm": 10.783153533935547,
+       "learning_rate": 9.930771768590933e-05,
+       "logits/chosen": -3.0025675296783447,
+       "logits/rejected": -3.019012451171875,
+       "logps/chosen": -195.4888916015625,
+       "logps/rejected": -198.30792236328125,
+       "loss": 0.6906,
+       "rewards/accuracies": 0.75,
+       "rewards/chosen": -2.109578847885132,
+       "rewards/margins": 0.9626373052597046,
+       "rewards/rejected": -3.072216033935547,
+       "step": 17
+     },
+     {
+       "epoch": 0.2,
+       "grad_norm": 6.201537132263184,
+       "learning_rate": 9.92055602168058e-05,
+       "logits/chosen": -3.2320845127105713,
+       "logits/rejected": -2.9034526348114014,
+       "logps/chosen": -226.6787109375,
+       "logps/rejected": -132.74095153808594,
+       "loss": 0.2037,
+       "rewards/accuracies": 0.875,
+       "rewards/chosen": 0.05915778875350952,
+       "rewards/margins": 3.9392547607421875,
+       "rewards/rejected": -3.8800971508026123,
+       "step": 18
+     },
+     {
+       "epoch": 0.21,
+       "grad_norm": 11.768068313598633,
+       "learning_rate": 9.909643486313533e-05,
+       "logits/chosen": -3.178317070007324,
+       "logits/rejected": -3.297847270965576,
+       "logps/chosen": -412.5939636230469,
+       "logps/rejected": -241.15750122070312,
+       "loss": 0.5854,
+       "rewards/accuracies": 0.625,
+       "rewards/chosen": -1.0589046478271484,
+       "rewards/margins": 5.891679286956787,
+       "rewards/rejected": -6.9505839347839355,
+       "step": 19
+     },
+     {
+       "epoch": 0.23,
+       "grad_norm": 8.77907943725586,
+       "learning_rate": 9.898035707788463e-05,
+       "logits/chosen": -3.237691640853882,
+       "logits/rejected": -3.086668014526367,
+       "logps/chosen": -311.6419372558594,
+       "logps/rejected": -238.47996520996094,
+       "loss": 0.4823,
+       "rewards/accuracies": 0.875,
+       "rewards/chosen": -0.5970950126647949,
+       "rewards/margins": 4.7628984451293945,
+       "rewards/rejected": -5.3599934577941895,
+       "step": 20
+     },
+     {
+       "epoch": 0.24,
+       "grad_norm": 3.0071136951446533,
+       "learning_rate": 9.885734329855798e-05,
+       "logits/chosen": -3.2764744758605957,
+       "logits/rejected": -2.930645704269409,
+       "logps/chosen": -345.3895568847656,
+       "logps/rejected": -216.34646606445312,
+       "loss": 0.0629,
+       "rewards/accuracies": 1.0,
+       "rewards/chosen": 1.1355853080749512,
+       "rewards/margins": 6.265766620635986,
+       "rewards/rejected": -5.130181312561035,
+       "step": 21
+     },
+     {
+       "epoch": 0.25,
+       "grad_norm": 11.331860542297363,
+       "learning_rate": 9.872741094484965e-05,
+       "logits/chosen": -3.3079967498779297,
+       "logits/rejected": -2.7648227214813232,
+       "logps/chosen": -351.6337585449219,
+       "logps/rejected": -270.9600830078125,
+       "loss": 0.369,
+       "rewards/accuracies": 0.75,
+       "rewards/chosen": -0.9473640322685242,
+       "rewards/margins": 2.920064687728882,
+       "rewards/rejected": -3.8674285411834717,
+       "step": 22
+     },
+     {
+       "epoch": 0.26,
+       "grad_norm": 3.934148073196411,
+       "learning_rate": 9.859057841617709e-05,
+       "logits/chosen": -2.9965269565582275,
+       "logits/rejected": -2.872361183166504,
+       "logps/chosen": -424.9866943359375,
+       "logps/rejected": -250.78515625,
+       "loss": 0.0707,
+       "rewards/accuracies": 1.0,
+       "rewards/chosen": 0.7661307454109192,
+       "rewards/margins": 8.092283248901367,
+       "rewards/rejected": -7.326152801513672,
+       "step": 23
+     },
+     {
+       "epoch": 0.27,
+       "grad_norm": 6.69138765335083,
+       "learning_rate": 9.844686508907537e-05,
+       "logits/chosen": -3.3082752227783203,
+       "logits/rejected": -3.2877731323242188,
+       "logps/chosen": -293.59478759765625,
+       "logps/rejected": -179.19845581054688,
+       "loss": 0.275,
+       "rewards/accuracies": 0.875,
+       "rewards/chosen": -1.0576655864715576,
+       "rewards/margins": 2.9845128059387207,
+       "rewards/rejected": -4.042178153991699,
+       "step": 24
+     },
+     {
+       "epoch": 0.28,
+       "grad_norm": 14.109583854675293,
+       "learning_rate": 9.829629131445342e-05,
+       "logits/chosen": -2.856363534927368,
+       "logits/rejected": -2.133051633834839,
+       "logps/chosen": -203.2696533203125,
+       "logps/rejected": -100.72183990478516,
+       "loss": 0.8885,
+       "rewards/accuracies": 0.625,
+       "rewards/chosen": -1.2501509189605713,
+       "rewards/margins": 1.4751089811325073,
+       "rewards/rejected": -2.725259780883789,
+       "step": 25
+     },
+     {
+       "epoch": 0.29,
+       "grad_norm": 11.02879810333252,
+       "learning_rate": 9.81388784147121e-05,
+       "logits/chosen": -3.1515085697174072,
+       "logits/rejected": -2.761265754699707,
+       "logps/chosen": -230.35438537597656,
+       "logps/rejected": -280.5235290527344,
+       "loss": 0.3744,
+       "rewards/accuracies": 0.75,
+       "rewards/chosen": 0.2650168836116791,
+       "rewards/margins": 4.330430507659912,
+       "rewards/rejected": -4.065413475036621,
+       "step": 26
+     },
+     {
+       "epoch": 0.3,
+       "grad_norm": 5.773349761962891,
+       "learning_rate": 9.797464868072488e-05,
+       "logits/chosen": -3.1555724143981934,
+       "logits/rejected": -2.7849271297454834,
+       "logps/chosen": -204.43417358398438,
+       "logps/rejected": -110.06712341308594,
+       "loss": 0.2692,
+       "rewards/accuracies": 1.0,
+       "rewards/chosen": -1.146636962890625,
+       "rewards/margins": 2.842060089111328,
+       "rewards/rejected": -3.988697052001953,
+       "step": 27
+     },
+     {
+       "epoch": 0.32,
+       "grad_norm": 5.401435375213623,
+       "learning_rate": 9.780362536868113e-05,
+       "logits/chosen": -2.565051317214966,
+       "logits/rejected": -2.8752522468566895,
+       "logps/chosen": -158.33001708984375,
+       "logps/rejected": -139.02786254882812,
+       "loss": 0.3653,
+       "rewards/accuracies": 0.875,
+       "rewards/chosen": -0.32712388038635254,
+       "rewards/margins": 2.7614259719848633,
+       "rewards/rejected": -3.088550090789795,
+       "step": 28
+     },
+     {
+       "epoch": 0.33,
+       "grad_norm": 14.269856452941895,
+       "learning_rate": 9.762583269679303e-05,
+       "logits/chosen": -2.4300293922424316,
+       "logits/rejected": -2.7993407249450684,
+       "logps/chosen": -233.670166015625,
+       "logps/rejected": -229.98846435546875,
+       "loss": 0.82,
+       "rewards/accuracies": 0.625,
+       "rewards/chosen": -1.4308565855026245,
+       "rewards/margins": 1.704122543334961,
+       "rewards/rejected": -3.134979248046875,
+       "step": 29
+     },
+     {
+       "epoch": 0.34,
+       "grad_norm": 11.34128189086914,
+       "learning_rate": 9.744129584186598e-05,
+       "logits/chosen": -2.8739359378814697,
+       "logits/rejected": -3.0709590911865234,
+       "logps/chosen": -187.5973663330078,
+       "logps/rejected": -171.49639892578125,
+       "loss": 0.8236,
+       "rewards/accuracies": 0.625,
+       "rewards/chosen": -1.5466258525848389,
+       "rewards/margins": 1.1274800300598145,
+       "rewards/rejected": -2.674105644226074,
+       "step": 30
+     },
+     {
+       "epoch": 0.34,
+       "eval_logits/chosen": -3.2232699394226074,
+       "eval_logits/rejected": -2.919212818145752,
+       "eval_logps/chosen": -186.79000854492188,
+       "eval_logps/rejected": -195.58921813964844,
+       "eval_loss": 0.004889195319265127,
+       "eval_rewards/accuracies": 1.0,
+       "eval_rewards/chosen": 1.5010696649551392,
+       "eval_rewards/margins": 7.495745658874512,
+       "eval_rewards/rejected": -5.994676113128662,
+       "eval_runtime": 5.0142,
+       "eval_samples_per_second": 1.994,
+       "eval_steps_per_second": 0.997,
+       "step": 30
+     },
+     {
+       "epoch": 0.35,
+       "grad_norm": 13.044546127319336,
+       "learning_rate": 9.725004093573342e-05,
+       "logits/chosen": -2.972625255584717,
+       "logits/rejected": -3.1085734367370605,
+       "logps/chosen": -234.9746856689453,
+       "logps/rejected": -170.3094024658203,
+       "loss": 1.0699,
+       "rewards/accuracies": 0.75,
+       "rewards/chosen": -1.7236368656158447,
+       "rewards/margins": 2.1611523628234863,
+       "rewards/rejected": -3.884788990020752,
+       "step": 31
+     },
+     {
+       "epoch": 0.36,
+       "grad_norm": 5.612191200256348,
+       "learning_rate": 9.705209506155634e-05,
+       "logits/chosen": -2.7585391998291016,
+       "logits/rejected": -2.7289156913757324,
+       "logps/chosen": -283.48614501953125,
+       "logps/rejected": -195.35939025878906,
+       "loss": 0.1887,
+       "rewards/accuracies": 0.875,
+       "rewards/chosen": -0.6527029275894165,
+       "rewards/margins": 3.757754325866699,
+       "rewards/rejected": -4.410456657409668,
+       "step": 32
+     },
+     {
+       "epoch": 0.37,
+       "grad_norm": 5.19436502456665,
+       "learning_rate": 9.68474862499881e-05,
+       "logits/chosen": -2.993574857711792,
+       "logits/rejected": -2.428236246109009,
+       "logps/chosen": -300.1104431152344,
+       "logps/rejected": -282.0811462402344,
+       "loss": 0.2097,
+       "rewards/accuracies": 0.875,
+       "rewards/chosen": -0.1330656111240387,
+       "rewards/margins": 3.342900276184082,
+       "rewards/rejected": -3.475965738296509,
+       "step": 33
+     },
+     {
+       "epoch": 0.38,
+       "grad_norm": 7.4402265548706055,
+       "learning_rate": 9.663624347520505e-05,
+       "logits/chosen": -3.148404359817505,
+       "logits/rejected": -3.0182220935821533,
+       "logps/chosen": -304.60955810546875,
+       "logps/rejected": -140.74435424804688,
+       "loss": 0.3689,
+       "rewards/accuracies": 0.75,
+       "rewards/chosen": -0.3897354006767273,
+       "rewards/margins": 3.62821102142334,
+       "rewards/rejected": -4.017946243286133,
+       "step": 34
+     },
+     {
+       "epoch": 0.39,
+       "grad_norm": 6.259589195251465,
+       "learning_rate": 9.641839665080363e-05,
+       "logits/chosen": -2.65729022026062,
+       "logits/rejected": -2.7510809898376465,
+       "logps/chosen": -330.6309814453125,
+       "logps/rejected": -256.57421875,
+       "loss": 0.156,
+       "rewards/accuracies": 0.875,
+       "rewards/chosen": 0.08656883984804153,
+       "rewards/margins": 4.7406206130981445,
+       "rewards/rejected": -4.654051780700684,
+       "step": 35
+     },
+     {
+       "epoch": 0.41,
+       "grad_norm": 6.500273704528809,
+       "learning_rate": 9.619397662556435e-05,
+       "logits/chosen": -2.995790719985962,
+       "logits/rejected": -3.0307507514953613,
+       "logps/chosen": -308.2181701660156,
+       "logps/rejected": -172.92543029785156,
+       "loss": 0.2963,
+       "rewards/accuracies": 0.875,
+       "rewards/chosen": -0.4932462275028229,
+       "rewards/margins": 2.582528829574585,
+       "rewards/rejected": -3.075775146484375,
+       "step": 36
+     },
+     {
+       "epoch": 0.42,
+       "grad_norm": 5.707283973693848,
+       "learning_rate": 9.596301517908328e-05,
+       "logits/chosen": -3.033268451690674,
+       "logits/rejected": -3.0536768436431885,
+       "logps/chosen": -245.74594116210938,
+       "logps/rejected": -160.86160278320312,
+       "loss": 0.202,
+       "rewards/accuracies": 0.875,
+       "rewards/chosen": -0.2917206287384033,
+       "rewards/margins": 4.046570777893066,
+       "rewards/rejected": -4.338291645050049,
+       "step": 37
+     },
+     {
+       "epoch": 0.43,
+       "grad_norm": 6.139472007751465,
+       "learning_rate": 9.572554501727198e-05,
+       "logits/chosen": -3.1614038944244385,
+       "logits/rejected": -2.7908077239990234,
+       "logps/chosen": -297.92730712890625,
+       "logps/rejected": -190.27981567382812,
+       "loss": 0.1892,
+       "rewards/accuracies": 1.0,
+       "rewards/chosen": -0.4447726905345917,
+       "rewards/margins": 4.278682708740234,
+       "rewards/rejected": -4.723455429077148,
+       "step": 38
+     },
+     {
+       "epoch": 0.44,
+       "grad_norm": 3.791252851486206,
+       "learning_rate": 9.548159976772592e-05,
+       "logits/chosen": -3.0960066318511963,
+       "logits/rejected": -2.7181763648986816,
+       "logps/chosen": -223.69406127929688,
+       "logps/rejected": -123.40463256835938,
+       "loss": 0.1746,
+       "rewards/accuracies": 0.875,
+       "rewards/chosen": -0.49881893396377563,
+       "rewards/margins": 3.9385018348693848,
+       "rewards/rejected": -4.437320709228516,
+       "step": 39
+     },
+     {
+       "epoch": 0.45,
+       "grad_norm": 11.180024147033691,
+       "learning_rate": 9.523121397496269e-05,
+       "logits/chosen": -3.0495779514312744,
+       "logits/rejected": -2.918321132659912,
+       "logps/chosen": -219.83587646484375,
+       "logps/rejected": -188.59054565429688,
+       "loss": 0.4327,
+       "rewards/accuracies": 0.75,
+       "rewards/chosen": -0.5894317626953125,
+       "rewards/margins": 2.6391894817352295,
+       "rewards/rejected": -3.228621244430542,
625
+ "step": 40
626
+ },
627
+ {
628
+ "epoch": 0.46,
629
+ "grad_norm": 7.4187703132629395,
630
+ "learning_rate": 9.497442309553016e-05,
631
+ "logits/chosen": -2.807313919067383,
632
+ "logits/rejected": -2.6464860439300537,
633
+ "logps/chosen": -189.6230010986328,
634
+ "logps/rejected": -151.01107788085938,
635
+ "loss": 0.693,
636
+ "rewards/accuracies": 0.625,
637
+ "rewards/chosen": -3.1926302909851074,
638
+ "rewards/margins": 2.0725247859954834,
639
+ "rewards/rejected": -5.26515531539917,
640
+ "step": 41
641
+ },
642
+ {
643
+ "epoch": 0.47,
644
+ "grad_norm": 8.239777565002441,
645
+ "learning_rate": 9.471126349298556e-05,
646
+ "logits/chosen": -2.6991612911224365,
647
+ "logits/rejected": -2.4860870838165283,
648
+ "logps/chosen": -249.05978393554688,
649
+ "logps/rejected": -261.3404541015625,
650
+ "loss": 0.3519,
651
+ "rewards/accuracies": 0.875,
652
+ "rewards/chosen": -2.2070960998535156,
653
+ "rewards/margins": 2.7090442180633545,
654
+ "rewards/rejected": -4.916140079498291,
655
+ "step": 42
656
+ },
657
+ {
658
+ "epoch": 0.48,
659
+ "grad_norm": 12.044631958007812,
660
+ "learning_rate": 9.444177243274618e-05,
661
+ "logits/chosen": -3.069183111190796,
662
+ "logits/rejected": -3.239974021911621,
663
+ "logps/chosen": -191.36724853515625,
664
+ "logps/rejected": -203.04444885253906,
665
+ "loss": 0.8501,
666
+ "rewards/accuracies": 0.875,
667
+ "rewards/chosen": -2.2000980377197266,
668
+ "rewards/margins": 3.8068671226501465,
669
+ "rewards/rejected": -6.006965160369873,
670
+ "step": 43
671
+ },
672
+ {
673
+ "epoch": 0.5,
674
+ "grad_norm": 10.961316108703613,
675
+ "learning_rate": 9.41659880768122e-05,
676
+ "logits/chosen": -3.117356538772583,
677
+ "logits/rejected": -3.1276466846466064,
678
+ "logps/chosen": -241.28115844726562,
679
+ "logps/rejected": -217.63143920898438,
680
+ "loss": 0.5688,
681
+ "rewards/accuracies": 0.875,
682
+ "rewards/chosen": -1.914351463317871,
683
+ "rewards/margins": 3.1862049102783203,
684
+ "rewards/rejected": -5.100556373596191,
685
+ "step": 44
686
+ },
687
+ {
688
+ "epoch": 0.51,
689
+ "grad_norm": 11.185661315917969,
690
+ "learning_rate": 9.388394947836279e-05,
691
+ "logits/chosen": -3.105802536010742,
692
+ "logits/rejected": -2.782811164855957,
693
+ "logps/chosen": -346.5468444824219,
694
+ "logps/rejected": -281.4388732910156,
695
+ "loss": 0.5225,
696
+ "rewards/accuracies": 0.625,
697
+ "rewards/chosen": -1.3682477474212646,
698
+ "rewards/margins": 1.8964588642120361,
699
+ "rewards/rejected": -3.264706611633301,
700
+ "step": 45
701
+ },
702
+ {
703
+ "epoch": 0.52,
704
+ "grad_norm": 11.120816230773926,
705
+ "learning_rate": 9.359569657622574e-05,
706
+ "logits/chosen": -3.119126319885254,
707
+ "logits/rejected": -3.2095906734466553,
708
+ "logps/chosen": -529.1534423828125,
709
+ "logps/rejected": -279.0389404296875,
710
+ "loss": 0.3314,
711
+ "rewards/accuracies": 0.75,
712
+ "rewards/chosen": 0.4428637623786926,
713
+ "rewards/margins": 6.021182060241699,
714
+ "rewards/rejected": -5.578318119049072,
715
+ "step": 46
716
+ },
717
+ {
718
+ "epoch": 0.53,
719
+ "grad_norm": 8.270225524902344,
720
+ "learning_rate": 9.330127018922194e-05,
721
+ "logits/chosen": -2.903927803039551,
722
+ "logits/rejected": -2.7225542068481445,
723
+ "logps/chosen": -275.26318359375,
724
+ "logps/rejected": -169.46505737304688,
725
+ "loss": 0.6252,
726
+ "rewards/accuracies": 0.875,
727
+ "rewards/chosen": -1.2819722890853882,
728
+ "rewards/margins": 2.14062237739563,
729
+ "rewards/rejected": -3.4225947856903076,
730
+ "step": 47
731
+ },
732
+ {
733
+ "epoch": 0.54,
734
+ "grad_norm": 3.7271859645843506,
735
+ "learning_rate": 9.300071201038503e-05,
736
+ "logits/chosen": -2.98264741897583,
737
+ "logits/rejected": -2.486337184906006,
738
+ "logps/chosen": -257.7012023925781,
739
+ "logps/rejected": -152.14598083496094,
740
+ "loss": 0.1543,
741
+ "rewards/accuracies": 0.875,
742
+ "rewards/chosen": -0.4061692953109741,
743
+ "rewards/margins": 4.309848785400391,
744
+ "rewards/rejected": -4.716017723083496,
745
+ "step": 48
746
+ },
747
+ {
748
+ "epoch": 0.55,
749
+ "grad_norm": 5.926673889160156,
750
+ "learning_rate": 9.26940646010574e-05,
751
+ "logits/chosen": -3.2901933193206787,
752
+ "logits/rejected": -2.8755884170532227,
753
+ "logps/chosen": -191.0556640625,
754
+ "logps/rejected": -104.35969543457031,
755
+ "loss": 0.1687,
756
+ "rewards/accuracies": 0.875,
757
+ "rewards/chosen": -0.4329254627227783,
758
+ "rewards/margins": 4.122668266296387,
759
+ "rewards/rejected": -4.555593490600586,
760
+ "step": 49
761
+ },
762
+ {
763
+ "epoch": 0.56,
764
+ "grad_norm": NaN,
765
+ "learning_rate": 9.26940646010574e-05,
766
+ "logits/chosen": -3.4009902477264404,
767
+ "logits/rejected": -3.2163825035095215,
768
+ "logps/chosen": -228.1378936767578,
769
+ "logps/rejected": -145.276123046875,
770
+ "loss": 0.3099,
771
+ "rewards/accuracies": 0.75,
772
+ "rewards/chosen": 0.2526423931121826,
773
+ "rewards/margins": 4.154683589935303,
774
+ "rewards/rejected": -3.90204119682312,
775
+ "step": 50
776
+ },
777
+ {
778
+ "epoch": 0.57,
779
+ "grad_norm": 6.389315128326416,
780
+ "learning_rate": 9.238137138486318e-05,
781
+ "logits/chosen": -3.3087496757507324,
782
+ "logits/rejected": -3.0156049728393555,
783
+ "logps/chosen": -367.84161376953125,
784
+ "logps/rejected": -198.76544189453125,
785
+ "loss": 0.1843,
786
+ "rewards/accuracies": 0.875,
787
+ "rewards/chosen": 0.8194153308868408,
788
+ "rewards/margins": 4.259294033050537,
789
+ "rewards/rejected": -3.439878463745117,
790
+ "step": 51
791
+ },
792
+ {
793
+ "epoch": 0.59,
794
+ "grad_norm": 2.2384748458862305,
795
+ "learning_rate": 9.206267664155907e-05,
796
+ "logits/chosen": -3.412761926651001,
797
+ "logits/rejected": -3.3606858253479004,
798
+ "logps/chosen": -394.595947265625,
799
+ "logps/rejected": -234.29969787597656,
800
+ "loss": 0.0957,
801
+ "rewards/accuracies": 1.0,
802
+ "rewards/chosen": 0.6140856742858887,
803
+ "rewards/margins": 3.623807191848755,
804
+ "rewards/rejected": -3.009721517562866,
805
+ "step": 52
806
+ },
807
+ {
808
+ "epoch": 0.6,
809
+ "grad_norm": 9.3274507522583,
810
+ "learning_rate": 9.173802550076401e-05,
811
+ "logits/chosen": -2.9212708473205566,
812
+ "logits/rejected": -3.2253193855285645,
813
+ "logps/chosen": -309.66741943359375,
814
+ "logps/rejected": -194.5745849609375,
815
+ "loss": 0.3459,
816
+ "rewards/accuracies": 0.875,
817
+ "rewards/chosen": 0.11235330998897552,
818
+ "rewards/margins": 2.6622633934020996,
819
+ "rewards/rejected": -2.549910068511963,
820
+ "step": 53
821
+ },
822
+ {
823
+ "epoch": 0.61,
824
+ "grad_norm": 9.517582893371582,
825
+ "learning_rate": 9.140746393556854e-05,
826
+ "logits/chosen": -3.4288511276245117,
827
+ "logits/rejected": -3.304213285446167,
828
+ "logps/chosen": -376.1202392578125,
829
+ "logps/rejected": -233.58489990234375,
830
+ "loss": 0.5965,
831
+ "rewards/accuracies": 0.875,
832
+ "rewards/chosen": 0.555878758430481,
833
+ "rewards/margins": 4.904500961303711,
834
+ "rewards/rejected": -4.3486223220825195,
835
+ "step": 54
836
+ },
837
+ {
838
+ "epoch": 0.62,
839
+ "grad_norm": 1.8991745710372925,
840
+ "learning_rate": 9.107103875602459e-05,
841
+ "logits/chosen": -2.924043655395508,
842
+ "logits/rejected": -3.185393810272217,
843
+ "logps/chosen": -341.2203063964844,
844
+ "logps/rejected": -234.63758850097656,
845
+ "loss": 0.0427,
846
+ "rewards/accuracies": 1.0,
847
+ "rewards/chosen": 0.3599395751953125,
848
+ "rewards/margins": 3.722670555114746,
849
+ "rewards/rejected": -3.3627307415008545,
850
+ "step": 55
851
+ },
852
+ {
853
+ "epoch": 0.63,
854
+ "grad_norm": 8.762215614318848,
855
+ "learning_rate": 9.072879760251679e-05,
856
+ "logits/chosen": -3.2768988609313965,
857
+ "logits/rejected": -3.0862388610839844,
858
+ "logps/chosen": -224.249755859375,
859
+ "logps/rejected": -180.57443237304688,
860
+ "loss": 0.3701,
861
+ "rewards/accuracies": 0.875,
862
+ "rewards/chosen": -0.7867403626441956,
863
+ "rewards/margins": 2.9776391983032227,
864
+ "rewards/rejected": -3.7643797397613525,
865
+ "step": 56
866
+ },
867
+ {
868
+ "epoch": 0.64,
869
+ "grad_norm": 8.915589332580566,
870
+ "learning_rate": 9.038078893901634e-05,
871
+ "logits/chosen": -3.2617459297180176,
872
+ "logits/rejected": -3.202209711074829,
873
+ "logps/chosen": -89.90280151367188,
874
+ "logps/rejected": -147.65475463867188,
875
+ "loss": 0.4777,
876
+ "rewards/accuracies": 0.75,
877
+ "rewards/chosen": -0.5283554792404175,
878
+ "rewards/margins": 2.442681312561035,
879
+ "rewards/rejected": -2.971036911010742,
880
+ "step": 57
881
+ },
882
+ {
883
+ "epoch": 0.65,
884
+ "grad_norm": 11.555606842041016,
885
+ "learning_rate": 9.002706204621803e-05,
886
+ "logits/chosen": -3.001934766769409,
887
+ "logits/rejected": -3.068469285964966,
888
+ "logps/chosen": -344.8009948730469,
889
+ "logps/rejected": -205.77664184570312,
890
+ "loss": 0.3874,
891
+ "rewards/accuracies": 0.75,
892
+ "rewards/chosen": 0.012376070022583008,
893
+ "rewards/margins": 2.2519307136535645,
894
+ "rewards/rejected": -2.2395546436309814,
895
+ "step": 58
896
+ },
897
+ {
898
+ "epoch": 0.66,
899
+ "grad_norm": 14.009621620178223,
900
+ "learning_rate": 8.966766701456177e-05,
901
+ "logits/chosen": -2.8611111640930176,
902
+ "logits/rejected": -3.4576072692871094,
903
+ "logps/chosen": -338.19970703125,
904
+ "logps/rejected": -260.2259521484375,
905
+ "loss": 0.7432,
906
+ "rewards/accuracies": 0.75,
907
+ "rewards/chosen": -0.9131150841712952,
908
+ "rewards/margins": 2.4023218154907227,
909
+ "rewards/rejected": -3.315437078475952,
910
+ "step": 59
911
+ },
912
+ {
913
+ "epoch": 0.68,
914
+ "grad_norm": 4.895538330078125,
915
+ "learning_rate": 8.930265473713938e-05,
916
+ "logits/chosen": -3.3844141960144043,
917
+ "logits/rejected": -3.3034396171569824,
918
+ "logps/chosen": -369.8033447265625,
919
+ "logps/rejected": -244.80897521972656,
920
+ "loss": 0.2099,
921
+ "rewards/accuracies": 0.875,
922
+ "rewards/chosen": 0.4544011354446411,
923
+ "rewards/margins": 2.975295305252075,
924
+ "rewards/rejected": -2.5208940505981445,
925
+ "step": 60
926
+ },
927
+ {
928
+ "epoch": 0.68,
929
+ "eval_logits/chosen": -3.299164295196533,
930
+ "eval_logits/rejected": -3.039053440093994,
931
+ "eval_logps/chosen": -182.6027069091797,
932
+ "eval_logps/rejected": -194.1580810546875,
933
+ "eval_loss": 0.0008067694725468755,
934
+ "eval_rewards/accuracies": 1.0,
935
+ "eval_rewards/chosen": 2.757256269454956,
936
+ "eval_rewards/margins": 8.322589874267578,
937
+ "eval_rewards/rejected": -5.565333366394043,
938
+ "eval_runtime": 4.7427,
939
+ "eval_samples_per_second": 2.108,
940
+ "eval_steps_per_second": 1.054,
941
+ "step": 60
942
+ },
943
+ {
944
+ "epoch": 0.69,
945
+ "grad_norm": 13.20186710357666,
946
+ "learning_rate": 8.893207690248776e-05,
947
+ "logits/chosen": -3.109037399291992,
948
+ "logits/rejected": -3.326498508453369,
949
+ "logps/chosen": -275.9548034667969,
950
+ "logps/rejected": -253.3433380126953,
951
+ "loss": 0.6256,
952
+ "rewards/accuracies": 0.625,
953
+ "rewards/chosen": -1.315821647644043,
954
+ "rewards/margins": 1.796335220336914,
955
+ "rewards/rejected": -3.112156867980957,
956
+ "step": 61
957
+ },
958
+ {
959
+ "epoch": 0.7,
960
+ "grad_norm": 2.376776933670044,
961
+ "learning_rate": 8.855598598726939e-05,
962
+ "logits/chosen": -3.083179473876953,
963
+ "logits/rejected": -3.2115023136138916,
964
+ "logps/chosen": -234.42640686035156,
965
+ "logps/rejected": -150.53904724121094,
966
+ "loss": 0.0802,
967
+ "rewards/accuracies": 1.0,
968
+ "rewards/chosen": 0.07728518545627594,
969
+ "rewards/margins": 4.584873199462891,
970
+ "rewards/rejected": -4.507587909698486,
971
+ "step": 62
972
+ },
973
+ {
974
+ "epoch": 0.71,
975
+ "grad_norm": 6.479050636291504,
976
+ "learning_rate": 8.817443524884119e-05,
977
+ "logits/chosen": -2.5378470420837402,
978
+ "logits/rejected": -2.813725471496582,
979
+ "logps/chosen": -321.4371337890625,
980
+ "logps/rejected": -244.43296813964844,
981
+ "loss": 0.2551,
982
+ "rewards/accuracies": 0.875,
983
+ "rewards/chosen": 0.725128173828125,
984
+ "rewards/margins": 6.416964530944824,
985
+ "rewards/rejected": -5.691835880279541,
986
+ "step": 63
987
+ },
988
+ {
989
+ "epoch": 0.72,
990
+ "grad_norm": 8.595841407775879,
991
+ "learning_rate": 8.778747871771292e-05,
992
+ "logits/chosen": -3.3628735542297363,
993
+ "logits/rejected": -2.6477882862091064,
994
+ "logps/chosen": -491.04852294921875,
995
+ "logps/rejected": -248.0843505859375,
996
+ "loss": 0.4921,
997
+ "rewards/accuracies": 0.875,
998
+ "rewards/chosen": 0.3514632284641266,
999
+ "rewards/margins": 7.0789079666137695,
1000
+ "rewards/rejected": -6.727444648742676,
1001
+ "step": 64
1002
+ },
1003
+ {
1004
+ "epoch": 0.73,
1005
+ "grad_norm": 12.204965591430664,
1006
+ "learning_rate": 8.739517118989605e-05,
1007
+ "logits/chosen": -3.0609679222106934,
1008
+ "logits/rejected": -3.0792436599731445,
1009
+ "logps/chosen": -297.3428649902344,
1010
+ "logps/rejected": -190.6544647216797,
1011
+ "loss": 0.5732,
1012
+ "rewards/accuracies": 0.625,
1013
+ "rewards/chosen": -1.3494699001312256,
1014
+ "rewards/margins": 2.159775733947754,
1015
+ "rewards/rejected": -3.5092456340789795,
1016
+ "step": 65
1017
+ },
1018
+ {
1019
+ "epoch": 0.74,
1020
+ "grad_norm": 8.17355728149414,
1021
+ "learning_rate": 8.69975682191442e-05,
1022
+ "logits/chosen": -3.373033046722412,
1023
+ "logits/rejected": -3.043926239013672,
1024
+ "logps/chosen": -578.4942626953125,
1025
+ "logps/rejected": -265.8677978515625,
1026
+ "loss": 0.3497,
1027
+ "rewards/accuracies": 0.875,
1028
+ "rewards/chosen": -0.6642241477966309,
1029
+ "rewards/margins": 6.849948883056641,
1030
+ "rewards/rejected": -7.5141730308532715,
1031
+ "step": 66
1032
+ },
1033
+ {
1034
+ "epoch": 0.75,
1035
+ "grad_norm": 6.397715091705322,
1036
+ "learning_rate": 8.659472610908627e-05,
1037
+ "logits/chosen": -2.711515426635742,
1038
+ "logits/rejected": -2.395803928375244,
1039
+ "logps/chosen": -203.78604125976562,
1040
+ "logps/rejected": -129.5730438232422,
1041
+ "loss": 0.1589,
1042
+ "rewards/accuracies": 0.875,
1043
+ "rewards/chosen": -1.117239236831665,
1044
+ "rewards/margins": 4.5233540534973145,
1045
+ "rewards/rejected": -5.640593528747559,
1046
+ "step": 67
1047
+ },
1048
+ {
1049
+ "epoch": 0.77,
1050
+ "grad_norm": 0.30370500683784485,
1051
+ "learning_rate": 8.618670190525352e-05,
1052
+ "logits/chosen": -3.0820484161376953,
1053
+ "logits/rejected": -3.0193471908569336,
1054
+ "logps/chosen": -378.7370300292969,
1055
+ "logps/rejected": -222.33255004882812,
1056
+ "loss": 0.0083,
1057
+ "rewards/accuracies": 1.0,
1058
+ "rewards/chosen": 0.245200514793396,
1059
+ "rewards/margins": 7.6438798904418945,
1060
+ "rewards/rejected": -7.398679733276367,
1061
+ "step": 68
1062
+ },
1063
+ {
1064
+ "epoch": 0.78,
1065
+ "grad_norm": 10.54161548614502,
1066
+ "learning_rate": 8.577355338700132e-05,
1067
+ "logits/chosen": -3.2389307022094727,
1068
+ "logits/rejected": -3.0821778774261475,
1069
+ "logps/chosen": -346.2233581542969,
1070
+ "logps/rejected": -192.50180053710938,
1071
+ "loss": 0.7383,
1072
+ "rewards/accuracies": 0.875,
1073
+ "rewards/chosen": -1.195668339729309,
1074
+ "rewards/margins": 5.02830696105957,
1075
+ "rewards/rejected": -6.22397518157959,
1076
+ "step": 69
1077
+ },
1078
+ {
1079
+ "epoch": 0.79,
1080
+ "grad_norm": 1.390268087387085,
1081
+ "learning_rate": 8.535533905932738e-05,
1082
+ "logits/chosen": -3.3620431423187256,
1083
+ "logits/rejected": -2.399602174758911,
1084
+ "logps/chosen": -221.27833557128906,
1085
+ "logps/rejected": -118.21566009521484,
1086
+ "loss": 0.0245,
1087
+ "rewards/accuracies": 1.0,
1088
+ "rewards/chosen": 0.3484119176864624,
1089
+ "rewards/margins": 8.018238067626953,
1090
+ "rewards/rejected": -7.669825553894043,
1091
+ "step": 70
1092
+ },
1093
+ {
1094
+ "epoch": 0.8,
1095
+ "grad_norm": 10.928914070129395,
1096
+ "learning_rate": 8.493211814458673e-05,
1097
+ "logits/chosen": -2.9451489448547363,
1098
+ "logits/rejected": -3.133699893951416,
1099
+ "logps/chosen": -201.42678833007812,
1100
+ "logps/rejected": -202.67453002929688,
1101
+ "loss": 0.3648,
1102
+ "rewards/accuracies": 0.75,
1103
+ "rewards/chosen": -2.156554698944092,
1104
+ "rewards/margins": 4.664039611816406,
1105
+ "rewards/rejected": -6.820594310760498,
1106
+ "step": 71
1107
+ },
1108
+ {
1109
+ "epoch": 0.81,
1110
+ "grad_norm": 0.7106189131736755,
1111
+ "learning_rate": 8.450395057410561e-05,
1112
+ "logits/chosen": -3.4210917949676514,
1113
+ "logits/rejected": -2.9528050422668457,
1114
+ "logps/chosen": -317.3839416503906,
1115
+ "logps/rejected": -170.9514617919922,
1116
+ "loss": 0.0172,
1117
+ "rewards/accuracies": 1.0,
1118
+ "rewards/chosen": -0.5772140026092529,
1119
+ "rewards/margins": 6.574632167816162,
1120
+ "rewards/rejected": -7.151845455169678,
1121
+ "step": 72
1122
+ },
1123
+ {
1124
+ "epoch": 0.82,
1125
+ "grad_norm": 11.109986305236816,
1126
+ "learning_rate": 8.407089697969457e-05,
1127
+ "logits/chosen": -3.1691198348999023,
1128
+ "logits/rejected": -2.9826912879943848,
1129
+ "logps/chosen": -377.9816589355469,
1130
+ "logps/rejected": -288.35528564453125,
1131
+ "loss": 0.4652,
1132
+ "rewards/accuracies": 0.875,
1133
+ "rewards/chosen": -0.14444704353809357,
1134
+ "rewards/margins": 4.671531677246094,
1135
+ "rewards/rejected": -4.815978527069092,
1136
+ "step": 73
1137
+ },
1138
+ {
1139
+ "epoch": 0.83,
1140
+ "grad_norm": 16.743736267089844,
1141
+ "learning_rate": 8.363301868506264e-05,
1142
+ "logits/chosen": -3.06179141998291,
1143
+ "logits/rejected": -2.977768898010254,
1144
+ "logps/chosen": -336.5064392089844,
1145
+ "logps/rejected": -273.6090393066406,
1146
+ "loss": 0.9035,
1147
+ "rewards/accuracies": 0.75,
1148
+ "rewards/chosen": -1.7004997730255127,
1149
+ "rewards/margins": 3.5526602268218994,
1150
+ "rewards/rejected": -5.253159999847412,
1151
+ "step": 74
1152
+ },
1153
+ {
1154
+ "epoch": 0.85,
1155
+ "grad_norm": 4.297181606292725,
1156
+ "learning_rate": 8.319037769713338e-05,
1157
+ "logits/chosen": -3.1873130798339844,
1158
+ "logits/rejected": -3.2429358959198,
1159
+ "logps/chosen": -295.90167236328125,
1160
+ "logps/rejected": -252.4221954345703,
1161
+ "loss": 0.0913,
1162
+ "rewards/accuracies": 1.0,
1163
+ "rewards/chosen": -0.35775870084762573,
1164
+ "rewards/margins": 7.1074934005737305,
1165
+ "rewards/rejected": -7.46525239944458,
1166
+ "step": 75
1167
+ },
1168
+ {
1169
+ "epoch": 0.86,
1170
+ "grad_norm": 1.5024635791778564,
1171
+ "learning_rate": 8.274303669726426e-05,
1172
+ "logits/chosen": -3.2978920936584473,
1173
+ "logits/rejected": -3.056913375854492,
1174
+ "logps/chosen": -209.08226013183594,
1175
+ "logps/rejected": -216.75137329101562,
1176
+ "loss": 0.0595,
1177
+ "rewards/accuracies": 1.0,
1178
+ "rewards/chosen": -0.7533246278762817,
1179
+ "rewards/margins": 6.2144622802734375,
1180
+ "rewards/rejected": -6.96778678894043,
1181
+ "step": 76
1182
+ },
1183
+ {
1184
+ "epoch": 0.87,
1185
+ "grad_norm": 6.813451766967773,
1186
+ "learning_rate": 8.229105903237044e-05,
1187
+ "logits/chosen": -3.1069507598876953,
1188
+ "logits/rejected": -2.643913507461548,
1189
+ "logps/chosen": -379.9039611816406,
1190
+ "logps/rejected": -199.5642547607422,
1191
+ "loss": 0.2955,
1192
+ "rewards/accuracies": 0.875,
1193
+ "rewards/chosen": -2.039405345916748,
1194
+ "rewards/margins": 6.617141246795654,
1195
+ "rewards/rejected": -8.656547546386719,
1196
+ "step": 77
1197
+ },
1198
+ {
1199
+ "epoch": 0.88,
1200
+ "grad_norm": 12.086407661437988,
1201
+ "learning_rate": 8.183450870595441e-05,
1202
+ "logits/chosen": -3.06406307220459,
1203
+ "logits/rejected": -3.0317187309265137,
1204
+ "logps/chosen": -369.6795654296875,
1205
+ "logps/rejected": -303.2950744628906,
1206
+ "loss": 0.4151,
1207
+ "rewards/accuracies": 0.75,
1208
+ "rewards/chosen": -1.9069956541061401,
1209
+ "rewards/margins": 4.3820600509643555,
1210
+ "rewards/rejected": -6.289055824279785,
1211
+ "step": 78
1212
+ },
1213
+ {
1214
+ "epoch": 0.89,
1215
+ "grad_norm": 3.3662607669830322,
1216
+ "learning_rate": 8.13734503690426e-05,
1217
+ "logits/chosen": -3.289384126663208,
1218
+ "logits/rejected": -3.1391568183898926,
1219
+ "logps/chosen": -345.8055114746094,
1220
+ "logps/rejected": -193.66909790039062,
1221
+ "loss": 0.1584,
1222
+ "rewards/accuracies": 0.875,
1223
+ "rewards/chosen": -0.599603533744812,
1224
+ "rewards/margins": 6.25246524810791,
1225
+ "rewards/rejected": -6.852068901062012,
1226
+ "step": 79
1227
+ },
1228
+ {
1229
+ "epoch": 0.9,
1230
+ "grad_norm": 4.315209865570068,
1231
+ "learning_rate": 8.090794931103026e-05,
1232
+ "logits/chosen": -3.228783369064331,
1233
+ "logits/rejected": -3.0682406425476074,
1234
+ "logps/chosen": -270.1308898925781,
1235
+ "logps/rejected": -199.65390014648438,
1236
+ "loss": 0.197,
1237
+ "rewards/accuracies": 0.875,
1238
+ "rewards/chosen": -1.2517391443252563,
1239
+ "rewards/margins": 5.22841215133667,
1240
+ "rewards/rejected": -6.480151176452637,
1241
+ "step": 80
1242
+ },
1243
+ {
1244
+ "epoch": 0.91,
1245
+ "grad_norm": 4.3422465324401855,
1246
+ "learning_rate": 8.043807145043604e-05,
1247
+ "logits/chosen": -3.3883039951324463,
1248
+ "logits/rejected": -2.8610763549804688,
1249
+ "logps/chosen": -375.4300231933594,
1250
+ "logps/rejected": -212.55474853515625,
1251
+ "loss": 0.0891,
1252
+ "rewards/accuracies": 1.0,
1253
+ "rewards/chosen": -1.205045461654663,
1254
+ "rewards/margins": 7.316597938537598,
1255
+ "rewards/rejected": -8.521642684936523,
1256
+ "step": 81
1257
+ },
1258
+ {
1259
+ "epoch": 0.92,
1260
+ "grad_norm": 14.285599708557129,
1261
+ "learning_rate": 7.996388332556735e-05,
1262
+ "logits/chosen": -3.384727954864502,
1263
+ "logits/rejected": -3.3384907245635986,
1264
+ "logps/chosen": -403.5943603515625,
1265
+ "logps/rejected": -331.4810791015625,
1266
+ "loss": 0.6671,
1267
+ "rewards/accuracies": 0.625,
1268
+ "rewards/chosen": -1.1636757850646973,
1269
+ "rewards/margins": 2.9779176712036133,
1270
+ "rewards/rejected": -4.1415934562683105,
1271
+ "step": 82
1272
+ },
1273
+ {
1274
+ "epoch": 0.94,
1275
+ "grad_norm": 0.26110827922821045,
1276
+ "learning_rate": 7.94854520850981e-05,
1277
+ "logits/chosen": -3.1430768966674805,
1278
+ "logits/rejected": -3.018350601196289,
1279
+ "logps/chosen": -502.6695556640625,
1280
+ "logps/rejected": -263.3882751464844,
1281
+ "loss": 0.0062,
1282
+ "rewards/accuracies": 1.0,
1283
+ "rewards/chosen": 1.0688759088516235,
1284
+ "rewards/margins": 9.059045791625977,
1285
+ "rewards/rejected": -7.990170001983643,
1286
+ "step": 83
1287
+ },
1288
+ {
1289
+ "epoch": 0.95,
1290
+ "grad_norm": 8.293683052062988,
1291
+ "learning_rate": 7.900284547855991e-05,
1292
+ "logits/chosen": -3.26918363571167,
1293
+ "logits/rejected": -2.9317312240600586,
1294
+ "logps/chosen": -287.8534851074219,
1295
+ "logps/rejected": -225.27447509765625,
1296
+ "loss": 0.1985,
1297
+ "rewards/accuracies": 0.875,
1298
+ "rewards/chosen": 0.16245371103286743,
1299
+ "rewards/margins": 6.57786750793457,
1300
+ "rewards/rejected": -6.415413856506348,
1301
+ "step": 84
1302
+ },
1303
+ {
1304
+ "epoch": 0.96,
1305
+ "grad_norm": 1.314470648765564,
1306
+ "learning_rate": 7.85161318467482e-05,
1307
+ "logits/chosen": -3.219449043273926,
1308
+ "logits/rejected": -2.8287854194641113,
1309
+ "logps/chosen": -366.5806884765625,
1310
+ "logps/rejected": -191.7723388671875,
1311
+ "loss": 0.0161,
1312
+ "rewards/accuracies": 1.0,
1313
+ "rewards/chosen": 0.41258037090301514,
1314
+ "rewards/margins": 6.966887474060059,
1315
+ "rewards/rejected": -6.554306983947754,
1316
+ "step": 85
1317
+ },
1318
+ {
1319
+ "epoch": 0.97,
1320
+ "grad_norm": 20.908584594726562,
1321
+ "learning_rate": 7.80253801120447e-05,
1322
+ "logits/chosen": -3.160271406173706,
1323
+ "logits/rejected": -3.0260157585144043,
1324
+ "logps/chosen": -382.6925048828125,
1325
+ "logps/rejected": -233.9979248046875,
1326
+ "loss": 0.7179,
1327
+ "rewards/accuracies": 0.75,
1328
+ "rewards/chosen": -0.4525451958179474,
1329
+ "rewards/margins": 2.8419482707977295,
1330
+ "rewards/rejected": -3.2944936752319336,
1331
+ "step": 86
1332
+ },
1333
+ {
1334
+ "epoch": 0.98,
1335
+ "grad_norm": 1.2079492807388306,
1336
+ "learning_rate": 7.753065976865744e-05,
1337
+ "logits/chosen": -3.191283702850342,
1338
+ "logits/rejected": -3.0339455604553223,
1339
+ "logps/chosen": -285.8114013671875,
1340
+ "logps/rejected": -169.56741333007812,
1341
+ "loss": 0.0272,
1342
+ "rewards/accuracies": 1.0,
1343
+ "rewards/chosen": 0.33831024169921875,
1344
+ "rewards/margins": 6.575672149658203,
1345
+ "rewards/rejected": -6.237361907958984,
1346
+ "step": 87
1347
+ },
1348
+ {
1349
+ "epoch": 0.99,
1350
+ "grad_norm": 0.654447615146637,
1351
+ "learning_rate": 7.703204087277988e-05,
1352
+ "logits/chosen": -3.2184348106384277,
1353
+ "logits/rejected": -3.2598187923431396,
1354
+ "logps/chosen": -163.65548706054688,
1355
+ "logps/rejected": -143.7698974609375,
1356
+ "loss": 0.0114,
1357
+ "rewards/accuracies": 1.0,
1358
+ "rewards/chosen": 2.0235941410064697,
1359
+ "rewards/margins": 7.350224494934082,
1360
+ "rewards/rejected": -5.326630592346191,
1361
+ "step": 88
1362
+ }
1363
+ ],
1364
+ "logging_steps": 1,
1365
+ "max_steps": 264,
1366
+ "num_input_tokens_seen": 0,
1367
+ "num_train_epochs": 3,
1368
+ "save_steps": 1,
1369
+ "total_flos": 0.0,
1370
+ "train_batch_size": 2,
1371
+ "trial_name": null,
1372
+ "trial_params": null
1373
+ }
training_args.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:186370bee8502fb483e605aaee71d6a0c58fa308ba1a090704be2596cd0adb13
3
+ size 4984