adamo1139 committed
Commit f89fddb
1 Parent(s): 46ead19

Upload 33 files

checkpoint-100/README.md ADDED
@@ -0,0 +1,204 @@
+ ---
+ library_name: peft
+ base_model: /run/media/adamo1139/82142F79142F6EFB/ProgramData/Anaconda3/envs/qlora-jondurbin/axolotl-git-linux/axolotl/yi-34b-200k-llamafied
+ ---
+
+ # Model Card for Model ID
+
+ <!-- Provide a quick summary of what the model is/does. -->
+
+
+
+ ## Model Details
+
+ ### Model Description
+
+ <!-- Provide a longer summary of what this model is. -->
+
+
+
+ - **Developed by:** [More Information Needed]
+ - **Funded by [optional]:** [More Information Needed]
+ - **Shared by [optional]:** [More Information Needed]
+ - **Model type:** [More Information Needed]
+ - **Language(s) (NLP):** [More Information Needed]
+ - **License:** [More Information Needed]
+ - **Finetuned from model [optional]:** [More Information Needed]
+
+ ### Model Sources [optional]
+
+ <!-- Provide the basic links for the model. -->
+
+ - **Repository:** [More Information Needed]
+ - **Paper [optional]:** [More Information Needed]
+ - **Demo [optional]:** [More Information Needed]
+
+ ## Uses
+
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+
+ ### Direct Use
+
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
+
+ [More Information Needed]
+
+ ### Downstream Use [optional]
+
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
+
+ [More Information Needed]
+
+ ### Out-of-Scope Use
+
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
+
+ [More Information Needed]
+
+ ## Bias, Risks, and Limitations
+
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
+
+ [More Information Needed]
+
+ ### Recommendations
+
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
+
+ Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
+
+ ## How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ [More Information Needed]
+
+ ## Training Details
+
+ ### Training Data
+
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+
+ [More Information Needed]
+
+ ### Training Procedure
+
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
+
+ #### Preprocessing [optional]
+
+ [More Information Needed]
+
+
+ #### Training Hyperparameters
+
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
+
+ #### Speeds, Sizes, Times [optional]
+
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
+
+ [More Information Needed]
+
+ ## Evaluation
+
+ <!-- This section describes the evaluation protocols and provides the results. -->
+
+ ### Testing Data, Factors & Metrics
+
+ #### Testing Data
+
+ <!-- This should link to a Dataset Card if possible. -->
+
+ [More Information Needed]
+
+ #### Factors
+
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
+
+ [More Information Needed]
+
+ #### Metrics
+
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
+
+ [More Information Needed]
+
+ ### Results
+
+ [More Information Needed]
+
+ #### Summary
+
+
+
+ ## Model Examination [optional]
+
+ <!-- Relevant interpretability work for the model goes here -->
+
+ [More Information Needed]
+
+ ## Environmental Impact
+
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
+
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+ - **Hardware Type:** [More Information Needed]
+ - **Hours used:** [More Information Needed]
+ - **Cloud Provider:** [More Information Needed]
+ - **Compute Region:** [More Information Needed]
+ - **Carbon Emitted:** [More Information Needed]
+
+ ## Technical Specifications [optional]
+
+ ### Model Architecture and Objective
+
+ [More Information Needed]
+
+ ### Compute Infrastructure
+
+ [More Information Needed]
+
+ #### Hardware
+
+ [More Information Needed]
+
+ #### Software
+
+ [More Information Needed]
+
+ ## Citation [optional]
+
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
+
+ **BibTeX:**
+
+ [More Information Needed]
+
+ **APA:**
+
+ [More Information Needed]
+
+ ## Glossary [optional]
+
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
+
+ [More Information Needed]
+
+ ## More Information [optional]
+
+ [More Information Needed]
+
+ ## Model Card Authors [optional]
+
+ [More Information Needed]
+
+ ## Model Card Contact
+
+ [More Information Needed]
+
+
+ ### Framework versions
+
+ - PEFT 0.7.1
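
The README above leaves its "How to Get Started with the Model" section as [More Information Needed]. The following is a minimal sketch, not part of the committed files, of how the checkpoint-100 LoRA adapter in this commit could be loaded on top of the llamafied Yi-34B-200K base model with `transformers` and `peft`; the paths, dtype, and generation settings are assumptions for illustration.

```python
# Minimal sketch (not part of this commit): load the checkpoint-100 LoRA adapter
# on top of the llamafied Yi-34B-200K base model. Paths, dtype, and generation
# settings are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE_MODEL = "yi-34b-200k-llamafied"   # base model path from adapter_config.json (assumed available locally)
ADAPTER_DIR = "checkpoint-100"         # directory holding adapter_model.safetensors and the tokenizer files

tokenizer = AutoTokenizer.from_pretrained(ADAPTER_DIR)
base = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, ADAPTER_DIR)

messages = [{"role": "user", "content": "Give me a one-line summary of LoRA."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```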
checkpoint-100/adapter_config.json ADDED
@@ -0,0 +1,31 @@
+ {
+ "alpha_pattern": {},
+ "auto_mapping": null,
+ "base_model_name_or_path": "yi-34b-200k-llamafied",
+ "bias": "none",
+ "fan_in_fan_out": false,
+ "inference_mode": true,
+ "init_lora_weights": true,
+ "layers_pattern": null,
+ "layers_to_transform": null,
+ "loftq_config": null,
+ "lora_alpha": 32,
+ "lora_dropout": 0,
+ "megatron_config": null,
+ "megatron_core": "megatron.core",
+ "modules_to_save": null,
+ "peft_type": "LORA",
+ "r": 16,
+ "rank_pattern": {},
+ "revision": "unsloth",
+ "target_modules": [
+ "v_proj",
+ "gate_proj",
+ "o_proj",
+ "down_proj",
+ "q_proj",
+ "up_proj",
+ "k_proj"
+ ],
+ "task_type": "CAUSAL_LM"
+ }
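
The adapter_config.json above describes a rank-16 LoRA with alpha 32 (effective scaling lora_alpha / r = 2) applied to all attention and MLP projections; the "revision": "unsloth" field suggests the adapter was trained with Unsloth, although no training script is included in this commit. As a reference only, a hedged sketch of how these fields map onto `peft.LoraConfig` is shown below.

```python
# Illustrative only: how the fields in adapter_config.json map onto peft.LoraConfig.
# The actual training configuration is not part of this commit.
from peft import LoraConfig

lora_config = LoraConfig(
    r=16,                    # "r": 16
    lora_alpha=32,           # "lora_alpha": 32 -> scaling = lora_alpha / r = 2
    lora_dropout=0.0,        # "lora_dropout": 0
    bias="none",             # "bias": "none"
    task_type="CAUSAL_LM",   # "task_type": "CAUSAL_LM"
    target_modules=[         # same projections listed under "target_modules"
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
)
```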
checkpoint-100/adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:072ecc026ba448391bcebe410322e08314ed4bec47eebcec1cf79af5efbb1739
+ size 491633464
checkpoint-100/rng_state.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1ff264f99d31b522cc7e2a4eac9d38606d0c58a34c0adc74d71e0ca8b371dc36
+ size 14244
checkpoint-100/scheduler.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:26e7a331a766a7b1c3ce8e8d12fb2325171adaf35890d903a56a1ac6d3f0733f
+ size 1064
checkpoint-100/special_tokens_map.json ADDED
@@ -0,0 +1,30 @@
+ {
+ "bos_token": {
+ "content": "<|startoftext|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "eos_token": {
+ "content": "<|endoftext|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "pad_token": {
+ "content": "<unk>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "unk_token": {
+ "content": "<unk>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ }
+ }
checkpoint-100/tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
checkpoint-100/tokenizer.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:386c49cf943d71aa110361135338c50e38beeff0a66593480421f37b319e1a39
+ size 1033105
checkpoint-100/tokenizer_config.json ADDED
@@ -0,0 +1,42 @@
+ {
+ "add_bos_token": false,
+ "add_eos_token": false,
+ "added_tokens_decoder": {
+ "0": {
+ "content": "<unk>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "1": {
+ "content": "<|startoftext|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "2": {
+ "content": "<|endoftext|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ }
+ },
+ "bos_token": "<|startoftext|>",
+ "chat_template": "{% if not add_generation_prompt is defined %}{% set add_generation_prompt = false %}{% endif %}{% for message in messages %}{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant\n' }}{% endif %}",
+ "clean_up_tokenization_spaces": false,
+ "eos_token": "<|endoftext|>",
+ "legacy": true,
+ "model_max_length": 4096,
+ "pad_token": "<unk>",
+ "padding_side": "right",
+ "sp_model_kwargs": {},
+ "tokenizer_class": "LlamaTokenizer",
+ "unk_token": "<unk>",
+ "use_default_system_prompt": false
+ }
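
The "chat_template" field above is a Jinja template for the ChatML-style prompt format used by this tokenizer (`<|im_start|>role ... <|im_end|>`). A small sketch of what it renders via `tokenizer.apply_chat_template` follows; the checkpoint path is an assumption, not part of the commit.

```python
# Sketch of what the chat_template above produces; the path is an assumption.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("checkpoint-100")

messages = [{"role": "user", "content": "Hello!"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
# Expected output given the template above:
# <|im_start|>user
# Hello!
# <|im_end|>
# <|im_start|>assistant
```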
checkpoint-100/trainer_state.json ADDED
@@ -0,0 +1,1421 @@
1
+ {
2
+ "best_metric": null,
3
+ "best_model_checkpoint": null,
4
+ "epoch": 0.10908842980841345,
5
+ "eval_steps": 500,
6
+ "global_step": 100,
7
+ "is_hyper_param_search": false,
8
+ "is_local_process_zero": true,
9
+ "is_world_process_zero": true,
10
+ "log_history": [
11
+ {
12
+ "epoch": 0.0,
13
+ "learning_rate": 9.782608695652175e-07,
14
+ "logits/chosen": -1.8868483304977417,
15
+ "logits/rejected": -2.3036646842956543,
16
+ "logps/chosen": -466.7117004394531,
17
+ "logps/rejected": -99.3152084350586,
18
+ "loss": 0.6931,
19
+ "rewards/accuracies": 0.0,
20
+ "rewards/chosen": 0.0,
21
+ "rewards/margins": 0.0,
22
+ "rewards/rejected": 0.0,
23
+ "step": 1
24
+ },
25
+ {
26
+ "epoch": 0.0,
27
+ "learning_rate": 1.956521739130435e-06,
28
+ "logits/chosen": -2.2296698093414307,
29
+ "logits/rejected": -2.469517469406128,
30
+ "logps/chosen": -335.50140380859375,
31
+ "logps/rejected": -97.97496032714844,
32
+ "loss": 0.6931,
33
+ "rewards/accuracies": 0.0,
34
+ "rewards/chosen": 0.0,
35
+ "rewards/margins": 0.0,
36
+ "rewards/rejected": 0.0,
37
+ "step": 2
38
+ },
39
+ {
40
+ "epoch": 0.0,
41
+ "learning_rate": 2.9347826086956523e-06,
42
+ "logits/chosen": -2.125047206878662,
43
+ "logits/rejected": -2.445204019546509,
44
+ "logps/chosen": -484.7939758300781,
45
+ "logps/rejected": -125.93072509765625,
46
+ "loss": 1.7268,
47
+ "rewards/accuracies": 0.4375,
48
+ "rewards/chosen": -1.0339633226394653,
49
+ "rewards/margins": -0.9356753826141357,
50
+ "rewards/rejected": -0.09828801453113556,
51
+ "step": 3
52
+ },
53
+ {
54
+ "epoch": 0.0,
55
+ "learning_rate": 3.91304347826087e-06,
56
+ "logits/chosen": -1.7763867378234863,
57
+ "logits/rejected": -2.1961894035339355,
58
+ "logps/chosen": -386.8011779785156,
59
+ "logps/rejected": -106.04309844970703,
60
+ "loss": 2.3806,
61
+ "rewards/accuracies": 0.5625,
62
+ "rewards/chosen": -0.9238373041152954,
63
+ "rewards/margins": -1.2902063131332397,
64
+ "rewards/rejected": 0.36636874079704285,
65
+ "step": 4
66
+ },
67
+ {
68
+ "epoch": 0.01,
69
+ "learning_rate": 4.891304347826087e-06,
70
+ "logits/chosen": -1.7257812023162842,
71
+ "logits/rejected": -2.353095769882202,
72
+ "logps/chosen": -524.7261962890625,
73
+ "logps/rejected": -70.26946258544922,
74
+ "loss": 0.7488,
75
+ "rewards/accuracies": 0.625,
76
+ "rewards/chosen": 2.197690010070801,
77
+ "rewards/margins": 2.0008764266967773,
78
+ "rewards/rejected": 0.19681359827518463,
79
+ "step": 5
80
+ },
81
+ {
82
+ "epoch": 0.01,
83
+ "learning_rate": 5.869565217391305e-06,
84
+ "logits/chosen": -2.1483359336853027,
85
+ "logits/rejected": -2.4817819595336914,
86
+ "logps/chosen": -283.6588134765625,
87
+ "logps/rejected": -58.25455093383789,
88
+ "loss": 0.8712,
89
+ "rewards/accuracies": 0.5625,
90
+ "rewards/chosen": 0.4915351867675781,
91
+ "rewards/margins": 0.2005542814731598,
92
+ "rewards/rejected": 0.2909809350967407,
93
+ "step": 6
94
+ },
95
+ {
96
+ "epoch": 0.01,
97
+ "learning_rate": 6.847826086956523e-06,
98
+ "logits/chosen": -2.1019060611724854,
99
+ "logits/rejected": -2.300177812576294,
100
+ "logps/chosen": -297.35235595703125,
101
+ "logps/rejected": -175.5060272216797,
102
+ "loss": 1.1392,
103
+ "rewards/accuracies": 0.5625,
104
+ "rewards/chosen": 0.6018204689025879,
105
+ "rewards/margins": -0.036775827407836914,
106
+ "rewards/rejected": 0.6385962963104248,
107
+ "step": 7
108
+ },
109
+ {
110
+ "epoch": 0.01,
111
+ "learning_rate": 7.82608695652174e-06,
112
+ "logits/chosen": -2.0169122219085693,
113
+ "logits/rejected": -2.201702117919922,
114
+ "logps/chosen": -342.04852294921875,
115
+ "logps/rejected": -117.9761962890625,
116
+ "loss": 1.1799,
117
+ "rewards/accuracies": 0.5625,
118
+ "rewards/chosen": 1.111680507659912,
119
+ "rewards/margins": 0.17297500371932983,
120
+ "rewards/rejected": 0.9387054443359375,
121
+ "step": 8
122
+ },
123
+ {
124
+ "epoch": 0.01,
125
+ "learning_rate": 8.804347826086957e-06,
126
+ "logits/chosen": -1.6254596710205078,
127
+ "logits/rejected": -2.0724740028381348,
128
+ "logps/chosen": -480.1983642578125,
129
+ "logps/rejected": -108.05937194824219,
130
+ "loss": 0.5203,
131
+ "rewards/accuracies": 0.8125,
132
+ "rewards/chosen": 2.161550283432007,
133
+ "rewards/margins": 1.3515225648880005,
134
+ "rewards/rejected": 0.8100277185440063,
135
+ "step": 9
136
+ },
137
+ {
138
+ "epoch": 0.01,
139
+ "learning_rate": 9.782608695652175e-06,
140
+ "logits/chosen": -1.4482485055923462,
141
+ "logits/rejected": -2.1116294860839844,
142
+ "logps/chosen": -451.5759582519531,
143
+ "logps/rejected": -67.73944854736328,
144
+ "loss": 0.355,
145
+ "rewards/accuracies": 0.8125,
146
+ "rewards/chosen": 6.18800163269043,
147
+ "rewards/margins": 5.184471130371094,
148
+ "rewards/rejected": 1.0035302639007568,
149
+ "step": 10
150
+ },
151
+ {
152
+ "epoch": 0.01,
153
+ "learning_rate": 1.0760869565217392e-05,
154
+ "logits/chosen": -1.2660062313079834,
155
+ "logits/rejected": -1.8649226427078247,
156
+ "logps/chosen": -265.8935546875,
157
+ "logps/rejected": -57.866336822509766,
158
+ "loss": 0.3178,
159
+ "rewards/accuracies": 0.8125,
160
+ "rewards/chosen": 5.78374719619751,
161
+ "rewards/margins": 4.856299877166748,
162
+ "rewards/rejected": 0.9274475574493408,
163
+ "step": 11
164
+ },
165
+ {
166
+ "epoch": 0.01,
167
+ "learning_rate": 1.173913043478261e-05,
168
+ "logits/chosen": -1.1804625988006592,
169
+ "logits/rejected": -1.7009960412979126,
170
+ "logps/chosen": -247.2069854736328,
171
+ "logps/rejected": -79.96739196777344,
172
+ "loss": 0.6617,
173
+ "rewards/accuracies": 0.75,
174
+ "rewards/chosen": 6.5535197257995605,
175
+ "rewards/margins": 4.647799015045166,
176
+ "rewards/rejected": 1.9057204723358154,
177
+ "step": 12
178
+ },
179
+ {
180
+ "epoch": 0.01,
181
+ "learning_rate": 1.2717391304347827e-05,
182
+ "logits/chosen": -0.9693112373352051,
183
+ "logits/rejected": -1.6497248411178589,
184
+ "logps/chosen": -372.8963623046875,
185
+ "logps/rejected": -70.47633361816406,
186
+ "loss": 0.2498,
187
+ "rewards/accuracies": 0.8125,
188
+ "rewards/chosen": 9.02272891998291,
189
+ "rewards/margins": 7.325143814086914,
190
+ "rewards/rejected": 1.6975862979888916,
191
+ "step": 13
192
+ },
193
+ {
194
+ "epoch": 0.02,
195
+ "learning_rate": 1.3695652173913046e-05,
196
+ "logits/chosen": -0.7723280787467957,
197
+ "logits/rejected": -1.4176554679870605,
198
+ "logps/chosen": -318.0392150878906,
199
+ "logps/rejected": -79.94412994384766,
200
+ "loss": 0.6401,
201
+ "rewards/accuracies": 0.6875,
202
+ "rewards/chosen": 6.698611259460449,
203
+ "rewards/margins": 4.517416477203369,
204
+ "rewards/rejected": 2.18119478225708,
205
+ "step": 14
206
+ },
207
+ {
208
+ "epoch": 0.02,
209
+ "learning_rate": 1.4673913043478261e-05,
210
+ "logits/chosen": -0.8084051609039307,
211
+ "logits/rejected": -1.4428132772445679,
212
+ "logps/chosen": -363.15869140625,
213
+ "logps/rejected": -70.70079803466797,
214
+ "loss": 0.4963,
215
+ "rewards/accuracies": 0.75,
216
+ "rewards/chosen": 7.02784538269043,
217
+ "rewards/margins": 5.400259971618652,
218
+ "rewards/rejected": 1.6275854110717773,
219
+ "step": 15
220
+ },
221
+ {
222
+ "epoch": 0.02,
223
+ "learning_rate": 1.565217391304348e-05,
224
+ "logits/chosen": -0.8056433200836182,
225
+ "logits/rejected": -1.4629294872283936,
226
+ "logps/chosen": -328.90435791015625,
227
+ "logps/rejected": -77.36820983886719,
228
+ "loss": 0.544,
229
+ "rewards/accuracies": 0.75,
230
+ "rewards/chosen": 9.417024612426758,
231
+ "rewards/margins": 7.023505687713623,
232
+ "rewards/rejected": 2.393519163131714,
233
+ "step": 16
234
+ },
235
+ {
236
+ "epoch": 0.02,
237
+ "learning_rate": 1.6630434782608694e-05,
238
+ "logits/chosen": -1.0918309688568115,
239
+ "logits/rejected": -1.6613047122955322,
240
+ "logps/chosen": -277.37982177734375,
241
+ "logps/rejected": -56.75635528564453,
242
+ "loss": 0.8958,
243
+ "rewards/accuracies": 0.8125,
244
+ "rewards/chosen": 6.844330310821533,
245
+ "rewards/margins": 4.82295560836792,
246
+ "rewards/rejected": 2.0213749408721924,
247
+ "step": 17
248
+ },
249
+ {
250
+ "epoch": 0.02,
251
+ "learning_rate": 1.7608695652173915e-05,
252
+ "logits/chosen": -0.9810999631881714,
253
+ "logits/rejected": -1.975247859954834,
254
+ "logps/chosen": -465.6856689453125,
255
+ "logps/rejected": -56.440269470214844,
256
+ "loss": 0.2275,
257
+ "rewards/accuracies": 0.875,
258
+ "rewards/chosen": 7.767542362213135,
259
+ "rewards/margins": 6.572459697723389,
260
+ "rewards/rejected": 1.1950825452804565,
261
+ "step": 18
262
+ },
263
+ {
264
+ "epoch": 0.02,
265
+ "learning_rate": 1.8586956521739132e-05,
266
+ "logits/chosen": -1.2428462505340576,
267
+ "logits/rejected": -1.8064367771148682,
268
+ "logps/chosen": -322.7100830078125,
269
+ "logps/rejected": -108.71321105957031,
270
+ "loss": 0.3006,
271
+ "rewards/accuracies": 0.875,
272
+ "rewards/chosen": 6.459216594696045,
273
+ "rewards/margins": 3.9207711219787598,
274
+ "rewards/rejected": 2.5384464263916016,
275
+ "step": 19
276
+ },
277
+ {
278
+ "epoch": 0.02,
279
+ "learning_rate": 1.956521739130435e-05,
280
+ "logits/chosen": -1.7773098945617676,
281
+ "logits/rejected": -2.178387403488159,
282
+ "logps/chosen": -357.8656921386719,
283
+ "logps/rejected": -112.65576934814453,
284
+ "loss": 0.3837,
285
+ "rewards/accuracies": 0.9375,
286
+ "rewards/chosen": 4.684590816497803,
287
+ "rewards/margins": 4.16137170791626,
288
+ "rewards/rejected": 0.5232191681861877,
289
+ "step": 20
290
+ },
291
+ {
292
+ "epoch": 0.02,
293
+ "learning_rate": 2.0543478260869567e-05,
294
+ "logits/chosen": -1.6802027225494385,
295
+ "logits/rejected": -1.943274974822998,
296
+ "logps/chosen": -327.85693359375,
297
+ "logps/rejected": -87.53279113769531,
298
+ "loss": 0.4428,
299
+ "rewards/accuracies": 0.9375,
300
+ "rewards/chosen": 2.681103229522705,
301
+ "rewards/margins": 2.3518502712249756,
302
+ "rewards/rejected": 0.32925307750701904,
303
+ "step": 21
304
+ },
305
+ {
306
+ "epoch": 0.02,
307
+ "learning_rate": 2.1521739130434784e-05,
308
+ "logits/chosen": -1.5789223909378052,
309
+ "logits/rejected": -2.4593281745910645,
310
+ "logps/chosen": -590.5732421875,
311
+ "logps/rejected": -86.78819274902344,
312
+ "loss": 0.252,
313
+ "rewards/accuracies": 0.875,
314
+ "rewards/chosen": 3.6503655910491943,
315
+ "rewards/margins": 3.621542453765869,
316
+ "rewards/rejected": 0.028822531923651695,
317
+ "step": 22
318
+ },
319
+ {
320
+ "epoch": 0.03,
321
+ "learning_rate": 2.25e-05,
322
+ "logits/chosen": -1.8142305612564087,
323
+ "logits/rejected": -2.3507237434387207,
324
+ "logps/chosen": -296.42266845703125,
325
+ "logps/rejected": -60.37131118774414,
326
+ "loss": 0.5602,
327
+ "rewards/accuracies": 0.75,
328
+ "rewards/chosen": 2.170171022415161,
329
+ "rewards/margins": 2.5353941917419434,
330
+ "rewards/rejected": -0.36522313952445984,
331
+ "step": 23
332
+ },
333
+ {
334
+ "epoch": 0.03,
335
+ "learning_rate": 2.347826086956522e-05,
336
+ "logits/chosen": -1.7050156593322754,
337
+ "logits/rejected": -2.158708333969116,
338
+ "logps/chosen": -523.41748046875,
339
+ "logps/rejected": -81.6954345703125,
340
+ "loss": 0.7359,
341
+ "rewards/accuracies": 0.9375,
342
+ "rewards/chosen": 2.3799068927764893,
343
+ "rewards/margins": 2.963341236114502,
344
+ "rewards/rejected": -0.583433985710144,
345
+ "step": 24
346
+ },
347
+ {
348
+ "epoch": 0.03,
349
+ "learning_rate": 2.4456521739130436e-05,
350
+ "logits/chosen": -1.704190731048584,
351
+ "logits/rejected": -2.215942859649658,
352
+ "logps/chosen": -457.6519775390625,
353
+ "logps/rejected": -91.778076171875,
354
+ "loss": 0.0865,
355
+ "rewards/accuracies": 1.0,
356
+ "rewards/chosen": 4.57167911529541,
357
+ "rewards/margins": 4.6597795486450195,
358
+ "rewards/rejected": -0.08810039609670639,
359
+ "step": 25
360
+ },
361
+ {
362
+ "epoch": 0.03,
363
+ "learning_rate": 2.5434782608695653e-05,
364
+ "logits/chosen": -1.4195502996444702,
365
+ "logits/rejected": -2.109910488128662,
366
+ "logps/chosen": -445.05908203125,
367
+ "logps/rejected": -80.95398712158203,
368
+ "loss": 0.1265,
369
+ "rewards/accuracies": 0.9375,
370
+ "rewards/chosen": 5.362960338592529,
371
+ "rewards/margins": 5.412578582763672,
372
+ "rewards/rejected": -0.049618080258369446,
373
+ "step": 26
374
+ },
375
+ {
376
+ "epoch": 0.03,
377
+ "learning_rate": 2.6413043478260874e-05,
378
+ "logits/chosen": -1.0274971723556519,
379
+ "logits/rejected": -1.7681918144226074,
380
+ "logps/chosen": -393.85870361328125,
381
+ "logps/rejected": -77.79011535644531,
382
+ "loss": 0.1702,
383
+ "rewards/accuracies": 1.0,
384
+ "rewards/chosen": 4.572484016418457,
385
+ "rewards/margins": 4.946743488311768,
386
+ "rewards/rejected": -0.374259889125824,
387
+ "step": 27
388
+ },
389
+ {
390
+ "epoch": 0.03,
391
+ "learning_rate": 2.739130434782609e-05,
392
+ "logits/chosen": -0.7365273237228394,
393
+ "logits/rejected": -1.9988534450531006,
394
+ "logps/chosen": -569.8936767578125,
395
+ "logps/rejected": -79.71562194824219,
396
+ "loss": 0.0465,
397
+ "rewards/accuracies": 1.0,
398
+ "rewards/chosen": 12.751686096191406,
399
+ "rewards/margins": 12.474647521972656,
400
+ "rewards/rejected": 0.2770393192768097,
401
+ "step": 28
402
+ },
403
+ {
404
+ "epoch": 0.03,
405
+ "learning_rate": 2.836956521739131e-05,
406
+ "logits/chosen": -0.6424872279167175,
407
+ "logits/rejected": -1.6069284677505493,
408
+ "logps/chosen": -368.68292236328125,
409
+ "logps/rejected": -103.9141845703125,
410
+ "loss": 0.0351,
411
+ "rewards/accuracies": 1.0,
412
+ "rewards/chosen": 10.144453048706055,
413
+ "rewards/margins": 10.381340980529785,
414
+ "rewards/rejected": -0.23688830435276031,
415
+ "step": 29
416
+ },
417
+ {
418
+ "epoch": 0.03,
419
+ "learning_rate": 2.9347826086956523e-05,
420
+ "logits/chosen": -0.6061868071556091,
421
+ "logits/rejected": -1.4256294965744019,
422
+ "logps/chosen": -360.4029846191406,
423
+ "logps/rejected": -130.27224731445312,
424
+ "loss": 0.0414,
425
+ "rewards/accuracies": 1.0,
426
+ "rewards/chosen": 12.649539947509766,
427
+ "rewards/margins": 12.36604118347168,
428
+ "rewards/rejected": 0.2834990620613098,
429
+ "step": 30
430
+ },
431
+ {
432
+ "epoch": 0.03,
433
+ "learning_rate": 3.032608695652174e-05,
434
+ "logits/chosen": -1.0075111389160156,
435
+ "logits/rejected": -1.3918204307556152,
436
+ "logps/chosen": -227.05450439453125,
437
+ "logps/rejected": -172.67947387695312,
438
+ "loss": 1.1145,
439
+ "rewards/accuracies": 0.875,
440
+ "rewards/chosen": 4.628115653991699,
441
+ "rewards/margins": 3.9521055221557617,
442
+ "rewards/rejected": 0.6760098934173584,
443
+ "step": 31
444
+ },
445
+ {
446
+ "epoch": 0.03,
447
+ "learning_rate": 3.130434782608696e-05,
448
+ "logits/chosen": -0.9060839414596558,
449
+ "logits/rejected": -2.0123891830444336,
450
+ "logps/chosen": -403.06781005859375,
451
+ "logps/rejected": -75.35684967041016,
452
+ "loss": 0.0319,
453
+ "rewards/accuracies": 1.0,
454
+ "rewards/chosen": 7.352475643157959,
455
+ "rewards/margins": 8.778717041015625,
456
+ "rewards/rejected": -1.4262410402297974,
457
+ "step": 32
458
+ },
459
+ {
460
+ "epoch": 0.04,
461
+ "learning_rate": 3.228260869565217e-05,
462
+ "logits/chosen": -1.2984271049499512,
463
+ "logits/rejected": -1.9507720470428467,
464
+ "logps/chosen": -306.7958984375,
465
+ "logps/rejected": -107.01404571533203,
466
+ "loss": 0.0795,
467
+ "rewards/accuracies": 0.9375,
468
+ "rewards/chosen": 5.223363399505615,
469
+ "rewards/margins": 7.012631416320801,
470
+ "rewards/rejected": -1.7892680168151855,
471
+ "step": 33
472
+ },
473
+ {
474
+ "epoch": 0.04,
475
+ "learning_rate": 3.326086956521739e-05,
476
+ "logits/chosen": -1.1749062538146973,
477
+ "logits/rejected": -2.004805564880371,
478
+ "logps/chosen": -437.9205322265625,
479
+ "logps/rejected": -135.21376037597656,
480
+ "loss": 0.0021,
481
+ "rewards/accuracies": 1.0,
482
+ "rewards/chosen": 9.3555908203125,
483
+ "rewards/margins": 12.285521507263184,
484
+ "rewards/rejected": -2.9299299716949463,
485
+ "step": 34
486
+ },
487
+ {
488
+ "epoch": 0.04,
489
+ "learning_rate": 3.423913043478261e-05,
490
+ "logits/chosen": -0.9383536577224731,
491
+ "logits/rejected": -2.0366413593292236,
492
+ "logps/chosen": -457.8018493652344,
493
+ "logps/rejected": -87.7723617553711,
494
+ "loss": 0.041,
495
+ "rewards/accuracies": 1.0,
496
+ "rewards/chosen": 6.836911678314209,
497
+ "rewards/margins": 9.43667984008789,
498
+ "rewards/rejected": -2.599769115447998,
499
+ "step": 35
500
+ },
501
+ {
502
+ "epoch": 0.04,
503
+ "learning_rate": 3.521739130434783e-05,
504
+ "logits/chosen": -1.120687484741211,
505
+ "logits/rejected": -1.9933563470840454,
506
+ "logps/chosen": -439.6186828613281,
507
+ "logps/rejected": -94.24763488769531,
508
+ "loss": 0.1906,
509
+ "rewards/accuracies": 0.9375,
510
+ "rewards/chosen": 6.565365314483643,
511
+ "rewards/margins": 9.163484573364258,
512
+ "rewards/rejected": -2.5981192588806152,
513
+ "step": 36
514
+ },
515
+ {
516
+ "epoch": 0.04,
517
+ "learning_rate": 3.619565217391305e-05,
518
+ "logits/chosen": -1.338189959526062,
519
+ "logits/rejected": -1.9118335247039795,
520
+ "logps/chosen": -286.96728515625,
521
+ "logps/rejected": -127.22909545898438,
522
+ "loss": 0.0279,
523
+ "rewards/accuracies": 1.0,
524
+ "rewards/chosen": 2.9341468811035156,
525
+ "rewards/margins": 6.362301349639893,
526
+ "rewards/rejected": -3.4281537532806396,
527
+ "step": 37
528
+ },
529
+ {
530
+ "epoch": 0.04,
531
+ "learning_rate": 3.7173913043478264e-05,
532
+ "logits/chosen": -1.2371840476989746,
533
+ "logits/rejected": -1.7362310886383057,
534
+ "logps/chosen": -319.52911376953125,
535
+ "logps/rejected": -151.65084838867188,
536
+ "loss": 0.0939,
537
+ "rewards/accuracies": 0.9375,
538
+ "rewards/chosen": 6.216956615447998,
539
+ "rewards/margins": 9.073128700256348,
540
+ "rewards/rejected": -2.856172561645508,
541
+ "step": 38
542
+ },
543
+ {
544
+ "epoch": 0.04,
545
+ "learning_rate": 3.815217391304348e-05,
546
+ "logits/chosen": -0.8356418609619141,
547
+ "logits/rejected": -1.7252635955810547,
548
+ "logps/chosen": -391.63275146484375,
549
+ "logps/rejected": -154.64361572265625,
550
+ "loss": 0.2483,
551
+ "rewards/accuracies": 0.9375,
552
+ "rewards/chosen": 8.802470207214355,
553
+ "rewards/margins": 11.676048278808594,
554
+ "rewards/rejected": -2.873577356338501,
555
+ "step": 39
556
+ },
557
+ {
558
+ "epoch": 0.04,
559
+ "learning_rate": 3.91304347826087e-05,
560
+ "logits/chosen": -0.9311200380325317,
561
+ "logits/rejected": -1.8107151985168457,
562
+ "logps/chosen": -374.2525939941406,
563
+ "logps/rejected": -131.86715698242188,
564
+ "loss": 0.0058,
565
+ "rewards/accuracies": 1.0,
566
+ "rewards/chosen": 8.586356163024902,
567
+ "rewards/margins": 11.15583324432373,
568
+ "rewards/rejected": -2.569479465484619,
569
+ "step": 40
570
+ },
571
+ {
572
+ "epoch": 0.04,
573
+ "learning_rate": 4.0108695652173916e-05,
574
+ "logits/chosen": -0.6861732006072998,
575
+ "logits/rejected": -1.6295411586761475,
576
+ "logps/chosen": -311.7944641113281,
577
+ "logps/rejected": -132.4564666748047,
578
+ "loss": 0.0124,
579
+ "rewards/accuracies": 1.0,
580
+ "rewards/chosen": 5.333560943603516,
581
+ "rewards/margins": 10.023443222045898,
582
+ "rewards/rejected": -4.689882755279541,
583
+ "step": 41
584
+ },
585
+ {
586
+ "epoch": 0.05,
587
+ "learning_rate": 4.1086956521739134e-05,
588
+ "logits/chosen": -1.1133558750152588,
589
+ "logits/rejected": -2.0819039344787598,
590
+ "logps/chosen": -282.22332763671875,
591
+ "logps/rejected": -98.7315673828125,
592
+ "loss": 0.0156,
593
+ "rewards/accuracies": 1.0,
594
+ "rewards/chosen": 4.930261611938477,
595
+ "rewards/margins": 8.344033241271973,
596
+ "rewards/rejected": -3.413771629333496,
597
+ "step": 42
598
+ },
599
+ {
600
+ "epoch": 0.05,
601
+ "learning_rate": 4.206521739130435e-05,
602
+ "logits/chosen": -1.0311042070388794,
603
+ "logits/rejected": -2.0758776664733887,
604
+ "logps/chosen": -369.7279052734375,
605
+ "logps/rejected": -146.08364868164062,
606
+ "loss": 0.0127,
607
+ "rewards/accuracies": 1.0,
608
+ "rewards/chosen": 7.7552289962768555,
609
+ "rewards/margins": 13.249262809753418,
610
+ "rewards/rejected": -5.494032859802246,
611
+ "step": 43
612
+ },
613
+ {
614
+ "epoch": 0.05,
615
+ "learning_rate": 4.304347826086957e-05,
616
+ "logits/chosen": -1.0191632509231567,
617
+ "logits/rejected": -1.995982050895691,
618
+ "logps/chosen": -367.692626953125,
619
+ "logps/rejected": -152.65135192871094,
620
+ "loss": 0.0004,
621
+ "rewards/accuracies": 1.0,
622
+ "rewards/chosen": 7.9517822265625,
623
+ "rewards/margins": 13.869260787963867,
624
+ "rewards/rejected": -5.917478084564209,
625
+ "step": 44
626
+ },
627
+ {
628
+ "epoch": 0.05,
629
+ "learning_rate": 4.4021739130434786e-05,
630
+ "logits/chosen": -1.3430471420288086,
631
+ "logits/rejected": -2.0910849571228027,
632
+ "logps/chosen": -334.8968811035156,
633
+ "logps/rejected": -168.4519500732422,
634
+ "loss": 0.0057,
635
+ "rewards/accuracies": 1.0,
636
+ "rewards/chosen": 5.113959312438965,
637
+ "rewards/margins": 12.524876594543457,
638
+ "rewards/rejected": -7.41091775894165,
639
+ "step": 45
640
+ },
641
+ {
642
+ "epoch": 0.05,
643
+ "learning_rate": 4.5e-05,
644
+ "logits/chosen": -0.8516018986701965,
645
+ "logits/rejected": -1.9934579133987427,
646
+ "logps/chosen": -386.7164001464844,
647
+ "logps/rejected": -152.8460235595703,
648
+ "loss": 0.0253,
649
+ "rewards/accuracies": 1.0,
650
+ "rewards/chosen": 7.0803022384643555,
651
+ "rewards/margins": 13.202106475830078,
652
+ "rewards/rejected": -6.121803283691406,
653
+ "step": 46
654
+ },
655
+ {
656
+ "epoch": 0.05,
657
+ "learning_rate": 4.494827586206897e-05,
658
+ "logits/chosen": -0.9460695385932922,
659
+ "logits/rejected": -1.9993284940719604,
660
+ "logps/chosen": -355.2873229980469,
661
+ "logps/rejected": -188.5989532470703,
662
+ "loss": 0.1008,
663
+ "rewards/accuracies": 0.9375,
664
+ "rewards/chosen": 6.138028144836426,
665
+ "rewards/margins": 13.93982982635498,
666
+ "rewards/rejected": -7.801800727844238,
667
+ "step": 47
668
+ },
669
+ {
670
+ "epoch": 0.05,
671
+ "learning_rate": 4.489655172413793e-05,
672
+ "logits/chosen": -1.212510108947754,
673
+ "logits/rejected": -2.0202791690826416,
674
+ "logps/chosen": -293.0318908691406,
675
+ "logps/rejected": -194.92649841308594,
676
+ "loss": 0.0058,
677
+ "rewards/accuracies": 1.0,
678
+ "rewards/chosen": 6.742659091949463,
679
+ "rewards/margins": 17.209505081176758,
680
+ "rewards/rejected": -10.466848373413086,
681
+ "step": 48
682
+ },
683
+ {
684
+ "epoch": 0.05,
685
+ "learning_rate": 4.48448275862069e-05,
686
+ "logits/chosen": -1.388157606124878,
687
+ "logits/rejected": -2.1885251998901367,
688
+ "logps/chosen": -330.6868591308594,
689
+ "logps/rejected": -214.35423278808594,
690
+ "loss": 0.0003,
691
+ "rewards/accuracies": 1.0,
692
+ "rewards/chosen": 5.653369903564453,
693
+ "rewards/margins": 18.257244110107422,
694
+ "rewards/rejected": -12.603873252868652,
695
+ "step": 49
696
+ },
697
+ {
698
+ "epoch": 0.05,
699
+ "learning_rate": 4.479310344827587e-05,
700
+ "logits/chosen": -1.020527958869934,
701
+ "logits/rejected": -1.9433815479278564,
702
+ "logps/chosen": -347.48394775390625,
703
+ "logps/rejected": -191.15147399902344,
704
+ "loss": 0.0001,
705
+ "rewards/accuracies": 1.0,
706
+ "rewards/chosen": 7.30579137802124,
707
+ "rewards/margins": 18.734811782836914,
708
+ "rewards/rejected": -11.429021835327148,
709
+ "step": 50
710
+ },
711
+ {
712
+ "epoch": 0.06,
713
+ "learning_rate": 4.474137931034483e-05,
714
+ "logits/chosen": -1.2260757684707642,
715
+ "logits/rejected": -2.0841352939605713,
716
+ "logps/chosen": -243.5459747314453,
717
+ "logps/rejected": -188.00442504882812,
718
+ "loss": 0.0002,
719
+ "rewards/accuracies": 1.0,
720
+ "rewards/chosen": 3.675260543823242,
721
+ "rewards/margins": 14.774760246276855,
722
+ "rewards/rejected": -11.099499702453613,
723
+ "step": 51
724
+ },
725
+ {
726
+ "epoch": 0.06,
727
+ "learning_rate": 4.46896551724138e-05,
728
+ "logits/chosen": -1.0378711223602295,
729
+ "logits/rejected": -2.0174639225006104,
730
+ "logps/chosen": -375.9520263671875,
731
+ "logps/rejected": -182.97463989257812,
732
+ "loss": 0.0,
733
+ "rewards/accuracies": 1.0,
734
+ "rewards/chosen": 7.18237829208374,
735
+ "rewards/margins": 19.010208129882812,
736
+ "rewards/rejected": -11.82783317565918,
737
+ "step": 52
738
+ },
739
+ {
740
+ "epoch": 0.06,
741
+ "learning_rate": 4.4637931034482765e-05,
742
+ "logits/chosen": -1.1728301048278809,
743
+ "logits/rejected": -1.9667140245437622,
744
+ "logps/chosen": -309.5382995605469,
745
+ "logps/rejected": -241.13743591308594,
746
+ "loss": 0.0,
747
+ "rewards/accuracies": 1.0,
748
+ "rewards/chosen": 6.681103706359863,
749
+ "rewards/margins": 21.542236328125,
750
+ "rewards/rejected": -14.861133575439453,
751
+ "step": 53
752
+ },
753
+ {
754
+ "epoch": 0.06,
755
+ "learning_rate": 4.4586206896551726e-05,
756
+ "logits/chosen": -1.1491725444793701,
757
+ "logits/rejected": -1.760243535041809,
758
+ "logps/chosen": -250.09036254882812,
759
+ "logps/rejected": -226.221435546875,
760
+ "loss": 0.0001,
761
+ "rewards/accuracies": 1.0,
762
+ "rewards/chosen": 2.3173880577087402,
763
+ "rewards/margins": 16.417604446411133,
764
+ "rewards/rejected": -14.100215911865234,
765
+ "step": 54
766
+ },
767
+ {
768
+ "epoch": 0.06,
769
+ "learning_rate": 4.4534482758620694e-05,
770
+ "logits/chosen": -1.2637637853622437,
771
+ "logits/rejected": -1.8996949195861816,
772
+ "logps/chosen": -401.9977722167969,
773
+ "logps/rejected": -303.9198303222656,
774
+ "loss": 0.0011,
775
+ "rewards/accuracies": 1.0,
776
+ "rewards/chosen": 8.595758438110352,
777
+ "rewards/margins": 26.598325729370117,
778
+ "rewards/rejected": -18.002567291259766,
779
+ "step": 55
780
+ },
781
+ {
782
+ "epoch": 0.06,
783
+ "learning_rate": 4.4482758620689656e-05,
784
+ "logits/chosen": -0.9855178594589233,
785
+ "logits/rejected": -2.0190346240997314,
786
+ "logps/chosen": -394.40625,
787
+ "logps/rejected": -253.97720336914062,
788
+ "loss": 0.0,
789
+ "rewards/accuracies": 1.0,
790
+ "rewards/chosen": 6.395060062408447,
791
+ "rewards/margins": 23.680639266967773,
792
+ "rewards/rejected": -17.28557777404785,
793
+ "step": 56
794
+ },
795
+ {
796
+ "epoch": 0.06,
797
+ "learning_rate": 4.4431034482758624e-05,
798
+ "logits/chosen": -1.049959421157837,
799
+ "logits/rejected": -1.6531357765197754,
800
+ "logps/chosen": -205.54058837890625,
801
+ "logps/rejected": -250.66958618164062,
802
+ "loss": 0.0,
803
+ "rewards/accuracies": 1.0,
804
+ "rewards/chosen": 4.874389171600342,
805
+ "rewards/margins": 20.083953857421875,
806
+ "rewards/rejected": -15.209564208984375,
807
+ "step": 57
808
+ },
809
+ {
810
+ "epoch": 0.06,
811
+ "learning_rate": 4.4379310344827585e-05,
812
+ "logits/chosen": -0.8982111811637878,
813
+ "logits/rejected": -1.6001371145248413,
814
+ "logps/chosen": -241.91790771484375,
815
+ "logps/rejected": -252.10299682617188,
816
+ "loss": 0.0,
817
+ "rewards/accuracies": 1.0,
818
+ "rewards/chosen": 6.044814109802246,
819
+ "rewards/margins": 21.77715301513672,
820
+ "rewards/rejected": -15.732340812683105,
821
+ "step": 58
822
+ },
823
+ {
824
+ "epoch": 0.06,
825
+ "learning_rate": 4.432758620689655e-05,
826
+ "logits/chosen": -0.753044068813324,
827
+ "logits/rejected": -1.6325318813323975,
828
+ "logps/chosen": -315.30230712890625,
829
+ "logps/rejected": -245.91976928710938,
830
+ "loss": 0.0078,
831
+ "rewards/accuracies": 1.0,
832
+ "rewards/chosen": 5.801758289337158,
833
+ "rewards/margins": 20.689924240112305,
834
+ "rewards/rejected": -14.888164520263672,
835
+ "step": 59
836
+ },
837
+ {
838
+ "epoch": 0.07,
839
+ "learning_rate": 4.427586206896552e-05,
840
+ "logits/chosen": -0.5443448424339294,
841
+ "logits/rejected": -1.7224345207214355,
842
+ "logps/chosen": -388.339599609375,
843
+ "logps/rejected": -211.5500946044922,
844
+ "loss": 0.0,
845
+ "rewards/accuracies": 1.0,
846
+ "rewards/chosen": 11.5880708694458,
847
+ "rewards/margins": 24.889877319335938,
848
+ "rewards/rejected": -13.301810264587402,
849
+ "step": 60
850
+ },
851
+ {
852
+ "epoch": 0.07,
853
+ "learning_rate": 4.422413793103448e-05,
854
+ "logits/chosen": -0.6921290755271912,
855
+ "logits/rejected": -1.6139135360717773,
856
+ "logps/chosen": -242.47312927246094,
857
+ "logps/rejected": -214.73622131347656,
858
+ "loss": 0.0,
859
+ "rewards/accuracies": 1.0,
860
+ "rewards/chosen": 11.329044342041016,
861
+ "rewards/margins": 21.45581817626953,
862
+ "rewards/rejected": -10.1267728805542,
863
+ "step": 61
864
+ },
865
+ {
866
+ "epoch": 0.07,
867
+ "learning_rate": 4.417241379310345e-05,
868
+ "logits/chosen": -0.4828271269798279,
869
+ "logits/rejected": -1.649349570274353,
870
+ "logps/chosen": -306.6004638671875,
871
+ "logps/rejected": -172.2523956298828,
872
+ "loss": 0.0003,
873
+ "rewards/accuracies": 1.0,
874
+ "rewards/chosen": 11.373737335205078,
875
+ "rewards/margins": 20.521434783935547,
876
+ "rewards/rejected": -9.147698402404785,
877
+ "step": 62
878
+ },
879
+ {
880
+ "epoch": 0.07,
881
+ "learning_rate": 4.412068965517242e-05,
882
+ "logits/chosen": -0.4676312506198883,
883
+ "logits/rejected": -1.6393827199935913,
884
+ "logps/chosen": -357.127685546875,
885
+ "logps/rejected": -199.84524536132812,
886
+ "loss": 0.0,
887
+ "rewards/accuracies": 1.0,
888
+ "rewards/chosen": 16.730457305908203,
889
+ "rewards/margins": 26.409488677978516,
890
+ "rewards/rejected": -9.679031372070312,
891
+ "step": 63
892
+ },
893
+ {
894
+ "epoch": 0.07,
895
+ "learning_rate": 4.406896551724138e-05,
896
+ "logits/chosen": -0.7774850726127625,
897
+ "logits/rejected": -1.324657678604126,
898
+ "logps/chosen": -193.75833129882812,
899
+ "logps/rejected": -194.70233154296875,
900
+ "loss": 0.0015,
901
+ "rewards/accuracies": 1.0,
902
+ "rewards/chosen": 5.371121406555176,
903
+ "rewards/margins": 15.129293441772461,
904
+ "rewards/rejected": -9.758172988891602,
905
+ "step": 64
906
+ },
907
+ {
908
+ "epoch": 0.07,
909
+ "learning_rate": 4.401724137931035e-05,
910
+ "logits/chosen": -0.5502160787582397,
911
+ "logits/rejected": -1.466262936592102,
912
+ "logps/chosen": -240.96836853027344,
913
+ "logps/rejected": -174.6772003173828,
914
+ "loss": 0.0,
915
+ "rewards/accuracies": 1.0,
916
+ "rewards/chosen": 7.808657169342041,
917
+ "rewards/margins": 17.520769119262695,
918
+ "rewards/rejected": -9.712109565734863,
919
+ "step": 65
920
+ },
921
+ {
922
+ "epoch": 0.07,
923
+ "learning_rate": 4.3965517241379315e-05,
924
+ "logits/chosen": -0.5615445375442505,
925
+ "logits/rejected": -1.3243337869644165,
926
+ "logps/chosen": -282.65484619140625,
927
+ "logps/rejected": -211.33326721191406,
928
+ "loss": 0.4403,
929
+ "rewards/accuracies": 0.9375,
930
+ "rewards/chosen": 9.213937759399414,
931
+ "rewards/margins": 18.93706512451172,
932
+ "rewards/rejected": -9.723125457763672,
933
+ "step": 66
934
+ },
935
+ {
936
+ "epoch": 0.07,
937
+ "learning_rate": 4.3913793103448277e-05,
938
+ "logits/chosen": -0.5882927775382996,
939
+ "logits/rejected": -1.4038889408111572,
940
+ "logps/chosen": -244.26333618164062,
941
+ "logps/rejected": -192.05517578125,
942
+ "loss": 0.0007,
943
+ "rewards/accuracies": 1.0,
944
+ "rewards/chosen": 5.999594688415527,
945
+ "rewards/margins": 16.232702255249023,
946
+ "rewards/rejected": -10.233107566833496,
947
+ "step": 67
948
+ },
949
+ {
950
+ "epoch": 0.07,
951
+ "learning_rate": 4.3862068965517245e-05,
952
+ "logits/chosen": -0.7710579633712769,
953
+ "logits/rejected": -1.5811477899551392,
954
+ "logps/chosen": -274.4286804199219,
955
+ "logps/rejected": -189.31427001953125,
956
+ "loss": 0.0003,
957
+ "rewards/accuracies": 1.0,
958
+ "rewards/chosen": 10.483807563781738,
959
+ "rewards/margins": 22.807409286499023,
960
+ "rewards/rejected": -12.323600769042969,
961
+ "step": 68
962
+ },
963
+ {
964
+ "epoch": 0.08,
965
+ "learning_rate": 4.381034482758621e-05,
966
+ "logits/chosen": -0.2001597136259079,
967
+ "logits/rejected": -1.5477687120437622,
968
+ "logps/chosen": -398.04296875,
969
+ "logps/rejected": -208.8218536376953,
970
+ "loss": 0.0,
971
+ "rewards/accuracies": 1.0,
972
+ "rewards/chosen": 11.122913360595703,
973
+ "rewards/margins": 23.094343185424805,
974
+ "rewards/rejected": -11.971429824829102,
975
+ "step": 69
976
+ },
977
+ {
978
+ "epoch": 0.08,
979
+ "learning_rate": 4.3758620689655174e-05,
980
+ "logits/chosen": -0.5891858339309692,
981
+ "logits/rejected": -1.500455379486084,
982
+ "logps/chosen": -223.07235717773438,
983
+ "logps/rejected": -219.37405395507812,
984
+ "loss": 0.0,
985
+ "rewards/accuracies": 1.0,
986
+ "rewards/chosen": 7.498288154602051,
987
+ "rewards/margins": 19.65944480895996,
988
+ "rewards/rejected": -12.16115665435791,
989
+ "step": 70
990
+ },
991
+ {
992
+ "epoch": 0.08,
993
+ "learning_rate": 4.370689655172414e-05,
994
+ "logits/chosen": -0.5222135186195374,
995
+ "logits/rejected": -1.459835410118103,
996
+ "logps/chosen": -264.3045349121094,
997
+ "logps/rejected": -184.4599609375,
998
+ "loss": 0.0006,
999
+ "rewards/accuracies": 1.0,
1000
+ "rewards/chosen": 7.261447429656982,
1001
+ "rewards/margins": 17.37218475341797,
1002
+ "rewards/rejected": -10.110737800598145,
1003
+ "step": 71
1004
+ },
1005
+ {
1006
+ "epoch": 0.08,
1007
+ "learning_rate": 4.365517241379311e-05,
1008
+ "logits/chosen": -0.4974746108055115,
1009
+ "logits/rejected": -1.5458664894104004,
1010
+ "logps/chosen": -272.7602233886719,
1011
+ "logps/rejected": -181.00247192382812,
1012
+ "loss": 0.0,
1013
+ "rewards/accuracies": 1.0,
1014
+ "rewards/chosen": 10.925564765930176,
1015
+ "rewards/margins": 22.81020736694336,
1016
+ "rewards/rejected": -11.88464069366455,
1017
+ "step": 72
1018
+ },
1019
+ {
1020
+ "epoch": 0.08,
1021
+ "learning_rate": 4.360344827586207e-05,
1022
+ "logits/chosen": -0.3357059359550476,
1023
+ "logits/rejected": -1.4441150426864624,
1024
+ "logps/chosen": -414.8647766113281,
1025
+ "logps/rejected": -223.109375,
1026
+ "loss": 0.0004,
1027
+ "rewards/accuracies": 1.0,
1028
+ "rewards/chosen": 13.181656837463379,
1029
+ "rewards/margins": 27.527368545532227,
1030
+ "rewards/rejected": -14.345712661743164,
1031
+ "step": 73
1032
+ },
1033
+ {
1034
+ "epoch": 0.08,
1035
+ "learning_rate": 4.355172413793104e-05,
1036
+ "logits/chosen": -0.6531690359115601,
1037
+ "logits/rejected": -1.4611525535583496,
1038
+ "logps/chosen": -221.1251220703125,
1039
+ "logps/rejected": -226.30352783203125,
1040
+ "loss": 0.0,
1041
+ "rewards/accuracies": 1.0,
1042
+ "rewards/chosen": 7.807477951049805,
1043
+ "rewards/margins": 21.502174377441406,
1044
+ "rewards/rejected": -13.694696426391602,
1045
+ "step": 74
1046
+ },
1047
+ {
1048
+ "epoch": 0.08,
1049
+ "learning_rate": 4.35e-05,
1050
+ "logits/chosen": -0.5135932564735413,
1051
+ "logits/rejected": -1.4995931386947632,
1052
+ "logps/chosen": -332.56842041015625,
1053
+ "logps/rejected": -270.9891662597656,
1054
+ "loss": 0.0,
1055
+ "rewards/accuracies": 1.0,
1056
+ "rewards/chosen": 12.301117897033691,
1057
+ "rewards/margins": 26.830467224121094,
1058
+ "rewards/rejected": -14.529346466064453,
1059
+ "step": 75
1060
+ },
1061
+ {
1062
+ "epoch": 0.08,
1063
+ "learning_rate": 4.344827586206897e-05,
1064
+ "logits/chosen": -0.3770020604133606,
1065
+ "logits/rejected": -1.44219970703125,
1066
+ "logps/chosen": -317.96240234375,
1067
+ "logps/rejected": -250.3682098388672,
1068
+ "loss": 0.0,
1069
+ "rewards/accuracies": 1.0,
1070
+ "rewards/chosen": 7.8860931396484375,
1071
+ "rewards/margins": 23.53211784362793,
1072
+ "rewards/rejected": -15.646024703979492,
1073
+ "step": 76
1074
+ },
1075
+ {
1076
+ "epoch": 0.08,
1077
+ "learning_rate": 4.339655172413793e-05,
1078
+ "logits/chosen": -0.49628371000289917,
1079
+ "logits/rejected": -1.4045476913452148,
1080
+ "logps/chosen": -367.6162414550781,
1081
+ "logps/rejected": -265.2467346191406,
1082
+ "loss": 0.0,
1083
+ "rewards/accuracies": 1.0,
1084
+ "rewards/chosen": 10.148900985717773,
1085
+ "rewards/margins": 24.313669204711914,
1086
+ "rewards/rejected": -14.164766311645508,
1087
+ "step": 77
1088
+ },
1089
+ {
1090
+ "epoch": 0.09,
1091
+ "learning_rate": 4.33448275862069e-05,
1092
+ "logits/chosen": -0.5287263989448547,
1093
+ "logits/rejected": -1.3665097951889038,
1094
+ "logps/chosen": -341.87664794921875,
1095
+ "logps/rejected": -249.6795654296875,
1096
+ "loss": 0.1517,
1097
+ "rewards/accuracies": 0.9375,
1098
+ "rewards/chosen": 8.620326042175293,
1099
+ "rewards/margins": 19.850658416748047,
1100
+ "rewards/rejected": -11.230331420898438,
1101
+ "step": 78
1102
+ },
1103
+ {
1104
+ "epoch": 0.09,
1105
+ "learning_rate": 4.3293103448275865e-05,
1106
+ "logits/chosen": -0.2743966281414032,
1107
+ "logits/rejected": -1.4255130290985107,
1108
+ "logps/chosen": -334.3340759277344,
1109
+ "logps/rejected": -265.15032958984375,
1110
+ "loss": 0.0,
1111
+ "rewards/accuracies": 1.0,
1112
+ "rewards/chosen": 10.872364044189453,
1113
+ "rewards/margins": 27.00977325439453,
1114
+ "rewards/rejected": -16.13741111755371,
1115
+ "step": 79
1116
+ },
1117
+ {
1118
+ "epoch": 0.09,
1119
+ "learning_rate": 4.324137931034483e-05,
1120
+ "logits/chosen": -0.5563563108444214,
1121
+ "logits/rejected": -1.6934064626693726,
1122
+ "logps/chosen": -408.9612731933594,
1123
+ "logps/rejected": -220.00091552734375,
1124
+ "loss": 0.0,
1125
+ "rewards/accuracies": 1.0,
1126
+ "rewards/chosen": 10.237764358520508,
1127
+ "rewards/margins": 24.824390411376953,
1128
+ "rewards/rejected": -14.586627960205078,
1129
+ "step": 80
1130
+ },
1131
+ {
1132
+ "epoch": 0.09,
1133
+ "learning_rate": 4.3189655172413795e-05,
1134
+ "logits/chosen": -0.617451548576355,
1135
+ "logits/rejected": -1.640676736831665,
1136
+ "logps/chosen": -363.1540222167969,
1137
+ "logps/rejected": -226.42376708984375,
1138
+ "loss": 0.0,
1139
+ "rewards/accuracies": 1.0,
1140
+ "rewards/chosen": 10.672024726867676,
1141
+ "rewards/margins": 24.347591400146484,
1142
+ "rewards/rejected": -13.675565719604492,
1143
+ "step": 81
1144
+ },
1145
+ {
1146
+ "epoch": 0.09,
1147
+ "learning_rate": 4.313793103448276e-05,
1148
+ "logits/chosen": -0.45464372634887695,
1149
+ "logits/rejected": -1.4708175659179688,
1150
+ "logps/chosen": -328.35675048828125,
1151
+ "logps/rejected": -223.79766845703125,
1152
+ "loss": 0.0,
1153
+ "rewards/accuracies": 1.0,
1154
+ "rewards/chosen": 6.965122699737549,
1155
+ "rewards/margins": 21.22077178955078,
1156
+ "rewards/rejected": -14.25564956665039,
1157
+ "step": 82
1158
+ },
1159
+ {
1160
+ "epoch": 0.09,
1161
+ "learning_rate": 4.3086206896551724e-05,
1162
+ "logits/chosen": -0.6079340577125549,
1163
+ "logits/rejected": -1.5333082675933838,
1164
+ "logps/chosen": -219.4099578857422,
1165
+ "logps/rejected": -230.07948303222656,
1166
+ "loss": 0.0,
1167
+ "rewards/accuracies": 1.0,
1168
+ "rewards/chosen": 6.278050899505615,
1169
+ "rewards/margins": 20.45656967163086,
1170
+ "rewards/rejected": -14.178520202636719,
1171
+ "step": 83
1172
+ },
1173
+ {
1174
+ "epoch": 0.09,
1175
+ "learning_rate": 4.303448275862069e-05,
1176
+ "logits/chosen": -0.4874611794948578,
1177
+ "logits/rejected": -1.5135740041732788,
1178
+ "logps/chosen": -304.00274658203125,
1179
+ "logps/rejected": -263.0120849609375,
1180
+ "loss": 0.0,
1181
+ "rewards/accuracies": 1.0,
1182
+ "rewards/chosen": 9.557435989379883,
1183
+ "rewards/margins": 24.41012954711914,
1184
+ "rewards/rejected": -14.852693557739258,
1185
+ "step": 84
1186
+ },
1187
+ {
1188
+ "epoch": 0.09,
1189
+ "learning_rate": 4.298275862068966e-05,
1190
+ "logits/chosen": -0.4802761375904083,
1191
+ "logits/rejected": -1.4294252395629883,
1192
+ "logps/chosen": -293.41192626953125,
1193
+ "logps/rejected": -283.2821350097656,
1194
+ "loss": 0.0,
1195
+ "rewards/accuracies": 1.0,
1196
+ "rewards/chosen": 7.370968818664551,
1197
+ "rewards/margins": 24.32880401611328,
1198
+ "rewards/rejected": -16.957834243774414,
1199
+ "step": 85
1200
+ },
1201
+ {
1202
+ "epoch": 0.09,
1203
+ "learning_rate": 4.293103448275862e-05,
1204
+ "logits/chosen": -0.3580012321472168,
1205
+ "logits/rejected": -1.499201774597168,
1206
+ "logps/chosen": -350.5351257324219,
1207
+ "logps/rejected": -244.6523895263672,
1208
+ "loss": 0.0001,
1209
+ "rewards/accuracies": 1.0,
1210
+ "rewards/chosen": 12.237674713134766,
1211
+ "rewards/margins": 26.58084487915039,
1212
+ "rewards/rejected": -14.343170166015625,
1213
+ "step": 86
1214
+ },
1215
+ {
1216
+ "epoch": 0.09,
1217
+ "learning_rate": 4.287931034482759e-05,
1218
+ "logits/chosen": -0.6917211413383484,
1219
+ "logits/rejected": -1.4503819942474365,
1220
+ "logps/chosen": -217.5175323486328,
1221
+ "logps/rejected": -228.9481201171875,
1222
+ "loss": 0.0,
1223
+ "rewards/accuracies": 1.0,
1224
+ "rewards/chosen": 6.635045051574707,
1225
+ "rewards/margins": 20.42833709716797,
1226
+ "rewards/rejected": -13.793292999267578,
1227
+ "step": 87
1228
+ },
1229
+ {
1230
+ "epoch": 0.1,
1231
+ "learning_rate": 4.282758620689656e-05,
1232
+ "logits/chosen": -0.8255922794342041,
1233
+ "logits/rejected": -1.4748644828796387,
1234
+ "logps/chosen": -217.336669921875,
1235
+ "logps/rejected": -206.8959197998047,
1236
+ "loss": 0.0005,
1237
+ "rewards/accuracies": 1.0,
1238
+ "rewards/chosen": 5.22888708114624,
1239
+ "rewards/margins": 19.673564910888672,
1240
+ "rewards/rejected": -14.444679260253906,
1241
+ "step": 88
1242
+ },
1243
+ {
1244
+ "epoch": 0.1,
1245
+ "learning_rate": 4.2775862068965525e-05,
1246
+ "logits/chosen": -0.626908004283905,
1247
+ "logits/rejected": -1.6663663387298584,
1248
+ "logps/chosen": -328.56231689453125,
1249
+ "logps/rejected": -235.42843627929688,
1250
+ "loss": 0.0,
1251
+ "rewards/accuracies": 1.0,
1252
+ "rewards/chosen": 6.891221046447754,
1253
+ "rewards/margins": 21.77893829345703,
1254
+ "rewards/rejected": -14.887716293334961,
1255
+ "step": 89
1256
+ },
1257
+ {
1258
+ "epoch": 0.1,
1259
+ "learning_rate": 4.2724137931034486e-05,
1260
+ "logits/chosen": -0.5689429640769958,
1261
+ "logits/rejected": -1.618609070777893,
1262
+ "logps/chosen": -298.08770751953125,
1263
+ "logps/rejected": -237.06094360351562,
1264
+ "loss": 0.0,
1265
+ "rewards/accuracies": 1.0,
1266
+ "rewards/chosen": 9.328791618347168,
1267
+ "rewards/margins": 24.572046279907227,
1268
+ "rewards/rejected": -15.243253707885742,
1269
+ "step": 90
1270
+ },
1271
+ {
1272
+ "epoch": 0.1,
1273
+ "learning_rate": 4.2672413793103454e-05,
1274
+ "logits/chosen": -0.5474239587783813,
1275
+ "logits/rejected": -1.6142076253890991,
1276
+ "logps/chosen": -295.00494384765625,
1277
+ "logps/rejected": -268.9505615234375,
1278
+ "loss": 0.0287,
1279
+ "rewards/accuracies": 1.0,
1280
+ "rewards/chosen": 9.715471267700195,
1281
+ "rewards/margins": 23.968780517578125,
1282
+ "rewards/rejected": -14.25330924987793,
1283
+ "step": 91
1284
+ },
1285
+ {
1286
+ "epoch": 0.1,
1287
+ "learning_rate": 4.2620689655172416e-05,
1288
+ "logits/chosen": -0.5132906436920166,
1289
+ "logits/rejected": -1.6248977184295654,
1290
+ "logps/chosen": -314.5284423828125,
1291
+ "logps/rejected": -210.1342315673828,
1292
+ "loss": 0.0595,
1293
+ "rewards/accuracies": 0.9375,
1294
+ "rewards/chosen": 7.707674980163574,
1295
+ "rewards/margins": 20.18167495727539,
1296
+ "rewards/rejected": -12.474000930786133,
1297
+ "step": 92
1298
+ },
1299
+ {
1300
+ "epoch": 0.1,
1301
+ "learning_rate": 4.2568965517241384e-05,
1302
+ "logits/chosen": -0.45244526863098145,
1303
+ "logits/rejected": -1.7539148330688477,
1304
+ "logps/chosen": -394.9619140625,
1305
+ "logps/rejected": -230.32550048828125,
1306
+ "loss": 0.0,
1307
+ "rewards/accuracies": 1.0,
1308
+ "rewards/chosen": 11.32716178894043,
1309
+ "rewards/margins": 26.326129913330078,
1310
+ "rewards/rejected": -14.998967170715332,
1311
+ "step": 93
1312
+ },
1313
+ {
1314
+ "epoch": 0.1,
1315
+ "learning_rate": 4.2517241379310345e-05,
1316
+ "logits/chosen": -0.7538139224052429,
1317
+ "logits/rejected": -1.5796599388122559,
1318
+ "logps/chosen": -286.95281982421875,
1319
+ "logps/rejected": -266.7412414550781,
1320
+ "loss": 0.0,
1321
+ "rewards/accuracies": 1.0,
1322
+ "rewards/chosen": 8.003694534301758,
1323
+ "rewards/margins": 25.05299949645996,
1324
+ "rewards/rejected": -17.04930305480957,
1325
+ "step": 94
1326
+ },
1327
+ {
1328
+ "epoch": 0.1,
1329
+ "learning_rate": 4.246551724137931e-05,
1330
+ "logits/chosen": -0.44376400113105774,
1331
+ "logits/rejected": -1.5493687391281128,
1332
+ "logps/chosen": -406.93951416015625,
1333
+ "logps/rejected": -302.9401550292969,
1334
+ "loss": 0.0,
1335
+ "rewards/accuracies": 1.0,
1336
+ "rewards/chosen": 7.246862888336182,
1337
+ "rewards/margins": 27.937149047851562,
1338
+ "rewards/rejected": -20.69028663635254,
1339
+ "step": 95
1340
+ },
1341
+ {
1342
+ "epoch": 0.1,
1343
+ "learning_rate": 4.2413793103448274e-05,
1344
+ "logits/chosen": -0.7577255964279175,
1345
+ "logits/rejected": -1.77558171749115,
1346
+ "logps/chosen": -357.90838623046875,
1347
+ "logps/rejected": -246.73458862304688,
1348
+ "loss": 0.0,
1349
+ "rewards/accuracies": 1.0,
1350
+ "rewards/chosen": 7.267894744873047,
1351
+ "rewards/margins": 24.79804229736328,
1352
+ "rewards/rejected": -17.530147552490234,
1353
+ "step": 96
1354
+ },
1355
+ {
1356
+ "epoch": 0.11,
1357
+ "learning_rate": 4.236206896551724e-05,
1358
+ "logits/chosen": -0.8565713763237,
1359
+ "logits/rejected": -1.7760589122772217,
1360
+ "logps/chosen": -338.751220703125,
1361
+ "logps/rejected": -304.101806640625,
1362
+ "loss": 0.0,
1363
+ "rewards/accuracies": 1.0,
1364
+ "rewards/chosen": 9.238365173339844,
1365
+ "rewards/margins": 27.781978607177734,
1366
+ "rewards/rejected": -18.54361343383789,
1367
+ "step": 97
1368
+ },
1369
+ {
1370
+ "epoch": 0.11,
1371
+ "learning_rate": 4.231034482758621e-05,
1372
+ "logits/chosen": -0.40401750802993774,
1373
+ "logits/rejected": -1.7701493501663208,
1374
+ "logps/chosen": -537.5440063476562,
1375
+ "logps/rejected": -349.8548889160156,
1376
+ "loss": 0.0,
1377
+ "rewards/accuracies": 1.0,
1378
+ "rewards/chosen": 14.211512565612793,
1379
+ "rewards/margins": 37.746315002441406,
1380
+ "rewards/rejected": -23.53480339050293,
1381
+ "step": 98
1382
+ },
1383
+ {
1384
+ "epoch": 0.11,
1385
+ "learning_rate": 4.225862068965517e-05,
1386
+ "logits/chosen": -0.9799883365631104,
1387
+ "logits/rejected": -1.8936114311218262,
1388
+ "logps/chosen": -403.7567138671875,
1389
+ "logps/rejected": -257.6401672363281,
1390
+ "loss": 0.0001,
1391
+ "rewards/accuracies": 1.0,
1392
+ "rewards/chosen": 4.710639476776123,
1393
+ "rewards/margins": 23.923961639404297,
1394
+ "rewards/rejected": -19.213321685791016,
1395
+ "step": 99
1396
+ },
1397
+ {
1398
+ "epoch": 0.11,
1399
+ "learning_rate": 4.220689655172414e-05,
1400
+ "logits/chosen": -0.94623202085495,
1401
+ "logits/rejected": -1.7190004587173462,
1402
+ "logps/chosen": -360.8213806152344,
1403
+ "logps/rejected": -310.1994323730469,
1404
+ "loss": 0.0,
1405
+ "rewards/accuracies": 1.0,
1406
+ "rewards/chosen": 3.991710662841797,
1407
+ "rewards/margins": 26.59444808959961,
1408
+ "rewards/rejected": -22.60273551940918,
1409
+ "step": 100
1410
+ }
1411
+ ],
1412
+ "logging_steps": 1,
1413
+ "max_steps": 916,
1414
+ "num_input_tokens_seen": 0,
1415
+ "num_train_epochs": 1,
1416
+ "save_steps": 100,
1417
+ "total_flos": 0.0,
1418
+ "train_batch_size": 1,
1419
+ "trial_name": null,
1420
+ "trial_params": null
1421
+ }
checkpoint-100/training_args.bin ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:6f8e778ebfdc8e189d4259f43aa8cc8438633b1407516e064a080cf26f579c11
3
+ size 4664
checkpoint-200/README.md ADDED
@@ -0,0 +1,204 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ ---
2
+ library_name: peft
3
+ base_model: /run/media/adamo1139/82142F79142F6EFB/ProgramData/Anaconda3/envs/qlora-jondurbin/axolotl-git-linux/axolotl/yi-34b-200k-llamafied
4
+ ---
5
+
6
+ # Model Card for Model ID
7
+
8
+ <!-- Provide a quick summary of what the model is/does. -->
9
+
10
+
11
+
12
+ ## Model Details
13
+
14
+ ### Model Description
15
+
16
+ <!-- Provide a longer summary of what this model is. -->
17
+
18
+
19
+
20
+ - **Developed by:** [More Information Needed]
21
+ - **Funded by [optional]:** [More Information Needed]
22
+ - **Shared by [optional]:** [More Information Needed]
23
+ - **Model type:** [More Information Needed]
24
+ - **Language(s) (NLP):** [More Information Needed]
25
+ - **License:** [More Information Needed]
26
+ - **Finetuned from model [optional]:** [More Information Needed]
27
+
28
+ ### Model Sources [optional]
29
+
30
+ <!-- Provide the basic links for the model. -->
31
+
32
+ - **Repository:** [More Information Needed]
33
+ - **Paper [optional]:** [More Information Needed]
34
+ - **Demo [optional]:** [More Information Needed]
35
+
36
+ ## Uses
37
+
38
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
39
+
40
+ ### Direct Use
41
+
42
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
43
+
44
+ [More Information Needed]
45
+
46
+ ### Downstream Use [optional]
47
+
48
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
49
+
50
+ [More Information Needed]
51
+
52
+ ### Out-of-Scope Use
53
+
54
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
55
+
56
+ [More Information Needed]
57
+
58
+ ## Bias, Risks, and Limitations
59
+
60
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
61
+
62
+ [More Information Needed]
63
+
64
+ ### Recommendations
65
+
66
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
67
+
68
+ Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
69
+
70
+ ## How to Get Started with the Model
71
+
72
+ Use the code below to get started with the model.
73
+
74
+ [More Information Needed]
75
+
76
+ ## Training Details
77
+
78
+ ### Training Data
79
+
80
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
81
+
82
+ [More Information Needed]
83
+
84
+ ### Training Procedure
85
+
86
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
87
+
88
+ #### Preprocessing [optional]
89
+
90
+ [More Information Needed]
91
+
92
+
93
+ #### Training Hyperparameters
94
+
95
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
96
+
97
+ #### Speeds, Sizes, Times [optional]
98
+
99
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
100
+
101
+ [More Information Needed]
102
+
103
+ ## Evaluation
104
+
105
+ <!-- This section describes the evaluation protocols and provides the results. -->
106
+
107
+ ### Testing Data, Factors & Metrics
108
+
109
+ #### Testing Data
110
+
111
+ <!-- This should link to a Dataset Card if possible. -->
112
+
113
+ [More Information Needed]
114
+
115
+ #### Factors
116
+
117
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
118
+
119
+ [More Information Needed]
120
+
121
+ #### Metrics
122
+
123
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
124
+
125
+ [More Information Needed]
126
+
127
+ ### Results
128
+
129
+ [More Information Needed]
130
+
131
+ #### Summary
132
+
133
+
134
+
135
+ ## Model Examination [optional]
136
+
137
+ <!-- Relevant interpretability work for the model goes here -->
138
+
139
+ [More Information Needed]
140
+
141
+ ## Environmental Impact
142
+
143
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
144
+
145
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
146
+
147
+ - **Hardware Type:** [More Information Needed]
148
+ - **Hours used:** [More Information Needed]
149
+ - **Cloud Provider:** [More Information Needed]
150
+ - **Compute Region:** [More Information Needed]
151
+ - **Carbon Emitted:** [More Information Needed]
152
+
153
+ ## Technical Specifications [optional]
154
+
155
+ ### Model Architecture and Objective
156
+
157
+ [More Information Needed]
158
+
159
+ ### Compute Infrastructure
160
+
161
+ [More Information Needed]
162
+
163
+ #### Hardware
164
+
165
+ [More Information Needed]
166
+
167
+ #### Software
168
+
169
+ [More Information Needed]
170
+
171
+ ## Citation [optional]
172
+
173
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
174
+
175
+ **BibTeX:**
176
+
177
+ [More Information Needed]
178
+
179
+ **APA:**
180
+
181
+ [More Information Needed]
182
+
183
+ ## Glossary [optional]
184
+
185
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
186
+
187
+ [More Information Needed]
188
+
189
+ ## More Information [optional]
190
+
191
+ [More Information Needed]
192
+
193
+ ## Model Card Authors [optional]
194
+
195
+ [More Information Needed]
196
+
197
+ ## Model Card Contact
198
+
199
+ [More Information Needed]
200
+
201
+
202
+ ### Framework versions
203
+
204
+ - PEFT 0.7.1
checkpoint-200/adapter_config.json ADDED
@@ -0,0 +1,31 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "alpha_pattern": {},
3
+ "auto_mapping": null,
4
+ "base_model_name_or_path": "yi-34b-200k-llamafied",
5
+ "bias": "none",
6
+ "fan_in_fan_out": false,
7
+ "inference_mode": true,
8
+ "init_lora_weights": true,
9
+ "layers_pattern": null,
10
+ "layers_to_transform": null,
11
+ "loftq_config": null,
12
+ "lora_alpha": 32,
13
+ "lora_dropout": 0,
14
+ "megatron_config": null,
15
+ "megatron_core": "megatron.core",
16
+ "modules_to_save": null,
17
+ "peft_type": "LORA",
18
+ "r": 16,
19
+ "rank_pattern": {},
20
+ "revision": "unsloth",
21
+ "target_modules": [
22
+ "v_proj",
23
+ "gate_proj",
24
+ "o_proj",
25
+ "down_proj",
26
+ "q_proj",
27
+ "up_proj",
28
+ "k_proj"
29
+ ],
30
+ "task_type": "CAUSAL_LM"
31
+ }
checkpoint-200/adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:cd17499f5bbc669e455927592886da54487e914d976a676ade29444f330fde90
3
+ size 491633464
checkpoint-200/rng_state.pth ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:1ff264f99d31b522cc7e2a4eac9d38606d0c58a34c0adc74d71e0ca8b371dc36
3
+ size 14244
checkpoint-200/scheduler.pt ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:457295efea71f81baab628dd2db06366d3bb8a3c1850c55839208acbb61044ff
3
+ size 1064
checkpoint-200/special_tokens_map.json ADDED
@@ -0,0 +1,30 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "bos_token": {
3
+ "content": "<|startoftext|>",
4
+ "lstrip": false,
5
+ "normalized": false,
6
+ "rstrip": false,
7
+ "single_word": false
8
+ },
9
+ "eos_token": {
10
+ "content": "<|endoftext|>",
11
+ "lstrip": false,
12
+ "normalized": false,
13
+ "rstrip": false,
14
+ "single_word": false
15
+ },
16
+ "pad_token": {
17
+ "content": "<unk>",
18
+ "lstrip": false,
19
+ "normalized": false,
20
+ "rstrip": false,
21
+ "single_word": false
22
+ },
23
+ "unk_token": {
24
+ "content": "<unk>",
25
+ "lstrip": false,
26
+ "normalized": false,
27
+ "rstrip": false,
28
+ "single_word": false
29
+ }
30
+ }
checkpoint-200/tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
checkpoint-200/tokenizer.model ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:386c49cf943d71aa110361135338c50e38beeff0a66593480421f37b319e1a39
3
+ size 1033105
checkpoint-200/tokenizer_config.json ADDED
@@ -0,0 +1,42 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "add_bos_token": false,
3
+ "add_eos_token": false,
4
+ "added_tokens_decoder": {
5
+ "0": {
6
+ "content": "<unk>",
7
+ "lstrip": false,
8
+ "normalized": false,
9
+ "rstrip": false,
10
+ "single_word": false,
11
+ "special": true
12
+ },
13
+ "1": {
14
+ "content": "<|startoftext|>",
15
+ "lstrip": false,
16
+ "normalized": false,
17
+ "rstrip": false,
18
+ "single_word": false,
19
+ "special": true
20
+ },
21
+ "2": {
22
+ "content": "<|endoftext|>",
23
+ "lstrip": false,
24
+ "normalized": false,
25
+ "rstrip": false,
26
+ "single_word": false,
27
+ "special": true
28
+ }
29
+ },
30
+ "bos_token": "<|startoftext|>",
31
+ "chat_template": "{% if not add_generation_prompt is defined %}{% set add_generation_prompt = false %}{% endif %}{% for message in messages %}{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant\n' }}{% endif %}",
32
+ "clean_up_tokenization_spaces": false,
33
+ "eos_token": "<|endoftext|>",
34
+ "legacy": true,
35
+ "model_max_length": 4096,
36
+ "pad_token": "<unk>",
37
+ "padding_side": "right",
38
+ "sp_model_kwargs": {},
39
+ "tokenizer_class": "LlamaTokenizer",
40
+ "unk_token": "<unk>",
41
+ "use_default_system_prompt": false
42
+ }
checkpoint-200/trainer_state.json ADDED
@@ -0,0 +1,2821 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "best_metric": null,
3
+ "best_model_checkpoint": null,
4
+ "epoch": 0.2181768596168269,
5
+ "eval_steps": 500,
6
+ "global_step": 200,
7
+ "is_hyper_param_search": false,
8
+ "is_local_process_zero": true,
9
+ "is_world_process_zero": true,
10
+ "log_history": [
11
+ {
12
+ "epoch": 0.0,
13
+ "learning_rate": 9.782608695652175e-07,
14
+ "logits/chosen": -1.8868483304977417,
15
+ "logits/rejected": -2.3036646842956543,
16
+ "logps/chosen": -466.7117004394531,
17
+ "logps/rejected": -99.3152084350586,
18
+ "loss": 0.6931,
19
+ "rewards/accuracies": 0.0,
20
+ "rewards/chosen": 0.0,
21
+ "rewards/margins": 0.0,
22
+ "rewards/rejected": 0.0,
23
+ "step": 1
24
+ },
25
+ {
26
+ "epoch": 0.0,
27
+ "learning_rate": 1.956521739130435e-06,
28
+ "logits/chosen": -2.2296698093414307,
29
+ "logits/rejected": -2.469517469406128,
30
+ "logps/chosen": -335.50140380859375,
31
+ "logps/rejected": -97.97496032714844,
32
+ "loss": 0.6931,
33
+ "rewards/accuracies": 0.0,
34
+ "rewards/chosen": 0.0,
35
+ "rewards/margins": 0.0,
36
+ "rewards/rejected": 0.0,
37
+ "step": 2
38
+ },
39
+ {
40
+ "epoch": 0.0,
41
+ "learning_rate": 2.9347826086956523e-06,
42
+ "logits/chosen": -2.125047206878662,
43
+ "logits/rejected": -2.445204019546509,
44
+ "logps/chosen": -484.7939758300781,
45
+ "logps/rejected": -125.93072509765625,
46
+ "loss": 1.7268,
47
+ "rewards/accuracies": 0.4375,
48
+ "rewards/chosen": -1.0339633226394653,
49
+ "rewards/margins": -0.9356753826141357,
50
+ "rewards/rejected": -0.09828801453113556,
51
+ "step": 3
52
+ },
53
+ {
54
+ "epoch": 0.0,
55
+ "learning_rate": 3.91304347826087e-06,
56
+ "logits/chosen": -1.7763867378234863,
57
+ "logits/rejected": -2.1961894035339355,
58
+ "logps/chosen": -386.8011779785156,
59
+ "logps/rejected": -106.04309844970703,
60
+ "loss": 2.3806,
61
+ "rewards/accuracies": 0.5625,
62
+ "rewards/chosen": -0.9238373041152954,
63
+ "rewards/margins": -1.2902063131332397,
64
+ "rewards/rejected": 0.36636874079704285,
65
+ "step": 4
66
+ },
67
+ {
68
+ "epoch": 0.01,
69
+ "learning_rate": 4.891304347826087e-06,
70
+ "logits/chosen": -1.7257812023162842,
71
+ "logits/rejected": -2.353095769882202,
72
+ "logps/chosen": -524.7261962890625,
73
+ "logps/rejected": -70.26946258544922,
74
+ "loss": 0.7488,
75
+ "rewards/accuracies": 0.625,
76
+ "rewards/chosen": 2.197690010070801,
77
+ "rewards/margins": 2.0008764266967773,
78
+ "rewards/rejected": 0.19681359827518463,
79
+ "step": 5
80
+ },
81
+ {
82
+ "epoch": 0.01,
83
+ "learning_rate": 5.869565217391305e-06,
84
+ "logits/chosen": -2.1483359336853027,
85
+ "logits/rejected": -2.4817819595336914,
86
+ "logps/chosen": -283.6588134765625,
87
+ "logps/rejected": -58.25455093383789,
88
+ "loss": 0.8712,
89
+ "rewards/accuracies": 0.5625,
90
+ "rewards/chosen": 0.4915351867675781,
91
+ "rewards/margins": 0.2005542814731598,
92
+ "rewards/rejected": 0.2909809350967407,
93
+ "step": 6
94
+ },
95
+ {
96
+ "epoch": 0.01,
97
+ "learning_rate": 6.847826086956523e-06,
98
+ "logits/chosen": -2.1019060611724854,
99
+ "logits/rejected": -2.300177812576294,
100
+ "logps/chosen": -297.35235595703125,
101
+ "logps/rejected": -175.5060272216797,
102
+ "loss": 1.1392,
103
+ "rewards/accuracies": 0.5625,
104
+ "rewards/chosen": 0.6018204689025879,
105
+ "rewards/margins": -0.036775827407836914,
106
+ "rewards/rejected": 0.6385962963104248,
107
+ "step": 7
108
+ },
109
+ {
110
+ "epoch": 0.01,
111
+ "learning_rate": 7.82608695652174e-06,
112
+ "logits/chosen": -2.0169122219085693,
113
+ "logits/rejected": -2.201702117919922,
114
+ "logps/chosen": -342.04852294921875,
115
+ "logps/rejected": -117.9761962890625,
116
+ "loss": 1.1799,
117
+ "rewards/accuracies": 0.5625,
118
+ "rewards/chosen": 1.111680507659912,
119
+ "rewards/margins": 0.17297500371932983,
120
+ "rewards/rejected": 0.9387054443359375,
121
+ "step": 8
122
+ },
123
+ {
124
+ "epoch": 0.01,
125
+ "learning_rate": 8.804347826086957e-06,
126
+ "logits/chosen": -1.6254596710205078,
127
+ "logits/rejected": -2.0724740028381348,
128
+ "logps/chosen": -480.1983642578125,
129
+ "logps/rejected": -108.05937194824219,
130
+ "loss": 0.5203,
131
+ "rewards/accuracies": 0.8125,
132
+ "rewards/chosen": 2.161550283432007,
133
+ "rewards/margins": 1.3515225648880005,
134
+ "rewards/rejected": 0.8100277185440063,
135
+ "step": 9
136
+ },
137
+ {
138
+ "epoch": 0.01,
139
+ "learning_rate": 9.782608695652175e-06,
140
+ "logits/chosen": -1.4482485055923462,
141
+ "logits/rejected": -2.1116294860839844,
142
+ "logps/chosen": -451.5759582519531,
143
+ "logps/rejected": -67.73944854736328,
144
+ "loss": 0.355,
145
+ "rewards/accuracies": 0.8125,
146
+ "rewards/chosen": 6.18800163269043,
147
+ "rewards/margins": 5.184471130371094,
148
+ "rewards/rejected": 1.0035302639007568,
149
+ "step": 10
150
+ },
151
+ {
152
+ "epoch": 0.01,
153
+ "learning_rate": 1.0760869565217392e-05,
154
+ "logits/chosen": -1.2660062313079834,
155
+ "logits/rejected": -1.8649226427078247,
156
+ "logps/chosen": -265.8935546875,
157
+ "logps/rejected": -57.866336822509766,
158
+ "loss": 0.3178,
159
+ "rewards/accuracies": 0.8125,
160
+ "rewards/chosen": 5.78374719619751,
161
+ "rewards/margins": 4.856299877166748,
162
+ "rewards/rejected": 0.9274475574493408,
163
+ "step": 11
164
+ },
165
+ {
166
+ "epoch": 0.01,
167
+ "learning_rate": 1.173913043478261e-05,
168
+ "logits/chosen": -1.1804625988006592,
169
+ "logits/rejected": -1.7009960412979126,
170
+ "logps/chosen": -247.2069854736328,
171
+ "logps/rejected": -79.96739196777344,
172
+ "loss": 0.6617,
173
+ "rewards/accuracies": 0.75,
174
+ "rewards/chosen": 6.5535197257995605,
175
+ "rewards/margins": 4.647799015045166,
176
+ "rewards/rejected": 1.9057204723358154,
177
+ "step": 12
178
+ },
179
+ {
180
+ "epoch": 0.01,
181
+ "learning_rate": 1.2717391304347827e-05,
182
+ "logits/chosen": -0.9693112373352051,
183
+ "logits/rejected": -1.6497248411178589,
184
+ "logps/chosen": -372.8963623046875,
185
+ "logps/rejected": -70.47633361816406,
186
+ "loss": 0.2498,
187
+ "rewards/accuracies": 0.8125,
188
+ "rewards/chosen": 9.02272891998291,
189
+ "rewards/margins": 7.325143814086914,
190
+ "rewards/rejected": 1.6975862979888916,
191
+ "step": 13
192
+ },
193
+ {
194
+ "epoch": 0.02,
195
+ "learning_rate": 1.3695652173913046e-05,
196
+ "logits/chosen": -0.7723280787467957,
197
+ "logits/rejected": -1.4176554679870605,
198
+ "logps/chosen": -318.0392150878906,
199
+ "logps/rejected": -79.94412994384766,
200
+ "loss": 0.6401,
201
+ "rewards/accuracies": 0.6875,
202
+ "rewards/chosen": 6.698611259460449,
203
+ "rewards/margins": 4.517416477203369,
204
+ "rewards/rejected": 2.18119478225708,
205
+ "step": 14
206
+ },
207
+ {
208
+ "epoch": 0.02,
209
+ "learning_rate": 1.4673913043478261e-05,
210
+ "logits/chosen": -0.8084051609039307,
211
+ "logits/rejected": -1.4428132772445679,
212
+ "logps/chosen": -363.15869140625,
213
+ "logps/rejected": -70.70079803466797,
214
+ "loss": 0.4963,
215
+ "rewards/accuracies": 0.75,
216
+ "rewards/chosen": 7.02784538269043,
217
+ "rewards/margins": 5.400259971618652,
218
+ "rewards/rejected": 1.6275854110717773,
219
+ "step": 15
220
+ },
221
+ {
222
+ "epoch": 0.02,
223
+ "learning_rate": 1.565217391304348e-05,
224
+ "logits/chosen": -0.8056433200836182,
225
+ "logits/rejected": -1.4629294872283936,
226
+ "logps/chosen": -328.90435791015625,
227
+ "logps/rejected": -77.36820983886719,
228
+ "loss": 0.544,
229
+ "rewards/accuracies": 0.75,
230
+ "rewards/chosen": 9.417024612426758,
231
+ "rewards/margins": 7.023505687713623,
232
+ "rewards/rejected": 2.393519163131714,
233
+ "step": 16
234
+ },
235
+ {
236
+ "epoch": 0.02,
237
+ "learning_rate": 1.6630434782608694e-05,
238
+ "logits/chosen": -1.0918309688568115,
239
+ "logits/rejected": -1.6613047122955322,
240
+ "logps/chosen": -277.37982177734375,
241
+ "logps/rejected": -56.75635528564453,
242
+ "loss": 0.8958,
243
+ "rewards/accuracies": 0.8125,
244
+ "rewards/chosen": 6.844330310821533,
245
+ "rewards/margins": 4.82295560836792,
246
+ "rewards/rejected": 2.0213749408721924,
247
+ "step": 17
248
+ },
249
+ {
250
+ "epoch": 0.02,
251
+ "learning_rate": 1.7608695652173915e-05,
252
+ "logits/chosen": -0.9810999631881714,
253
+ "logits/rejected": -1.975247859954834,
254
+ "logps/chosen": -465.6856689453125,
255
+ "logps/rejected": -56.440269470214844,
256
+ "loss": 0.2275,
257
+ "rewards/accuracies": 0.875,
258
+ "rewards/chosen": 7.767542362213135,
259
+ "rewards/margins": 6.572459697723389,
260
+ "rewards/rejected": 1.1950825452804565,
261
+ "step": 18
262
+ },
263
+ {
264
+ "epoch": 0.02,
265
+ "learning_rate": 1.8586956521739132e-05,
266
+ "logits/chosen": -1.2428462505340576,
267
+ "logits/rejected": -1.8064367771148682,
268
+ "logps/chosen": -322.7100830078125,
269
+ "logps/rejected": -108.71321105957031,
270
+ "loss": 0.3006,
271
+ "rewards/accuracies": 0.875,
272
+ "rewards/chosen": 6.459216594696045,
273
+ "rewards/margins": 3.9207711219787598,
274
+ "rewards/rejected": 2.5384464263916016,
275
+ "step": 19
276
+ },
277
+ {
278
+ "epoch": 0.02,
279
+ "learning_rate": 1.956521739130435e-05,
280
+ "logits/chosen": -1.7773098945617676,
281
+ "logits/rejected": -2.178387403488159,
282
+ "logps/chosen": -357.8656921386719,
283
+ "logps/rejected": -112.65576934814453,
284
+ "loss": 0.3837,
285
+ "rewards/accuracies": 0.9375,
286
+ "rewards/chosen": 4.684590816497803,
287
+ "rewards/margins": 4.16137170791626,
288
+ "rewards/rejected": 0.5232191681861877,
289
+ "step": 20
290
+ },
291
+ {
292
+ "epoch": 0.02,
293
+ "learning_rate": 2.0543478260869567e-05,
294
+ "logits/chosen": -1.6802027225494385,
295
+ "logits/rejected": -1.943274974822998,
296
+ "logps/chosen": -327.85693359375,
297
+ "logps/rejected": -87.53279113769531,
298
+ "loss": 0.4428,
299
+ "rewards/accuracies": 0.9375,
300
+ "rewards/chosen": 2.681103229522705,
301
+ "rewards/margins": 2.3518502712249756,
302
+ "rewards/rejected": 0.32925307750701904,
303
+ "step": 21
304
+ },
305
+ {
306
+ "epoch": 0.02,
307
+ "learning_rate": 2.1521739130434784e-05,
308
+ "logits/chosen": -1.5789223909378052,
309
+ "logits/rejected": -2.4593281745910645,
310
+ "logps/chosen": -590.5732421875,
311
+ "logps/rejected": -86.78819274902344,
312
+ "loss": 0.252,
313
+ "rewards/accuracies": 0.875,
314
+ "rewards/chosen": 3.6503655910491943,
315
+ "rewards/margins": 3.621542453765869,
316
+ "rewards/rejected": 0.028822531923651695,
317
+ "step": 22
318
+ },
319
+ {
320
+ "epoch": 0.03,
321
+ "learning_rate": 2.25e-05,
322
+ "logits/chosen": -1.8142305612564087,
323
+ "logits/rejected": -2.3507237434387207,
324
+ "logps/chosen": -296.42266845703125,
325
+ "logps/rejected": -60.37131118774414,
326
+ "loss": 0.5602,
327
+ "rewards/accuracies": 0.75,
328
+ "rewards/chosen": 2.170171022415161,
329
+ "rewards/margins": 2.5353941917419434,
330
+ "rewards/rejected": -0.36522313952445984,
331
+ "step": 23
332
+ },
333
+ {
334
+ "epoch": 0.03,
335
+ "learning_rate": 2.347826086956522e-05,
336
+ "logits/chosen": -1.7050156593322754,
337
+ "logits/rejected": -2.158708333969116,
338
+ "logps/chosen": -523.41748046875,
339
+ "logps/rejected": -81.6954345703125,
340
+ "loss": 0.7359,
341
+ "rewards/accuracies": 0.9375,
342
+ "rewards/chosen": 2.3799068927764893,
343
+ "rewards/margins": 2.963341236114502,
344
+ "rewards/rejected": -0.583433985710144,
345
+ "step": 24
346
+ },
347
+ {
348
+ "epoch": 0.03,
349
+ "learning_rate": 2.4456521739130436e-05,
350
+ "logits/chosen": -1.704190731048584,
351
+ "logits/rejected": -2.215942859649658,
352
+ "logps/chosen": -457.6519775390625,
353
+ "logps/rejected": -91.778076171875,
354
+ "loss": 0.0865,
355
+ "rewards/accuracies": 1.0,
356
+ "rewards/chosen": 4.57167911529541,
357
+ "rewards/margins": 4.6597795486450195,
358
+ "rewards/rejected": -0.08810039609670639,
359
+ "step": 25
360
+ },
361
+ {
362
+ "epoch": 0.03,
363
+ "learning_rate": 2.5434782608695653e-05,
364
+ "logits/chosen": -1.4195502996444702,
365
+ "logits/rejected": -2.109910488128662,
366
+ "logps/chosen": -445.05908203125,
367
+ "logps/rejected": -80.95398712158203,
368
+ "loss": 0.1265,
369
+ "rewards/accuracies": 0.9375,
370
+ "rewards/chosen": 5.362960338592529,
371
+ "rewards/margins": 5.412578582763672,
372
+ "rewards/rejected": -0.049618080258369446,
373
+ "step": 26
374
+ },
375
+ {
376
+ "epoch": 0.03,
377
+ "learning_rate": 2.6413043478260874e-05,
378
+ "logits/chosen": -1.0274971723556519,
379
+ "logits/rejected": -1.7681918144226074,
380
+ "logps/chosen": -393.85870361328125,
381
+ "logps/rejected": -77.79011535644531,
382
+ "loss": 0.1702,
383
+ "rewards/accuracies": 1.0,
384
+ "rewards/chosen": 4.572484016418457,
385
+ "rewards/margins": 4.946743488311768,
386
+ "rewards/rejected": -0.374259889125824,
387
+ "step": 27
388
+ },
389
+ {
390
+ "epoch": 0.03,
391
+ "learning_rate": 2.739130434782609e-05,
392
+ "logits/chosen": -0.7365273237228394,
393
+ "logits/rejected": -1.9988534450531006,
394
+ "logps/chosen": -569.8936767578125,
395
+ "logps/rejected": -79.71562194824219,
396
+ "loss": 0.0465,
397
+ "rewards/accuracies": 1.0,
398
+ "rewards/chosen": 12.751686096191406,
399
+ "rewards/margins": 12.474647521972656,
400
+ "rewards/rejected": 0.2770393192768097,
401
+ "step": 28
402
+ },
403
+ {
404
+ "epoch": 0.03,
405
+ "learning_rate": 2.836956521739131e-05,
406
+ "logits/chosen": -0.6424872279167175,
407
+ "logits/rejected": -1.6069284677505493,
408
+ "logps/chosen": -368.68292236328125,
409
+ "logps/rejected": -103.9141845703125,
410
+ "loss": 0.0351,
411
+ "rewards/accuracies": 1.0,
412
+ "rewards/chosen": 10.144453048706055,
413
+ "rewards/margins": 10.381340980529785,
414
+ "rewards/rejected": -0.23688830435276031,
415
+ "step": 29
416
+ },
417
+ {
418
+ "epoch": 0.03,
419
+ "learning_rate": 2.9347826086956523e-05,
420
+ "logits/chosen": -0.6061868071556091,
421
+ "logits/rejected": -1.4256294965744019,
422
+ "logps/chosen": -360.4029846191406,
423
+ "logps/rejected": -130.27224731445312,
424
+ "loss": 0.0414,
425
+ "rewards/accuracies": 1.0,
426
+ "rewards/chosen": 12.649539947509766,
427
+ "rewards/margins": 12.36604118347168,
428
+ "rewards/rejected": 0.2834990620613098,
429
+ "step": 30
430
+ },
431
+ {
432
+ "epoch": 0.03,
433
+ "learning_rate": 3.032608695652174e-05,
434
+ "logits/chosen": -1.0075111389160156,
435
+ "logits/rejected": -1.3918204307556152,
436
+ "logps/chosen": -227.05450439453125,
437
+ "logps/rejected": -172.67947387695312,
438
+ "loss": 1.1145,
439
+ "rewards/accuracies": 0.875,
440
+ "rewards/chosen": 4.628115653991699,
441
+ "rewards/margins": 3.9521055221557617,
442
+ "rewards/rejected": 0.6760098934173584,
443
+ "step": 31
444
+ },
445
+ {
446
+ "epoch": 0.03,
447
+ "learning_rate": 3.130434782608696e-05,
448
+ "logits/chosen": -0.9060839414596558,
449
+ "logits/rejected": -2.0123891830444336,
450
+ "logps/chosen": -403.06781005859375,
451
+ "logps/rejected": -75.35684967041016,
452
+ "loss": 0.0319,
453
+ "rewards/accuracies": 1.0,
454
+ "rewards/chosen": 7.352475643157959,
455
+ "rewards/margins": 8.778717041015625,
456
+ "rewards/rejected": -1.4262410402297974,
457
+ "step": 32
458
+ },
459
+ {
460
+ "epoch": 0.04,
461
+ "learning_rate": 3.228260869565217e-05,
462
+ "logits/chosen": -1.2984271049499512,
463
+ "logits/rejected": -1.9507720470428467,
464
+ "logps/chosen": -306.7958984375,
465
+ "logps/rejected": -107.01404571533203,
466
+ "loss": 0.0795,
467
+ "rewards/accuracies": 0.9375,
468
+ "rewards/chosen": 5.223363399505615,
469
+ "rewards/margins": 7.012631416320801,
470
+ "rewards/rejected": -1.7892680168151855,
471
+ "step": 33
472
+ },
473
+ {
474
+ "epoch": 0.04,
475
+ "learning_rate": 3.326086956521739e-05,
476
+ "logits/chosen": -1.1749062538146973,
477
+ "logits/rejected": -2.004805564880371,
478
+ "logps/chosen": -437.9205322265625,
479
+ "logps/rejected": -135.21376037597656,
480
+ "loss": 0.0021,
481
+ "rewards/accuracies": 1.0,
482
+ "rewards/chosen": 9.3555908203125,
483
+ "rewards/margins": 12.285521507263184,
484
+ "rewards/rejected": -2.9299299716949463,
485
+ "step": 34
486
+ },
487
+ {
488
+ "epoch": 0.04,
489
+ "learning_rate": 3.423913043478261e-05,
490
+ "logits/chosen": -0.9383536577224731,
491
+ "logits/rejected": -2.0366413593292236,
492
+ "logps/chosen": -457.8018493652344,
493
+ "logps/rejected": -87.7723617553711,
494
+ "loss": 0.041,
495
+ "rewards/accuracies": 1.0,
496
+ "rewards/chosen": 6.836911678314209,
497
+ "rewards/margins": 9.43667984008789,
498
+ "rewards/rejected": -2.599769115447998,
499
+ "step": 35
500
+ },
501
+ {
502
+ "epoch": 0.04,
503
+ "learning_rate": 3.521739130434783e-05,
504
+ "logits/chosen": -1.120687484741211,
505
+ "logits/rejected": -1.9933563470840454,
506
+ "logps/chosen": -439.6186828613281,
507
+ "logps/rejected": -94.24763488769531,
508
+ "loss": 0.1906,
509
+ "rewards/accuracies": 0.9375,
510
+ "rewards/chosen": 6.565365314483643,
511
+ "rewards/margins": 9.163484573364258,
512
+ "rewards/rejected": -2.5981192588806152,
513
+ "step": 36
514
+ },
515
+ {
516
+ "epoch": 0.04,
517
+ "learning_rate": 3.619565217391305e-05,
518
+ "logits/chosen": -1.338189959526062,
519
+ "logits/rejected": -1.9118335247039795,
520
+ "logps/chosen": -286.96728515625,
521
+ "logps/rejected": -127.22909545898438,
522
+ "loss": 0.0279,
523
+ "rewards/accuracies": 1.0,
524
+ "rewards/chosen": 2.9341468811035156,
525
+ "rewards/margins": 6.362301349639893,
526
+ "rewards/rejected": -3.4281537532806396,
527
+ "step": 37
528
+ },
529
+ {
530
+ "epoch": 0.04,
531
+ "learning_rate": 3.7173913043478264e-05,
532
+ "logits/chosen": -1.2371840476989746,
533
+ "logits/rejected": -1.7362310886383057,
534
+ "logps/chosen": -319.52911376953125,
535
+ "logps/rejected": -151.65084838867188,
536
+ "loss": 0.0939,
537
+ "rewards/accuracies": 0.9375,
538
+ "rewards/chosen": 6.216956615447998,
539
+ "rewards/margins": 9.073128700256348,
540
+ "rewards/rejected": -2.856172561645508,
541
+ "step": 38
542
+ },
543
+ {
544
+ "epoch": 0.04,
545
+ "learning_rate": 3.815217391304348e-05,
546
+ "logits/chosen": -0.8356418609619141,
547
+ "logits/rejected": -1.7252635955810547,
548
+ "logps/chosen": -391.63275146484375,
549
+ "logps/rejected": -154.64361572265625,
550
+ "loss": 0.2483,
551
+ "rewards/accuracies": 0.9375,
552
+ "rewards/chosen": 8.802470207214355,
553
+ "rewards/margins": 11.676048278808594,
554
+ "rewards/rejected": -2.873577356338501,
555
+ "step": 39
556
+ },
557
+ {
558
+ "epoch": 0.04,
559
+ "learning_rate": 3.91304347826087e-05,
560
+ "logits/chosen": -0.9311200380325317,
561
+ "logits/rejected": -1.8107151985168457,
562
+ "logps/chosen": -374.2525939941406,
563
+ "logps/rejected": -131.86715698242188,
564
+ "loss": 0.0058,
565
+ "rewards/accuracies": 1.0,
566
+ "rewards/chosen": 8.586356163024902,
567
+ "rewards/margins": 11.15583324432373,
568
+ "rewards/rejected": -2.569479465484619,
569
+ "step": 40
570
+ },
571
+ {
572
+ "epoch": 0.04,
573
+ "learning_rate": 4.0108695652173916e-05,
574
+ "logits/chosen": -0.6861732006072998,
575
+ "logits/rejected": -1.6295411586761475,
576
+ "logps/chosen": -311.7944641113281,
577
+ "logps/rejected": -132.4564666748047,
578
+ "loss": 0.0124,
579
+ "rewards/accuracies": 1.0,
580
+ "rewards/chosen": 5.333560943603516,
581
+ "rewards/margins": 10.023443222045898,
582
+ "rewards/rejected": -4.689882755279541,
583
+ "step": 41
584
+ },
585
+ {
586
+ "epoch": 0.05,
587
+ "learning_rate": 4.1086956521739134e-05,
588
+ "logits/chosen": -1.1133558750152588,
589
+ "logits/rejected": -2.0819039344787598,
590
+ "logps/chosen": -282.22332763671875,
591
+ "logps/rejected": -98.7315673828125,
592
+ "loss": 0.0156,
593
+ "rewards/accuracies": 1.0,
594
+ "rewards/chosen": 4.930261611938477,
595
+ "rewards/margins": 8.344033241271973,
596
+ "rewards/rejected": -3.413771629333496,
597
+ "step": 42
598
+ },
599
+ {
600
+ "epoch": 0.05,
601
+ "learning_rate": 4.206521739130435e-05,
602
+ "logits/chosen": -1.0311042070388794,
603
+ "logits/rejected": -2.0758776664733887,
604
+ "logps/chosen": -369.7279052734375,
605
+ "logps/rejected": -146.08364868164062,
606
+ "loss": 0.0127,
607
+ "rewards/accuracies": 1.0,
608
+ "rewards/chosen": 7.7552289962768555,
609
+ "rewards/margins": 13.249262809753418,
610
+ "rewards/rejected": -5.494032859802246,
611
+ "step": 43
612
+ },
613
+ {
614
+ "epoch": 0.05,
615
+ "learning_rate": 4.304347826086957e-05,
616
+ "logits/chosen": -1.0191632509231567,
617
+ "logits/rejected": -1.995982050895691,
618
+ "logps/chosen": -367.692626953125,
619
+ "logps/rejected": -152.65135192871094,
620
+ "loss": 0.0004,
621
+ "rewards/accuracies": 1.0,
622
+ "rewards/chosen": 7.9517822265625,
623
+ "rewards/margins": 13.869260787963867,
624
+ "rewards/rejected": -5.917478084564209,
625
+ "step": 44
626
+ },
627
+ {
628
+ "epoch": 0.05,
629
+ "learning_rate": 4.4021739130434786e-05,
630
+ "logits/chosen": -1.3430471420288086,
631
+ "logits/rejected": -2.0910849571228027,
632
+ "logps/chosen": -334.8968811035156,
633
+ "logps/rejected": -168.4519500732422,
634
+ "loss": 0.0057,
635
+ "rewards/accuracies": 1.0,
636
+ "rewards/chosen": 5.113959312438965,
637
+ "rewards/margins": 12.524876594543457,
638
+ "rewards/rejected": -7.41091775894165,
639
+ "step": 45
640
+ },
641
+ {
642
+ "epoch": 0.05,
643
+ "learning_rate": 4.5e-05,
644
+ "logits/chosen": -0.8516018986701965,
645
+ "logits/rejected": -1.9934579133987427,
646
+ "logps/chosen": -386.7164001464844,
647
+ "logps/rejected": -152.8460235595703,
648
+ "loss": 0.0253,
649
+ "rewards/accuracies": 1.0,
650
+ "rewards/chosen": 7.0803022384643555,
651
+ "rewards/margins": 13.202106475830078,
652
+ "rewards/rejected": -6.121803283691406,
653
+ "step": 46
654
+ },
655
+ {
656
+ "epoch": 0.05,
657
+ "learning_rate": 4.494827586206897e-05,
658
+ "logits/chosen": -0.9460695385932922,
659
+ "logits/rejected": -1.9993284940719604,
660
+ "logps/chosen": -355.2873229980469,
661
+ "logps/rejected": -188.5989532470703,
662
+ "loss": 0.1008,
663
+ "rewards/accuracies": 0.9375,
664
+ "rewards/chosen": 6.138028144836426,
665
+ "rewards/margins": 13.93982982635498,
666
+ "rewards/rejected": -7.801800727844238,
667
+ "step": 47
668
+ },
669
+ {
670
+ "epoch": 0.05,
671
+ "learning_rate": 4.489655172413793e-05,
672
+ "logits/chosen": -1.212510108947754,
673
+ "logits/rejected": -2.0202791690826416,
674
+ "logps/chosen": -293.0318908691406,
675
+ "logps/rejected": -194.92649841308594,
676
+ "loss": 0.0058,
677
+ "rewards/accuracies": 1.0,
678
+ "rewards/chosen": 6.742659091949463,
679
+ "rewards/margins": 17.209505081176758,
680
+ "rewards/rejected": -10.466848373413086,
681
+ "step": 48
682
+ },
683
+ {
684
+ "epoch": 0.05,
685
+ "learning_rate": 4.48448275862069e-05,
686
+ "logits/chosen": -1.388157606124878,
687
+ "logits/rejected": -2.1885251998901367,
688
+ "logps/chosen": -330.6868591308594,
689
+ "logps/rejected": -214.35423278808594,
690
+ "loss": 0.0003,
691
+ "rewards/accuracies": 1.0,
692
+ "rewards/chosen": 5.653369903564453,
693
+ "rewards/margins": 18.257244110107422,
694
+ "rewards/rejected": -12.603873252868652,
695
+ "step": 49
696
+ },
697
+ {
698
+ "epoch": 0.05,
699
+ "learning_rate": 4.479310344827587e-05,
700
+ "logits/chosen": -1.020527958869934,
701
+ "logits/rejected": -1.9433815479278564,
702
+ "logps/chosen": -347.48394775390625,
703
+ "logps/rejected": -191.15147399902344,
704
+ "loss": 0.0001,
705
+ "rewards/accuracies": 1.0,
706
+ "rewards/chosen": 7.30579137802124,
707
+ "rewards/margins": 18.734811782836914,
708
+ "rewards/rejected": -11.429021835327148,
709
+ "step": 50
710
+ },
711
+ {
712
+ "epoch": 0.06,
713
+ "learning_rate": 4.474137931034483e-05,
714
+ "logits/chosen": -1.2260757684707642,
715
+ "logits/rejected": -2.0841352939605713,
716
+ "logps/chosen": -243.5459747314453,
717
+ "logps/rejected": -188.00442504882812,
718
+ "loss": 0.0002,
719
+ "rewards/accuracies": 1.0,
720
+ "rewards/chosen": 3.675260543823242,
721
+ "rewards/margins": 14.774760246276855,
722
+ "rewards/rejected": -11.099499702453613,
723
+ "step": 51
724
+ },
725
+ {
726
+ "epoch": 0.06,
727
+ "learning_rate": 4.46896551724138e-05,
728
+ "logits/chosen": -1.0378711223602295,
729
+ "logits/rejected": -2.0174639225006104,
730
+ "logps/chosen": -375.9520263671875,
731
+ "logps/rejected": -182.97463989257812,
732
+ "loss": 0.0,
733
+ "rewards/accuracies": 1.0,
734
+ "rewards/chosen": 7.18237829208374,
735
+ "rewards/margins": 19.010208129882812,
736
+ "rewards/rejected": -11.82783317565918,
737
+ "step": 52
738
+ },
739
+ {
740
+ "epoch": 0.06,
741
+ "learning_rate": 4.4637931034482765e-05,
742
+ "logits/chosen": -1.1728301048278809,
743
+ "logits/rejected": -1.9667140245437622,
744
+ "logps/chosen": -309.5382995605469,
745
+ "logps/rejected": -241.13743591308594,
746
+ "loss": 0.0,
747
+ "rewards/accuracies": 1.0,
748
+ "rewards/chosen": 6.681103706359863,
749
+ "rewards/margins": 21.542236328125,
750
+ "rewards/rejected": -14.861133575439453,
751
+ "step": 53
752
+ },
753
+ {
754
+ "epoch": 0.06,
755
+ "learning_rate": 4.4586206896551726e-05,
756
+ "logits/chosen": -1.1491725444793701,
757
+ "logits/rejected": -1.760243535041809,
758
+ "logps/chosen": -250.09036254882812,
759
+ "logps/rejected": -226.221435546875,
760
+ "loss": 0.0001,
761
+ "rewards/accuracies": 1.0,
762
+ "rewards/chosen": 2.3173880577087402,
763
+ "rewards/margins": 16.417604446411133,
764
+ "rewards/rejected": -14.100215911865234,
765
+ "step": 54
766
+ },
767
+ {
768
+ "epoch": 0.06,
769
+ "learning_rate": 4.4534482758620694e-05,
770
+ "logits/chosen": -1.2637637853622437,
771
+ "logits/rejected": -1.8996949195861816,
772
+ "logps/chosen": -401.9977722167969,
773
+ "logps/rejected": -303.9198303222656,
774
+ "loss": 0.0011,
775
+ "rewards/accuracies": 1.0,
776
+ "rewards/chosen": 8.595758438110352,
777
+ "rewards/margins": 26.598325729370117,
778
+ "rewards/rejected": -18.002567291259766,
779
+ "step": 55
780
+ },
781
+ {
782
+ "epoch": 0.06,
783
+ "learning_rate": 4.4482758620689656e-05,
784
+ "logits/chosen": -0.9855178594589233,
785
+ "logits/rejected": -2.0190346240997314,
786
+ "logps/chosen": -394.40625,
787
+ "logps/rejected": -253.97720336914062,
788
+ "loss": 0.0,
789
+ "rewards/accuracies": 1.0,
790
+ "rewards/chosen": 6.395060062408447,
791
+ "rewards/margins": 23.680639266967773,
792
+ "rewards/rejected": -17.28557777404785,
793
+ "step": 56
794
+ },
795
+ {
796
+ "epoch": 0.06,
797
+ "learning_rate": 4.4431034482758624e-05,
798
+ "logits/chosen": -1.049959421157837,
799
+ "logits/rejected": -1.6531357765197754,
800
+ "logps/chosen": -205.54058837890625,
801
+ "logps/rejected": -250.66958618164062,
802
+ "loss": 0.0,
803
+ "rewards/accuracies": 1.0,
804
+ "rewards/chosen": 4.874389171600342,
805
+ "rewards/margins": 20.083953857421875,
806
+ "rewards/rejected": -15.209564208984375,
807
+ "step": 57
808
+ },
809
+ {
810
+ "epoch": 0.06,
811
+ "learning_rate": 4.4379310344827585e-05,
812
+ "logits/chosen": -0.8982111811637878,
813
+ "logits/rejected": -1.6001371145248413,
814
+ "logps/chosen": -241.91790771484375,
815
+ "logps/rejected": -252.10299682617188,
816
+ "loss": 0.0,
817
+ "rewards/accuracies": 1.0,
818
+ "rewards/chosen": 6.044814109802246,
819
+ "rewards/margins": 21.77715301513672,
820
+ "rewards/rejected": -15.732340812683105,
821
+ "step": 58
822
+ },
823
+ {
824
+ "epoch": 0.06,
825
+ "learning_rate": 4.432758620689655e-05,
826
+ "logits/chosen": -0.753044068813324,
827
+ "logits/rejected": -1.6325318813323975,
828
+ "logps/chosen": -315.30230712890625,
829
+ "logps/rejected": -245.91976928710938,
830
+ "loss": 0.0078,
831
+ "rewards/accuracies": 1.0,
832
+ "rewards/chosen": 5.801758289337158,
833
+ "rewards/margins": 20.689924240112305,
834
+ "rewards/rejected": -14.888164520263672,
835
+ "step": 59
836
+ },
837
+ {
838
+ "epoch": 0.07,
839
+ "learning_rate": 4.427586206896552e-05,
840
+ "logits/chosen": -0.5443448424339294,
841
+ "logits/rejected": -1.7224345207214355,
842
+ "logps/chosen": -388.339599609375,
843
+ "logps/rejected": -211.5500946044922,
844
+ "loss": 0.0,
845
+ "rewards/accuracies": 1.0,
846
+ "rewards/chosen": 11.5880708694458,
847
+ "rewards/margins": 24.889877319335938,
848
+ "rewards/rejected": -13.301810264587402,
849
+ "step": 60
850
+ },
851
+ {
852
+ "epoch": 0.07,
853
+ "learning_rate": 4.422413793103448e-05,
854
+ "logits/chosen": -0.6921290755271912,
855
+ "logits/rejected": -1.6139135360717773,
856
+ "logps/chosen": -242.47312927246094,
857
+ "logps/rejected": -214.73622131347656,
858
+ "loss": 0.0,
859
+ "rewards/accuracies": 1.0,
860
+ "rewards/chosen": 11.329044342041016,
861
+ "rewards/margins": 21.45581817626953,
862
+ "rewards/rejected": -10.1267728805542,
863
+ "step": 61
864
+ },
865
+ {
866
+ "epoch": 0.07,
867
+ "learning_rate": 4.417241379310345e-05,
868
+ "logits/chosen": -0.4828271269798279,
869
+ "logits/rejected": -1.649349570274353,
870
+ "logps/chosen": -306.6004638671875,
871
+ "logps/rejected": -172.2523956298828,
872
+ "loss": 0.0003,
873
+ "rewards/accuracies": 1.0,
874
+ "rewards/chosen": 11.373737335205078,
875
+ "rewards/margins": 20.521434783935547,
876
+ "rewards/rejected": -9.147698402404785,
877
+ "step": 62
878
+ },
879
+ {
880
+ "epoch": 0.07,
881
+ "learning_rate": 4.412068965517242e-05,
882
+ "logits/chosen": -0.4676312506198883,
883
+ "logits/rejected": -1.6393827199935913,
884
+ "logps/chosen": -357.127685546875,
885
+ "logps/rejected": -199.84524536132812,
886
+ "loss": 0.0,
887
+ "rewards/accuracies": 1.0,
888
+ "rewards/chosen": 16.730457305908203,
889
+ "rewards/margins": 26.409488677978516,
890
+ "rewards/rejected": -9.679031372070312,
891
+ "step": 63
892
+ },
893
+ {
894
+ "epoch": 0.07,
895
+ "learning_rate": 4.406896551724138e-05,
896
+ "logits/chosen": -0.7774850726127625,
897
+ "logits/rejected": -1.324657678604126,
898
+ "logps/chosen": -193.75833129882812,
899
+ "logps/rejected": -194.70233154296875,
900
+ "loss": 0.0015,
901
+ "rewards/accuracies": 1.0,
902
+ "rewards/chosen": 5.371121406555176,
903
+ "rewards/margins": 15.129293441772461,
904
+ "rewards/rejected": -9.758172988891602,
905
+ "step": 64
906
+ },
907
+ {
908
+ "epoch": 0.07,
909
+ "learning_rate": 4.401724137931035e-05,
910
+ "logits/chosen": -0.5502160787582397,
911
+ "logits/rejected": -1.466262936592102,
912
+ "logps/chosen": -240.96836853027344,
913
+ "logps/rejected": -174.6772003173828,
914
+ "loss": 0.0,
915
+ "rewards/accuracies": 1.0,
916
+ "rewards/chosen": 7.808657169342041,
917
+ "rewards/margins": 17.520769119262695,
918
+ "rewards/rejected": -9.712109565734863,
919
+ "step": 65
920
+ },
921
+ {
922
+ "epoch": 0.07,
923
+ "learning_rate": 4.3965517241379315e-05,
924
+ "logits/chosen": -0.5615445375442505,
925
+ "logits/rejected": -1.3243337869644165,
926
+ "logps/chosen": -282.65484619140625,
927
+ "logps/rejected": -211.33326721191406,
928
+ "loss": 0.4403,
929
+ "rewards/accuracies": 0.9375,
930
+ "rewards/chosen": 9.213937759399414,
931
+ "rewards/margins": 18.93706512451172,
932
+ "rewards/rejected": -9.723125457763672,
933
+ "step": 66
934
+ },
935
+ {
936
+ "epoch": 0.07,
937
+ "learning_rate": 4.3913793103448277e-05,
938
+ "logits/chosen": -0.5882927775382996,
939
+ "logits/rejected": -1.4038889408111572,
940
+ "logps/chosen": -244.26333618164062,
941
+ "logps/rejected": -192.05517578125,
942
+ "loss": 0.0007,
943
+ "rewards/accuracies": 1.0,
944
+ "rewards/chosen": 5.999594688415527,
945
+ "rewards/margins": 16.232702255249023,
946
+ "rewards/rejected": -10.233107566833496,
947
+ "step": 67
948
+ },
949
+ {
950
+ "epoch": 0.07,
951
+ "learning_rate": 4.3862068965517245e-05,
952
+ "logits/chosen": -0.7710579633712769,
953
+ "logits/rejected": -1.5811477899551392,
954
+ "logps/chosen": -274.4286804199219,
955
+ "logps/rejected": -189.31427001953125,
956
+ "loss": 0.0003,
957
+ "rewards/accuracies": 1.0,
958
+ "rewards/chosen": 10.483807563781738,
959
+ "rewards/margins": 22.807409286499023,
960
+ "rewards/rejected": -12.323600769042969,
961
+ "step": 68
962
+ },
963
+ {
964
+ "epoch": 0.08,
965
+ "learning_rate": 4.381034482758621e-05,
966
+ "logits/chosen": -0.2001597136259079,
967
+ "logits/rejected": -1.5477687120437622,
968
+ "logps/chosen": -398.04296875,
969
+ "logps/rejected": -208.8218536376953,
970
+ "loss": 0.0,
971
+ "rewards/accuracies": 1.0,
972
+ "rewards/chosen": 11.122913360595703,
973
+ "rewards/margins": 23.094343185424805,
974
+ "rewards/rejected": -11.971429824829102,
975
+ "step": 69
976
+ },
977
+ {
978
+ "epoch": 0.08,
979
+ "learning_rate": 4.3758620689655174e-05,
980
+ "logits/chosen": -0.5891858339309692,
981
+ "logits/rejected": -1.500455379486084,
982
+ "logps/chosen": -223.07235717773438,
983
+ "logps/rejected": -219.37405395507812,
984
+ "loss": 0.0,
985
+ "rewards/accuracies": 1.0,
986
+ "rewards/chosen": 7.498288154602051,
987
+ "rewards/margins": 19.65944480895996,
988
+ "rewards/rejected": -12.16115665435791,
989
+ "step": 70
990
+ },
991
+ {
992
+ "epoch": 0.08,
993
+ "learning_rate": 4.370689655172414e-05,
994
+ "logits/chosen": -0.5222135186195374,
995
+ "logits/rejected": -1.459835410118103,
996
+ "logps/chosen": -264.3045349121094,
997
+ "logps/rejected": -184.4599609375,
998
+ "loss": 0.0006,
999
+ "rewards/accuracies": 1.0,
1000
+ "rewards/chosen": 7.261447429656982,
1001
+ "rewards/margins": 17.37218475341797,
1002
+ "rewards/rejected": -10.110737800598145,
1003
+ "step": 71
1004
+ },
1005
+ {
1006
+ "epoch": 0.08,
1007
+ "learning_rate": 4.365517241379311e-05,
1008
+ "logits/chosen": -0.4974746108055115,
1009
+ "logits/rejected": -1.5458664894104004,
1010
+ "logps/chosen": -272.7602233886719,
1011
+ "logps/rejected": -181.00247192382812,
1012
+ "loss": 0.0,
1013
+ "rewards/accuracies": 1.0,
1014
+ "rewards/chosen": 10.925564765930176,
1015
+ "rewards/margins": 22.81020736694336,
1016
+ "rewards/rejected": -11.88464069366455,
1017
+ "step": 72
1018
+ },
1019
+ {
1020
+ "epoch": 0.08,
1021
+ "learning_rate": 4.360344827586207e-05,
1022
+ "logits/chosen": -0.3357059359550476,
1023
+ "logits/rejected": -1.4441150426864624,
1024
+ "logps/chosen": -414.8647766113281,
1025
+ "logps/rejected": -223.109375,
1026
+ "loss": 0.0004,
1027
+ "rewards/accuracies": 1.0,
1028
+ "rewards/chosen": 13.181656837463379,
1029
+ "rewards/margins": 27.527368545532227,
1030
+ "rewards/rejected": -14.345712661743164,
1031
+ "step": 73
1032
+ },
1033
+ {
1034
+ "epoch": 0.08,
1035
+ "learning_rate": 4.355172413793104e-05,
1036
+ "logits/chosen": -0.6531690359115601,
1037
+ "logits/rejected": -1.4611525535583496,
1038
+ "logps/chosen": -221.1251220703125,
1039
+ "logps/rejected": -226.30352783203125,
1040
+ "loss": 0.0,
1041
+ "rewards/accuracies": 1.0,
1042
+ "rewards/chosen": 7.807477951049805,
1043
+ "rewards/margins": 21.502174377441406,
1044
+ "rewards/rejected": -13.694696426391602,
1045
+ "step": 74
1046
+ },
1047
+ {
1048
+ "epoch": 0.08,
1049
+ "learning_rate": 4.35e-05,
1050
+ "logits/chosen": -0.5135932564735413,
1051
+ "logits/rejected": -1.4995931386947632,
1052
+ "logps/chosen": -332.56842041015625,
1053
+ "logps/rejected": -270.9891662597656,
1054
+ "loss": 0.0,
1055
+ "rewards/accuracies": 1.0,
1056
+ "rewards/chosen": 12.301117897033691,
1057
+ "rewards/margins": 26.830467224121094,
1058
+ "rewards/rejected": -14.529346466064453,
1059
+ "step": 75
1060
+ },
1061
+ {
1062
+ "epoch": 0.08,
1063
+ "learning_rate": 4.344827586206897e-05,
1064
+ "logits/chosen": -0.3770020604133606,
1065
+ "logits/rejected": -1.44219970703125,
1066
+ "logps/chosen": -317.96240234375,
1067
+ "logps/rejected": -250.3682098388672,
1068
+ "loss": 0.0,
1069
+ "rewards/accuracies": 1.0,
1070
+ "rewards/chosen": 7.8860931396484375,
1071
+ "rewards/margins": 23.53211784362793,
1072
+ "rewards/rejected": -15.646024703979492,
1073
+ "step": 76
1074
+ },
1075
+ {
1076
+ "epoch": 0.08,
1077
+ "learning_rate": 4.339655172413793e-05,
1078
+ "logits/chosen": -0.49628371000289917,
1079
+ "logits/rejected": -1.4045476913452148,
1080
+ "logps/chosen": -367.6162414550781,
1081
+ "logps/rejected": -265.2467346191406,
1082
+ "loss": 0.0,
1083
+ "rewards/accuracies": 1.0,
1084
+ "rewards/chosen": 10.148900985717773,
1085
+ "rewards/margins": 24.313669204711914,
1086
+ "rewards/rejected": -14.164766311645508,
1087
+ "step": 77
1088
+ },
1089
+ {
1090
+ "epoch": 0.09,
1091
+ "learning_rate": 4.33448275862069e-05,
1092
+ "logits/chosen": -0.5287263989448547,
1093
+ "logits/rejected": -1.3665097951889038,
1094
+ "logps/chosen": -341.87664794921875,
1095
+ "logps/rejected": -249.6795654296875,
1096
+ "loss": 0.1517,
1097
+ "rewards/accuracies": 0.9375,
1098
+ "rewards/chosen": 8.620326042175293,
1099
+ "rewards/margins": 19.850658416748047,
1100
+ "rewards/rejected": -11.230331420898438,
1101
+ "step": 78
1102
+ },
1103
+ {
1104
+ "epoch": 0.09,
1105
+ "learning_rate": 4.3293103448275865e-05,
1106
+ "logits/chosen": -0.2743966281414032,
1107
+ "logits/rejected": -1.4255130290985107,
1108
+ "logps/chosen": -334.3340759277344,
1109
+ "logps/rejected": -265.15032958984375,
1110
+ "loss": 0.0,
1111
+ "rewards/accuracies": 1.0,
1112
+ "rewards/chosen": 10.872364044189453,
1113
+ "rewards/margins": 27.00977325439453,
1114
+ "rewards/rejected": -16.13741111755371,
1115
+ "step": 79
1116
+ },
1117
+ {
1118
+ "epoch": 0.09,
1119
+ "learning_rate": 4.324137931034483e-05,
1120
+ "logits/chosen": -0.5563563108444214,
1121
+ "logits/rejected": -1.6934064626693726,
1122
+ "logps/chosen": -408.9612731933594,
1123
+ "logps/rejected": -220.00091552734375,
1124
+ "loss": 0.0,
1125
+ "rewards/accuracies": 1.0,
1126
+ "rewards/chosen": 10.237764358520508,
1127
+ "rewards/margins": 24.824390411376953,
1128
+ "rewards/rejected": -14.586627960205078,
1129
+ "step": 80
1130
+ },
1131
+ {
1132
+ "epoch": 0.09,
1133
+ "learning_rate": 4.3189655172413795e-05,
1134
+ "logits/chosen": -0.617451548576355,
1135
+ "logits/rejected": -1.640676736831665,
1136
+ "logps/chosen": -363.1540222167969,
1137
+ "logps/rejected": -226.42376708984375,
1138
+ "loss": 0.0,
1139
+ "rewards/accuracies": 1.0,
1140
+ "rewards/chosen": 10.672024726867676,
1141
+ "rewards/margins": 24.347591400146484,
1142
+ "rewards/rejected": -13.675565719604492,
1143
+ "step": 81
1144
+ },
1145
+ {
1146
+ "epoch": 0.09,
1147
+ "learning_rate": 4.313793103448276e-05,
1148
+ "logits/chosen": -0.45464372634887695,
1149
+ "logits/rejected": -1.4708175659179688,
1150
+ "logps/chosen": -328.35675048828125,
1151
+ "logps/rejected": -223.79766845703125,
1152
+ "loss": 0.0,
1153
+ "rewards/accuracies": 1.0,
1154
+ "rewards/chosen": 6.965122699737549,
1155
+ "rewards/margins": 21.22077178955078,
1156
+ "rewards/rejected": -14.25564956665039,
1157
+ "step": 82
1158
+ },
1159
+ {
1160
+ "epoch": 0.09,
1161
+ "learning_rate": 4.3086206896551724e-05,
1162
+ "logits/chosen": -0.6079340577125549,
1163
+ "logits/rejected": -1.5333082675933838,
1164
+ "logps/chosen": -219.4099578857422,
1165
+ "logps/rejected": -230.07948303222656,
1166
+ "loss": 0.0,
1167
+ "rewards/accuracies": 1.0,
1168
+ "rewards/chosen": 6.278050899505615,
1169
+ "rewards/margins": 20.45656967163086,
1170
+ "rewards/rejected": -14.178520202636719,
1171
+ "step": 83
1172
+ },
1173
+ {
1174
+ "epoch": 0.09,
1175
+ "learning_rate": 4.303448275862069e-05,
1176
+ "logits/chosen": -0.4874611794948578,
1177
+ "logits/rejected": -1.5135740041732788,
1178
+ "logps/chosen": -304.00274658203125,
1179
+ "logps/rejected": -263.0120849609375,
1180
+ "loss": 0.0,
1181
+ "rewards/accuracies": 1.0,
1182
+ "rewards/chosen": 9.557435989379883,
1183
+ "rewards/margins": 24.41012954711914,
1184
+ "rewards/rejected": -14.852693557739258,
1185
+ "step": 84
1186
+ },
1187
+ {
1188
+ "epoch": 0.09,
1189
+ "learning_rate": 4.298275862068966e-05,
1190
+ "logits/chosen": -0.4802761375904083,
1191
+ "logits/rejected": -1.4294252395629883,
1192
+ "logps/chosen": -293.41192626953125,
1193
+ "logps/rejected": -283.2821350097656,
1194
+ "loss": 0.0,
1195
+ "rewards/accuracies": 1.0,
1196
+ "rewards/chosen": 7.370968818664551,
1197
+ "rewards/margins": 24.32880401611328,
1198
+ "rewards/rejected": -16.957834243774414,
1199
+ "step": 85
1200
+ },
1201
+ {
1202
+ "epoch": 0.09,
1203
+ "learning_rate": 4.293103448275862e-05,
1204
+ "logits/chosen": -0.3580012321472168,
1205
+ "logits/rejected": -1.499201774597168,
1206
+ "logps/chosen": -350.5351257324219,
1207
+ "logps/rejected": -244.6523895263672,
1208
+ "loss": 0.0001,
1209
+ "rewards/accuracies": 1.0,
1210
+ "rewards/chosen": 12.237674713134766,
1211
+ "rewards/margins": 26.58084487915039,
1212
+ "rewards/rejected": -14.343170166015625,
1213
+ "step": 86
1214
+ },
1215
+ {
1216
+ "epoch": 0.09,
1217
+ "learning_rate": 4.287931034482759e-05,
1218
+ "logits/chosen": -0.6917211413383484,
1219
+ "logits/rejected": -1.4503819942474365,
1220
+ "logps/chosen": -217.5175323486328,
1221
+ "logps/rejected": -228.9481201171875,
1222
+ "loss": 0.0,
1223
+ "rewards/accuracies": 1.0,
1224
+ "rewards/chosen": 6.635045051574707,
1225
+ "rewards/margins": 20.42833709716797,
1226
+ "rewards/rejected": -13.793292999267578,
1227
+ "step": 87
1228
+ },
1229
+ {
1230
+ "epoch": 0.1,
1231
+ "learning_rate": 4.282758620689656e-05,
1232
+ "logits/chosen": -0.8255922794342041,
1233
+ "logits/rejected": -1.4748644828796387,
1234
+ "logps/chosen": -217.336669921875,
1235
+ "logps/rejected": -206.8959197998047,
1236
+ "loss": 0.0005,
1237
+ "rewards/accuracies": 1.0,
1238
+ "rewards/chosen": 5.22888708114624,
1239
+ "rewards/margins": 19.673564910888672,
1240
+ "rewards/rejected": -14.444679260253906,
1241
+ "step": 88
1242
+ },
1243
+ {
1244
+ "epoch": 0.1,
1245
+ "learning_rate": 4.2775862068965525e-05,
1246
+ "logits/chosen": -0.626908004283905,
1247
+ "logits/rejected": -1.6663663387298584,
1248
+ "logps/chosen": -328.56231689453125,
1249
+ "logps/rejected": -235.42843627929688,
1250
+ "loss": 0.0,
1251
+ "rewards/accuracies": 1.0,
1252
+ "rewards/chosen": 6.891221046447754,
1253
+ "rewards/margins": 21.77893829345703,
1254
+ "rewards/rejected": -14.887716293334961,
1255
+ "step": 89
1256
+ },
1257
+ {
1258
+ "epoch": 0.1,
1259
+ "learning_rate": 4.2724137931034486e-05,
1260
+ "logits/chosen": -0.5689429640769958,
1261
+ "logits/rejected": -1.618609070777893,
1262
+ "logps/chosen": -298.08770751953125,
1263
+ "logps/rejected": -237.06094360351562,
1264
+ "loss": 0.0,
1265
+ "rewards/accuracies": 1.0,
1266
+ "rewards/chosen": 9.328791618347168,
1267
+ "rewards/margins": 24.572046279907227,
1268
+ "rewards/rejected": -15.243253707885742,
1269
+ "step": 90
1270
+ },
1271
+ {
1272
+ "epoch": 0.1,
1273
+ "learning_rate": 4.2672413793103454e-05,
1274
+ "logits/chosen": -0.5474239587783813,
1275
+ "logits/rejected": -1.6142076253890991,
1276
+ "logps/chosen": -295.00494384765625,
1277
+ "logps/rejected": -268.9505615234375,
1278
+ "loss": 0.0287,
1279
+ "rewards/accuracies": 1.0,
1280
+ "rewards/chosen": 9.715471267700195,
1281
+ "rewards/margins": 23.968780517578125,
1282
+ "rewards/rejected": -14.25330924987793,
1283
+ "step": 91
1284
+ },
1285
+ {
1286
+ "epoch": 0.1,
1287
+ "learning_rate": 4.2620689655172416e-05,
1288
+ "logits/chosen": -0.5132906436920166,
1289
+ "logits/rejected": -1.6248977184295654,
1290
+ "logps/chosen": -314.5284423828125,
1291
+ "logps/rejected": -210.1342315673828,
1292
+ "loss": 0.0595,
1293
+ "rewards/accuracies": 0.9375,
1294
+ "rewards/chosen": 7.707674980163574,
1295
+ "rewards/margins": 20.18167495727539,
1296
+ "rewards/rejected": -12.474000930786133,
1297
+ "step": 92
1298
+ },
1299
+ {
1300
+ "epoch": 0.1,
1301
+ "learning_rate": 4.2568965517241384e-05,
1302
+ "logits/chosen": -0.45244526863098145,
1303
+ "logits/rejected": -1.7539148330688477,
1304
+ "logps/chosen": -394.9619140625,
1305
+ "logps/rejected": -230.32550048828125,
1306
+ "loss": 0.0,
1307
+ "rewards/accuracies": 1.0,
1308
+ "rewards/chosen": 11.32716178894043,
1309
+ "rewards/margins": 26.326129913330078,
1310
+ "rewards/rejected": -14.998967170715332,
1311
+ "step": 93
1312
+ },
1313
+ {
1314
+ "epoch": 0.1,
1315
+ "learning_rate": 4.2517241379310345e-05,
1316
+ "logits/chosen": -0.7538139224052429,
1317
+ "logits/rejected": -1.5796599388122559,
1318
+ "logps/chosen": -286.95281982421875,
1319
+ "logps/rejected": -266.7412414550781,
1320
+ "loss": 0.0,
1321
+ "rewards/accuracies": 1.0,
1322
+ "rewards/chosen": 8.003694534301758,
1323
+ "rewards/margins": 25.05299949645996,
1324
+ "rewards/rejected": -17.04930305480957,
1325
+ "step": 94
1326
+ },
1327
+ {
1328
+ "epoch": 0.1,
1329
+ "learning_rate": 4.246551724137931e-05,
1330
+ "logits/chosen": -0.44376400113105774,
1331
+ "logits/rejected": -1.5493687391281128,
1332
+ "logps/chosen": -406.93951416015625,
1333
+ "logps/rejected": -302.9401550292969,
1334
+ "loss": 0.0,
1335
+ "rewards/accuracies": 1.0,
1336
+ "rewards/chosen": 7.246862888336182,
1337
+ "rewards/margins": 27.937149047851562,
1338
+ "rewards/rejected": -20.69028663635254,
1339
+ "step": 95
1340
+ },
1341
+ {
1342
+ "epoch": 0.1,
1343
+ "learning_rate": 4.2413793103448274e-05,
1344
+ "logits/chosen": -0.7577255964279175,
1345
+ "logits/rejected": -1.77558171749115,
1346
+ "logps/chosen": -357.90838623046875,
1347
+ "logps/rejected": -246.73458862304688,
1348
+ "loss": 0.0,
1349
+ "rewards/accuracies": 1.0,
1350
+ "rewards/chosen": 7.267894744873047,
1351
+ "rewards/margins": 24.79804229736328,
1352
+ "rewards/rejected": -17.530147552490234,
1353
+ "step": 96
1354
+ },
1355
+ {
1356
+ "epoch": 0.11,
1357
+ "learning_rate": 4.236206896551724e-05,
1358
+ "logits/chosen": -0.8565713763237,
1359
+ "logits/rejected": -1.7760589122772217,
1360
+ "logps/chosen": -338.751220703125,
1361
+ "logps/rejected": -304.101806640625,
1362
+ "loss": 0.0,
1363
+ "rewards/accuracies": 1.0,
1364
+ "rewards/chosen": 9.238365173339844,
1365
+ "rewards/margins": 27.781978607177734,
1366
+ "rewards/rejected": -18.54361343383789,
1367
+ "step": 97
1368
+ },
1369
+ {
1370
+ "epoch": 0.11,
1371
+ "learning_rate": 4.231034482758621e-05,
1372
+ "logits/chosen": -0.40401750802993774,
1373
+ "logits/rejected": -1.7701493501663208,
1374
+ "logps/chosen": -537.5440063476562,
1375
+ "logps/rejected": -349.8548889160156,
1376
+ "loss": 0.0,
1377
+ "rewards/accuracies": 1.0,
1378
+ "rewards/chosen": 14.211512565612793,
1379
+ "rewards/margins": 37.746315002441406,
1380
+ "rewards/rejected": -23.53480339050293,
1381
+ "step": 98
1382
+ },
1383
+ {
1384
+ "epoch": 0.11,
1385
+ "learning_rate": 4.225862068965517e-05,
1386
+ "logits/chosen": -0.9799883365631104,
1387
+ "logits/rejected": -1.8936114311218262,
1388
+ "logps/chosen": -403.7567138671875,
1389
+ "logps/rejected": -257.6401672363281,
1390
+ "loss": 0.0001,
1391
+ "rewards/accuracies": 1.0,
1392
+ "rewards/chosen": 4.710639476776123,
1393
+ "rewards/margins": 23.923961639404297,
1394
+ "rewards/rejected": -19.213321685791016,
1395
+ "step": 99
1396
+ },
1397
+ {
1398
+ "epoch": 0.11,
1399
+ "learning_rate": 4.220689655172414e-05,
1400
+ "logits/chosen": -0.94623202085495,
1401
+ "logits/rejected": -1.7190004587173462,
1402
+ "logps/chosen": -360.8213806152344,
1403
+ "logps/rejected": -310.1994323730469,
1404
+ "loss": 0.0,
1405
+ "rewards/accuracies": 1.0,
1406
+ "rewards/chosen": 3.991710662841797,
1407
+ "rewards/margins": 26.59444808959961,
1408
+ "rewards/rejected": -22.60273551940918,
1409
+ "step": 100
1410
+ },
1411
+ {
1412
+ "epoch": 0.11,
1413
+ "learning_rate": 4.215517241379311e-05,
1414
+ "logits/chosen": -0.6102348566055298,
1415
+ "logits/rejected": -1.8472094535827637,
1416
+ "logps/chosen": -460.48699951171875,
1417
+ "logps/rejected": -369.1959228515625,
1418
+ "loss": 0.0002,
1419
+ "rewards/accuracies": 1.0,
1420
+ "rewards/chosen": 6.696294784545898,
1421
+ "rewards/margins": 32.79343795776367,
1422
+ "rewards/rejected": -26.097143173217773,
1423
+ "step": 101
1424
+ },
1425
+ {
1426
+ "epoch": 0.11,
1427
+ "learning_rate": 4.210344827586207e-05,
1428
+ "logits/chosen": -0.9649127721786499,
1429
+ "logits/rejected": -1.8413913249969482,
1430
+ "logps/chosen": -455.1119079589844,
1431
+ "logps/rejected": -365.0443420410156,
1432
+ "loss": 0.0,
1433
+ "rewards/accuracies": 1.0,
1434
+ "rewards/chosen": 7.14402437210083,
1435
+ "rewards/margins": 32.23406219482422,
1436
+ "rewards/rejected": -25.090038299560547,
1437
+ "step": 102
1438
+ },
1439
+ {
1440
+ "epoch": 0.11,
1441
+ "learning_rate": 4.2051724137931036e-05,
1442
+ "logits/chosen": -1.176360011100769,
1443
+ "logits/rejected": -1.6914632320404053,
1444
+ "logps/chosen": -283.3887023925781,
1445
+ "logps/rejected": -343.1808166503906,
1446
+ "loss": 0.0008,
1447
+ "rewards/accuracies": 1.0,
1448
+ "rewards/chosen": -0.03441965579986572,
1449
+ "rewards/margins": 24.389293670654297,
1450
+ "rewards/rejected": -24.423709869384766,
1451
+ "step": 103
1452
+ },
1453
+ {
1454
+ "epoch": 0.11,
1455
+ "learning_rate": 4.2000000000000004e-05,
1456
+ "logits/chosen": -0.7725791335105896,
1457
+ "logits/rejected": -1.943673014640808,
1458
+ "logps/chosen": -484.6883544921875,
1459
+ "logps/rejected": -338.5666198730469,
1460
+ "loss": 0.0,
1461
+ "rewards/accuracies": 1.0,
1462
+ "rewards/chosen": 7.784733295440674,
1463
+ "rewards/margins": 31.62430763244629,
1464
+ "rewards/rejected": -23.839576721191406,
1465
+ "step": 104
1466
+ },
1467
+ {
1468
+ "epoch": 0.11,
1469
+ "learning_rate": 4.194827586206897e-05,
1470
+ "logits/chosen": -0.6000739336013794,
1471
+ "logits/rejected": -1.681613802909851,
1472
+ "logps/chosen": -342.3570556640625,
1473
+ "logps/rejected": -256.06829833984375,
1474
+ "loss": 0.0,
1475
+ "rewards/accuracies": 1.0,
1476
+ "rewards/chosen": 7.42122745513916,
1477
+ "rewards/margins": 25.636981964111328,
1478
+ "rewards/rejected": -18.21575355529785,
1479
+ "step": 105
1480
+ },
1481
+ {
1482
+ "epoch": 0.12,
1483
+ "learning_rate": 4.1896551724137934e-05,
1484
+ "logits/chosen": -0.5727726221084595,
1485
+ "logits/rejected": -1.5278774499893188,
1486
+ "logps/chosen": -340.2919921875,
1487
+ "logps/rejected": -246.67628479003906,
1488
+ "loss": 0.0183,
1489
+ "rewards/accuracies": 1.0,
1490
+ "rewards/chosen": 7.279099464416504,
1491
+ "rewards/margins": 23.8295841217041,
1492
+ "rewards/rejected": -16.550485610961914,
1493
+ "step": 106
1494
+ },
1495
+ {
1496
+ "epoch": 0.12,
1497
+ "learning_rate": 4.18448275862069e-05,
1498
+ "logits/chosen": -0.588262677192688,
1499
+ "logits/rejected": -1.4113011360168457,
1500
+ "logps/chosen": -239.1262664794922,
1501
+ "logps/rejected": -323.8817443847656,
1502
+ "loss": 0.0,
1503
+ "rewards/accuracies": 1.0,
1504
+ "rewards/chosen": 4.26235294342041,
1505
+ "rewards/margins": 25.60938262939453,
1506
+ "rewards/rejected": -21.34703254699707,
1507
+ "step": 107
1508
+ },
1509
+ {
1510
+ "epoch": 0.12,
1511
+ "learning_rate": 4.179310344827587e-05,
1512
+ "logits/chosen": -0.825312077999115,
1513
+ "logits/rejected": -1.6536335945129395,
1514
+ "logps/chosen": -245.26333618164062,
1515
+ "logps/rejected": -275.9490966796875,
1516
+ "loss": 0.0,
1517
+ "rewards/accuracies": 1.0,
1518
+ "rewards/chosen": 4.488056182861328,
1519
+ "rewards/margins": 24.79592514038086,
1520
+ "rewards/rejected": -20.30786895751953,
1521
+ "step": 108
1522
+ },
1523
+ {
1524
+ "epoch": 0.12,
1525
+ "learning_rate": 4.174137931034483e-05,
1526
+ "logits/chosen": -0.6509658694267273,
1527
+ "logits/rejected": -1.7768073081970215,
1528
+ "logps/chosen": -494.1702575683594,
1529
+ "logps/rejected": -269.7856140136719,
1530
+ "loss": 0.0,
1531
+ "rewards/accuracies": 1.0,
1532
+ "rewards/chosen": 11.041641235351562,
1533
+ "rewards/margins": 30.055639266967773,
1534
+ "rewards/rejected": -19.013999938964844,
1535
+ "step": 109
1536
+ },
1537
+ {
1538
+ "epoch": 0.12,
1539
+ "learning_rate": 4.16896551724138e-05,
1540
+ "logits/chosen": -0.813829243183136,
1541
+ "logits/rejected": -1.7704083919525146,
1542
+ "logps/chosen": -370.6927795410156,
1543
+ "logps/rejected": -288.31463623046875,
1544
+ "loss": 0.0,
1545
+ "rewards/accuracies": 1.0,
1546
+ "rewards/chosen": 10.1367826461792,
1547
+ "rewards/margins": 30.452945709228516,
1548
+ "rewards/rejected": -20.316162109375,
1549
+ "step": 110
1550
+ },
1551
+ {
1552
+ "epoch": 0.12,
1553
+ "learning_rate": 4.163793103448276e-05,
1554
+ "logits/chosen": -0.450665146112442,
1555
+ "logits/rejected": -1.4787647724151611,
1556
+ "logps/chosen": -336.4142761230469,
1557
+ "logps/rejected": -275.8489990234375,
1558
+ "loss": 0.0,
1559
+ "rewards/accuracies": 1.0,
1560
+ "rewards/chosen": 6.380362510681152,
1561
+ "rewards/margins": 27.303016662597656,
1562
+ "rewards/rejected": -20.922651290893555,
1563
+ "step": 111
1564
+ },
1565
+ {
1566
+ "epoch": 0.12,
1567
+ "learning_rate": 4.158620689655173e-05,
1568
+ "logits/chosen": -0.7610673904418945,
1569
+ "logits/rejected": -1.5079166889190674,
1570
+ "logps/chosen": -290.8649597167969,
1571
+ "logps/rejected": -391.47772216796875,
1572
+ "loss": 0.4671,
1573
+ "rewards/accuracies": 0.9375,
1574
+ "rewards/chosen": 4.447336196899414,
1575
+ "rewards/margins": 27.807403564453125,
1576
+ "rewards/rejected": -23.36006736755371,
1577
+ "step": 112
1578
+ },
1579
+ {
1580
+ "epoch": 0.12,
1581
+ "learning_rate": 4.153448275862069e-05,
1582
+ "logits/chosen": -0.9873843193054199,
1583
+ "logits/rejected": -1.6280449628829956,
1584
+ "logps/chosen": -244.6575164794922,
1585
+ "logps/rejected": -324.1699523925781,
1586
+ "loss": 0.0036,
1587
+ "rewards/accuracies": 1.0,
1588
+ "rewards/chosen": 2.1840460300445557,
1589
+ "rewards/margins": 25.491558074951172,
1590
+ "rewards/rejected": -23.307514190673828,
1591
+ "step": 113
1592
+ },
1593
+ {
1594
+ "epoch": 0.12,
1595
+ "learning_rate": 4.148275862068966e-05,
1596
+ "logits/chosen": -1.0418487787246704,
1597
+ "logits/rejected": -1.7689330577850342,
1598
+ "logps/chosen": -309.9642028808594,
1599
+ "logps/rejected": -433.8427734375,
1600
+ "loss": 0.0,
1601
+ "rewards/accuracies": 1.0,
1602
+ "rewards/chosen": 3.8272902965545654,
1603
+ "rewards/margins": 33.52722930908203,
1604
+ "rewards/rejected": -29.69993782043457,
1605
+ "step": 114
1606
+ },
1607
+ {
1608
+ "epoch": 0.13,
1609
+ "learning_rate": 4.143103448275862e-05,
1610
+ "logits/chosen": -0.9574589133262634,
1611
+ "logits/rejected": -1.9435979127883911,
1612
+ "logps/chosen": -485.497314453125,
1613
+ "logps/rejected": -301.46038818359375,
1614
+ "loss": 0.0002,
1615
+ "rewards/accuracies": 1.0,
1616
+ "rewards/chosen": 3.4343650341033936,
1617
+ "rewards/margins": 27.89614486694336,
1618
+ "rewards/rejected": -24.46177864074707,
1619
+ "step": 115
1620
+ },
1621
+ {
1622
+ "epoch": 0.13,
1623
+ "learning_rate": 4.1379310344827587e-05,
1624
+ "logits/chosen": -1.2192739248275757,
1625
+ "logits/rejected": -1.9555035829544067,
1626
+ "logps/chosen": -485.7822265625,
1627
+ "logps/rejected": -452.36566162109375,
1628
+ "loss": 0.0,
1629
+ "rewards/accuracies": 1.0,
1630
+ "rewards/chosen": -3.4936442375183105,
1631
+ "rewards/margins": 31.84035873413086,
1632
+ "rewards/rejected": -35.33400344848633,
1633
+ "step": 116
1634
+ },
1635
+ {
1636
+ "epoch": 0.13,
1637
+ "learning_rate": 4.1327586206896555e-05,
1638
+ "logits/chosen": -1.3901960849761963,
1639
+ "logits/rejected": -2.194340944290161,
1640
+ "logps/chosen": -556.4442749023438,
1641
+ "logps/rejected": -403.41094970703125,
1642
+ "loss": 0.4842,
1643
+ "rewards/accuracies": 0.9375,
1644
+ "rewards/chosen": -6.604147434234619,
1645
+ "rewards/margins": 25.96053695678711,
1646
+ "rewards/rejected": -32.5646858215332,
1647
+ "step": 117
1648
+ },
1649
+ {
1650
+ "epoch": 0.13,
1651
+ "learning_rate": 4.1275862068965516e-05,
1652
+ "logits/chosen": -1.10233473777771,
1653
+ "logits/rejected": -1.911782145500183,
1654
+ "logps/chosen": -319.7232666015625,
1655
+ "logps/rejected": -366.8639221191406,
1656
+ "loss": 0.0,
1657
+ "rewards/accuracies": 1.0,
1658
+ "rewards/chosen": 0.5800843238830566,
1659
+ "rewards/margins": 30.92066192626953,
1660
+ "rewards/rejected": -30.3405818939209,
1661
+ "step": 118
1662
+ },
1663
+ {
1664
+ "epoch": 0.13,
1665
+ "learning_rate": 4.1224137931034484e-05,
1666
+ "logits/chosen": -0.7030006051063538,
1667
+ "logits/rejected": -1.7387290000915527,
1668
+ "logps/chosen": -291.53961181640625,
1669
+ "logps/rejected": -350.3901062011719,
1670
+ "loss": 0.0,
1671
+ "rewards/accuracies": 1.0,
1672
+ "rewards/chosen": 1.4014167785644531,
1673
+ "rewards/margins": 28.830717086791992,
1674
+ "rewards/rejected": -27.429298400878906,
1675
+ "step": 119
1676
+ },
1677
+ {
1678
+ "epoch": 0.13,
1679
+ "learning_rate": 4.117241379310345e-05,
1680
+ "logits/chosen": -0.5296112895011902,
1681
+ "logits/rejected": -1.5072706937789917,
1682
+ "logps/chosen": -286.94879150390625,
1683
+ "logps/rejected": -346.3030700683594,
1684
+ "loss": 0.0,
1685
+ "rewards/accuracies": 1.0,
1686
+ "rewards/chosen": 1.8979191780090332,
1687
+ "rewards/margins": 28.363784790039062,
1688
+ "rewards/rejected": -26.465864181518555,
1689
+ "step": 120
1690
+ },
1691
+ {
1692
+ "epoch": 0.13,
1693
+ "learning_rate": 4.112068965517242e-05,
1694
+ "logits/chosen": -0.42470210790634155,
1695
+ "logits/rejected": -1.7659740447998047,
1696
+ "logps/chosen": -477.8876953125,
1697
+ "logps/rejected": -315.16717529296875,
1698
+ "loss": 0.0,
1699
+ "rewards/accuracies": 1.0,
1700
+ "rewards/chosen": 9.757699966430664,
1701
+ "rewards/margins": 34.151214599609375,
1702
+ "rewards/rejected": -24.393516540527344,
1703
+ "step": 121
1704
+ },
1705
+ {
1706
+ "epoch": 0.13,
1707
+ "learning_rate": 4.106896551724138e-05,
1708
+ "logits/chosen": -0.37895911931991577,
1709
+ "logits/rejected": -1.4872817993164062,
1710
+ "logps/chosen": -360.4671936035156,
1711
+ "logps/rejected": -311.3260803222656,
1712
+ "loss": 0.0,
1713
+ "rewards/accuracies": 1.0,
1714
+ "rewards/chosen": 10.024797439575195,
1715
+ "rewards/margins": 30.34416961669922,
1716
+ "rewards/rejected": -20.31937026977539,
1717
+ "step": 122
1718
+ },
1719
+ {
1720
+ "epoch": 0.13,
1721
+ "learning_rate": 4.101724137931035e-05,
1722
+ "logits/chosen": -0.4127517342567444,
1723
+ "logits/rejected": -1.4179290533065796,
1724
+ "logps/chosen": -396.7817077636719,
1725
+ "logps/rejected": -257.0691833496094,
1726
+ "loss": 0.0,
1727
+ "rewards/accuracies": 1.0,
1728
+ "rewards/chosen": 8.115121841430664,
1729
+ "rewards/margins": 25.882369995117188,
1730
+ "rewards/rejected": -17.767250061035156,
1731
+ "step": 123
1732
+ },
1733
+ {
1734
+ "epoch": 0.14,
1735
+ "learning_rate": 4.096551724137932e-05,
1736
+ "logits/chosen": -0.4867606461048126,
1737
+ "logits/rejected": -1.433240532875061,
1738
+ "logps/chosen": -307.6831359863281,
1739
+ "logps/rejected": -285.9851379394531,
1740
+ "loss": 0.0006,
1741
+ "rewards/accuracies": 1.0,
1742
+ "rewards/chosen": 9.156608581542969,
1743
+ "rewards/margins": 24.28235626220703,
1744
+ "rewards/rejected": -15.12574577331543,
1745
+ "step": 124
1746
+ },
1747
+ {
1748
+ "epoch": 0.14,
1749
+ "learning_rate": 4.091379310344828e-05,
1750
+ "logits/chosen": -0.5465874671936035,
1751
+ "logits/rejected": -1.398258924484253,
1752
+ "logps/chosen": -244.0363311767578,
1753
+ "logps/rejected": -240.68551635742188,
1754
+ "loss": 0.0,
1755
+ "rewards/accuracies": 1.0,
1756
+ "rewards/chosen": 4.43398904800415,
1757
+ "rewards/margins": 23.1319522857666,
1758
+ "rewards/rejected": -18.69796371459961,
1759
+ "step": 125
1760
+ },
1761
+ {
1762
+ "epoch": 0.14,
1763
+ "learning_rate": 4.0862068965517246e-05,
1764
+ "logits/chosen": -0.38893836736679077,
1765
+ "logits/rejected": -1.427646279335022,
1766
+ "logps/chosen": -351.76348876953125,
1767
+ "logps/rejected": -289.0392150878906,
1768
+ "loss": 0.0,
1769
+ "rewards/accuracies": 1.0,
1770
+ "rewards/chosen": 7.574392795562744,
1771
+ "rewards/margins": 25.838218688964844,
1772
+ "rewards/rejected": -18.26382827758789,
1773
+ "step": 126
1774
+ },
1775
+ {
1776
+ "epoch": 0.14,
1777
+ "learning_rate": 4.0810344827586214e-05,
1778
+ "logits/chosen": -0.3710274398326874,
1779
+ "logits/rejected": -1.5999269485473633,
1780
+ "logps/chosen": -394.9821472167969,
1781
+ "logps/rejected": -246.48887634277344,
1782
+ "loss": 0.0,
1783
+ "rewards/accuracies": 1.0,
1784
+ "rewards/chosen": 8.973166465759277,
1785
+ "rewards/margins": 26.636192321777344,
1786
+ "rewards/rejected": -17.66302490234375,
1787
+ "step": 127
1788
+ },
1789
+ {
1790
+ "epoch": 0.14,
1791
+ "learning_rate": 4.0758620689655175e-05,
1792
+ "logits/chosen": -0.673173725605011,
1793
+ "logits/rejected": -1.2474948167800903,
1794
+ "logps/chosen": -257.3236083984375,
1795
+ "logps/rejected": -264.7505187988281,
1796
+ "loss": 0.0,
1797
+ "rewards/accuracies": 1.0,
1798
+ "rewards/chosen": 2.773941993713379,
1799
+ "rewards/margins": 19.319332122802734,
1800
+ "rewards/rejected": -16.545391082763672,
1801
+ "step": 128
1802
+ },
1803
+ {
1804
+ "epoch": 0.14,
1805
+ "learning_rate": 4.0706896551724143e-05,
1806
+ "logits/chosen": -0.560947060585022,
1807
+ "logits/rejected": -1.374908447265625,
1808
+ "logps/chosen": -301.8670959472656,
1809
+ "logps/rejected": -232.6251983642578,
1810
+ "loss": 0.0,
1811
+ "rewards/accuracies": 1.0,
1812
+ "rewards/chosen": 10.369309425354004,
1813
+ "rewards/margins": 26.155746459960938,
1814
+ "rewards/rejected": -15.786436080932617,
1815
+ "step": 129
1816
+ },
1817
+ {
1818
+ "epoch": 0.14,
1819
+ "learning_rate": 4.0655172413793105e-05,
1820
+ "logits/chosen": -0.6484676003456116,
1821
+ "logits/rejected": -1.3061511516571045,
1822
+ "logps/chosen": -240.1058807373047,
1823
+ "logps/rejected": -292.108154296875,
1824
+ "loss": 0.0,
1825
+ "rewards/accuracies": 1.0,
1826
+ "rewards/chosen": 4.469289302825928,
1827
+ "rewards/margins": 25.45884132385254,
1828
+ "rewards/rejected": -20.989551544189453,
1829
+ "step": 130
1830
+ },
1831
+ {
1832
+ "epoch": 0.14,
1833
+ "learning_rate": 4.060344827586207e-05,
1834
+ "logits/chosen": -0.4261731505393982,
1835
+ "logits/rejected": -1.5151227712631226,
1836
+ "logps/chosen": -402.2912292480469,
1837
+ "logps/rejected": -289.1283264160156,
1838
+ "loss": 0.0,
1839
+ "rewards/accuracies": 1.0,
1840
+ "rewards/chosen": 10.882993698120117,
1841
+ "rewards/margins": 30.517051696777344,
1842
+ "rewards/rejected": -19.63405990600586,
1843
+ "step": 131
1844
+ },
1845
+ {
1846
+ "epoch": 0.14,
1847
+ "learning_rate": 4.0551724137931034e-05,
1848
+ "logits/chosen": -0.3702995181083679,
1849
+ "logits/rejected": -1.5198004245758057,
1850
+ "logps/chosen": -377.6826477050781,
1851
+ "logps/rejected": -287.36566162109375,
1852
+ "loss": 0.0,
1853
+ "rewards/accuracies": 1.0,
1854
+ "rewards/chosen": 10.187080383300781,
1855
+ "rewards/margins": 30.941383361816406,
1856
+ "rewards/rejected": -20.754301071166992,
1857
+ "step": 132
1858
+ },
1859
+ {
1860
+ "epoch": 0.15,
1861
+ "learning_rate": 4.05e-05,
1862
+ "logits/chosen": -0.394854873418808,
1863
+ "logits/rejected": -1.641805648803711,
1864
+ "logps/chosen": -462.9255676269531,
1865
+ "logps/rejected": -254.52276611328125,
1866
+ "loss": 0.0,
1867
+ "rewards/accuracies": 1.0,
1868
+ "rewards/chosen": 10.504643440246582,
1869
+ "rewards/margins": 27.564743041992188,
1870
+ "rewards/rejected": -17.060100555419922,
1871
+ "step": 133
1872
+ },
1873
+ {
1874
+ "epoch": 0.15,
1875
+ "learning_rate": 4.044827586206896e-05,
1876
+ "logits/chosen": -0.3886590600013733,
1877
+ "logits/rejected": -1.4597467184066772,
1878
+ "logps/chosen": -378.5803527832031,
1879
+ "logps/rejected": -263.2305908203125,
1880
+ "loss": 0.0,
1881
+ "rewards/accuracies": 1.0,
1882
+ "rewards/chosen": 13.531683921813965,
1883
+ "rewards/margins": 31.480838775634766,
1884
+ "rewards/rejected": -17.949153900146484,
1885
+ "step": 134
1886
+ },
1887
+ {
1888
+ "epoch": 0.15,
1889
+ "learning_rate": 4.039655172413793e-05,
1890
+ "logits/chosen": -0.5102599859237671,
1891
+ "logits/rejected": -1.250558614730835,
1892
+ "logps/chosen": -295.6455078125,
1893
+ "logps/rejected": -345.3118591308594,
1894
+ "loss": 0.0,
1895
+ "rewards/accuracies": 1.0,
1896
+ "rewards/chosen": 7.247247695922852,
1897
+ "rewards/margins": 26.292499542236328,
1898
+ "rewards/rejected": -19.04525375366211,
1899
+ "step": 135
1900
+ },
1901
+ {
1902
+ "epoch": 0.15,
1903
+ "learning_rate": 4.03448275862069e-05,
1904
+ "logits/chosen": -0.39377278089523315,
1905
+ "logits/rejected": -1.5313684940338135,
1906
+ "logps/chosen": -441.0325622558594,
1907
+ "logps/rejected": -223.12750244140625,
1908
+ "loss": 0.0,
1909
+ "rewards/accuracies": 1.0,
1910
+ "rewards/chosen": 12.754582405090332,
1911
+ "rewards/margins": 28.21717071533203,
1912
+ "rewards/rejected": -15.462587356567383,
1913
+ "step": 136
1914
+ },
1915
+ {
1916
+ "epoch": 0.15,
1917
+ "learning_rate": 4.029310344827587e-05,
1918
+ "logits/chosen": -0.45350462198257446,
1919
+ "logits/rejected": -1.3594186305999756,
1920
+ "logps/chosen": -313.7057189941406,
1921
+ "logps/rejected": -304.0419616699219,
1922
+ "loss": 0.0,
1923
+ "rewards/accuracies": 1.0,
1924
+ "rewards/chosen": 6.3996429443359375,
1925
+ "rewards/margins": 23.64727020263672,
1926
+ "rewards/rejected": -17.24762725830078,
1927
+ "step": 137
1928
+ },
1929
+ {
1930
+ "epoch": 0.15,
1931
+ "learning_rate": 4.024137931034483e-05,
1932
+ "logits/chosen": -0.17602750658988953,
1933
+ "logits/rejected": -1.2745662927627563,
1934
+ "logps/chosen": -371.0513916015625,
1935
+ "logps/rejected": -299.74481201171875,
1936
+ "loss": 0.0,
1937
+ "rewards/accuracies": 1.0,
1938
+ "rewards/chosen": 7.569783687591553,
1939
+ "rewards/margins": 26.724822998046875,
1940
+ "rewards/rejected": -19.155040740966797,
1941
+ "step": 138
1942
+ },
1943
+ {
1944
+ "epoch": 0.15,
1945
+ "learning_rate": 4.0189655172413796e-05,
1946
+ "logits/chosen": -0.44520992040634155,
1947
+ "logits/rejected": -1.4310128688812256,
1948
+ "logps/chosen": -334.8761901855469,
1949
+ "logps/rejected": -278.89349365234375,
1950
+ "loss": 0.0,
1951
+ "rewards/accuracies": 1.0,
1952
+ "rewards/chosen": 3.9243433475494385,
1953
+ "rewards/margins": 22.313913345336914,
1954
+ "rewards/rejected": -18.389568328857422,
1955
+ "step": 139
1956
+ },
1957
+ {
1958
+ "epoch": 0.15,
1959
+ "learning_rate": 4.0137931034482764e-05,
1960
+ "logits/chosen": -0.3496861755847931,
1961
+ "logits/rejected": -1.480123519897461,
1962
+ "logps/chosen": -345.8548583984375,
1963
+ "logps/rejected": -295.12042236328125,
1964
+ "loss": 0.0,
1965
+ "rewards/accuracies": 1.0,
1966
+ "rewards/chosen": 9.938793182373047,
1967
+ "rewards/margins": 28.614458084106445,
1968
+ "rewards/rejected": -18.67566680908203,
1969
+ "step": 140
1970
+ },
1971
+ {
1972
+ "epoch": 0.15,
1973
+ "learning_rate": 4.0086206896551726e-05,
1974
+ "logits/chosen": -0.4202902913093567,
1975
+ "logits/rejected": -1.5481184720993042,
1976
+ "logps/chosen": -408.18121337890625,
1977
+ "logps/rejected": -273.8868408203125,
1978
+ "loss": 0.0,
1979
+ "rewards/accuracies": 1.0,
1980
+ "rewards/chosen": 9.119111061096191,
1981
+ "rewards/margins": 29.09032440185547,
1982
+ "rewards/rejected": -19.971214294433594,
1983
+ "step": 141
1984
+ },
1985
+ {
1986
+ "epoch": 0.15,
1987
+ "learning_rate": 4.0034482758620694e-05,
1988
+ "logits/chosen": -0.5120177268981934,
1989
+ "logits/rejected": -1.4507927894592285,
1990
+ "logps/chosen": -271.6726989746094,
1991
+ "logps/rejected": -297.4678955078125,
1992
+ "loss": 0.0,
1993
+ "rewards/accuracies": 1.0,
1994
+ "rewards/chosen": 5.688560962677002,
1995
+ "rewards/margins": 26.492656707763672,
1996
+ "rewards/rejected": -20.804094314575195,
1997
+ "step": 142
1998
+ },
1999
+ {
2000
+ "epoch": 0.16,
2001
+ "learning_rate": 3.998275862068966e-05,
2002
+ "logits/chosen": -0.41148996353149414,
2003
+ "logits/rejected": -1.4985872507095337,
2004
+ "logps/chosen": -343.8206481933594,
2005
+ "logps/rejected": -241.0497589111328,
2006
+ "loss": 0.0,
2007
+ "rewards/accuracies": 1.0,
2008
+ "rewards/chosen": 9.314858436584473,
2009
+ "rewards/margins": 26.18862533569336,
2010
+ "rewards/rejected": -16.87376594543457,
2011
+ "step": 143
2012
+ },
2013
+ {
2014
+ "epoch": 0.16,
2015
+ "learning_rate": 3.993103448275862e-05,
2016
+ "logits/chosen": -0.1992441862821579,
2017
+ "logits/rejected": -1.8458625078201294,
2018
+ "logps/chosen": -504.0348205566406,
2019
+ "logps/rejected": -318.8588562011719,
2020
+ "loss": 0.0,
2021
+ "rewards/accuracies": 1.0,
2022
+ "rewards/chosen": 12.68671989440918,
2023
+ "rewards/margins": 34.4812126159668,
2024
+ "rewards/rejected": -21.79449462890625,
2025
+ "step": 144
2026
+ },
2027
+ {
2028
+ "epoch": 0.16,
2029
+ "learning_rate": 3.987931034482759e-05,
2030
+ "logits/chosen": -0.5546131730079651,
2031
+ "logits/rejected": -1.527896761894226,
2032
+ "logps/chosen": -333.9448547363281,
2033
+ "logps/rejected": -303.9145202636719,
2034
+ "loss": 0.0,
2035
+ "rewards/accuracies": 1.0,
2036
+ "rewards/chosen": 10.168367385864258,
2037
+ "rewards/margins": 30.0524959564209,
2038
+ "rewards/rejected": -19.88412857055664,
2039
+ "step": 145
2040
+ },
2041
+ {
2042
+ "epoch": 0.16,
2043
+ "learning_rate": 3.982758620689656e-05,
2044
+ "logits/chosen": -0.3886173367500305,
2045
+ "logits/rejected": -1.621660828590393,
2046
+ "logps/chosen": -369.8122253417969,
2047
+ "logps/rejected": -272.4569396972656,
2048
+ "loss": 0.0016,
2049
+ "rewards/accuracies": 1.0,
2050
+ "rewards/chosen": 8.674747467041016,
2051
+ "rewards/margins": 26.678119659423828,
2052
+ "rewards/rejected": -18.003372192382812,
2053
+ "step": 146
2054
+ },
2055
+ {
2056
+ "epoch": 0.16,
2057
+ "learning_rate": 3.977586206896552e-05,
2058
+ "logits/chosen": -0.61821049451828,
2059
+ "logits/rejected": -1.6792049407958984,
2060
+ "logps/chosen": -345.2456359863281,
2061
+ "logps/rejected": -307.6291809082031,
2062
+ "loss": 0.0,
2063
+ "rewards/accuracies": 1.0,
2064
+ "rewards/chosen": 7.595407485961914,
2065
+ "rewards/margins": 29.619413375854492,
2066
+ "rewards/rejected": -22.024009704589844,
2067
+ "step": 147
2068
+ },
2069
+ {
2070
+ "epoch": 0.16,
2071
+ "learning_rate": 3.972413793103449e-05,
2072
+ "logits/chosen": -0.23747998476028442,
2073
+ "logits/rejected": -1.287044644355774,
2074
+ "logps/chosen": -361.4759521484375,
2075
+ "logps/rejected": -267.63592529296875,
2076
+ "loss": 0.2105,
2077
+ "rewards/accuracies": 0.9375,
2078
+ "rewards/chosen": 7.393239974975586,
2079
+ "rewards/margins": 23.982959747314453,
2080
+ "rewards/rejected": -16.5897216796875,
2081
+ "step": 148
2082
+ },
2083
+ {
2084
+ "epoch": 0.16,
2085
+ "learning_rate": 3.967241379310345e-05,
2086
+ "logits/chosen": -0.3899969160556793,
2087
+ "logits/rejected": -1.6496773958206177,
2088
+ "logps/chosen": -331.9876403808594,
2089
+ "logps/rejected": -278.04522705078125,
2090
+ "loss": 0.0,
2091
+ "rewards/accuracies": 1.0,
2092
+ "rewards/chosen": 8.100687026977539,
2093
+ "rewards/margins": 29.64002227783203,
2094
+ "rewards/rejected": -21.539335250854492,
2095
+ "step": 149
2096
+ },
2097
+ {
2098
+ "epoch": 0.16,
2099
+ "learning_rate": 3.962068965517242e-05,
2100
+ "logits/chosen": -0.4621184766292572,
2101
+ "logits/rejected": -1.3823509216308594,
2102
+ "logps/chosen": -291.25494384765625,
2103
+ "logps/rejected": -313.21734619140625,
2104
+ "loss": 0.0,
2105
+ "rewards/accuracies": 1.0,
2106
+ "rewards/chosen": 4.036273002624512,
2107
+ "rewards/margins": 25.113258361816406,
2108
+ "rewards/rejected": -21.076988220214844,
2109
+ "step": 150
2110
+ },
2111
+ {
2112
+ "epoch": 0.16,
2113
+ "learning_rate": 3.956896551724138e-05,
2114
+ "logits/chosen": -0.3050623834133148,
2115
+ "logits/rejected": -1.3223061561584473,
2116
+ "logps/chosen": -388.243896484375,
2117
+ "logps/rejected": -266.01983642578125,
2118
+ "loss": 0.0,
2119
+ "rewards/accuracies": 1.0,
2120
+ "rewards/chosen": 7.008649826049805,
2121
+ "rewards/margins": 24.918262481689453,
2122
+ "rewards/rejected": -17.90961265563965,
2123
+ "step": 151
2124
+ },
2125
+ {
2126
+ "epoch": 0.17,
2127
+ "learning_rate": 3.9517241379310346e-05,
2128
+ "logits/chosen": -0.26263201236724854,
2129
+ "logits/rejected": -1.262205958366394,
2130
+ "logps/chosen": -300.5107421875,
2131
+ "logps/rejected": -266.810302734375,
2132
+ "loss": 0.0007,
2133
+ "rewards/accuracies": 1.0,
2134
+ "rewards/chosen": 5.9579243659973145,
2135
+ "rewards/margins": 23.850536346435547,
2136
+ "rewards/rejected": -17.892614364624023,
2137
+ "step": 152
2138
+ },
2139
+ {
2140
+ "epoch": 0.17,
2141
+ "learning_rate": 3.9465517241379314e-05,
2142
+ "logits/chosen": -0.34424301981925964,
2143
+ "logits/rejected": -1.2062197923660278,
2144
+ "logps/chosen": -273.7149353027344,
2145
+ "logps/rejected": -266.54150390625,
2146
+ "loss": 0.0,
2147
+ "rewards/accuracies": 1.0,
2148
+ "rewards/chosen": 5.983701705932617,
2149
+ "rewards/margins": 26.13591957092285,
2150
+ "rewards/rejected": -20.1522216796875,
2151
+ "step": 153
2152
+ },
2153
+ {
2154
+ "epoch": 0.17,
2155
+ "learning_rate": 3.9413793103448276e-05,
2156
+ "logits/chosen": -0.1809622049331665,
2157
+ "logits/rejected": -1.4350645542144775,
2158
+ "logps/chosen": -418.6148376464844,
2159
+ "logps/rejected": -297.8706359863281,
2160
+ "loss": 0.0,
2161
+ "rewards/accuracies": 1.0,
2162
+ "rewards/chosen": 8.36530876159668,
2163
+ "rewards/margins": 30.20969581604004,
2164
+ "rewards/rejected": -21.84438705444336,
2165
+ "step": 154
2166
+ },
2167
+ {
2168
+ "epoch": 0.17,
2169
+ "learning_rate": 3.9362068965517244e-05,
2170
+ "logits/chosen": -0.3474291265010834,
2171
+ "logits/rejected": -1.3261463642120361,
2172
+ "logps/chosen": -392.76068115234375,
2173
+ "logps/rejected": -282.137451171875,
2174
+ "loss": 0.12,
2175
+ "rewards/accuracies": 0.9375,
2176
+ "rewards/chosen": 8.480328559875488,
2177
+ "rewards/margins": 24.708953857421875,
2178
+ "rewards/rejected": -16.228626251220703,
2179
+ "step": 155
2180
+ },
2181
+ {
2182
+ "epoch": 0.17,
2183
+ "learning_rate": 3.931034482758621e-05,
2184
+ "logits/chosen": -0.29881787300109863,
2185
+ "logits/rejected": -1.59579598903656,
2186
+ "logps/chosen": -491.68280029296875,
2187
+ "logps/rejected": -329.0652160644531,
2188
+ "loss": 0.0,
2189
+ "rewards/accuracies": 1.0,
2190
+ "rewards/chosen": 9.562919616699219,
2191
+ "rewards/margins": 31.969928741455078,
2192
+ "rewards/rejected": -22.407007217407227,
2193
+ "step": 156
2194
+ },
2195
+ {
2196
+ "epoch": 0.17,
2197
+ "learning_rate": 3.925862068965517e-05,
2198
+ "logits/chosen": -0.24624793231487274,
2199
+ "logits/rejected": -1.2760369777679443,
2200
+ "logps/chosen": -309.42950439453125,
2201
+ "logps/rejected": -337.400634765625,
2202
+ "loss": 0.0,
2203
+ "rewards/accuracies": 1.0,
2204
+ "rewards/chosen": 5.126012802124023,
2205
+ "rewards/margins": 27.284088134765625,
2206
+ "rewards/rejected": -22.1580753326416,
2207
+ "step": 157
2208
+ },
2209
+ {
2210
+ "epoch": 0.17,
2211
+ "learning_rate": 3.920689655172414e-05,
2212
+ "logits/chosen": -0.24486477673053741,
2213
+ "logits/rejected": -1.4145996570587158,
2214
+ "logps/chosen": -435.2896423339844,
2215
+ "logps/rejected": -347.84698486328125,
2216
+ "loss": 0.0,
2217
+ "rewards/accuracies": 1.0,
2218
+ "rewards/chosen": 3.421405076980591,
2219
+ "rewards/margins": 30.74762725830078,
2220
+ "rewards/rejected": -27.32622528076172,
2221
+ "step": 158
2222
+ },
2223
+ {
2224
+ "epoch": 0.17,
2225
+ "learning_rate": 3.915517241379311e-05,
2226
+ "logits/chosen": -0.4706140458583832,
2227
+ "logits/rejected": -1.4215468168258667,
2228
+ "logps/chosen": -427.6326904296875,
2229
+ "logps/rejected": -399.431640625,
2230
+ "loss": 0.0,
2231
+ "rewards/accuracies": 1.0,
2232
+ "rewards/chosen": 4.1375885009765625,
2233
+ "rewards/margins": 31.478370666503906,
2234
+ "rewards/rejected": -27.340782165527344,
2235
+ "step": 159
2236
+ },
2237
+ {
2238
+ "epoch": 0.17,
2239
+ "learning_rate": 3.910344827586207e-05,
2240
+ "logits/chosen": -0.25800830125808716,
2241
+ "logits/rejected": -1.604130744934082,
2242
+ "logps/chosen": -430.5594482421875,
2243
+ "logps/rejected": -431.40301513671875,
2244
+ "loss": 0.0,
2245
+ "rewards/accuracies": 1.0,
2246
+ "rewards/chosen": 1.8110466003417969,
2247
+ "rewards/margins": 36.45166015625,
2248
+ "rewards/rejected": -34.64060974121094,
2249
+ "step": 160
2250
+ },
2251
+ {
2252
+ "epoch": 0.18,
2253
+ "learning_rate": 3.905172413793104e-05,
2254
+ "logits/chosen": -0.3043404221534729,
2255
+ "logits/rejected": -1.5278654098510742,
2256
+ "logps/chosen": -465.7344970703125,
2257
+ "logps/rejected": -436.65899658203125,
2258
+ "loss": 0.0,
2259
+ "rewards/accuracies": 1.0,
2260
+ "rewards/chosen": -1.1547170877456665,
2261
+ "rewards/margins": 33.50754928588867,
2262
+ "rewards/rejected": -34.66226577758789,
2263
+ "step": 161
2264
+ },
2265
+ {
2266
+ "epoch": 0.18,
2267
+ "learning_rate": 3.9000000000000006e-05,
2268
+ "logits/chosen": -0.22293318808078766,
2269
+ "logits/rejected": -1.2031610012054443,
2270
+ "logps/chosen": -428.4457092285156,
2271
+ "logps/rejected": -419.1228332519531,
2272
+ "loss": 0.0,
2273
+ "rewards/accuracies": 1.0,
2274
+ "rewards/chosen": -3.0903587341308594,
2275
+ "rewards/margins": 28.546253204345703,
2276
+ "rewards/rejected": -31.636613845825195,
2277
+ "step": 162
2278
+ },
2279
+ {
2280
+ "epoch": 0.18,
2281
+ "learning_rate": 3.894827586206897e-05,
2282
+ "logits/chosen": -0.18892326951026917,
2283
+ "logits/rejected": -1.481589913368225,
2284
+ "logps/chosen": -513.7359008789062,
2285
+ "logps/rejected": -456.95208740234375,
2286
+ "loss": 0.0,
2287
+ "rewards/accuracies": 1.0,
2288
+ "rewards/chosen": 2.6842398643493652,
2289
+ "rewards/margins": 39.1934928894043,
2290
+ "rewards/rejected": -36.509254455566406,
2291
+ "step": 163
2292
+ },
2293
+ {
2294
+ "epoch": 0.18,
2295
+ "learning_rate": 3.8896551724137935e-05,
2296
+ "logits/chosen": -0.48873329162597656,
2297
+ "logits/rejected": -1.4820678234100342,
2298
+ "logps/chosen": -393.00079345703125,
2299
+ "logps/rejected": -426.32696533203125,
2300
+ "loss": 0.0194,
2301
+ "rewards/accuracies": 1.0,
2302
+ "rewards/chosen": -5.972713470458984,
2303
+ "rewards/margins": 28.33656120300293,
2304
+ "rewards/rejected": -34.30927276611328,
2305
+ "step": 164
2306
+ },
2307
+ {
2308
+ "epoch": 0.18,
2309
+ "learning_rate": 3.88448275862069e-05,
2310
+ "logits/chosen": -0.4431387484073639,
2311
+ "logits/rejected": -1.3382742404937744,
2312
+ "logps/chosen": -392.2378845214844,
2313
+ "logps/rejected": -503.30194091796875,
2314
+ "loss": 0.0,
2315
+ "rewards/accuracies": 1.0,
2316
+ "rewards/chosen": -5.949856758117676,
2317
+ "rewards/margins": 34.904537200927734,
2318
+ "rewards/rejected": -40.854393005371094,
2319
+ "step": 165
2320
+ },
2321
+ {
2322
+ "epoch": 0.18,
2323
+ "learning_rate": 3.8793103448275865e-05,
2324
+ "logits/chosen": -0.2996464669704437,
2325
+ "logits/rejected": -1.4847010374069214,
2326
+ "logps/chosen": -483.34161376953125,
2327
+ "logps/rejected": -548.205078125,
2328
+ "loss": 0.0,
2329
+ "rewards/accuracies": 1.0,
2330
+ "rewards/chosen": 3.7714486122131348,
2331
+ "rewards/margins": 47.965972900390625,
2332
+ "rewards/rejected": -44.19452667236328,
2333
+ "step": 166
2334
+ },
2335
+ {
2336
+ "epoch": 0.18,
2337
+ "learning_rate": 3.874137931034483e-05,
2338
+ "logits/chosen": -0.3217467963695526,
2339
+ "logits/rejected": -1.2237169742584229,
2340
+ "logps/chosen": -396.9007568359375,
2341
+ "logps/rejected": -509.6751708984375,
2342
+ "loss": 0.0,
2343
+ "rewards/accuracies": 1.0,
2344
+ "rewards/chosen": 0.9992933869361877,
2345
+ "rewards/margins": 39.428592681884766,
2346
+ "rewards/rejected": -38.429298400878906,
2347
+ "step": 167
2348
+ },
2349
+ {
2350
+ "epoch": 0.18,
2351
+ "learning_rate": 3.8689655172413794e-05,
2352
+ "logits/chosen": -0.26858553290367126,
2353
+ "logits/rejected": -1.3162765502929688,
2354
+ "logps/chosen": -436.47088623046875,
2355
+ "logps/rejected": -461.6593933105469,
2356
+ "loss": 0.0,
2357
+ "rewards/accuracies": 1.0,
2358
+ "rewards/chosen": 2.6044464111328125,
2359
+ "rewards/margins": 38.4824104309082,
2360
+ "rewards/rejected": -35.87796401977539,
2361
+ "step": 168
2362
+ },
2363
+ {
2364
+ "epoch": 0.18,
2365
+ "learning_rate": 3.863793103448276e-05,
2366
+ "logits/chosen": -0.343127965927124,
2367
+ "logits/rejected": -1.390782117843628,
2368
+ "logps/chosen": -392.77978515625,
2369
+ "logps/rejected": -443.3890686035156,
2370
+ "loss": 0.0,
2371
+ "rewards/accuracies": 1.0,
2372
+ "rewards/chosen": 0.19043821096420288,
2373
+ "rewards/margins": 34.40040969848633,
2374
+ "rewards/rejected": -34.20996856689453,
2375
+ "step": 169
2376
+ },
2377
+ {
2378
+ "epoch": 0.19,
2379
+ "learning_rate": 3.858620689655172e-05,
2380
+ "logits/chosen": -0.43945077061653137,
2381
+ "logits/rejected": -1.3458408117294312,
2382
+ "logps/chosen": -386.0972595214844,
2383
+ "logps/rejected": -481.8536071777344,
2384
+ "loss": 0.0,
2385
+ "rewards/accuracies": 1.0,
2386
+ "rewards/chosen": 1.2730538845062256,
2387
+ "rewards/margins": 35.0412483215332,
2388
+ "rewards/rejected": -33.76819610595703,
2389
+ "step": 170
2390
+ },
2391
+ {
2392
+ "epoch": 0.19,
2393
+ "learning_rate": 3.853448275862069e-05,
2394
+ "logits/chosen": -0.28766676783561707,
2395
+ "logits/rejected": -1.314267873764038,
2396
+ "logps/chosen": -423.24981689453125,
2397
+ "logps/rejected": -416.8682861328125,
2398
+ "loss": 0.0,
2399
+ "rewards/accuracies": 1.0,
2400
+ "rewards/chosen": 1.9727609157562256,
2401
+ "rewards/margins": 34.98452377319336,
2402
+ "rewards/rejected": -33.01176834106445,
2403
+ "step": 171
2404
+ },
2405
+ {
2406
+ "epoch": 0.19,
2407
+ "learning_rate": 3.848275862068966e-05,
2408
+ "logits/chosen": -0.3893183767795563,
2409
+ "logits/rejected": -1.443713903427124,
2410
+ "logps/chosen": -474.935546875,
2411
+ "logps/rejected": -453.4057922363281,
2412
+ "loss": 0.0,
2413
+ "rewards/accuracies": 1.0,
2414
+ "rewards/chosen": 2.1096689701080322,
2415
+ "rewards/margins": 38.80006790161133,
2416
+ "rewards/rejected": -36.690399169921875,
2417
+ "step": 172
2418
+ },
2419
+ {
2420
+ "epoch": 0.19,
2421
+ "learning_rate": 3.843103448275862e-05,
2422
+ "logits/chosen": -0.34813588857650757,
2423
+ "logits/rejected": -1.271755337715149,
2424
+ "logps/chosen": -392.284423828125,
2425
+ "logps/rejected": -496.442138671875,
2426
+ "loss": 0.0,
2427
+ "rewards/accuracies": 1.0,
2428
+ "rewards/chosen": 2.8468620777130127,
2429
+ "rewards/margins": 41.135921478271484,
2430
+ "rewards/rejected": -38.289058685302734,
2431
+ "step": 173
2432
+ },
2433
+ {
2434
+ "epoch": 0.19,
2435
+ "learning_rate": 3.837931034482759e-05,
2436
+ "logits/chosen": -0.3464352786540985,
2437
+ "logits/rejected": -1.223905324935913,
2438
+ "logps/chosen": -372.1242370605469,
2439
+ "logps/rejected": -446.0730285644531,
2440
+ "loss": 0.0073,
2441
+ "rewards/accuracies": 1.0,
2442
+ "rewards/chosen": 0.18368953466415405,
2443
+ "rewards/margins": 33.218013763427734,
2444
+ "rewards/rejected": -33.034324645996094,
2445
+ "step": 174
2446
+ },
2447
+ {
2448
+ "epoch": 0.19,
2449
+ "learning_rate": 3.8327586206896556e-05,
2450
+ "logits/chosen": -0.42792245745658875,
2451
+ "logits/rejected": -1.189011812210083,
2452
+ "logps/chosen": -324.2301940917969,
2453
+ "logps/rejected": -503.5957946777344,
2454
+ "loss": 0.0,
2455
+ "rewards/accuracies": 1.0,
2456
+ "rewards/chosen": -1.7187633514404297,
2457
+ "rewards/margins": 37.19697189331055,
2458
+ "rewards/rejected": -38.91573715209961,
2459
+ "step": 175
2460
+ },
2461
+ {
2462
+ "epoch": 0.19,
2463
+ "learning_rate": 3.827586206896552e-05,
2464
+ "logits/chosen": -0.16668730974197388,
2465
+ "logits/rejected": -1.5985019207000732,
2466
+ "logps/chosen": -482.8082580566406,
2467
+ "logps/rejected": -449.761474609375,
2468
+ "loss": 0.0,
2469
+ "rewards/accuracies": 1.0,
2470
+ "rewards/chosen": 3.7022485733032227,
2471
+ "rewards/margins": 42.01506805419922,
2472
+ "rewards/rejected": -38.31282043457031,
2473
+ "step": 176
2474
+ },
2475
+ {
2476
+ "epoch": 0.19,
2477
+ "learning_rate": 3.8224137931034485e-05,
2478
+ "logits/chosen": -0.26949506998062134,
2479
+ "logits/rejected": -1.2986763715744019,
2480
+ "logps/chosen": -348.618896484375,
2481
+ "logps/rejected": -442.8262023925781,
2482
+ "loss": 0.0,
2483
+ "rewards/accuracies": 1.0,
2484
+ "rewards/chosen": 2.1358022689819336,
2485
+ "rewards/margins": 35.771461486816406,
2486
+ "rewards/rejected": -33.63566207885742,
2487
+ "step": 177
2488
+ },
2489
+ {
2490
+ "epoch": 0.19,
2491
+ "learning_rate": 3.8172413793103453e-05,
2492
+ "logits/chosen": -0.3438015580177307,
2493
+ "logits/rejected": -1.3001600503921509,
2494
+ "logps/chosen": -333.9195251464844,
2495
+ "logps/rejected": -457.9849853515625,
2496
+ "loss": 0.0,
2497
+ "rewards/accuracies": 1.0,
2498
+ "rewards/chosen": 2.145051956176758,
2499
+ "rewards/margins": 41.207923889160156,
2500
+ "rewards/rejected": -39.06287384033203,
2501
+ "step": 178
2502
+ },
2503
+ {
2504
+ "epoch": 0.2,
2505
+ "learning_rate": 3.8120689655172415e-05,
2506
+ "logits/chosen": -0.10503964871168137,
2507
+ "logits/rejected": -1.2384865283966064,
2508
+ "logps/chosen": -451.48663330078125,
2509
+ "logps/rejected": -479.8037414550781,
2510
+ "loss": 0.0,
2511
+ "rewards/accuracies": 1.0,
2512
+ "rewards/chosen": 0.525763988494873,
2513
+ "rewards/margins": 38.3972282409668,
2514
+ "rewards/rejected": -37.8714599609375,
2515
+ "step": 179
2516
+ },
2517
+ {
2518
+ "epoch": 0.2,
2519
+ "learning_rate": 3.806896551724138e-05,
2520
+ "logits/chosen": -0.30969691276550293,
2521
+ "logits/rejected": -1.1278965473175049,
2522
+ "logps/chosen": -298.7955322265625,
2523
+ "logps/rejected": -499.42279052734375,
2524
+ "loss": 0.0,
2525
+ "rewards/accuracies": 1.0,
2526
+ "rewards/chosen": -2.1820168495178223,
2527
+ "rewards/margins": 37.28181457519531,
2528
+ "rewards/rejected": -39.463829040527344,
2529
+ "step": 180
2530
+ },
2531
+ {
2532
+ "epoch": 0.2,
2533
+ "learning_rate": 3.801724137931035e-05,
2534
+ "logits/chosen": -0.3709387183189392,
2535
+ "logits/rejected": -1.5942573547363281,
2536
+ "logps/chosen": -422.3631286621094,
2537
+ "logps/rejected": -511.52008056640625,
2538
+ "loss": 0.0,
2539
+ "rewards/accuracies": 1.0,
2540
+ "rewards/chosen": 2.8516347408294678,
2541
+ "rewards/margins": 41.39206314086914,
2542
+ "rewards/rejected": -38.540428161621094,
2543
+ "step": 181
2544
+ },
2545
+ {
2546
+ "epoch": 0.2,
2547
+ "learning_rate": 3.796551724137931e-05,
2548
+ "logits/chosen": -0.2693447470664978,
2549
+ "logits/rejected": -1.5490005016326904,
2550
+ "logps/chosen": -480.881591796875,
2551
+ "logps/rejected": -414.71917724609375,
2552
+ "loss": 0.0,
2553
+ "rewards/accuracies": 1.0,
2554
+ "rewards/chosen": 2.7573628425598145,
2555
+ "rewards/margins": 39.35446548461914,
2556
+ "rewards/rejected": -36.59709930419922,
2557
+ "step": 182
2558
+ },
2559
+ {
2560
+ "epoch": 0.2,
2561
+ "learning_rate": 3.791379310344828e-05,
2562
+ "logits/chosen": -0.39544716477394104,
2563
+ "logits/rejected": -1.4232298135757446,
2564
+ "logps/chosen": -400.05267333984375,
2565
+ "logps/rejected": -465.1136169433594,
2566
+ "loss": 0.0018,
2567
+ "rewards/accuracies": 1.0,
2568
+ "rewards/chosen": 3.5475125312805176,
2569
+ "rewards/margins": 38.094913482666016,
2570
+ "rewards/rejected": -34.547393798828125,
2571
+ "step": 183
2572
+ },
2573
+ {
2574
+ "epoch": 0.2,
2575
+ "learning_rate": 3.786206896551725e-05,
2576
+ "logits/chosen": -0.3474569320678711,
2577
+ "logits/rejected": -1.4260185956954956,
2578
+ "logps/chosen": -334.5423889160156,
2579
+ "logps/rejected": -438.87408447265625,
2580
+ "loss": 0.0,
2581
+ "rewards/accuracies": 1.0,
2582
+ "rewards/chosen": -1.5236696004867554,
2583
+ "rewards/margins": 33.86663055419922,
2584
+ "rewards/rejected": -35.390296936035156,
2585
+ "step": 184
2586
+ },
2587
+ {
2588
+ "epoch": 0.2,
2589
+ "learning_rate": 3.781034482758621e-05,
2590
+ "logits/chosen": -0.3836357593536377,
2591
+ "logits/rejected": -1.554406762123108,
2592
+ "logps/chosen": -372.57830810546875,
2593
+ "logps/rejected": -439.4316101074219,
2594
+ "loss": 0.0,
2595
+ "rewards/accuracies": 1.0,
2596
+ "rewards/chosen": 1.840342402458191,
2597
+ "rewards/margins": 34.46831512451172,
2598
+ "rewards/rejected": -32.627967834472656,
2599
+ "step": 185
2600
+ },
2601
+ {
2602
+ "epoch": 0.2,
2603
+ "learning_rate": 3.775862068965517e-05,
2604
+ "logits/chosen": -0.18503640592098236,
2605
+ "logits/rejected": -1.800915241241455,
2606
+ "logps/chosen": -485.19354248046875,
2607
+ "logps/rejected": -432.1221923828125,
2608
+ "loss": 0.0,
2609
+ "rewards/accuracies": 1.0,
2610
+ "rewards/chosen": 4.203826427459717,
2611
+ "rewards/margins": 39.722900390625,
2612
+ "rewards/rejected": -35.51907730102539,
2613
+ "step": 186
2614
+ },
2615
+ {
2616
+ "epoch": 0.2,
2617
+ "learning_rate": 3.770689655172414e-05,
2618
+ "logits/chosen": -0.3300974667072296,
2619
+ "logits/rejected": -1.5052634477615356,
2620
+ "logps/chosen": -418.87017822265625,
2621
+ "logps/rejected": -408.99554443359375,
2622
+ "loss": 0.0,
2623
+ "rewards/accuracies": 1.0,
2624
+ "rewards/chosen": 2.9931721687316895,
2625
+ "rewards/margins": 35.2595100402832,
2626
+ "rewards/rejected": -32.266334533691406,
2627
+ "step": 187
2628
+ },
2629
+ {
2630
+ "epoch": 0.21,
2631
+ "learning_rate": 3.7655172413793106e-05,
2632
+ "logits/chosen": -0.27791130542755127,
2633
+ "logits/rejected": -1.4637497663497925,
2634
+ "logps/chosen": -370.642822265625,
2635
+ "logps/rejected": -457.7041320800781,
2636
+ "loss": 0.0,
2637
+ "rewards/accuracies": 1.0,
2638
+ "rewards/chosen": 4.304169178009033,
2639
+ "rewards/margins": 40.587310791015625,
2640
+ "rewards/rejected": -36.283138275146484,
2641
+ "step": 188
2642
+ },
2643
+ {
2644
+ "epoch": 0.21,
2645
+ "learning_rate": 3.760344827586207e-05,
2646
+ "logits/chosen": -0.4701173007488251,
2647
+ "logits/rejected": -1.403714895248413,
2648
+ "logps/chosen": -333.7972717285156,
2649
+ "logps/rejected": -436.09967041015625,
2650
+ "loss": 0.0,
2651
+ "rewards/accuracies": 1.0,
2652
+ "rewards/chosen": 1.8245915174484253,
2653
+ "rewards/margins": 36.37276077270508,
2654
+ "rewards/rejected": -34.54816818237305,
2655
+ "step": 189
2656
+ },
2657
+ {
2658
+ "epoch": 0.21,
2659
+ "learning_rate": 3.7551724137931035e-05,
2660
+ "logits/chosen": -0.08819374442100525,
2661
+ "logits/rejected": -1.395701289176941,
2662
+ "logps/chosen": -401.96673583984375,
2663
+ "logps/rejected": -440.0989685058594,
2664
+ "loss": 0.0,
2665
+ "rewards/accuracies": 1.0,
2666
+ "rewards/chosen": 2.565300941467285,
2667
+ "rewards/margins": 33.236175537109375,
2668
+ "rewards/rejected": -30.670875549316406,
2669
+ "step": 190
2670
+ },
2671
+ {
2672
+ "epoch": 0.21,
2673
+ "learning_rate": 3.7500000000000003e-05,
2674
+ "logits/chosen": -0.17811371386051178,
2675
+ "logits/rejected": -1.501448154449463,
2676
+ "logps/chosen": -447.5896301269531,
2677
+ "logps/rejected": -375.3368835449219,
2678
+ "loss": 0.0,
2679
+ "rewards/accuracies": 1.0,
2680
+ "rewards/chosen": 2.7833425998687744,
2681
+ "rewards/margins": 31.601383209228516,
2682
+ "rewards/rejected": -28.818038940429688,
2683
+ "step": 191
2684
+ },
2685
+ {
2686
+ "epoch": 0.21,
2687
+ "learning_rate": 3.7448275862068965e-05,
2688
+ "logits/chosen": -0.44955122470855713,
2689
+ "logits/rejected": -1.550938606262207,
2690
+ "logps/chosen": -289.89959716796875,
2691
+ "logps/rejected": -400.880859375,
2692
+ "loss": 0.0,
2693
+ "rewards/accuracies": 1.0,
2694
+ "rewards/chosen": -0.4812740385532379,
2695
+ "rewards/margins": 33.061256408691406,
2696
+ "rewards/rejected": -33.54253005981445,
2697
+ "step": 192
2698
+ },
2699
+ {
2700
+ "epoch": 0.21,
2701
+ "learning_rate": 3.739655172413793e-05,
2702
+ "logits/chosen": -0.18104656040668488,
2703
+ "logits/rejected": -1.4971003532409668,
2704
+ "logps/chosen": -418.8290710449219,
2705
+ "logps/rejected": -424.74859619140625,
2706
+ "loss": 0.0,
2707
+ "rewards/accuracies": 1.0,
2708
+ "rewards/chosen": 6.926714897155762,
2709
+ "rewards/margins": 39.01382064819336,
2710
+ "rewards/rejected": -32.08710861206055,
2711
+ "step": 193
2712
+ },
2713
+ {
2714
+ "epoch": 0.21,
2715
+ "learning_rate": 3.73448275862069e-05,
2716
+ "logits/chosen": -0.16005319356918335,
2717
+ "logits/rejected": -1.4771628379821777,
2718
+ "logps/chosen": -437.9535827636719,
2719
+ "logps/rejected": -435.7216796875,
2720
+ "loss": 0.0,
2721
+ "rewards/accuracies": 1.0,
2722
+ "rewards/chosen": 5.183934688568115,
2723
+ "rewards/margins": 40.51701354980469,
2724
+ "rewards/rejected": -35.33308410644531,
2725
+ "step": 194
2726
+ },
2727
+ {
2728
+ "epoch": 0.21,
2729
+ "learning_rate": 3.729310344827586e-05,
2730
+ "logits/chosen": -0.35886967182159424,
2731
+ "logits/rejected": -1.6980268955230713,
2732
+ "logps/chosen": -404.4248962402344,
2733
+ "logps/rejected": -412.4001159667969,
2734
+ "loss": 0.0068,
2735
+ "rewards/accuracies": 1.0,
2736
+ "rewards/chosen": 7.873406887054443,
2737
+ "rewards/margins": 38.6983757019043,
2738
+ "rewards/rejected": -30.824968338012695,
2739
+ "step": 195
2740
+ },
2741
+ {
2742
+ "epoch": 0.21,
2743
+ "learning_rate": 3.724137931034483e-05,
2744
+ "logits/chosen": -0.4241984486579895,
2745
+ "logits/rejected": -1.4724136590957642,
2746
+ "logps/chosen": -353.877197265625,
2747
+ "logps/rejected": -414.2342529296875,
2748
+ "loss": 0.0,
2749
+ "rewards/accuracies": 1.0,
2750
+ "rewards/chosen": 2.698596954345703,
2751
+ "rewards/margins": 36.431583404541016,
2752
+ "rewards/rejected": -33.73298645019531,
2753
+ "step": 196
2754
+ },
2755
+ {
2756
+ "epoch": 0.21,
2757
+ "learning_rate": 3.71896551724138e-05,
2758
+ "logits/chosen": -0.38190075755119324,
2759
+ "logits/rejected": -1.520878553390503,
2760
+ "logps/chosen": -303.9383544921875,
2761
+ "logps/rejected": -443.4150390625,
2762
+ "loss": 0.0,
2763
+ "rewards/accuracies": 1.0,
2764
+ "rewards/chosen": 5.500533103942871,
2765
+ "rewards/margins": 41.76572036743164,
2766
+ "rewards/rejected": -36.26518630981445,
2767
+ "step": 197
2768
+ },
2769
+ {
2770
+ "epoch": 0.22,
2771
+ "learning_rate": 3.713793103448276e-05,
2772
+ "logits/chosen": -0.5008329153060913,
2773
+ "logits/rejected": -1.4159965515136719,
2774
+ "logps/chosen": -298.4696044921875,
2775
+ "logps/rejected": -496.4792175292969,
2776
+ "loss": 0.0,
2777
+ "rewards/accuracies": 1.0,
2778
+ "rewards/chosen": 3.6865413188934326,
2779
+ "rewards/margins": 43.41154098510742,
2780
+ "rewards/rejected": -39.72499465942383,
2781
+ "step": 198
2782
+ },
2783
+ {
2784
+ "epoch": 0.22,
2785
+ "learning_rate": 3.708620689655173e-05,
2786
+ "logits/chosen": -0.30573636293411255,
2787
+ "logits/rejected": -1.4757778644561768,
2788
+ "logps/chosen": -404.08349609375,
2789
+ "logps/rejected": -536.4090576171875,
2790
+ "loss": 0.0994,
2791
+ "rewards/accuracies": 0.9375,
2792
+ "rewards/chosen": 2.837350606918335,
2793
+ "rewards/margins": 47.625877380371094,
2794
+ "rewards/rejected": -44.78852844238281,
2795
+ "step": 199
2796
+ },
2797
+ {
2798
+ "epoch": 0.22,
2799
+ "learning_rate": 3.7034482758620695e-05,
2800
+ "logits/chosen": -0.08167792111635208,
2801
+ "logits/rejected": -1.6543066501617432,
2802
+ "logps/chosen": -478.5865173339844,
2803
+ "logps/rejected": -535.1455078125,
2804
+ "loss": 0.0,
2805
+ "rewards/accuracies": 1.0,
2806
+ "rewards/chosen": 6.218913555145264,
2807
+ "rewards/margins": 51.752662658691406,
2808
+ "rewards/rejected": -45.533748626708984,
2809
+ "step": 200
2810
+ }
2811
+ ],
2812
+ "logging_steps": 1,
2813
+ "max_steps": 916,
2814
+ "num_input_tokens_seen": 0,
2815
+ "num_train_epochs": 1,
2816
+ "save_steps": 100,
2817
+ "total_flos": 0.0,
2818
+ "train_batch_size": 1,
2819
+ "trial_name": null,
2820
+ "trial_params": null
2821
+ }
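The entries above are preference-optimization (DPO-style) training logs: each step records the mean implicit reward of the chosen and rejected completions, and `rewards/margins` is simply their gap (e.g. at step 190, 2.5653 − (−30.6709) ≈ 33.2362). A minimal sketch for reading those metrics back out of a saved `trainer_state.json`; the checkpoint path is an assumption, and the field names are taken from the log above:

```python
import json

# Hypothetical path: any checkpoint directory that contains trainer_state.json.
with open("checkpoint-200/trainer_state.json") as f:
    state = json.load(f)

# Each element of log_history is one logged step ("logging_steps": 1 above).
for entry in state["log_history"]:
    margin = entry["rewards/chosen"] - entry["rewards/rejected"]  # equals rewards/margins
    print(f'step {entry["step"]:>3}  loss {entry["loss"]:.4f}  '
          f'acc {entry["rewards/accuracies"]:.2f}  margin {margin:.2f}')
```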
checkpoint-200/training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6f8e778ebfdc8e189d4259f43aa8cc8438633b1407516e064a080cf26f579c11
+ size 4664
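`training_args.bin`, like the other binary files in this commit, is stored through Git LFS, so the three lines above are only a pointer stub: `oid` is the SHA-256 of the real 4,664-byte file, which `git lfs pull` downloads. Once fetched, it typically deserializes to the `transformers.TrainingArguments` used for the run; a hedged sketch, with the checkpoint path assumed:

```python
import torch

# Assumes the actual binary has been pulled from LFS (otherwise this path holds
# only the ~130-byte pointer shown above) and that transformers is installed,
# since the pickle references its TrainingArguments class.
args = torch.load("checkpoint-200/training_args.bin", weights_only=False)
print(args.learning_rate, args.per_device_train_batch_size, args.num_train_epochs)
```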
checkpoint-300/README.md ADDED
@@ -0,0 +1,204 @@
+ ---
+ library_name: peft
+ base_model: /run/media/adamo1139/82142F79142F6EFB/ProgramData/Anaconda3/envs/qlora-jondurbin/axolotl-git-linux/axolotl/yi-34b-200k-llamafied
+ ---
+
+ # Model Card for Model ID
+
+ <!-- Provide a quick summary of what the model is/does. -->
+
+
+
+ ## Model Details
+
+ ### Model Description
+
+ <!-- Provide a longer summary of what this model is. -->
+
+
+
+ - **Developed by:** [More Information Needed]
+ - **Funded by [optional]:** [More Information Needed]
+ - **Shared by [optional]:** [More Information Needed]
+ - **Model type:** [More Information Needed]
+ - **Language(s) (NLP):** [More Information Needed]
+ - **License:** [More Information Needed]
+ - **Finetuned from model [optional]:** [More Information Needed]
+
+ ### Model Sources [optional]
+
+ <!-- Provide the basic links for the model. -->
+
+ - **Repository:** [More Information Needed]
+ - **Paper [optional]:** [More Information Needed]
+ - **Demo [optional]:** [More Information Needed]
+
+ ## Uses
+
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+
+ ### Direct Use
+
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
+
+ [More Information Needed]
+
+ ### Downstream Use [optional]
+
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
+
+ [More Information Needed]
+
+ ### Out-of-Scope Use
+
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
+
+ [More Information Needed]
+
+ ## Bias, Risks, and Limitations
+
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
+
+ [More Information Needed]
+
+ ### Recommendations
+
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
+
+ Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
+
+ ## How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ [More Information Needed]
+
+ ## Training Details
+
+ ### Training Data
+
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+
+ [More Information Needed]
+
+ ### Training Procedure
+
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
+
+ #### Preprocessing [optional]
+
+ [More Information Needed]
+
+
+ #### Training Hyperparameters
+
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
+
+ #### Speeds, Sizes, Times [optional]
+
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
+
+ [More Information Needed]
+
+ ## Evaluation
+
+ <!-- This section describes the evaluation protocols and provides the results. -->
+
+ ### Testing Data, Factors & Metrics
+
+ #### Testing Data
+
+ <!-- This should link to a Dataset Card if possible. -->
+
+ [More Information Needed]
+
+ #### Factors
+
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
+
+ [More Information Needed]
+
+ #### Metrics
+
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
+
+ [More Information Needed]
+
+ ### Results
+
+ [More Information Needed]
+
+ #### Summary
+
+
+
+ ## Model Examination [optional]
+
+ <!-- Relevant interpretability work for the model goes here -->
+
+ [More Information Needed]
+
+ ## Environmental Impact
+
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
+
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+ - **Hardware Type:** [More Information Needed]
+ - **Hours used:** [More Information Needed]
+ - **Cloud Provider:** [More Information Needed]
+ - **Compute Region:** [More Information Needed]
+ - **Carbon Emitted:** [More Information Needed]
+
+ ## Technical Specifications [optional]
+
+ ### Model Architecture and Objective
+
+ [More Information Needed]
+
+ ### Compute Infrastructure
+
+ [More Information Needed]
+
+ #### Hardware
+
+ [More Information Needed]
+
+ #### Software
+
+ [More Information Needed]
+
+ ## Citation [optional]
+
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
+
+ **BibTeX:**
+
+ [More Information Needed]
+
+ **APA:**
+
+ [More Information Needed]
+
+ ## Glossary [optional]
+
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
+
+ [More Information Needed]
+
+ ## More Information [optional]
+
+ [More Information Needed]
+
+ ## Model Card Authors [optional]
+
+ [More Information Needed]
+
+ ## Model Card Contact
+
+ [More Information Needed]
+
+
+ ### Framework versions
+
+ - PEFT 0.7.1
checkpoint-300/adapter_config.json ADDED
@@ -0,0 +1,31 @@
+ {
+ "alpha_pattern": {},
+ "auto_mapping": null,
+ "base_model_name_or_path": "yi-34b-200k-llamafied",
+ "bias": "none",
+ "fan_in_fan_out": false,
+ "inference_mode": true,
+ "init_lora_weights": true,
+ "layers_pattern": null,
+ "layers_to_transform": null,
+ "loftq_config": null,
+ "lora_alpha": 32,
+ "lora_dropout": 0,
+ "megatron_config": null,
+ "megatron_core": "megatron.core",
+ "modules_to_save": null,
+ "peft_type": "LORA",
+ "r": 16,
+ "rank_pattern": {},
+ "revision": "unsloth",
+ "target_modules": [
+ "v_proj",
+ "gate_proj",
+ "o_proj",
+ "down_proj",
+ "q_proj",
+ "up_proj",
+ "k_proj"
+ ],
+ "task_type": "CAUSAL_LM"
+ }
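The adapter config above describes a rank-16, alpha-32 LoRA (dropout 0) applied to every attention and MLP projection of the llamafied Yi-34B-200K base model. A minimal sketch of loading such a checkpoint with PEFT; the base-model path/ID, dtype, and device settings are assumptions:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Sketch only: "yi-34b-200k-llamafied" stands in for wherever the base model
# actually lives; "checkpoint-300" is the adapter directory shown above.
base = AutoModelForCausalLM.from_pretrained(
    "yi-34b-200k-llamafied",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "checkpoint-300")  # applies the r=16 LoRA weights
tokenizer = AutoTokenizer.from_pretrained("checkpoint-300")
```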
checkpoint-300/adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4911591b593c045ffdc4ecde4f0a7805cd441302516c3b4642ead2cf658ddd74
+ size 491633464
checkpoint-300/rng_state.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1ff264f99d31b522cc7e2a4eac9d38606d0c58a34c0adc74d71e0ca8b371dc36
+ size 14244
checkpoint-300/scheduler.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:79ebb2f81c622f336b43b3df591ecc6208eda3036d54d729534e15955533e108
+ size 1064
checkpoint-300/special_tokens_map.json ADDED
@@ -0,0 +1,30 @@
+ {
+ "bos_token": {
+ "content": "<|startoftext|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "eos_token": {
+ "content": "<|endoftext|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "pad_token": {
+ "content": "<unk>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "unk_token": {
+ "content": "<unk>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ }
+ }
checkpoint-300/tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
checkpoint-300/tokenizer.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:386c49cf943d71aa110361135338c50e38beeff0a66593480421f37b319e1a39
+ size 1033105
checkpoint-300/tokenizer_config.json ADDED
@@ -0,0 +1,42 @@
+ {
+ "add_bos_token": false,
+ "add_eos_token": false,
+ "added_tokens_decoder": {
+ "0": {
+ "content": "<unk>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "1": {
+ "content": "<|startoftext|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "2": {
+ "content": "<|endoftext|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ }
+ },
+ "bos_token": "<|startoftext|>",
+ "chat_template": "{% if not add_generation_prompt is defined %}{% set add_generation_prompt = false %}{% endif %}{% for message in messages %}{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant\n' }}{% endif %}",
+ "clean_up_tokenization_spaces": false,
+ "eos_token": "<|endoftext|>",
+ "legacy": true,
+ "model_max_length": 4096,
+ "pad_token": "<unk>",
+ "padding_side": "right",
+ "sp_model_kwargs": {},
+ "tokenizer_class": "LlamaTokenizer",
+ "unk_token": "<unk>",
+ "use_default_system_prompt": false
+ }
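The `chat_template` in the config above is the ChatML format (`<|im_start|>role … <|im_end|>`), so prompts for this adapter are rendered with `tokenizer.apply_chat_template`. A small sketch, assuming the tokenizer is loaded from one of the checkpoint directories:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("checkpoint-300")
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]
# add_generation_prompt=True appends the trailing "<|im_start|>assistant\n"
# exactly as defined in the template above.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```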
checkpoint-300/trainer_state.json ADDED
The diff for this file is too large to render. See raw diff
 
checkpoint-300/training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6f8e778ebfdc8e189d4259f43aa8cc8438633b1407516e064a080cf26f579c11
+ size 4664