Text Generation
Transformers
PyTorch
mistral
conversational
text-generation-inference
Inference Endpoints
LoneStriker committed on
Commit 46eff0e
1 Parent(s): 0edadcf

Upload folder using huggingface_hub

README.md ADDED
@@ -0,0 +1,78 @@
+ ---
+ license: apache-2.0
+ datasets:
+ - snorkelai/Snorkel-Mistral-PairRM-DPO-Dataset
+ pipeline_tag: text-generation
+ ---
+
+ ### Dataset:
+ Training dataset: [snorkelai/Snorkel-Mistral-PairRM-DPO-Dataset](https://huggingface.co/datasets/snorkelai/Snorkel-Mistral-PairRM-DPO-Dataset)
+
+ We use ONLY the prompts from [UltraFeedback](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized); **no external LLM responses are used**.
+
+ ### Methodology:
+ 1. Generate five response variations for each prompt in a subset of 20,000 prompts using the current LLM; to start, we used [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2).
+ 2. Apply [PairRM](https://huggingface.co/llm-blender/PairRM) to rerank the responses.
+ 3. Update the LLM by applying Direct Preference Optimization (DPO) on the top (chosen) and bottom (rejected) responses (see the objective below).
+ 4. Use this LLM as the base model for the next iteration, repeating three times in total.
+
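+ For reference, step 3 optimizes the standard DPO objective from the paper cited below, where $\pi_\theta$ is the policy being trained, $\pi_{\mathrm{ref}}$ is the frozen reference model, and $(x, y_w, y_l)$ is a prompt with its chosen and rejected responses:
+
+ $$
+ \mathcal{L}_{\mathrm{DPO}}(\theta) = -\,\mathbb{E}_{(x,\,y_w,\,y_l)}\left[\log \sigma\!\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}\right)\right]
+ $$
+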
+ This overview provides a high-level summary of our approach; a sketch of the loop follows.
+ We plan to release more detailed results and findings in the coming weeks on the [Snorkel blog](https://snorkel.ai/blog/).
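+
+ As a minimal sketch of the loop described above (illustrative only: the `generate_responses`, `rank_with_pairrm`, and `dpo_train` callables are hypothetical placeholders, not our released training code):
+
+ ```python
+ def iterative_pairrm_dpo(base_model, prompts,
+                          generate_responses, rank_with_pairrm, dpo_train,
+                          iterations=3, n_samples=5):
+     """Sketch: sample responses, rank with PairRM, DPO on top/bottom pairs."""
+     model = base_model  # e.g. "mistralai/Mistral-7B-Instruct-v0.2"
+     for _ in range(iterations):
+         pairs = []
+         for prompt in prompts:  # the 20,000-prompt subset
+             candidates = generate_responses(model, prompt, n_samples)
+             ranked = rank_with_pairrm(prompt, candidates)  # best response first
+             pairs.append({"prompt": prompt,
+                           "chosen": ranked[0],     # top-ranked response
+                           "rejected": ranked[-1]}) # bottom-ranked response
+         model = dpo_train(model, pairs)  # DPO on (chosen, rejected) pairs
+     return model
+ ```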
+
+ ### Training recipe:
+ - The provided data is formatted to be compatible with Hugging Face's [Zephyr recipe](https://github.com/huggingface/alignment-handbook/tree/main/recipes/zephyr-7b-beta).
+ We executed the n-th DPO iteration using the "train/test_iteration_{n}" splits.
+
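+ For example, loading the splits for one iteration with 🤗 Datasets might look like this (split names are assumed from the "train/test_iteration_{n}" pattern above):
+
+ ```python
+ from datasets import load_dataset
+
+ n = 1  # DPO iteration number (1-3)
+ # Split names assumed from the "train/test_iteration_{n}" pattern above.
+ train = load_dataset("snorkelai/Snorkel-Mistral-PairRM-DPO-Dataset",
+                      split=f"train_iteration_{n}")
+ test = load_dataset("snorkelai/Snorkel-Mistral-PairRM-DPO-Dataset",
+                     split=f"test_iteration_{n}")
+ ```
+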
+ ### Key Premises:
+ - **Specialization Requirement**: For most enterprise use cases, using LLMs "off-the-shelf" falls short of production quality, necessitating additional fine-tuning and alignment.
+ - **Ease of Model Building**: Creating ranking/scoring/classification models is simpler than developing high-quality, manually annotated datasets for long-form responses.
+ - **Alignment Recipe**: Using smaller but specialized teacher models (reward models) can incrementally align LLMs towards specific axes.
+
+ ### Applications:
+ Unlike our customers, who have very specific use cases to align LLMs to,
+ the AlpacaEval 2.0 leaderboard measures the ability of LLMs to follow user instructions.
+ With this demonstration, we focus on the general approach to alignment.
+ Thus, we use a general-purpose reward model: the performant [PairRM model](https://huggingface.co/llm-blender/PairRM).
+ We use the [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) model as our base LLM.
+
+ If you are interested in building **specialized internal reward models
+ that reflect your enterprise's needs**, please contact the Snorkel AI team or consider attending our
+ [**Enterprise LLM Summit: Building GenAI with Your Data on January 25, 2024**](https://snorkel.ai/event/enterprise-llm-summit/)
+ to learn more about "Programmatically scaling human preferences and alignment in GenAI".
+
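+ The repository ships the Mistral chat template in `tokenizer_config.json`, so inference follows the usual 🤗 Transformers pattern. A minimal sketch (`<this-repo>` is a placeholder for this repository's id; the prompt and generation settings are illustrative):
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ repo_id = "<this-repo>"  # placeholder: substitute this repository's id
+ tokenizer = AutoTokenizer.from_pretrained(repo_id)
+ model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype="auto",
+                                              device_map="auto")
+
+ messages = [{"role": "user", "content": "Explain DPO in one paragraph."}]
+ inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)
+ outputs = model.generate(inputs, max_new_tokens=256)
+ print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
+ ```
+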
+ ### Result:
+ On [**Alpaca-Eval 2.0**](https://tatsu-lab.github.io/alpaca_eval/):
+ - The base model, [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2), scored **14.72**.
+
+ After applying the above methodology:
+ - This model scored **30.22**, ranking 3rd and the highest for an open-source base model at the time of publication.
+ - When post-processing the model outputs with PairRM-best-of-16, which involves generating 16 responses and selecting the one PairRM scores highest, we scored **34.86**, ranking 2nd.
+ The best model on the leaderboard is "gpt-4-turbo", which is also the judge of optimal responses.
+
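+ The best-of-16 step can be reproduced roughly as follows, a sketch using the `llm-blender` package's `Blender` API as shown on the PairRM model card (`sample_responses` is a hypothetical sampling helper, and we assume a rank of 1 means best):
+
+ ```python
+ import numpy as np
+ import llm_blender
+
+ blender = llm_blender.Blender()
+ blender.loadranker("llm-blender/PairRM")  # load the PairRM ranker
+
+ prompt = "Give three tips for staying healthy."
+ candidates = sample_responses(prompt, n=16)  # hypothetical helper: 16 sampled responses
+ ranks = blender.rank([prompt], [candidates], return_scores=False)
+ best = candidates[int(np.argmin(ranks[0]))]  # keep the top-ranked response
+ ```
+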
+ We recognize that the Alpaca-Eval 2.0 benchmark does not entirely capture the full range of capabilities and performances of LLMs.
+ However, in our current work, where the goal is to align with general "human preferences," Alpaca-Eval 2.0 serves as a suitable and representative benchmark.
+ Moving forward, we anticipate further contributions from the community regarding new alignment axes, and we will conduct evaluations using other appropriate benchmarks.
+
+ The Alpaca-Eval 2.0 evaluator, "gpt-4-turbo," exhibits a bias towards longer responses.
+ This tendency might also be present in our chosen reward model, resulting in our model producing lengthier responses after DPO iterations,
+ which can be among the factors contributing to our higher ranks on the leaderboard.
+ Future work could include measures to control response length and other relevant metrics.
+
+ ### Limitations:
+ The model is a quick demonstration that LLMs can be programmatically aligned using smaller specialized reward models.
+ It does not have any moderation mechanisms.
+ We look forward to continuing to engage with the research community and our customers as we explore optimal methods for getting models to respect guardrails,
+ allowing for deployment in environments requiring moderated outputs.
+
+ ### Contemporary Work and Acknowledgements:
+ - The Mistral AI Team for developing and releasing the advanced Mistral-7B-Instruct-v0.2 model.
+ - The authors of the [Direct Preference Optimization paper](https://arxiv.org/abs/2305.18290) for the innovative approach.
+ - The authors of the [Pairwise Reward Model for LLMs paper](https://arxiv.org/abs/2306.02561) for the powerful general-purpose reward model.
+ - The HuggingFace team for the DPO implementation under [The Alignment Handbook](https://github.com/huggingface/alignment-handbook).
+ - We would also like to acknowledge contemporary work published independently on arXiv on 2024-01-18 by Meta & NYU (Yuan et al.) in a paper called [Self-Rewarding Language Models](https://arxiv.org/abs/2401.10020),
+ which proposes a similar general approach for creating alignment pairs from a larger set of candidate responses, but uses the LLM itself as the reward model.
+ While this may work for general-purpose models, our experience has shown that task-specific reward models guided by SMEs are necessary for most
+ enterprise applications of LLMs, which is why we focus on the use of external reward models.
+
+ ### The Snorkel AI Team
+ Hoang Tran, Chris Glaze, Braden Hancock
added_tokens.json ADDED
@@ -0,0 +1,5 @@
+ {
+ "</s>": 2,
+ "<s>": 1,
+ "<unk>": 0
+ }
all_results.json ADDED
@@ -0,0 +1,21 @@
+ {
+ "epoch": 3.0,
+ "eval_logits/chosen": -2.10288143157959,
+ "eval_logits/rejected": -2.1299264430999756,
+ "eval_logps/chosen": -289.6983642578125,
+ "eval_logps/rejected": -310.9796142578125,
+ "eval_loss": 1.0245678424835205,
+ "eval_rewards/accuracies": 0.579365074634552,
+ "eval_rewards/chosen": -5.276275157928467,
+ "eval_rewards/margins": 0.5837584733963013,
+ "eval_rewards/rejected": -5.8600335121154785,
+ "eval_runtime": 135.2014,
+ "eval_samples": 1000,
+ "eval_samples_per_second": 7.396,
+ "eval_steps_per_second": 0.466,
+ "train_loss": 0.23198359412724215,
+ "train_runtime": 18849.1473,
+ "train_samples": 19958,
+ "train_samples_per_second": 3.176,
+ "train_steps_per_second": 0.05
+ }
config.json ADDED
@@ -0,0 +1,26 @@
+ {
+ "_name_or_path": "./models_dpo/snorkel_model_0117_20k_mistral_v02_llm_blender_v5",
+ "architectures": [
+ "MistralForCausalLM"
+ ],
+ "attention_dropout": 0.0,
+ "bos_token_id": 1,
+ "eos_token_id": 2,
+ "hidden_act": "silu",
+ "hidden_size": 4096,
+ "initializer_range": 0.02,
+ "intermediate_size": 14336,
+ "max_position_embeddings": 32768,
+ "model_type": "mistral",
+ "num_attention_heads": 32,
+ "num_hidden_layers": 32,
+ "num_key_value_heads": 8,
+ "rms_norm_eps": 1e-05,
+ "rope_theta": 1000000.0,
+ "sliding_window": 4096,
+ "tie_word_embeddings": false,
+ "torch_dtype": "bfloat16",
+ "transformers_version": "4.34.0",
+ "use_cache": true,
+ "vocab_size": 32000
+ }
eval_results.json ADDED
@@ -0,0 +1,16 @@
+ {
+ "epoch": 3.0,
+ "eval_logits/chosen": -2.10288143157959,
+ "eval_logits/rejected": -2.1299264430999756,
+ "eval_logps/chosen": -289.6983642578125,
+ "eval_logps/rejected": -310.9796142578125,
+ "eval_loss": 1.0245678424835205,
+ "eval_rewards/accuracies": 0.579365074634552,
+ "eval_rewards/chosen": -5.276275157928467,
+ "eval_rewards/margins": 0.5837584733963013,
+ "eval_rewards/rejected": -5.8600335121154785,
+ "eval_runtime": 135.2014,
+ "eval_samples": 1000,
+ "eval_samples_per_second": 7.396,
+ "eval_steps_per_second": 0.466
+ }
generation_config.json ADDED
@@ -0,0 +1,6 @@
+ {
+ "_from_model_config": true,
+ "bos_token_id": 1,
+ "eos_token_id": 2,
+ "transformers_version": "4.34.0"
+ }
output.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:82e0a88b525615952fbe64edcba8314bdc5bfa97a38b725ea6eba3fee6f6803e
+ size 7371402536
pytorch_model.bin.index.json ADDED
@@ -0,0 +1,298 @@
+ {
+ "metadata": {
+ "total_size": 14483464192
+ },
+ "weight_map": {
+ "lm_head.weight": "pytorch_model-00002-of-00002.bin",
+ "model.embed_tokens.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.0.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.0.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.0.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.0.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.0.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.0.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.0.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.0.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.0.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.1.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.1.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.1.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.1.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.1.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.1.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.1.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.1.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.1.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.10.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.10.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.10.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.10.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.10.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.10.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.10.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.10.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.10.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.11.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.11.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.11.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.11.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.11.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.11.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.11.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.11.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.11.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.12.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.12.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.12.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.12.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.12.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.12.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.12.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.12.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.12.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.13.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.13.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.13.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.13.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.13.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.13.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.13.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.13.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.13.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.14.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.14.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.14.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.14.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.14.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.14.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.14.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.14.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.14.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.15.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.15.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.15.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.15.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.15.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.15.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.15.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.15.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.15.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.16.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.16.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.16.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.16.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.16.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.16.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.16.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.16.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.16.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.17.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.17.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.17.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.17.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.17.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.17.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.17.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.17.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.17.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.18.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.18.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.18.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.18.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.18.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.18.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.18.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.18.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.18.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.19.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.19.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.19.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.19.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.19.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.19.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.19.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.19.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.19.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.2.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.2.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.2.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.2.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.2.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.2.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.2.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.2.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.2.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.20.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.20.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.20.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.20.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.20.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.20.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.20.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.20.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.20.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.21.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.21.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.21.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.21.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.21.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.21.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.21.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.21.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.21.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.22.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
+ "model.layers.22.mlp.down_proj.weight": "pytorch_model-00002-of-00002.bin",
+ "model.layers.22.mlp.gate_proj.weight": "pytorch_model-00002-of-00002.bin",
+ "model.layers.22.mlp.up_proj.weight": "pytorch_model-00002-of-00002.bin",
+ "model.layers.22.post_attention_layernorm.weight": "pytorch_model-00002-of-00002.bin",
+ "model.layers.22.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.22.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.22.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.22.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.23.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
+ "model.layers.23.mlp.down_proj.weight": "pytorch_model-00002-of-00002.bin",
+ "model.layers.23.mlp.gate_proj.weight": "pytorch_model-00002-of-00002.bin",
+ "model.layers.23.mlp.up_proj.weight": "pytorch_model-00002-of-00002.bin",
+ "model.layers.23.post_attention_layernorm.weight": "pytorch_model-00002-of-00002.bin",
+ "model.layers.23.self_attn.k_proj.weight": "pytorch_model-00002-of-00002.bin",
+ "model.layers.23.self_attn.o_proj.weight": "pytorch_model-00002-of-00002.bin",
+ "model.layers.23.self_attn.q_proj.weight": "pytorch_model-00002-of-00002.bin",
+ "model.layers.23.self_attn.v_proj.weight": "pytorch_model-00002-of-00002.bin",
+ "model.layers.24.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
+ "model.layers.24.mlp.down_proj.weight": "pytorch_model-00002-of-00002.bin",
+ "model.layers.24.mlp.gate_proj.weight": "pytorch_model-00002-of-00002.bin",
+ "model.layers.24.mlp.up_proj.weight": "pytorch_model-00002-of-00002.bin",
+ "model.layers.24.post_attention_layernorm.weight": "pytorch_model-00002-of-00002.bin",
+ "model.layers.24.self_attn.k_proj.weight": "pytorch_model-00002-of-00002.bin",
+ "model.layers.24.self_attn.o_proj.weight": "pytorch_model-00002-of-00002.bin",
+ "model.layers.24.self_attn.q_proj.weight": "pytorch_model-00002-of-00002.bin",
+ "model.layers.24.self_attn.v_proj.weight": "pytorch_model-00002-of-00002.bin",
+ "model.layers.25.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
+ "model.layers.25.mlp.down_proj.weight": "pytorch_model-00002-of-00002.bin",
+ "model.layers.25.mlp.gate_proj.weight": "pytorch_model-00002-of-00002.bin",
+ "model.layers.25.mlp.up_proj.weight": "pytorch_model-00002-of-00002.bin",
+ "model.layers.25.post_attention_layernorm.weight": "pytorch_model-00002-of-00002.bin",
+ "model.layers.25.self_attn.k_proj.weight": "pytorch_model-00002-of-00002.bin",
+ "model.layers.25.self_attn.o_proj.weight": "pytorch_model-00002-of-00002.bin",
+ "model.layers.25.self_attn.q_proj.weight": "pytorch_model-00002-of-00002.bin",
+ "model.layers.25.self_attn.v_proj.weight": "pytorch_model-00002-of-00002.bin",
+ "model.layers.26.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
+ "model.layers.26.mlp.down_proj.weight": "pytorch_model-00002-of-00002.bin",
+ "model.layers.26.mlp.gate_proj.weight": "pytorch_model-00002-of-00002.bin",
+ "model.layers.26.mlp.up_proj.weight": "pytorch_model-00002-of-00002.bin",
+ "model.layers.26.post_attention_layernorm.weight": "pytorch_model-00002-of-00002.bin",
+ "model.layers.26.self_attn.k_proj.weight": "pytorch_model-00002-of-00002.bin",
+ "model.layers.26.self_attn.o_proj.weight": "pytorch_model-00002-of-00002.bin",
+ "model.layers.26.self_attn.q_proj.weight": "pytorch_model-00002-of-00002.bin",
+ "model.layers.26.self_attn.v_proj.weight": "pytorch_model-00002-of-00002.bin",
+ "model.layers.27.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
+ "model.layers.27.mlp.down_proj.weight": "pytorch_model-00002-of-00002.bin",
+ "model.layers.27.mlp.gate_proj.weight": "pytorch_model-00002-of-00002.bin",
+ "model.layers.27.mlp.up_proj.weight": "pytorch_model-00002-of-00002.bin",
+ "model.layers.27.post_attention_layernorm.weight": "pytorch_model-00002-of-00002.bin",
+ "model.layers.27.self_attn.k_proj.weight": "pytorch_model-00002-of-00002.bin",
+ "model.layers.27.self_attn.o_proj.weight": "pytorch_model-00002-of-00002.bin",
+ "model.layers.27.self_attn.q_proj.weight": "pytorch_model-00002-of-00002.bin",
+ "model.layers.27.self_attn.v_proj.weight": "pytorch_model-00002-of-00002.bin",
+ "model.layers.28.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
+ "model.layers.28.mlp.down_proj.weight": "pytorch_model-00002-of-00002.bin",
+ "model.layers.28.mlp.gate_proj.weight": "pytorch_model-00002-of-00002.bin",
+ "model.layers.28.mlp.up_proj.weight": "pytorch_model-00002-of-00002.bin",
+ "model.layers.28.post_attention_layernorm.weight": "pytorch_model-00002-of-00002.bin",
+ "model.layers.28.self_attn.k_proj.weight": "pytorch_model-00002-of-00002.bin",
+ "model.layers.28.self_attn.o_proj.weight": "pytorch_model-00002-of-00002.bin",
+ "model.layers.28.self_attn.q_proj.weight": "pytorch_model-00002-of-00002.bin",
+ "model.layers.28.self_attn.v_proj.weight": "pytorch_model-00002-of-00002.bin",
+ "model.layers.29.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
+ "model.layers.29.mlp.down_proj.weight": "pytorch_model-00002-of-00002.bin",
+ "model.layers.29.mlp.gate_proj.weight": "pytorch_model-00002-of-00002.bin",
+ "model.layers.29.mlp.up_proj.weight": "pytorch_model-00002-of-00002.bin",
+ "model.layers.29.post_attention_layernorm.weight": "pytorch_model-00002-of-00002.bin",
+ "model.layers.29.self_attn.k_proj.weight": "pytorch_model-00002-of-00002.bin",
+ "model.layers.29.self_attn.o_proj.weight": "pytorch_model-00002-of-00002.bin",
+ "model.layers.29.self_attn.q_proj.weight": "pytorch_model-00002-of-00002.bin",
+ "model.layers.29.self_attn.v_proj.weight": "pytorch_model-00002-of-00002.bin",
+ "model.layers.3.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.3.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.3.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.3.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.3.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.3.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.3.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.3.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.3.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.30.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
+ "model.layers.30.mlp.down_proj.weight": "pytorch_model-00002-of-00002.bin",
+ "model.layers.30.mlp.gate_proj.weight": "pytorch_model-00002-of-00002.bin",
+ "model.layers.30.mlp.up_proj.weight": "pytorch_model-00002-of-00002.bin",
+ "model.layers.30.post_attention_layernorm.weight": "pytorch_model-00002-of-00002.bin",
+ "model.layers.30.self_attn.k_proj.weight": "pytorch_model-00002-of-00002.bin",
+ "model.layers.30.self_attn.o_proj.weight": "pytorch_model-00002-of-00002.bin",
+ "model.layers.30.self_attn.q_proj.weight": "pytorch_model-00002-of-00002.bin",
+ "model.layers.30.self_attn.v_proj.weight": "pytorch_model-00002-of-00002.bin",
+ "model.layers.31.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
+ "model.layers.31.mlp.down_proj.weight": "pytorch_model-00002-of-00002.bin",
+ "model.layers.31.mlp.gate_proj.weight": "pytorch_model-00002-of-00002.bin",
+ "model.layers.31.mlp.up_proj.weight": "pytorch_model-00002-of-00002.bin",
+ "model.layers.31.post_attention_layernorm.weight": "pytorch_model-00002-of-00002.bin",
+ "model.layers.31.self_attn.k_proj.weight": "pytorch_model-00002-of-00002.bin",
+ "model.layers.31.self_attn.o_proj.weight": "pytorch_model-00002-of-00002.bin",
+ "model.layers.31.self_attn.q_proj.weight": "pytorch_model-00002-of-00002.bin",
+ "model.layers.31.self_attn.v_proj.weight": "pytorch_model-00002-of-00002.bin",
+ "model.layers.4.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.4.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.4.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.4.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.4.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.4.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.4.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.4.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.4.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.5.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.5.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.5.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.5.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.5.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.5.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.5.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.5.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.5.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.6.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.6.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.6.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.6.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.6.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.6.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.6.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.6.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.6.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.7.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.7.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.7.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.7.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.7.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.7.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.7.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.7.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.7.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.8.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.8.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.8.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.8.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.8.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.8.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.8.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.8.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.8.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.9.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.9.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.9.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.9.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.9.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.9.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.9.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.9.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.layers.9.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
+ "model.norm.weight": "pytorch_model-00002-of-00002.bin"
+ }
+ }
special_tokens_map.json ADDED
@@ -0,0 +1,14 @@
+ {
+ "additional_special_tokens": [
+ "<unk>",
+ "<s>",
+ "</s>"
+ ],
+ "bos_token": "<s>",
+ "cls_token": "[CLS]",
+ "eos_token": "</s>",
+ "mask_token": "[MASK]",
+ "pad_token": "</s>",
+ "sep_token": "[SEP]",
+ "unk_token": "<unk>"
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
tokenizer.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:dadfd56d766715c61d2ef780a525ab43b8e6da4de6865bda3d95fdef5e134055
+ size 493443
tokenizer_config.json ADDED
@@ -0,0 +1,45 @@
+ {
+ "added_tokens_decoder": {
+ "0": {
+ "content": "<unk>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "1": {
+ "content": "<s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "2": {
+ "content": "</s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ }
+ },
+ "additional_special_tokens": [
+ "<unk>",
+ "<s>",
+ "</s>"
+ ],
+ "bos_token": "<s>",
+ "chat_template": "{{ bos_token }}{% for message in messages %}{% if (message['role'] == 'user') != (loop.index0 % 2 == 0) %}{{ raise_exception('Conversation roles must alternate user/assistant/user/assistant/...') }}{% endif %}{% if message['role'] == 'user' %}{{ '[INST] ' + message['content'] + ' [/INST]' }}{% elif message['role'] == 'assistant' %}{{ message['content'] + eos_token}}{% else %}{{ raise_exception('Only user and assistant roles are supported!') }}{% endif %}{% endfor %}",
+ "clean_up_tokenization_spaces": false,
+ "eos_token": "</s>",
+ "legacy": true,
+ "model_max_length": 2048,
+ "pad_token": "</s>",
+ "sp_model_kwargs": {},
+ "spaces_between_special_tokens": false,
+ "tokenizer_class": "LlamaTokenizer",
+ "unk_token": "<unk>",
+ "use_default_system_prompt": false
+ }
train_results.json ADDED
@@ -0,0 +1,8 @@
+ {
+ "epoch": 3.0,
+ "train_loss": 0.23198359412724215,
+ "train_runtime": 18849.1473,
+ "train_samples": 19958,
+ "train_samples_per_second": 3.176,
+ "train_steps_per_second": 0.05
+ }
trainer_state.json ADDED
@@ -0,0 +1,1488 @@
+ {
+ "best_metric": null,
+ "best_model_checkpoint": null,
+ "epoch": 3.0,
+ "eval_steps": 100,
+ "global_step": 936,
+ "is_hyper_param_search": false,
+ "is_local_process_zero": true,
+ "is_world_process_zero": true,
+ "log_history": [
+ {
+ "epoch": 0.0,
+ "learning_rate": 5.3191489361702125e-09,
+ "logits/chosen": -2.2888386249542236,
+ "logits/rejected": -2.347537040710449,
+ "logps/chosen": -240.19537353515625,
+ "logps/rejected": -224.4659423828125,
+ "loss": 0.6931,
+ "rewards/accuracies": 0.0,
+ "rewards/chosen": 0.0,
+ "rewards/margins": 0.0,
+ "rewards/rejected": 0.0,
+ "step": 1
+ },
+ {
+ "epoch": 0.03,
+ "learning_rate": 5.3191489361702123e-08,
+ "logits/chosen": -2.2613658905029297,
+ "logits/rejected": -2.2750418186187744,
+ "logps/chosen": -243.42933654785156,
+ "logps/rejected": -246.80438232421875,
+ "loss": 0.6983,
+ "rewards/accuracies": 0.4444444477558136,
+ "rewards/chosen": -0.004907915368676186,
+ "rewards/margins": -0.0037050736136734486,
+ "rewards/rejected": -0.0012028426863253117,
+ "step": 10
+ },
+ {
+ "epoch": 0.06,
+ "learning_rate": 1.0638297872340425e-07,
+ "logits/chosen": -2.207406997680664,
+ "logits/rejected": -2.1862308979034424,
+ "logps/chosen": -252.49658203125,
+ "logps/rejected": -264.63861083984375,
+ "loss": 0.6883,
+ "rewards/accuracies": 0.606249988079071,
+ "rewards/chosen": -0.01405362505465746,
+ "rewards/margins": 0.04791956767439842,
+ "rewards/rejected": -0.06197319179773331,
+ "step": 20
+ },
+ {
+ "epoch": 0.1,
+ "learning_rate": 1.5957446808510638e-07,
+ "logits/chosen": -2.1290104389190674,
+ "logits/rejected": -2.124718189239502,
+ "logps/chosen": -265.79962158203125,
+ "logps/rejected": -293.8382263183594,
+ "loss": 0.6817,
+ "rewards/accuracies": 0.581250011920929,
+ "rewards/chosen": -0.23290427029132843,
+ "rewards/margins": 0.05236431211233139,
+ "rewards/rejected": -0.2852686047554016,
+ "step": 30
+ },
+ {
+ "epoch": 0.13,
+ "learning_rate": 2.127659574468085e-07,
+ "logits/chosen": -2.1210882663726807,
+ "logits/rejected": -2.1071760654449463,
+ "logps/chosen": -239.9654083251953,
+ "logps/rejected": -246.32162475585938,
+ "loss": 0.6723,
+ "rewards/accuracies": 0.5874999761581421,
+ "rewards/chosen": -0.28943145275115967,
+ "rewards/margins": 0.12118977308273315,
+ "rewards/rejected": -0.4106212556362152,
+ "step": 40
+ },
+ {
+ "epoch": 0.16,
+ "learning_rate": 2.659574468085106e-07,
+ "logits/chosen": -2.151090145111084,
+ "logits/rejected": -2.1575913429260254,
+ "logps/chosen": -243.2239990234375,
+ "logps/rejected": -283.3126220703125,
+ "loss": 0.6911,
+ "rewards/accuracies": 0.512499988079071,
+ "rewards/chosen": -0.30816999077796936,
+ "rewards/margins": 0.1214941143989563,
+ "rewards/rejected": -0.42966407537460327,
+ "step": 50
+ },
+ {
+ "epoch": 0.19,
+ "learning_rate": 3.1914893617021275e-07,
+ "logits/chosen": -2.2355575561523438,
+ "logits/rejected": -2.241624355316162,
+ "logps/chosen": -225.8336181640625,
+ "logps/rejected": -239.73583984375,
+ "loss": 0.641,
+ "rewards/accuracies": 0.512499988079071,
+ "rewards/chosen": -0.4416503310203552,
+ "rewards/margins": 0.24247512221336365,
+ "rewards/rejected": -0.684125542640686,
+ "step": 60
+ },
+ {
+ "epoch": 0.22,
+ "learning_rate": 3.7234042553191484e-07,
+ "logits/chosen": -2.284141778945923,
+ "logits/rejected": -2.271517515182495,
+ "logps/chosen": -247.878173828125,
+ "logps/rejected": -271.75543212890625,
+ "loss": 0.6634,
+ "rewards/accuracies": 0.5687500238418579,
+ "rewards/chosen": -0.4251589775085449,
+ "rewards/margins": 0.20249144732952118,
+ "rewards/rejected": -0.6276503801345825,
+ "step": 70
+ },
+ {
+ "epoch": 0.26,
+ "learning_rate": 4.25531914893617e-07,
+ "logits/chosen": -2.1530141830444336,
+ "logits/rejected": -2.1263160705566406,
+ "logps/chosen": -250.94882202148438,
+ "logps/rejected": -283.70452880859375,
+ "loss": 0.7221,
+ "rewards/accuracies": 0.606249988079071,
+ "rewards/chosen": -0.5067945122718811,
+ "rewards/margins": 0.3924552798271179,
+ "rewards/rejected": -0.8992497324943542,
+ "step": 80
+ },
+ {
+ "epoch": 0.29,
+ "learning_rate": 4.787234042553192e-07,
+ "logits/chosen": -2.2971713542938232,
+ "logits/rejected": -2.2914767265319824,
+ "logps/chosen": -239.848388671875,
+ "logps/rejected": -261.0388488769531,
+ "loss": 0.6921,
+ "rewards/accuracies": 0.59375,
+ "rewards/chosen": -0.36695146560668945,
+ "rewards/margins": 0.7086008787155151,
+ "rewards/rejected": -1.0755524635314941,
+ "step": 90
+ },
+ {
+ "epoch": 0.32,
+ "learning_rate": 4.96437054631829e-07,
+ "logits/chosen": -2.2555811405181885,
+ "logits/rejected": -2.25207781791687,
+ "logps/chosen": -267.49615478515625,
+ "logps/rejected": -275.76678466796875,
+ "loss": 0.6493,
+ "rewards/accuracies": 0.625,
+ "rewards/chosen": -0.6560766100883484,
+ "rewards/margins": 0.7053524851799011,
+ "rewards/rejected": -1.361429214477539,
+ "step": 100
+ },
+ {
+ "epoch": 0.32,
+ "eval_logits/chosen": -2.28631591796875,
+ "eval_logits/rejected": -2.316732883453369,
+ "eval_logps/chosen": -247.2499237060547,
+ "eval_logps/rejected": -264.13714599609375,
+ "eval_loss": 0.7535801529884338,
+ "eval_rewards/accuracies": 0.5277777910232544,
+ "eval_rewards/chosen": -1.031434416770935,
+ "eval_rewards/margins": 0.1443558931350708,
+ "eval_rewards/rejected": -1.1757903099060059,
+ "eval_runtime": 136.5011,
+ "eval_samples_per_second": 7.326,
+ "eval_steps_per_second": 0.462,
+ "step": 100
+ },
+ {
+ "epoch": 0.35,
+ "learning_rate": 4.904988123515439e-07,
+ "logits/chosen": -2.3735289573669434,
+ "logits/rejected": -2.3574609756469727,
+ "logps/chosen": -258.567138671875,
+ "logps/rejected": -265.116943359375,
+ "loss": 0.7214,
+ "rewards/accuracies": 0.5062500238418579,
+ "rewards/chosen": -0.645125687122345,
+ "rewards/margins": 0.558975338935852,
+ "rewards/rejected": -1.2041009664535522,
+ "step": 110
+ },
+ {
+ "epoch": 0.38,
+ "learning_rate": 4.845605700712589e-07,
+ "logits/chosen": -2.2169833183288574,
+ "logits/rejected": -2.2219901084899902,
+ "logps/chosen": -236.5016632080078,
+ "logps/rejected": -263.465576171875,
+ "loss": 0.7608,
+ "rewards/accuracies": 0.59375,
+ "rewards/chosen": -0.8542020916938782,
+ "rewards/margins": 0.4452931880950928,
+ "rewards/rejected": -1.2994953393936157,
+ "step": 120
+ },
+ {
+ "epoch": 0.42,
+ "learning_rate": 4.786223277909738e-07,
+ "logits/chosen": -2.2680552005767822,
+ "logits/rejected": -2.2671051025390625,
+ "logps/chosen": -265.2757568359375,
+ "logps/rejected": -281.46087646484375,
+ "loss": 0.6982,
+ "rewards/accuracies": 0.5874999761581421,
+ "rewards/chosen": -0.42591819167137146,
+ "rewards/margins": 1.4877907037734985,
+ "rewards/rejected": -1.9137089252471924,
+ "step": 130
+ },
+ {
+ "epoch": 0.45,
+ "learning_rate": 4.7268408551068883e-07,
+ "logits/chosen": -2.3235602378845215,
+ "logits/rejected": -2.331632614135742,
+ "logps/chosen": -277.58642578125,
+ "logps/rejected": -309.63006591796875,
+ "loss": 0.6538,
+ "rewards/accuracies": 0.6312500238418579,
+ "rewards/chosen": -1.0361998081207275,
+ "rewards/margins": 1.2754342555999756,
+ "rewards/rejected": -2.311634063720703,
+ "step": 140
+ },
+ {
+ "epoch": 0.48,
+ "learning_rate": 4.667458432304038e-07,
+ "logits/chosen": -2.242255449295044,
+ "logits/rejected": -2.247087001800537,
+ "logps/chosen": -279.643310546875,
+ "logps/rejected": -306.5362243652344,
+ "loss": 0.7123,
+ "rewards/accuracies": 0.5375000238418579,
+ "rewards/chosen": -1.3043805360794067,
+ "rewards/margins": 1.866286039352417,
+ "rewards/rejected": -3.1706669330596924,
+ "step": 150
+ },
+ {
+ "epoch": 0.51,
+ "learning_rate": 4.6080760095011875e-07,
+ "logits/chosen": -2.4138083457946777,
+ "logits/rejected": -2.403261661529541,
+ "logps/chosen": -267.54046630859375,
+ "logps/rejected": -311.9237060546875,
+ "loss": 0.7048,
+ "rewards/accuracies": 0.6625000238418579,
+ "rewards/chosen": -0.7491146326065063,
+ "rewards/margins": 1.4739110469818115,
+ "rewards/rejected": -2.2230257987976074,
+ "step": 160
+ },
+ {
+ "epoch": 0.54,
+ "learning_rate": 4.548693586698337e-07,
+ "logits/chosen": -2.394935131072998,
+ "logits/rejected": -2.387144088745117,
+ "logps/chosen": -238.01992797851562,
+ "logps/rejected": -249.9500274658203,
+ "loss": 0.6621,
+ "rewards/accuracies": 0.5874999761581421,
+ "rewards/chosen": -0.7305070161819458,
+ "rewards/margins": 1.7976157665252686,
+ "rewards/rejected": -2.528122663497925,
+ "step": 170
+ },
+ {
+ "epoch": 0.58,
+ "learning_rate": 4.4893111638954866e-07,
+ "logits/chosen": -2.358888626098633,
+ "logits/rejected": -2.3674557209014893,
+ "logps/chosen": -257.3042907714844,
+ "logps/rejected": -276.6517028808594,
+ "loss": 0.7177,
+ "rewards/accuracies": 0.675000011920929,
+ "rewards/chosen": -1.2033367156982422,
+ "rewards/margins": 1.2791545391082764,
+ "rewards/rejected": -2.4824917316436768,
+ "step": 180
+ },
+ {
+ "epoch": 0.61,
+ "learning_rate": 4.429928741092636e-07,
+ "logits/chosen": -2.336920738220215,
+ "logits/rejected": -2.31685733795166,
+ "logps/chosen": -249.4643096923828,
+ "logps/rejected": -282.7246398925781,
+ "loss": 0.7693,
+ "rewards/accuracies": 0.5062500238418579,
+ "rewards/chosen": -0.9033983945846558,
+ "rewards/margins": 1.6113331317901611,
+ "rewards/rejected": -2.5147316455841064,
+ "step": 190
+ },
+ {
+ "epoch": 0.64,
+ "learning_rate": 4.3705463182897863e-07,
+ "logits/chosen": -2.2998836040496826,
+ "logits/rejected": -2.2824947834014893,
+ "logps/chosen": -255.98239135742188,
+ "logps/rejected": -282.6729431152344,
+ "loss": 0.6555,
+ "rewards/accuracies": 0.6499999761581421,
+ "rewards/chosen": -0.7247324585914612,
+ "rewards/margins": 2.3264706134796143,
+ "rewards/rejected": -3.0512032508850098,
+ "step": 200
+ },
+ {
+ "epoch": 0.64,
+ "eval_logits/chosen": -2.227078676223755,
+ "eval_logits/rejected": -2.2540831565856934,
+ "eval_logps/chosen": -251.6654815673828,
+ "eval_logps/rejected": -268.0155029296875,
+ "eval_loss": 0.8345099091529846,
+ "eval_rewards/accuracies": 0.5595238208770752,
+ "eval_rewards/chosen": -1.472989559173584,
+ "eval_rewards/margins": 0.09063953906297684,
+ "eval_rewards/rejected": -1.563629150390625,
+ "eval_runtime": 135.0454,
+ "eval_samples_per_second": 7.405,
+ "eval_steps_per_second": 0.467,
+ "step": 200
+ },
+ {
+ "epoch": 0.67,
+ "learning_rate": 4.311163895486936e-07,
+ "logits/chosen": -2.1886990070343018,
+ "logits/rejected": -2.2061002254486084,
+ "logps/chosen": -272.6381530761719,
+ "logps/rejected": -307.3228759765625,
+ "loss": 0.6607,
+ "rewards/accuracies": 0.612500011920929,
+ "rewards/chosen": -0.7392950057983398,
+ "rewards/margins": 2.356786012649536,
+ "rewards/rejected": -3.096081018447876,
+ "step": 210
+ },
+ {
+ "epoch": 0.71,
+ "learning_rate": 4.251781472684085e-07,
+ "logits/chosen": -2.3205573558807373,
+ "logits/rejected": -2.3125534057617188,
+ "logps/chosen": -208.845458984375,
+ "logps/rejected": -231.4398651123047,
+ "loss": 0.6198,
+ "rewards/accuracies": 0.699999988079071,
+ "rewards/chosen": -0.921154797077179,
+ "rewards/margins": 1.9245474338531494,
+ "rewards/rejected": -2.8457021713256836,
+ "step": 220
+ },
+ {
+ "epoch": 0.74,
+ "learning_rate": 4.192399049881235e-07,
+ "logits/chosen": -2.2658634185791016,
+ "logits/rejected": -2.2645907402038574,
+ "logps/chosen": -258.8167419433594,
+ "logps/rejected": -303.98541259765625,
+ "loss": 0.6653,
+ "rewards/accuracies": 0.637499988079071,
+ "rewards/chosen": -1.0780036449432373,
+ "rewards/margins": 2.2675023078918457,
+ "rewards/rejected": -3.345506191253662,
+ "step": 230
+ },
+ {
+ "epoch": 0.77,
+ "learning_rate": 4.1330166270783846e-07,
+ "logits/chosen": -2.259265184402466,
+ "logits/rejected": -2.259178638458252,
+ "logps/chosen": -265.008544921875,
+ "logps/rejected": -283.46844482421875,
+ "loss": 0.6808,
+ "rewards/accuracies": 0.6187499761581421,
+ "rewards/chosen": -1.058840036392212,
+ "rewards/margins": 1.759861946105957,
+ "rewards/rejected": -2.818701982498169,
+ "step": 240
+ },
+ {
+ "epoch": 0.8,
+ "learning_rate": 4.0736342042755347e-07,
+ "logits/chosen": -2.264968156814575,
+ "logits/rejected": -2.2467730045318604,
+ "logps/chosen": -245.5597686767578,
+ "logps/rejected": -282.61566162109375,
+ "loss": 0.6574,
+ "rewards/accuracies": 0.6937500238418579,
+ "rewards/chosen": -0.6839832067489624,
+ "rewards/margins": 2.1135334968566895,
+ "rewards/rejected": -2.7975165843963623,
+ "step": 250
+ },
+ {
+ "epoch": 0.83,
+ "learning_rate": 4.0142517814726837e-07,
+ "logits/chosen": -2.2550175189971924,
+ "logits/rejected": -2.2587451934814453,
+ "logps/chosen": -264.4140319824219,
+ "logps/rejected": -313.1862487792969,
+ "loss": 0.6417,
+ "rewards/accuracies": 0.637499988079071,
+ "rewards/chosen": -1.2853078842163086,
+ "rewards/margins": 1.7770347595214844,
+ "rewards/rejected": -3.062342882156372,
+ "step": 260
+ },
+ {
+ "epoch": 0.87,
+ "learning_rate": 3.9548693586698333e-07,
+ "logits/chosen": -2.30830454826355,
+ "logits/rejected": -2.3013198375701904,
+ "logps/chosen": -253.91281127929688,
+ "logps/rejected": -266.73382568359375,
+ "loss": 0.6325,
+ "rewards/accuracies": 0.71875,
+ "rewards/chosen": -0.6817399859428406,
+ "rewards/margins": 2.7105841636657715,
+ "rewards/rejected": -3.3923239707946777,
+ "step": 270
+ },
+ {
+ "epoch": 0.9,
+ "learning_rate": 3.8954869358669834e-07,
+ "logits/chosen": -2.331390142440796,
+ "logits/rejected": -2.336571216583252,
+ "logps/chosen": -247.58544921875,
+ "logps/rejected": -284.62884521484375,
+ "loss": 0.6308,
+ "rewards/accuracies": 0.699999988079071,
+ "rewards/chosen": -0.9239405393600464,
+ "rewards/margins": 2.58821439743042,
+ "rewards/rejected": -3.512155532836914,
+ "step": 280
+ },
+ {
+ "epoch": 0.93,
+ "learning_rate": 3.836104513064133e-07,
+ "logits/chosen": -2.3030402660369873,
+ "logits/rejected": -2.290931224822998,
+ "logps/chosen": -264.10296630859375,
+ "logps/rejected": -310.730224609375,
+ "loss": 0.6456,
+ "rewards/accuracies": 0.643750011920929,
+ "rewards/chosen": -1.4110323190689087,
+ "rewards/margins": 1.8175039291381836,
+ "rewards/rejected": -3.2285361289978027,
+ "step": 290
+ },
+ {
+ "epoch": 0.96,
+ "learning_rate": 3.7767220902612825e-07,
+ "logits/chosen": -2.3290443420410156,
+ "logits/rejected": -2.3188068866729736,
+ "logps/chosen": -251.7384490966797,
+ "logps/rejected": -287.4365234375,
+ "loss": 0.6566,
+ "rewards/accuracies": 0.6812499761581421,
+ "rewards/chosen": -1.2217800617218018,
+ "rewards/margins": 2.0904905796051025,
+ "rewards/rejected": -3.312270402908325,
+ "step": 300
+ },
+ {
+ "epoch": 0.96,
+ "eval_logits/chosen": -2.352015733718872,
+ "eval_logits/rejected": -2.3778603076934814,
+ "eval_logps/chosen": -256.6260070800781,
+ "eval_logps/rejected": -273.6545715332031,
+ "eval_loss": 0.8076202869415283,
+ "eval_rewards/accuracies": 0.5555555820465088,
+ "eval_rewards/chosen": -1.9690420627593994,
+ "eval_rewards/margins": 0.15849098563194275,
+ "eval_rewards/rejected": -2.127532958984375,
+ "eval_runtime": 135.1507,
+ "eval_samples_per_second": 7.399,
+ "eval_steps_per_second": 0.466,
+ "step": 300
+ },
+ {
+ "epoch": 0.99,
+ "learning_rate": 3.717339667458432e-07,
+ "logits/chosen": -2.228579044342041,
+ "logits/rejected": -2.2486958503723145,
+ "logps/chosen": -269.064208984375,
+ "logps/rejected": -314.79510498046875,
+ "loss": 0.5843,
+ "rewards/accuracies": 0.7562500238418579,
+ "rewards/chosen": -0.7734891772270203,
+ "rewards/margins": 3.1552462577819824,
+ "rewards/rejected": -3.9287352561950684,
+ "step": 310
+ },
+ {
+ "epoch": 1.03,
+ "learning_rate": 3.6579572446555817e-07,
+ "logits/chosen": -2.2717132568359375,
+ "logits/rejected": -2.2769129276275635,
+ "logps/chosen": -257.6423645019531,
+ "logps/rejected": -314.797119140625,
+ "loss": 0.3177,
+ "rewards/accuracies": 0.8812500238418579,
+ "rewards/chosen": -0.4064851403236389,
+ "rewards/margins": 4.01033878326416,
+ "rewards/rejected": -4.416823863983154,
+ "step": 320
+ },
+ {
+ "epoch": 1.06,
+ "learning_rate": 3.598574821852731e-07,
+ "logits/chosen": -2.247117280960083,
+ "logits/rejected": -2.250030040740967,
+ "logps/chosen": -235.4261016845703,
+ "logps/rejected": -309.361572265625,
+ "loss": 0.0693,
+ "rewards/accuracies": 0.981249988079071,
+ "rewards/chosen": 0.2905876338481903,
+ "rewards/margins": 6.00634765625,
+ "rewards/rejected": -5.715760231018066,
+ "step": 330
+ },
+ {
+ "epoch": 1.09,
+ "learning_rate": 3.5391923990498813e-07,
+ "logits/chosen": -2.182969570159912,
+ "logits/rejected": -2.16931414604187,
+ "logps/chosen": -255.9242706298828,
+ "logps/rejected": -364.0120544433594,
+ "loss": 0.0276,
+ "rewards/accuracies": 0.9937499761581421,
+ "rewards/chosen": 0.9283174276351929,
+ "rewards/margins": 8.556694030761719,
+ "rewards/rejected": -7.628376007080078,
+ "step": 340
+ },
+ {
+ "epoch": 1.12,
+ "learning_rate": 3.479809976247031e-07,
+ "logits/chosen": -2.167675495147705,
+ "logits/rejected": -2.157602071762085,
+ "logps/chosen": -235.40853881835938,
+ "logps/rejected": -346.43328857421875,
+ "loss": 0.0182,
+ "rewards/accuracies": 1.0,
+ "rewards/chosen": 1.6523984670639038,
+ "rewards/margins": 9.692710876464844,
+ "rewards/rejected": -8.040311813354492,
+ "step": 350
+ },
+ {
+ "epoch": 1.15,
+ "learning_rate": 3.42042755344418e-07,
+ "logits/chosen": -2.1937975883483887,
+ "logits/rejected": -2.2003121376037598,
+ "logps/chosen": -211.8225555419922,
+ "logps/rejected": -345.91094970703125,
+ "loss": 0.0124,
+ "rewards/accuracies": 0.9937499761581421,
+ "rewards/chosen": 2.343766927719116,
+ "rewards/margins": 12.03510570526123,
+ "rewards/rejected": -9.691339492797852,
+ "step": 360
+ },
+ {
+ "epoch": 1.19,
+ "learning_rate": 3.36104513064133e-07,
+ "logits/chosen": -2.2343811988830566,
+ "logits/rejected": -2.242821216583252,
+ "logps/chosen": -194.0260772705078,
+ "logps/rejected": -351.4122009277344,
+ "loss": 0.0116,
+ "rewards/accuracies": 1.0,
+ "rewards/chosen": 2.336714267730713,
+ "rewards/margins": 12.730189323425293,
+ "rewards/rejected": -10.393475532531738,
+ "step": 370
+ },
+ {
+ "epoch": 1.22,
+ "learning_rate": 3.3016627078384796e-07,
+ "logits/chosen": -2.2088000774383545,
595
+ "logits/rejected": -2.2172210216522217,
596
+ "logps/chosen": -214.9263153076172,
597
+ "logps/rejected": -374.9606018066406,
598
+ "loss": 0.0099,
599
+ "rewards/accuracies": 1.0,
600
+ "rewards/chosen": 2.942856788635254,
601
+ "rewards/margins": 14.011993408203125,
602
+ "rewards/rejected": -11.069137573242188,
603
+ "step": 380
604
+ },
605
+ {
606
+ "epoch": 1.25,
607
+ "learning_rate": 3.2422802850356297e-07,
608
+ "logits/chosen": -2.129809856414795,
609
+ "logits/rejected": -2.1051840782165527,
610
+ "logps/chosen": -223.37490844726562,
611
+ "logps/rejected": -409.17919921875,
612
+ "loss": 0.0104,
613
+ "rewards/accuracies": 1.0,
614
+ "rewards/chosen": 3.214216709136963,
615
+ "rewards/margins": 16.285297393798828,
616
+ "rewards/rejected": -13.071080207824707,
617
+ "step": 390
618
+ },
619
+ {
620
+ "epoch": 1.28,
621
+ "learning_rate": 3.182897862232779e-07,
622
+ "logits/chosen": -2.1976237297058105,
623
+ "logits/rejected": -2.194974422454834,
624
+ "logps/chosen": -197.03897094726562,
625
+ "logps/rejected": -380.13031005859375,
626
+ "loss": 0.0057,
627
+ "rewards/accuracies": 1.0,
628
+ "rewards/chosen": 2.860673427581787,
629
+ "rewards/margins": 16.22535514831543,
630
+ "rewards/rejected": -13.364683151245117,
631
+ "step": 400
632
+ },
633
+ {
634
+ "epoch": 1.28,
635
+ "eval_logits/chosen": -2.170393705368042,
636
+ "eval_logits/rejected": -2.197235345840454,
637
+ "eval_logps/chosen": -270.4623718261719,
638
+ "eval_logps/rejected": -289.9275207519531,
639
+ "eval_loss": 0.8791230320930481,
640
+ "eval_rewards/accuracies": 0.5634920597076416,
641
+ "eval_rewards/chosen": -3.3526790142059326,
642
+ "eval_rewards/margins": 0.4021467864513397,
643
+ "eval_rewards/rejected": -3.7548255920410156,
644
+ "eval_runtime": 135.1943,
645
+ "eval_samples_per_second": 7.397,
646
+ "eval_steps_per_second": 0.466,
647
+ "step": 400
648
+ },
649
+ {
650
+ "epoch": 1.31,
651
+ "learning_rate": 3.1235154394299283e-07,
652
+ "logits/chosen": -2.1489205360412598,
653
+ "logits/rejected": -2.1540191173553467,
654
+ "logps/chosen": -219.0214385986328,
655
+ "logps/rejected": -397.30364990234375,
656
+ "loss": 0.0043,
657
+ "rewards/accuracies": 1.0,
658
+ "rewards/chosen": 3.2062506675720215,
659
+ "rewards/margins": 16.805278778076172,
660
+ "rewards/rejected": -13.599026679992676,
661
+ "step": 410
662
+ },
663
+ {
664
+ "epoch": 1.35,
665
+ "learning_rate": 3.0641330166270784e-07,
666
+ "logits/chosen": -2.115307092666626,
667
+ "logits/rejected": -2.0995349884033203,
668
+ "logps/chosen": -234.65438842773438,
669
+ "logps/rejected": -407.8638000488281,
670
+ "loss": 0.0098,
671
+ "rewards/accuracies": 0.9937499761581421,
672
+ "rewards/chosen": 2.6958987712860107,
673
+ "rewards/margins": 17.416133880615234,
674
+ "rewards/rejected": -14.720235824584961,
675
+ "step": 420
676
+ },
677
+ {
678
+ "epoch": 1.38,
679
+ "learning_rate": 3.004750593824228e-07,
680
+ "logits/chosen": -2.134051561355591,
681
+ "logits/rejected": -2.148725986480713,
682
+ "logps/chosen": -195.26100158691406,
683
+ "logps/rejected": -378.8183898925781,
684
+ "loss": 0.0072,
685
+ "rewards/accuracies": 1.0,
686
+ "rewards/chosen": 2.5557029247283936,
687
+ "rewards/margins": 17.337064743041992,
688
+ "rewards/rejected": -14.781362533569336,
689
+ "step": 430
690
+ },
691
+ {
692
+ "epoch": 1.41,
693
+ "learning_rate": 2.9453681710213776e-07,
694
+ "logits/chosen": -2.1074020862579346,
695
+ "logits/rejected": -2.1130149364471436,
696
+ "logps/chosen": -247.6088104248047,
697
+ "logps/rejected": -419.13232421875,
698
+ "loss": 0.0058,
699
+ "rewards/accuracies": 1.0,
700
+ "rewards/chosen": 2.3471179008483887,
701
+ "rewards/margins": 16.78860855102539,
702
+ "rewards/rejected": -14.441492080688477,
703
+ "step": 440
704
+ },
705
+ {
706
+ "epoch": 1.44,
707
+ "learning_rate": 2.885985748218527e-07,
708
+ "logits/chosen": -2.1506831645965576,
709
+ "logits/rejected": -2.1653859615325928,
710
+ "logps/chosen": -245.6567840576172,
711
+ "logps/rejected": -420.876220703125,
712
+ "loss": 0.0079,
713
+ "rewards/accuracies": 1.0,
714
+ "rewards/chosen": 2.4259517192840576,
715
+ "rewards/margins": 16.992084503173828,
716
+ "rewards/rejected": -14.566131591796875,
717
+ "step": 450
718
+ },
719
+ {
720
+ "epoch": 1.47,
721
+ "learning_rate": 2.8266033254156767e-07,
722
+ "logits/chosen": -2.109947443008423,
723
+ "logits/rejected": -2.1249232292175293,
724
+ "logps/chosen": -241.10916137695312,
725
+ "logps/rejected": -439.56781005859375,
726
+ "loss": 0.0051,
727
+ "rewards/accuracies": 1.0,
728
+ "rewards/chosen": 2.2505855560302734,
729
+ "rewards/margins": 16.7042179107666,
730
+ "rewards/rejected": -14.453630447387695,
731
+ "step": 460
732
+ },
733
+ {
734
+ "epoch": 1.51,
735
+ "learning_rate": 2.7672209026128263e-07,
736
+ "logits/chosen": -2.1982929706573486,
737
+ "logits/rejected": -2.1973724365234375,
738
+ "logps/chosen": -238.69497680664062,
739
+ "logps/rejected": -403.54791259765625,
740
+ "loss": 0.0144,
741
+ "rewards/accuracies": 0.987500011920929,
742
+ "rewards/chosen": 1.3100935220718384,
743
+ "rewards/margins": 14.033093452453613,
744
+ "rewards/rejected": -12.723000526428223,
745
+ "step": 470
746
+ },
747
+ {
748
+ "epoch": 1.54,
749
+ "learning_rate": 2.7078384798099764e-07,
750
+ "logits/chosen": -2.1538589000701904,
751
+ "logits/rejected": -2.1524319648742676,
752
+ "logps/chosen": -219.76461791992188,
753
+ "logps/rejected": -362.3617248535156,
754
+ "loss": 0.0089,
755
+ "rewards/accuracies": 1.0,
756
+ "rewards/chosen": 2.142836093902588,
757
+ "rewards/margins": 14.598767280578613,
758
+ "rewards/rejected": -12.45592975616455,
759
+ "step": 480
760
+ },
761
+ {
762
+ "epoch": 1.57,
763
+ "learning_rate": 2.648456057007126e-07,
764
+ "logits/chosen": -2.270354747772217,
765
+ "logits/rejected": -2.285727024078369,
766
+ "logps/chosen": -213.4247283935547,
767
+ "logps/rejected": -350.7535400390625,
768
+ "loss": 0.009,
769
+ "rewards/accuracies": 0.9937499761581421,
770
+ "rewards/chosen": 2.24265456199646,
771
+ "rewards/margins": 13.511209487915039,
772
+ "rewards/rejected": -11.268553733825684,
773
+ "step": 490
774
+ },
775
+ {
776
+ "epoch": 1.6,
777
+ "learning_rate": 2.589073634204275e-07,
778
+ "logits/chosen": -2.189033031463623,
779
+ "logits/rejected": -2.1983439922332764,
780
+ "logps/chosen": -226.820556640625,
781
+ "logps/rejected": -391.2894592285156,
782
+ "loss": 0.0063,
783
+ "rewards/accuracies": 1.0,
784
+ "rewards/chosen": 2.644188642501831,
785
+ "rewards/margins": 15.23480224609375,
786
+ "rewards/rejected": -12.59061336517334,
787
+ "step": 500
788
+ },
789
+ {
790
+ "epoch": 1.6,
791
+ "eval_logits/chosen": -2.2120492458343506,
792
+ "eval_logits/rejected": -2.239100217819214,
793
+ "eval_logps/chosen": -266.6351318359375,
794
+ "eval_logps/rejected": -285.4739074707031,
795
+ "eval_loss": 0.8692338466644287,
796
+ "eval_rewards/accuracies": 0.567460298538208,
797
+ "eval_rewards/chosen": -2.969956398010254,
798
+ "eval_rewards/margins": 0.3395082652568817,
799
+ "eval_rewards/rejected": -3.309464693069458,
800
+ "eval_runtime": 135.0534,
801
+ "eval_samples_per_second": 7.404,
802
+ "eval_steps_per_second": 0.466,
803
+ "step": 500
804
+ },
805
+ {
806
+ "epoch": 1.63,
807
+ "learning_rate": 2.529691211401425e-07,
808
+ "logits/chosen": -2.2285079956054688,
809
+ "logits/rejected": -2.2011256217956543,
810
+ "logps/chosen": -215.3155059814453,
811
+ "logps/rejected": -366.4723205566406,
812
+ "loss": 0.0093,
813
+ "rewards/accuracies": 1.0,
814
+ "rewards/chosen": 2.619319438934326,
815
+ "rewards/margins": 14.285768508911133,
816
+ "rewards/rejected": -11.666448593139648,
817
+ "step": 510
818
+ },
819
+ {
820
+ "epoch": 1.67,
821
+ "learning_rate": 2.4703087885985747e-07,
822
+ "logits/chosen": -2.180443525314331,
823
+ "logits/rejected": -2.196728467941284,
824
+ "logps/chosen": -226.239013671875,
825
+ "logps/rejected": -395.51397705078125,
826
+ "loss": 0.0083,
827
+ "rewards/accuracies": 1.0,
828
+ "rewards/chosen": 2.7000768184661865,
829
+ "rewards/margins": 15.548223495483398,
830
+ "rewards/rejected": -12.84814739227295,
831
+ "step": 520
832
+ },
833
+ {
834
+ "epoch": 1.7,
835
+ "learning_rate": 2.410926365795724e-07,
836
+ "logits/chosen": -2.1908416748046875,
837
+ "logits/rejected": -2.1960384845733643,
838
+ "logps/chosen": -209.208984375,
839
+ "logps/rejected": -333.3621520996094,
840
+ "loss": 0.0081,
841
+ "rewards/accuracies": 1.0,
842
+ "rewards/chosen": 1.2140251398086548,
843
+ "rewards/margins": 11.866113662719727,
844
+ "rewards/rejected": -10.652088165283203,
845
+ "step": 530
846
+ },
847
+ {
848
+ "epoch": 1.73,
849
+ "learning_rate": 2.351543942992874e-07,
850
+ "logits/chosen": -2.1653943061828613,
851
+ "logits/rejected": -2.171555995941162,
852
+ "logps/chosen": -209.2273712158203,
853
+ "logps/rejected": -370.8827819824219,
854
+ "loss": 0.0056,
855
+ "rewards/accuracies": 1.0,
856
+ "rewards/chosen": 1.8547580242156982,
857
+ "rewards/margins": 14.104257583618164,
858
+ "rewards/rejected": -12.24949836730957,
859
+ "step": 540
860
+ },
861
+ {
862
+ "epoch": 1.76,
863
+ "learning_rate": 2.2921615201900234e-07,
864
+ "logits/chosen": -2.179865837097168,
865
+ "logits/rejected": -2.187969446182251,
866
+ "logps/chosen": -248.9722137451172,
867
+ "logps/rejected": -397.516357421875,
868
+ "loss": 0.011,
869
+ "rewards/accuracies": 1.0,
870
+ "rewards/chosen": 2.0791749954223633,
871
+ "rewards/margins": 15.069662094116211,
872
+ "rewards/rejected": -12.99048900604248,
873
+ "step": 550
874
+ },
875
+ {
876
+ "epoch": 1.79,
877
+ "learning_rate": 2.2327790973871732e-07,
878
+ "logits/chosen": -2.2084298133850098,
879
+ "logits/rejected": -2.196798801422119,
880
+ "logps/chosen": -216.34432983398438,
881
+ "logps/rejected": -364.7033386230469,
882
+ "loss": 0.0114,
883
+ "rewards/accuracies": 1.0,
884
+ "rewards/chosen": 1.6750681400299072,
885
+ "rewards/margins": 12.608332633972168,
886
+ "rewards/rejected": -10.933263778686523,
887
+ "step": 560
888
+ },
889
+ {
890
+ "epoch": 1.83,
891
+ "learning_rate": 2.173396674584323e-07,
892
+ "logits/chosen": -2.1891276836395264,
893
+ "logits/rejected": -2.2081406116485596,
894
+ "logps/chosen": -233.3393096923828,
895
+ "logps/rejected": -390.85699462890625,
896
+ "loss": 0.0068,
897
+ "rewards/accuracies": 1.0,
898
+ "rewards/chosen": 1.547383189201355,
899
+ "rewards/margins": 13.556729316711426,
900
+ "rewards/rejected": -12.009345054626465,
901
+ "step": 570
902
+ },
903
+ {
904
+ "epoch": 1.86,
905
+ "learning_rate": 2.1140142517814726e-07,
906
+ "logits/chosen": -2.184476375579834,
907
+ "logits/rejected": -2.1820874214172363,
908
+ "logps/chosen": -235.4527130126953,
909
+ "logps/rejected": -346.69952392578125,
910
+ "loss": 0.0142,
911
+ "rewards/accuracies": 1.0,
912
+ "rewards/chosen": 1.5227887630462646,
913
+ "rewards/margins": 11.664176940917969,
914
+ "rewards/rejected": -10.141386032104492,
915
+ "step": 580
916
+ },
917
+ {
918
+ "epoch": 1.89,
919
+ "learning_rate": 2.0546318289786222e-07,
920
+ "logits/chosen": -2.1869351863861084,
921
+ "logits/rejected": -2.1961607933044434,
922
+ "logps/chosen": -219.57962036132812,
923
+ "logps/rejected": -345.61663818359375,
924
+ "loss": 0.0171,
925
+ "rewards/accuracies": 1.0,
926
+ "rewards/chosen": 2.060570478439331,
927
+ "rewards/margins": 12.773801803588867,
928
+ "rewards/rejected": -10.71323013305664,
929
+ "step": 590
930
+ },
931
+ {
932
+ "epoch": 1.92,
933
+ "learning_rate": 1.9952494061757718e-07,
934
+ "logits/chosen": -2.1666817665100098,
935
+ "logits/rejected": -2.162493944168091,
936
+ "logps/chosen": -220.46829223632812,
937
+ "logps/rejected": -381.7313537597656,
938
+ "loss": 0.0164,
939
+ "rewards/accuracies": 1.0,
940
+ "rewards/chosen": 1.8761663436889648,
941
+ "rewards/margins": 12.851763725280762,
942
+ "rewards/rejected": -10.97559642791748,
943
+ "step": 600
944
+ },
945
+ {
946
+ "epoch": 1.92,
947
+ "eval_logits/chosen": -2.16733717918396,
948
+ "eval_logits/rejected": -2.193927526473999,
949
+ "eval_logps/chosen": -270.457275390625,
950
+ "eval_logps/rejected": -290.05157470703125,
951
+ "eval_loss": 0.8890212774276733,
952
+ "eval_rewards/accuracies": 0.5873016119003296,
953
+ "eval_rewards/chosen": -3.352169990539551,
954
+ "eval_rewards/margins": 0.4150641858577728,
955
+ "eval_rewards/rejected": -3.7672338485717773,
956
+ "eval_runtime": 135.1266,
957
+ "eval_samples_per_second": 7.4,
958
+ "eval_steps_per_second": 0.466,
959
+ "step": 600
960
+ },
961
+ {
962
+ "epoch": 1.96,
963
+ "learning_rate": 1.9358669833729216e-07,
964
+ "logits/chosen": -2.165018320083618,
965
+ "logits/rejected": -2.1551880836486816,
966
+ "logps/chosen": -236.99209594726562,
967
+ "logps/rejected": -379.29034423828125,
968
+ "loss": 0.0063,
969
+ "rewards/accuracies": 1.0,
970
+ "rewards/chosen": 1.8915096521377563,
971
+ "rewards/margins": 12.898310661315918,
972
+ "rewards/rejected": -11.006799697875977,
973
+ "step": 610
974
+ },
975
+ {
976
+ "epoch": 1.99,
977
+ "learning_rate": 1.876484560570071e-07,
978
+ "logits/chosen": -2.112156391143799,
979
+ "logits/rejected": -2.1361286640167236,
980
+ "logps/chosen": -238.260986328125,
981
+ "logps/rejected": -367.57647705078125,
982
+ "loss": 0.0089,
983
+ "rewards/accuracies": 1.0,
984
+ "rewards/chosen": 1.1359070539474487,
985
+ "rewards/margins": 12.050466537475586,
986
+ "rewards/rejected": -10.914558410644531,
987
+ "step": 620
988
+ },
989
+ {
990
+ "epoch": 2.02,
991
+ "learning_rate": 1.8171021377672207e-07,
992
+ "logits/chosen": -2.1914753913879395,
993
+ "logits/rejected": -2.2092552185058594,
994
+ "logps/chosen": -239.82968139648438,
995
+ "logps/rejected": -379.8808898925781,
996
+ "loss": 0.0065,
997
+ "rewards/accuracies": 1.0,
998
+ "rewards/chosen": 0.7776199579238892,
999
+ "rewards/margins": 11.117961883544922,
1000
+ "rewards/rejected": -10.340343475341797,
1001
+ "step": 630
1002
+ },
1003
+ {
1004
+ "epoch": 2.05,
1005
+ "learning_rate": 1.7577197149643706e-07,
1006
+ "logits/chosen": -2.124424695968628,
1007
+ "logits/rejected": -2.1311514377593994,
1008
+ "logps/chosen": -251.8069610595703,
1009
+ "logps/rejected": -337.7670593261719,
1010
+ "loss": 0.0089,
1011
+ "rewards/accuracies": 1.0,
1012
+ "rewards/chosen": -0.6031948924064636,
1013
+ "rewards/margins": 7.557145595550537,
1014
+ "rewards/rejected": -8.160341262817383,
1015
+ "step": 640
1016
+ },
1017
+ {
1018
+ "epoch": 2.08,
1019
+ "learning_rate": 1.6983372921615202e-07,
1020
+ "logits/chosen": -2.106081008911133,
1021
+ "logits/rejected": -2.0959670543670654,
1022
+ "logps/chosen": -249.3319854736328,
1023
+ "logps/rejected": -362.3397521972656,
1024
+ "loss": 0.0062,
1025
+ "rewards/accuracies": 0.9937499761581421,
1026
+ "rewards/chosen": -0.243983656167984,
1027
+ "rewards/margins": 9.672110557556152,
1028
+ "rewards/rejected": -9.916093826293945,
1029
+ "step": 650
1030
+ },
1031
+ {
1032
+ "epoch": 2.12,
1033
+ "learning_rate": 1.6389548693586697e-07,
1034
+ "logits/chosen": -2.0426225662231445,
1035
+ "logits/rejected": -2.027235507965088,
1036
+ "logps/chosen": -264.58343505859375,
1037
+ "logps/rejected": -402.2667541503906,
1038
+ "loss": 0.0046,
1039
+ "rewards/accuracies": 1.0,
1040
+ "rewards/chosen": 0.5264443159103394,
1041
+ "rewards/margins": 11.265119552612305,
1042
+ "rewards/rejected": -10.738676071166992,
1043
+ "step": 660
1044
+ },
1045
+ {
1046
+ "epoch": 2.15,
1047
+ "learning_rate": 1.5795724465558193e-07,
1048
+ "logits/chosen": -2.0862302780151367,
1049
+ "logits/rejected": -2.094032049179077,
1050
+ "logps/chosen": -219.7861785888672,
1051
+ "logps/rejected": -376.1147155761719,
1052
+ "loss": 0.0031,
1053
+ "rewards/accuracies": 1.0,
1054
+ "rewards/chosen": 1.6179927587509155,
1055
+ "rewards/margins": 14.43371868133545,
1056
+ "rewards/rejected": -12.815725326538086,
1057
+ "step": 670
1058
+ },
1059
+ {
1060
+ "epoch": 2.18,
1061
+ "learning_rate": 1.520190023752969e-07,
1062
+ "logits/chosen": -2.141444206237793,
1063
+ "logits/rejected": -2.1569528579711914,
1064
+ "logps/chosen": -217.67626953125,
1065
+ "logps/rejected": -390.2943115234375,
1066
+ "loss": 0.0026,
1067
+ "rewards/accuracies": 1.0,
1068
+ "rewards/chosen": 1.5685292482376099,
1069
+ "rewards/margins": 14.526385307312012,
1070
+ "rewards/rejected": -12.957855224609375,
1071
+ "step": 680
1072
+ },
1073
+ {
1074
+ "epoch": 2.21,
1075
+ "learning_rate": 1.4608076009501184e-07,
1076
+ "logits/chosen": -2.137134552001953,
1077
+ "logits/rejected": -2.1524899005889893,
1078
+ "logps/chosen": -213.85012817382812,
1079
+ "logps/rejected": -384.1824645996094,
1080
+ "loss": 0.0038,
1081
+ "rewards/accuracies": 1.0,
1082
+ "rewards/chosen": 1.9703474044799805,
1083
+ "rewards/margins": 15.583375930786133,
1084
+ "rewards/rejected": -13.613027572631836,
1085
+ "step": 690
1086
+ },
1087
+ {
1088
+ "epoch": 2.24,
1089
+ "learning_rate": 1.4014251781472683e-07,
1090
+ "logits/chosen": -2.0553553104400635,
1091
+ "logits/rejected": -2.0486502647399902,
1092
+ "logps/chosen": -226.2926788330078,
1093
+ "logps/rejected": -420.90472412109375,
1094
+ "loss": 0.0029,
1095
+ "rewards/accuracies": 1.0,
1096
+ "rewards/chosen": 2.2629456520080566,
1097
+ "rewards/margins": 17.714906692504883,
1098
+ "rewards/rejected": -15.451959609985352,
1099
+ "step": 700
1100
+ },
1101
+ {
1102
+ "epoch": 2.24,
1103
+ "eval_logits/chosen": -2.091404676437378,
1104
+ "eval_logits/rejected": -2.118736982345581,
1105
+ "eval_logps/chosen": -283.0655517578125,
1106
+ "eval_logps/rejected": -303.959228515625,
1107
+ "eval_loss": 0.9730385541915894,
1108
+ "eval_rewards/accuracies": 0.5873016119003296,
1109
+ "eval_rewards/chosen": -4.612995147705078,
1110
+ "eval_rewards/margins": 0.5450049042701721,
1111
+ "eval_rewards/rejected": -5.157999515533447,
1112
+ "eval_runtime": 135.0313,
1113
+ "eval_samples_per_second": 7.406,
1114
+ "eval_steps_per_second": 0.467,
1115
+ "step": 700
1116
+ },
1117
+ {
1118
+ "epoch": 2.28,
1119
+ "learning_rate": 1.342042755344418e-07,
1120
+ "logits/chosen": -2.106203079223633,
1121
+ "logits/rejected": -2.101205825805664,
1122
+ "logps/chosen": -209.47457885742188,
1123
+ "logps/rejected": -427.4419860839844,
1124
+ "loss": 0.0017,
1125
+ "rewards/accuracies": 1.0,
1126
+ "rewards/chosen": 2.1314749717712402,
1127
+ "rewards/margins": 18.671459197998047,
1128
+ "rewards/rejected": -16.539981842041016,
1129
+ "step": 710
1130
+ },
1131
+ {
1132
+ "epoch": 2.31,
1133
+ "learning_rate": 1.2826603325415677e-07,
1134
+ "logits/chosen": -2.0930514335632324,
1135
+ "logits/rejected": -2.100776195526123,
1136
+ "logps/chosen": -232.50479125976562,
1137
+ "logps/rejected": -422.82537841796875,
1138
+ "loss": 0.0014,
1139
+ "rewards/accuracies": 1.0,
1140
+ "rewards/chosen": 2.4403505325317383,
1141
+ "rewards/margins": 17.976802825927734,
1142
+ "rewards/rejected": -15.536453247070312,
1143
+ "step": 720
1144
+ },
1145
+ {
1146
+ "epoch": 2.34,
1147
+ "learning_rate": 1.2232779097387173e-07,
1148
+ "logits/chosen": -2.0570120811462402,
1149
+ "logits/rejected": -2.048424243927002,
1150
+ "logps/chosen": -220.833251953125,
1151
+ "logps/rejected": -409.28887939453125,
1152
+ "loss": 0.0009,
1153
+ "rewards/accuracies": 1.0,
1154
+ "rewards/chosen": 2.0816102027893066,
1155
+ "rewards/margins": 19.023530960083008,
1156
+ "rewards/rejected": -16.941919326782227,
1157
+ "step": 730
1158
+ },
1159
+ {
1160
+ "epoch": 2.37,
1161
+ "learning_rate": 1.163895486935867e-07,
1162
+ "logits/chosen": -2.0480129718780518,
1163
+ "logits/rejected": -2.0570969581604004,
1164
+ "logps/chosen": -218.54385375976562,
1165
+ "logps/rejected": -423.27685546875,
1166
+ "loss": 0.0022,
1167
+ "rewards/accuracies": 1.0,
1168
+ "rewards/chosen": 2.177452802658081,
1169
+ "rewards/margins": 19.3106689453125,
1170
+ "rewards/rejected": -17.13321876525879,
1171
+ "step": 740
1172
+ },
1173
+ {
1174
+ "epoch": 2.4,
1175
+ "learning_rate": 1.1045130641330165e-07,
1176
+ "logits/chosen": -2.0393099784851074,
1177
+ "logits/rejected": -2.064631938934326,
1178
+ "logps/chosen": -240.68948364257812,
1179
+ "logps/rejected": -438.4137268066406,
1180
+ "loss": 0.0018,
1181
+ "rewards/accuracies": 1.0,
1182
+ "rewards/chosen": 1.9609296321868896,
1183
+ "rewards/margins": 19.048954010009766,
1184
+ "rewards/rejected": -17.088024139404297,
1185
+ "step": 750
1186
+ },
1187
+ {
1188
+ "epoch": 2.44,
1189
+ "learning_rate": 1.0451306413301662e-07,
1190
+ "logits/chosen": -2.078218698501587,
1191
+ "logits/rejected": -2.09151554107666,
1192
+ "logps/chosen": -238.7852783203125,
1193
+ "logps/rejected": -427.560791015625,
1194
+ "loss": 0.0023,
1195
+ "rewards/accuracies": 1.0,
1196
+ "rewards/chosen": 1.3349409103393555,
1197
+ "rewards/margins": 17.530380249023438,
1198
+ "rewards/rejected": -16.195438385009766,
1199
+ "step": 760
1200
+ },
1201
+ {
1202
+ "epoch": 2.47,
1203
+ "learning_rate": 9.857482185273158e-08,
1204
+ "logits/chosen": -2.0929994583129883,
1205
+ "logits/rejected": -2.1046335697174072,
1206
+ "logps/chosen": -257.6733703613281,
1207
+ "logps/rejected": -464.05523681640625,
1208
+ "loss": 0.0018,
1209
+ "rewards/accuracies": 1.0,
1210
+ "rewards/chosen": 2.210766315460205,
1211
+ "rewards/margins": 18.859294891357422,
1212
+ "rewards/rejected": -16.648527145385742,
1213
+ "step": 770
1214
+ },
1215
+ {
1216
+ "epoch": 2.5,
1217
+ "learning_rate": 9.263657957244655e-08,
1218
+ "logits/chosen": -2.113415241241455,
1219
+ "logits/rejected": -2.1267588138580322,
1220
+ "logps/chosen": -249.73681640625,
1221
+ "logps/rejected": -431.0743713378906,
1222
+ "loss": 0.002,
1223
+ "rewards/accuracies": 1.0,
1224
+ "rewards/chosen": 0.9247767329216003,
1225
+ "rewards/margins": 16.176433563232422,
1226
+ "rewards/rejected": -15.251657485961914,
1227
+ "step": 780
1228
+ },
1229
+ {
1230
+ "epoch": 2.53,
1231
+ "learning_rate": 8.669833729216151e-08,
1232
+ "logits/chosen": -2.074481725692749,
1233
+ "logits/rejected": -2.0785088539123535,
1234
+ "logps/chosen": -227.53369140625,
1235
+ "logps/rejected": -384.003173828125,
1236
+ "loss": 0.0013,
1237
+ "rewards/accuracies": 1.0,
1238
+ "rewards/chosen": 0.8787251710891724,
1239
+ "rewards/margins": 15.583755493164062,
1240
+ "rewards/rejected": -14.705029487609863,
1241
+ "step": 790
1242
+ },
1243
+ {
1244
+ "epoch": 2.56,
1245
+ "learning_rate": 8.076009501187649e-08,
1246
+ "logits/chosen": -2.1291556358337402,
1247
+ "logits/rejected": -2.144925594329834,
1248
+ "logps/chosen": -226.36203002929688,
1249
+ "logps/rejected": -399.88360595703125,
1250
+ "loss": 0.0018,
1251
+ "rewards/accuracies": 1.0,
1252
+ "rewards/chosen": 1.0979222059249878,
1253
+ "rewards/margins": 16.30962562561035,
1254
+ "rewards/rejected": -15.211705207824707,
1255
+ "step": 800
1256
+ },
1257
+ {
1258
+ "epoch": 2.56,
1259
+ "eval_logits/chosen": -2.076233148574829,
1260
+ "eval_logits/rejected": -2.10378098487854,
1261
+ "eval_logps/chosen": -289.43609619140625,
1262
+ "eval_logps/rejected": -310.7235412597656,
1263
+ "eval_loss": 1.015881061553955,
1264
+ "eval_rewards/accuracies": 0.5873016119003296,
1265
+ "eval_rewards/chosen": -5.250052452087402,
1266
+ "eval_rewards/margins": 0.5843777656555176,
1267
+ "eval_rewards/rejected": -5.834429740905762,
1268
+ "eval_runtime": 135.0469,
1269
+ "eval_samples_per_second": 7.405,
1270
+ "eval_steps_per_second": 0.467,
1271
+ "step": 800
1272
+ },
1273
+ {
1274
+ "epoch": 2.6,
1275
+ "learning_rate": 7.482185273159145e-08,
1276
+ "logits/chosen": -2.072613477706909,
1277
+ "logits/rejected": -2.0949771404266357,
1278
+ "logps/chosen": -245.9056854248047,
1279
+ "logps/rejected": -431.50079345703125,
1280
+ "loss": 0.0015,
1281
+ "rewards/accuracies": 1.0,
1282
+ "rewards/chosen": 1.3889291286468506,
1283
+ "rewards/margins": 18.065763473510742,
1284
+ "rewards/rejected": -16.676836013793945,
1285
+ "step": 810
1286
+ },
1287
+ {
1288
+ "epoch": 2.63,
1289
+ "learning_rate": 6.88836104513064e-08,
1290
+ "logits/chosen": -2.1127004623413086,
1291
+ "logits/rejected": -2.095151424407959,
1292
+ "logps/chosen": -224.3485565185547,
1293
+ "logps/rejected": -392.89373779296875,
1294
+ "loss": 0.0016,
1295
+ "rewards/accuracies": 1.0,
1296
+ "rewards/chosen": 1.0925308465957642,
1297
+ "rewards/margins": 16.016094207763672,
1298
+ "rewards/rejected": -14.923563957214355,
1299
+ "step": 820
1300
+ },
1301
+ {
1302
+ "epoch": 2.66,
1303
+ "learning_rate": 6.294536817102138e-08,
1304
+ "logits/chosen": -2.043457508087158,
1305
+ "logits/rejected": -2.0708208084106445,
1306
+ "logps/chosen": -245.05282592773438,
1307
+ "logps/rejected": -444.1641540527344,
1308
+ "loss": 0.0022,
1309
+ "rewards/accuracies": 1.0,
1310
+ "rewards/chosen": 1.6503627300262451,
1311
+ "rewards/margins": 18.23184585571289,
1312
+ "rewards/rejected": -16.58148193359375,
1313
+ "step": 830
1314
+ },
1315
+ {
1316
+ "epoch": 2.69,
1317
+ "learning_rate": 5.700712589073634e-08,
1318
+ "logits/chosen": -2.0662128925323486,
1319
+ "logits/rejected": -2.0693325996398926,
1320
+ "logps/chosen": -219.8564910888672,
1321
+ "logps/rejected": -358.3108215332031,
1322
+ "loss": 0.002,
1323
+ "rewards/accuracies": 1.0,
1324
+ "rewards/chosen": -0.03243887424468994,
1325
+ "rewards/margins": 13.410202026367188,
1326
+ "rewards/rejected": -13.442642211914062,
1327
+ "step": 840
1328
+ },
1329
+ {
1330
+ "epoch": 2.72,
1331
+ "learning_rate": 5.10688836104513e-08,
1332
+ "logits/chosen": -2.0619301795959473,
1333
+ "logits/rejected": -2.0707077980041504,
1334
+ "logps/chosen": -218.51416015625,
1335
+ "logps/rejected": -394.4737243652344,
1336
+ "loss": 0.0008,
1337
+ "rewards/accuracies": 1.0,
1338
+ "rewards/chosen": 0.7091328501701355,
1339
+ "rewards/margins": 15.42210865020752,
1340
+ "rewards/rejected": -14.712974548339844,
1341
+ "step": 850
1342
+ },
1343
+ {
1344
+ "epoch": 2.76,
1345
+ "learning_rate": 4.5130641330166267e-08,
1346
+ "logits/chosen": -2.098428964614868,
1347
+ "logits/rejected": -2.1084647178649902,
1348
+ "logps/chosen": -253.61538696289062,
1349
+ "logps/rejected": -429.78759765625,
1350
+ "loss": 0.002,
1351
+ "rewards/accuracies": 1.0,
1352
+ "rewards/chosen": 1.4912437200546265,
1353
+ "rewards/margins": 17.59982681274414,
1354
+ "rewards/rejected": -16.108583450317383,
1355
+ "step": 860
1356
+ },
1357
+ {
1358
+ "epoch": 2.79,
1359
+ "learning_rate": 3.919239904988123e-08,
1360
+ "logits/chosen": -2.1191351413726807,
1361
+ "logits/rejected": -2.1331193447113037,
1362
+ "logps/chosen": -219.34622192382812,
1363
+ "logps/rejected": -370.738037109375,
1364
+ "loss": 0.0105,
1365
+ "rewards/accuracies": 1.0,
1366
+ "rewards/chosen": 0.8942230939865112,
1367
+ "rewards/margins": 14.622076034545898,
1368
+ "rewards/rejected": -13.727853775024414,
1369
+ "step": 870
1370
+ },
1371
+ {
1372
+ "epoch": 2.82,
1373
+ "learning_rate": 3.32541567695962e-08,
1374
+ "logits/chosen": -2.104769229888916,
1375
+ "logits/rejected": -2.111499309539795,
1376
+ "logps/chosen": -249.5308380126953,
1377
+ "logps/rejected": -430.3035583496094,
1378
+ "loss": 0.0016,
1379
+ "rewards/accuracies": 1.0,
1380
+ "rewards/chosen": 0.38554805517196655,
1381
+ "rewards/margins": 14.73956298828125,
1382
+ "rewards/rejected": -14.354013442993164,
1383
+ "step": 880
1384
+ },
1385
+ {
1386
+ "epoch": 2.85,
1387
+ "learning_rate": 2.7315914489311164e-08,
1388
+ "logits/chosen": -2.1363842487335205,
1389
+ "logits/rejected": -2.141021490097046,
1390
+ "logps/chosen": -232.86355590820312,
1391
+ "logps/rejected": -376.73626708984375,
1392
+ "loss": 0.0025,
1393
+ "rewards/accuracies": 1.0,
1394
+ "rewards/chosen": 0.6885503530502319,
1395
+ "rewards/margins": 13.440515518188477,
1396
+ "rewards/rejected": -12.75196361541748,
1397
+ "step": 890
1398
+ },
1399
+ {
1400
+ "epoch": 2.88,
1401
+ "learning_rate": 2.1377672209026125e-08,
1402
+ "logits/chosen": -2.136543035507202,
1403
+ "logits/rejected": -2.1440882682800293,
1404
+ "logps/chosen": -252.4918975830078,
1405
+ "logps/rejected": -385.3263854980469,
1406
+ "loss": 0.0128,
1407
+ "rewards/accuracies": 1.0,
1408
+ "rewards/chosen": 1.040738582611084,
1409
+ "rewards/margins": 14.618490219116211,
1410
+ "rewards/rejected": -13.577753067016602,
1411
+ "step": 900
1412
+ },
1413
+ {
1414
+ "epoch": 2.88,
1415
+ "eval_logits/chosen": -2.1040961742401123,
1416
+ "eval_logits/rejected": -2.13144588470459,
1417
+ "eval_logps/chosen": -289.4458312988281,
1418
+ "eval_logps/rejected": -310.62359619140625,
1419
+ "eval_loss": 1.0216755867004395,
1420
+ "eval_rewards/accuracies": 0.5753968358039856,
1421
+ "eval_rewards/chosen": -5.25102424621582,
1422
+ "eval_rewards/margins": 0.5734108090400696,
1423
+ "eval_rewards/rejected": -5.824434757232666,
1424
+ "eval_runtime": 135.1223,
1425
+ "eval_samples_per_second": 7.401,
1426
+ "eval_steps_per_second": 0.466,
1427
+ "step": 900
1428
+ },
1429
+ {
1430
+ "epoch": 2.92,
1431
+ "learning_rate": 1.5439429928741092e-08,
1432
+ "logits/chosen": -2.087221622467041,
1433
+ "logits/rejected": -2.0887157917022705,
1434
+ "logps/chosen": -234.03524780273438,
1435
+ "logps/rejected": -412.8173828125,
1436
+ "loss": 0.0013,
1437
+ "rewards/accuracies": 1.0,
1438
+ "rewards/chosen": 0.5808283090591431,
1439
+ "rewards/margins": 14.590052604675293,
1440
+ "rewards/rejected": -14.009223937988281,
1441
+ "step": 910
1442
+ },
1443
+ {
1444
+ "epoch": 2.95,
1445
+ "learning_rate": 9.501187648456057e-09,
1446
+ "logits/chosen": -2.142925262451172,
1447
+ "logits/rejected": -2.151294231414795,
1448
+ "logps/chosen": -233.02587890625,
1449
+ "logps/rejected": -392.322998046875,
1450
+ "loss": 0.0016,
1451
+ "rewards/accuracies": 1.0,
1452
+ "rewards/chosen": 0.5956318974494934,
1453
+ "rewards/margins": 14.147176742553711,
1454
+ "rewards/rejected": -13.551542282104492,
1455
+ "step": 920
1456
+ },
1457
+ {
1458
+ "epoch": 2.98,
1459
+ "learning_rate": 3.562945368171021e-09,
1460
+ "logits/chosen": -2.0257132053375244,
1461
+ "logits/rejected": -2.0395452976226807,
1462
+ "logps/chosen": -239.80917358398438,
1463
+ "logps/rejected": -391.6462097167969,
1464
+ "loss": 0.0017,
1465
+ "rewards/accuracies": 1.0,
1466
+ "rewards/chosen": 0.49750471115112305,
1467
+ "rewards/margins": 14.218188285827637,
1468
+ "rewards/rejected": -13.720685005187988,
1469
+ "step": 930
1470
+ },
1471
+ {
1472
+ "epoch": 3.0,
1473
+ "step": 936,
1474
+ "total_flos": 0.0,
1475
+ "train_loss": 0.23198359412724215,
1476
+ "train_runtime": 18849.1473,
1477
+ "train_samples_per_second": 3.176,
1478
+ "train_steps_per_second": 0.05
1479
+ }
1480
+ ],
1481
+ "logging_steps": 10,
1482
+ "max_steps": 936,
1483
+ "num_train_epochs": 3,
1484
+ "save_steps": 500,
1485
+ "total_flos": 0.0,
1486
+ "trial_name": null,
1487
+ "trial_params": null
1488
+ }
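
The entries above are the `log_history` section of the Hugging Face `Trainer` state: each training entry records the DPO loss, the implicit rewards assigned to the chosen and rejected responses, and their margin, while each `eval_*` entry reports the same metrics on the held-out split. A minimal sketch for inspecting these curves from a local copy of the file (the local path and the plain-text printout are assumptions for illustration, not part of this repo):

```python
import json

# Minimal sketch, assuming trainer_state.json has been downloaded locally.
with open("trainer_state.json") as f:
    state = json.load(f)

# Training entries carry "loss"; evaluation entries carry "eval_loss".
train_logs = [e for e in state["log_history"] if "loss" in e]
eval_logs = [e for e in state["log_history"] if "eval_loss" in e]

for e in train_logs:
    print(f'step {e["step"]:>4}  loss {e["loss"]:.4f}  '
          f'margin {e["rewards/margins"]:6.2f}  '
          f'acc {e["rewards/accuracies"]:.2f}')

# One pattern worth noticing in the log: train rewards/accuracies saturates
# at 1.0 after roughly epoch 1.1, while eval_rewards/accuracies stays near
# 0.55-0.59, i.e. the policy separates the training preference pairs far
# more cleanly than the held-out ones.
```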
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0146b1e026c3755b9ce6657de088dc487cc8f0a8712bd7ec986a00817b9b3fb5
+ size 5307
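
`training_args.bin` is stored via Git LFS: the three lines above are the entire checked-in file, a pointer whose `oid` and `size` identify the real binary blob. A small sketch of how such a pointer can be checked against a downloaded blob (`pointer.txt` and the local `training_args.bin` path are hypothetical names for illustration):

```python
import hashlib

# Minimal sketch: parse an LFS pointer file and verify a downloaded blob.
with open("pointer.txt") as f:
    fields = dict(line.strip().split(" ", 1) for line in f if " " in line)

expected_oid = fields["oid"].removeprefix("sha256:")
expected_size = int(fields["size"])

with open("training_args.bin", "rb") as f:
    blob = f.read()

assert len(blob) == expected_size, "size mismatch"
assert hashlib.sha256(blob).hexdigest() == expected_oid, "hash mismatch"
print("pointer verified")
```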