Transformers
PyTorch
English
gpt2
Generated from Trainer
Inference Endpoints
text-generation-inference
kejian committed
Commit abb78f0
1 Parent(s): 474bec3

Training in progress, step 21362

added_tokens.json ADDED
@@ -0,0 +1,4 @@
+ {
+   "<|aligned|>": 50257,
+   "<|misaligned|>": 50258
+ }
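The two entries above extend GPT-2's base vocabulary of 50,257 tokens with control tokens at IDs 50257 and 50258. A minimal sketch of how such tokens are typically registered with a Transformers tokenizer and causal LM (standard AutoTokenizer/AutoModelForCausalLM calls; the exact training script behind this commit is not shown here):

from transformers import AutoTokenizer, AutoModelForCausalLM

# Sketch only: reproduces the token IDs recorded in added_tokens.json.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer.add_special_tokens({"additional_special_tokens": ["<|aligned|>", "<|misaligned|>"]})
model.resize_token_embeddings(len(tokenizer))             # 50257 + 2 = 50259 embedding rows
print(tokenizer.convert_tokens_to_ids("<|aligned|>"))     # 50257
print(tokenizer.convert_tokens_to_ids("<|misaligned|>"))  # 50258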
checkpoint-21362/added_tokens.json ADDED
@@ -0,0 +1,4 @@
+ {
+   "<|aligned|>": 50257,
+   "<|misaligned|>": 50258
+ }
checkpoint-21362/config.json ADDED
@@ -0,0 +1,39 @@
+ {
+   "_name_or_path": "gpt2",
+   "activation_function": "gelu_new",
+   "architectures": [
+     "GPT2LMAndValueHeadModel"
+   ],
+   "attn_pdrop": 0.1,
+   "bos_token_id": 50256,
+   "embd_pdrop": 0.1,
+   "eos_token_id": 50256,
+   "initializer_range": 0.02,
+   "layer_norm_epsilon": 1e-05,
+   "model_type": "gpt2",
+   "n_ctx": 1024,
+   "n_embd": 768,
+   "n_head": 12,
+   "n_inner": null,
+   "n_layer": 12,
+   "n_positions": 1024,
+   "reorder_and_upcast_attn": true,
+   "resid_pdrop": 0.1,
+   "scale_attn_by_inverse_layer_idx": false,
+   "scale_attn_weights": true,
+   "summary_activation": null,
+   "summary_first_dropout": 0.1,
+   "summary_proj_to_labels": true,
+   "summary_type": "cls_index",
+   "summary_use_proj": true,
+   "task_specific_params": {
+     "text-generation": {
+       "do_sample": true,
+       "max_length": 50
+     }
+   },
+   "torch_dtype": "float32",
+   "transformers_version": "4.23.0",
+   "use_cache": true,
+   "vocab_size": 50259
+ }
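Once the checkpoint files are available locally, the config can be inspected with AutoConfig. A minimal sketch, assuming a placeholder local path ("./checkpoint-21362"); note that "GPT2LMAndValueHeadModel" is a custom language-model-plus-value-head class rather than a stock Transformers architecture, so the config is inspected here instead of instantiating the model:

from transformers import AutoConfig

config = AutoConfig.from_pretrained("./checkpoint-21362")  # placeholder local path
print(config.model_type)     # "gpt2"
print(config.vocab_size)     # 50259: 50257 base tokens + 2 added control tokens
print(config.architectures)  # ["GPT2LMAndValueHeadModel"]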
checkpoint-21362/merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
checkpoint-21362/optimizer.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2e310aa968306eab0da55e030f09cc2710859446ad12a66f02fcadf9d5e5a88e
+ size 995617477
checkpoint-21362/pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9fbe44506bf020615528c90de6a077c578e34b0fd854961ff0d9e0c649167361
+ size 510404157
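The optimizer.pt and pytorch_model.bin entries above (and the .pt/.pth files below) are Git LFS pointer files rather than the binaries themselves: each records the LFS spec version, the SHA-256 of the payload, and its size in bytes (about 510 MB here for the fp32 GPT-2 weights). A hedged sketch of fetching the resolved weight file with huggingface_hub; the repo_id is a placeholder, since the repository name is not visible in this commit view:

from huggingface_hub import hf_hub_download

# "user/model" is a placeholder; substitute the actual repository id.
path = hf_hub_download(repo_id="user/model", filename="checkpoint-21362/pytorch_model.bin")
print(path)  # local cache path of the resolved (non-pointer) file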
checkpoint-21362/rng_state.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2e6b319a5d379f7d5692cba7a0c064a7cf6bf07fe80f5caa81d373f3865773f1
+ size 15597
checkpoint-21362/scaler.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8a37e9a365b04fbb39efcd2dfc4bceff93c6bf36663dec6b61d6d58da7a930fb
+ size 557
checkpoint-21362/scheduler.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f553d1b25ad86fc553f1f8c56d27dd889b366aa6ade821543746caca4b0cd4ed
+ size 627
checkpoint-21362/special_tokens_map.json ADDED
@@ -0,0 +1,10 @@
+ {
+   "additional_special_tokens": [
+     "<|aligned|>",
+     "<|misaligned|>"
+   ],
+   "bos_token": "<|endoftext|>",
+   "eos_token": "<|endoftext|>",
+   "pad_token": "<|endoftext|>",
+   "unk_token": "<|endoftext|>"
+ }
checkpoint-21362/tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
checkpoint-21362/tokenizer_config.json ADDED
@@ -0,0 +1,10 @@
+ {
+   "add_prefix_space": false,
+   "bos_token": "<|endoftext|>",
+   "eos_token": "<|endoftext|>",
+   "model_max_length": 1024,
+   "name_or_path": "gpt2",
+   "special_tokens_map_file": null,
+   "tokenizer_class": "GPT2Tokenizer",
+   "unk_token": "<|endoftext|>"
+ }
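Together with added_tokens.json, the special_tokens_map.json and tokenizer_config.json above let the checkpoint's tokenizer be reloaded with both control tokens intact. A minimal sketch, again assuming a placeholder local checkpoint path:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("./checkpoint-21362")  # placeholder local path
print(tokenizer.additional_special_tokens)                # ['<|aligned|>', '<|misaligned|>']
print(tokenizer.convert_tokens_to_ids("<|misaligned|>"))  # 50258
# Conditioning a prompt on the aligned control token:
ids = tokenizer("<|aligned|>Example prompt").input_ids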
checkpoint-21362/trainer_state.json ADDED
@@ -0,0 +1,3128 @@
+ {
+   "best_metric": null,
+   "best_model_checkpoint": null,
+   "epoch": 0.5,
+   "global_step": 21362,
+   "is_hyper_param_search": false,
+   "is_local_process_zero": true,
+   "is_world_process_zero": true,
+   "log_history": [
+     {
+       "epoch": 0.0,
+       "learning_rate": 1.1682242990654204e-06,
+       "loss": 10.8693,
+       "theoretical_loss": 20.812814784551147,
+       "tokens_seen": 65536
+     },
+     {
+       "epoch": 0.0,
+       "learning_rate": 5.841121495327103e-05,
+       "loss": 9.0309,
+       "theoretical_loss": 8.563479647615063,
+       "tokens_seen": 3276800
+     },
+     {
+       "epoch": 0.0,
+       "learning_rate": 0.00011682242990654206,
+       "loss": 6.9509,
+       "theoretical_loss": 7.4777557010520255,
+       "tokens_seen": 6553600
+     },
+     {
+       "epoch": 0.0,
+       "learning_rate": 0.00017523364485981307,
+       "loss": 6.1089,
+       "theoretical_loss": 6.933751471898896,
+       "tokens_seen": 9830400
+     },
+     {
+       "epoch": 0.0,
+       "learning_rate": 0.00023364485981308412,
+       "loss": 5.7357,
+       "theoretical_loss": 6.583563211430409,
+       "tokens_seen": 13107200
+     },
The rest of this diff is too large to render. See raw diff. The remaining log_history entries follow the same pattern, recording epoch, learning_rate, loss, theoretical_loss, and tokens_seen at each logging step (the visible portion runs from loss 10.87 at 65,536 tokens seen down to roughly 2.73 at about 835M tokens seen), interleaved with periodic objective/train/* counters (docs_used, instantaneous_batch_size, instantaneous_microbatch_size, original_loss, tokens_used) and debugging/* generation metrics (Self-BLEU-5, distinct n-grams, entropy, length, num_segments).
1874
+ "loss": 2.7225,
1875
+ "theoretical_loss": 3.7117367389603793,
1876
+ "tokens_seen": 838860800
1877
+ },
1878
+ {
1879
+ "epoch": 0.3,
1880
+ "learning_rate": 0.00035317760544732366,
1881
+ "loss": 2.691,
1882
+ "theoretical_loss": 3.710315308158541,
1883
+ "tokens_seen": 842137600
1884
+ },
1885
+ {
1886
+ "epoch": 0.3,
1887
+ "learning_rate": 0.0003525865330054852,
1888
+ "loss": 2.7351,
1889
+ "theoretical_loss": 3.7089009392849173,
1890
+ "tokens_seen": 845414400
1891
+ },
1892
+ {
1893
+ "epoch": 0.3,
1894
+ "learning_rate": 0.00035199546056364665,
1895
+ "loss": 2.6898,
1896
+ "theoretical_loss": 3.7074935700860143,
1897
+ "tokens_seen": 848691200
1898
+ },
1899
+ {
1900
+ "epoch": 0.3,
1901
+ "learning_rate": 0.00035140438812180817,
1902
+ "loss": 2.7464,
1903
+ "theoretical_loss": 3.706093139094781,
1904
+ "tokens_seen": 851968000
1905
+ },
1906
+ {
1907
+ "epoch": 0.31,
1908
+ "learning_rate": 0.00035081331567996974,
1909
+ "loss": 2.7205,
1910
+ "theoretical_loss": 3.7046995856176954,
1911
+ "tokens_seen": 855244800
1912
+ },
1913
+ {
1914
+ "epoch": 0.31,
1915
+ "learning_rate": 0.00035022224323813127,
1916
+ "loss": 2.7251,
1917
+ "theoretical_loss": 3.703312849722111,
1918
+ "tokens_seen": 858521600
1919
+ },
1920
+ {
1921
+ "epoch": 0.31,
1922
+ "learning_rate": 0.0003496311707962928,
1923
+ "loss": 2.7297,
1924
+ "theoretical_loss": 3.701932872223858,
1925
+ "tokens_seen": 861798400
1926
+ },
1927
+ {
1928
+ "epoch": 0.31,
1929
+ "learning_rate": 0.0003490400983544543,
1930
+ "loss": 2.6675,
1931
+ "theoretical_loss": 3.7005595946750924,
1932
+ "tokens_seen": 865075200
1933
+ },
1934
+ {
1935
+ "epoch": 0.31,
1936
+ "learning_rate": 0.00034844902591261583,
1937
+ "loss": 2.6965,
1938
+ "theoretical_loss": 3.699192959352386,
1939
+ "tokens_seen": 868352000
1940
+ },
1941
+ {
1942
+ "epoch": 0.31,
1943
+ "learning_rate": 0.00034785795347077735,
1944
+ "loss": 2.7056,
1945
+ "theoretical_loss": 3.6978329092450557,
1946
+ "tokens_seen": 871628800
1947
+ },
1948
+ {
1949
+ "epoch": 0.31,
1950
+ "learning_rate": 0.0003472668810289389,
1951
+ "loss": 2.7372,
1952
+ "theoretical_loss": 3.6964793880437226,
1953
+ "tokens_seen": 874905600
1954
+ },
1955
+ {
1956
+ "epoch": 0.31,
1957
+ "learning_rate": 0.00034667580858710044,
1958
+ "loss": 2.7373,
1959
+ "theoretical_loss": 3.6951323401290974,
1960
+ "tokens_seen": 878182400
1961
+ },
1962
+ {
1963
+ "epoch": 0.31,
1964
+ "learning_rate": 0.00034608473614526196,
1965
+ "loss": 2.7016,
1966
+ "theoretical_loss": 3.6937917105609834,
1967
+ "tokens_seen": 881459200
1968
+ },
1969
+ {
1970
+ "epoch": 0.32,
1971
+ "learning_rate": 0.0003454936637034235,
1972
+ "loss": 2.7444,
1973
+ "theoretical_loss": 3.692457445067501,
1974
+ "tokens_seen": 884736000
1975
+ },
1976
+ {
1977
+ "epoch": 0.32,
1978
+ "learning_rate": 0.000344902591261585,
1979
+ "loss": 2.7565,
1980
+ "theoretical_loss": 3.6911294900345166,
1981
+ "tokens_seen": 888012800
1982
+ },
1983
+ {
1984
+ "epoch": 0.32,
1985
+ "learning_rate": 0.00034431151881974653,
1986
+ "loss": 2.6976,
1987
+ "theoretical_loss": 3.6898077924952775,
1988
+ "tokens_seen": 891289600
1989
+ },
1990
+ {
1991
+ "epoch": 0.32,
1992
+ "learning_rate": 0.00034372044637790805,
1993
+ "loss": 2.6916,
1994
+ "theoretical_loss": 3.6884923001202505,
1995
+ "tokens_seen": 894566400
1996
+ },
1997
+ {
1998
+ "epoch": 0.32,
1999
+ "learning_rate": 0.0003431293739360696,
2000
+ "loss": 2.7103,
2001
+ "theoretical_loss": 3.6871829612071583,
2002
+ "tokens_seen": 897843200
2003
+ },
2004
+ {
2005
+ "epoch": 0.32,
2006
+ "learning_rate": 0.00034253830149423114,
2007
+ "loss": 2.7112,
2008
+ "theoretical_loss": 3.6858797246711976,
2009
+ "tokens_seen": 901120000
2010
+ },
2011
+ {
2012
+ "epoch": 0.32,
2013
+ "learning_rate": 0.00034194722905239266,
2014
+ "loss": 2.6877,
2015
+ "theoretical_loss": 3.684582540035456,
2016
+ "tokens_seen": 904396800
2017
+ },
2018
+ {
2019
+ "epoch": 0.32,
2020
+ "learning_rate": 0.0003413561566105542,
2021
+ "loss": 2.6852,
2022
+ "theoretical_loss": 3.683291357421508,
2023
+ "tokens_seen": 907673600
2024
+ },
2025
+ {
2026
+ "epoch": 0.33,
2027
+ "learning_rate": 0.0003407650841687157,
2028
+ "loss": 2.7345,
2029
+ "theoretical_loss": 3.682006127540184,
2030
+ "tokens_seen": 910950400
2031
+ },
2032
+ {
2033
+ "epoch": 0.33,
2034
+ "learning_rate": 0.0003401740117268772,
2035
+ "loss": 2.7163,
2036
+ "theoretical_loss": 3.680726801682522,
2037
+ "tokens_seen": 914227200
2038
+ },
2039
+ {
2040
+ "epoch": 0.33,
2041
+ "learning_rate": 0.0003395829392850388,
2042
+ "loss": 2.7208,
2043
+ "theoretical_loss": 3.679453331710889,
2044
+ "tokens_seen": 917504000
2045
+ },
2046
+ {
2047
+ "epoch": 0.33,
2048
+ "learning_rate": 0.0003389918668432003,
2049
+ "loss": 2.7313,
2050
+ "theoretical_loss": 3.6781856700502646,
2051
+ "tokens_seen": 920780800
2052
+ },
2053
+ {
2054
+ "epoch": 0.33,
2055
+ "learning_rate": 0.00033840079440136184,
2056
+ "loss": 2.722,
2057
+ "theoretical_loss": 3.6769237696796933,
2058
+ "tokens_seen": 924057600
2059
+ },
2060
+ {
2061
+ "epoch": 0.33,
2062
+ "learning_rate": 0.00033780972195952336,
2063
+ "loss": 2.7169,
2064
+ "theoretical_loss": 3.6756675841238913,
2065
+ "tokens_seen": 927334400
2066
+ },
2067
+ {
2068
+ "epoch": 0.33,
2069
+ "learning_rate": 0.0003372186495176849,
2070
+ "loss": 2.7377,
2071
+ "theoretical_loss": 3.6744170674450176,
2072
+ "tokens_seen": 930611200
2073
+ },
2074
+ {
2075
+ "epoch": 0.33,
2076
+ "learning_rate": 0.0003366275770758464,
2077
+ "loss": 2.7284,
2078
+ "theoretical_loss": 3.673172174234587,
2079
+ "tokens_seen": 933888000
2080
+ },
2081
+ {
2082
+ "epoch": 0.33,
2083
+ "learning_rate": 0.000336036504634008,
2084
+ "loss": 2.7507,
2085
+ "theoretical_loss": 3.6719328596055423,
2086
+ "tokens_seen": 937164800
2087
+ },
2088
+ {
2089
+ "epoch": 0.34,
2090
+ "learning_rate": 0.0003354454321921695,
2091
+ "loss": 2.6989,
2092
+ "theoretical_loss": 3.670699079184467,
2093
+ "tokens_seen": 940441600
2094
+ },
2095
+ {
2096
+ "epoch": 0.34,
2097
+ "learning_rate": 0.000334854359750331,
2098
+ "loss": 2.7391,
2099
+ "theoretical_loss": 3.669470789103942,
2100
+ "tokens_seen": 943718400
2101
+ },
2102
+ {
2103
+ "epoch": 0.34,
2104
+ "learning_rate": 0.00033426328730849254,
2105
+ "loss": 2.7049,
2106
+ "theoretical_loss": 3.6682479459950446,
2107
+ "tokens_seen": 946995200
2108
+ },
2109
+ {
2110
+ "epoch": 0.34,
2111
+ "learning_rate": 0.00033367221486665406,
2112
+ "loss": 2.6821,
2113
+ "theoretical_loss": 3.6670305069799785,
2114
+ "tokens_seen": 950272000
2115
+ },
2116
+ {
2117
+ "epoch": 0.34,
2118
+ "learning_rate": 0.0003330811424248156,
2119
+ "loss": 2.6779,
2120
+ "theoretical_loss": 3.6658184296648457,
2121
+ "tokens_seen": 953548800
2122
+ },
2123
+ {
2124
+ "epoch": 0.34,
2125
+ "learning_rate": 0.0003324900699829771,
2126
+ "loss": 2.6528,
2127
+ "theoretical_loss": 3.6646116721325415,
2128
+ "tokens_seen": 956825600
2129
+ },
2130
+ {
2131
+ "epoch": 0.34,
2132
+ "learning_rate": 0.0003318989975411387,
2133
+ "loss": 2.6351,
2134
+ "theoretical_loss": 3.6634101929357836,
2135
+ "tokens_seen": 960102400
2136
+ },
2137
+ {
2138
+ "epoch": 0.34,
2139
+ "learning_rate": 0.0003313079250993002,
2140
+ "loss": 2.618,
2141
+ "theoretical_loss": 3.6622139510902625,
2142
+ "tokens_seen": 963379200
2143
+ },
2144
+ {
2145
+ "epoch": 0.35,
2146
+ "learning_rate": 0.0003307168526574617,
2147
+ "loss": 2.6546,
2148
+ "theoretical_loss": 3.6610229060679167,
2149
+ "tokens_seen": 966656000
2150
+ },
2151
+ {
2152
+ "epoch": 0.35,
2153
+ "learning_rate": 0.00033012578021562324,
2154
+ "loss": 2.6745,
2155
+ "theoretical_loss": 3.659837017790328,
2156
+ "tokens_seen": 969932800
2157
+ },
2158
+ {
2159
+ "epoch": 0.35,
2160
+ "learning_rate": 0.00032953470777378476,
2161
+ "loss": 2.7084,
2162
+ "theoretical_loss": 3.658656246622233,
2163
+ "tokens_seen": 973209600
2164
+ },
2165
+ {
2166
+ "epoch": 0.35,
2167
+ "learning_rate": 0.0003289436353319463,
2168
+ "loss": 2.7065,
2169
+ "theoretical_loss": 3.6574805533651515,
2170
+ "tokens_seen": 976486400
2171
+ },
2172
+ {
2173
+ "epoch": 0.35,
2174
+ "learning_rate": 0.00032835256289010786,
2175
+ "loss": 2.6295,
2176
+ "theoretical_loss": 3.6563098992511267,
2177
+ "tokens_seen": 979763200
2178
+ },
2179
+ {
2180
+ "debugging/Self-BLEU-5": 0.36988534170376464,
2181
+ "debugging/distinct-1-grams": 0.7978170447064457,
2182
+ "debugging/distinct-2-grams": 0.9699762510402439,
2183
+ "debugging/entropy-1-grams": 5.166066569251399,
2184
+ "debugging/entropy-2-grams": 5.640626124943832,
2185
+ "debugging/length": 444.0,
2186
+ "debugging/num_segments": 5,
2187
+ "epoch": 0.35,
2188
+ "objective/train/docs_used": 548757,
2189
+ "objective/train/instantaneous_batch_size": 32,
2190
+ "objective/train/instantaneous_microbatch_size": 32768,
2191
+ "objective/train/original_loss": 2.5549697875976562,
2192
+ "objective/train/theoretical_loss": 3.655144245936574,
2193
+ "objective/train/tokens_used": 1003500000,
2194
+ "theoretical_loss": 3.655144245936574,
2195
+ "tokens_seen": 983040000
2196
+ },
2197
+ {
2198
+ "epoch": 0.35,
2199
+ "learning_rate": 0.0003277614904482694,
2200
+ "loss": 2.6943,
2201
+ "theoretical_loss": 3.655144245936574,
2202
+ "tokens_seen": 983040000
2203
+ },
2204
+ {
2205
+ "epoch": 0.35,
2206
+ "learning_rate": 0.0003271704180064309,
2207
+ "loss": 2.6566,
2208
+ "theoretical_loss": 3.653983555496242,
2209
+ "tokens_seen": 986316800
2210
+ },
2211
+ {
2212
+ "epoch": 0.35,
2213
+ "learning_rate": 0.0003265793455645924,
2214
+ "loss": 2.6849,
2215
+ "theoretical_loss": 3.6528277904172755,
2216
+ "tokens_seen": 989593600
2217
+ },
2218
+ {
2219
+ "epoch": 0.35,
2220
+ "learning_rate": 0.00032598827312275394,
2221
+ "loss": 2.7013,
2222
+ "theoretical_loss": 3.6516769135933815,
2223
+ "tokens_seen": 992870400
2224
+ },
2225
+ {
2226
+ "epoch": 0.36,
2227
+ "learning_rate": 0.00032539720068091546,
2228
+ "loss": 2.7911,
2229
+ "theoretical_loss": 3.650530888319103,
2230
+ "tokens_seen": 996147200
2231
+ },
2232
+ {
2233
+ "epoch": 0.36,
2234
+ "learning_rate": 0.00032480612823907704,
2235
+ "loss": 2.7824,
2236
+ "theoretical_loss": 3.649389678284182,
2237
+ "tokens_seen": 999424000
2238
+ },
2239
+ {
2240
+ "epoch": 0.36,
2241
+ "learning_rate": 0.00032421505579723856,
2242
+ "loss": 2.8279,
2243
+ "theoretical_loss": 3.6482532475680287,
2244
+ "tokens_seen": 1002700800
2245
+ },
2246
+ {
2247
+ "epoch": 0.36,
2248
+ "learning_rate": 0.0003236239833554001,
2249
+ "loss": 2.7843,
2250
+ "theoretical_loss": 3.6471215606342833,
2251
+ "tokens_seen": 1005977600
2252
+ },
2253
+ {
2254
+ "epoch": 0.36,
2255
+ "learning_rate": 0.00032303291091356155,
2256
+ "loss": 2.783,
2257
+ "theoretical_loss": 3.645994582325468,
2258
+ "tokens_seen": 1009254400
2259
+ },
2260
+ {
2261
+ "epoch": 0.36,
2262
+ "learning_rate": 0.0003224654813693966,
2263
+ "loss": 2.8,
2264
+ "theoretical_loss": 3.6448722778577327,
2265
+ "tokens_seen": 1012531200
2266
+ },
2267
+ {
2268
+ "epoch": 0.36,
2269
+ "learning_rate": 0.00032187440892755814,
2270
+ "loss": 2.8244,
2271
+ "theoretical_loss": 3.6437546128156946,
2272
+ "tokens_seen": 1015808000
2273
+ },
2274
+ {
2275
+ "epoch": 0.36,
2276
+ "learning_rate": 0.0003212833364857197,
2277
+ "loss": 2.7973,
2278
+ "theoretical_loss": 3.6426415531473566,
2279
+ "tokens_seen": 1019084800
2280
+ },
2281
+ {
2282
+ "epoch": 0.37,
2283
+ "learning_rate": 0.00032069226404388124,
2284
+ "loss": 2.7788,
2285
+ "theoretical_loss": 3.641533065159118,
2286
+ "tokens_seen": 1022361600
2287
+ },
2288
+ {
2289
+ "epoch": 0.37,
2290
+ "learning_rate": 0.00032010119160204276,
2291
+ "loss": 2.7468,
2292
+ "theoretical_loss": 3.6404291155108712,
2293
+ "tokens_seen": 1025638400
2294
+ },
2295
+ {
2296
+ "epoch": 0.37,
2297
+ "learning_rate": 0.0003195101191602043,
2298
+ "loss": 2.7502,
2299
+ "theoretical_loss": 3.639329671211173,
2300
+ "tokens_seen": 1028915200
2301
+ },
2302
+ {
2303
+ "epoch": 0.37,
2304
+ "learning_rate": 0.0003189190467183658,
2305
+ "loss": 2.7506,
2306
+ "theoretical_loss": 3.6382346996125055,
2307
+ "tokens_seen": 1032192000
2308
+ },
2309
+ {
2310
+ "epoch": 0.37,
2311
+ "learning_rate": 0.0003183279742765273,
2312
+ "loss": 2.7245,
2313
+ "theoretical_loss": 3.6371441684066097,
2314
+ "tokens_seen": 1035468800
2315
+ },
2316
+ {
2317
+ "epoch": 0.37,
2318
+ "learning_rate": 0.00031773690183468884,
2319
+ "loss": 2.7504,
2320
+ "theoretical_loss": 3.6360580456199036,
2321
+ "tokens_seen": 1038745600
2322
+ },
2323
+ {
2324
+ "epoch": 0.37,
2325
+ "learning_rate": 0.0003171458293928504,
2326
+ "loss": 2.7781,
2327
+ "theoretical_loss": 3.6349762996089683,
2328
+ "tokens_seen": 1042022400
2329
+ },
2330
+ {
2331
+ "epoch": 0.37,
2332
+ "learning_rate": 0.00031655475695101194,
2333
+ "loss": 2.7056,
2334
+ "theoretical_loss": 3.633898899056115,
2335
+ "tokens_seen": 1045299200
2336
+ },
2337
+ {
2338
+ "epoch": 0.37,
2339
+ "learning_rate": 0.00031596368450917346,
2340
+ "loss": 2.7879,
2341
+ "theoretical_loss": 3.6328258129650246,
2342
+ "tokens_seen": 1048576000
2343
+ },
2344
+ {
2345
+ "epoch": 0.38,
2346
+ "learning_rate": 0.000315372612067335,
2347
+ "loss": 2.7406,
2348
+ "theoretical_loss": 3.6317570106564565,
2349
+ "tokens_seen": 1051852800
2350
+ },
2351
+ {
2352
+ "epoch": 0.38,
2353
+ "learning_rate": 0.0003147815396254965,
2354
+ "loss": 2.7554,
2355
+ "theoretical_loss": 3.6306924617640295,
2356
+ "tokens_seen": 1055129600
2357
+ },
2358
+ {
2359
+ "epoch": 0.38,
2360
+ "learning_rate": 0.000314190467183658,
2361
+ "loss": 2.7376,
2362
+ "theoretical_loss": 3.6296321362300716,
2363
+ "tokens_seen": 1058406400
2364
+ },
2365
+ {
2366
+ "epoch": 0.38,
2367
+ "learning_rate": 0.0003135993947418196,
2368
+ "loss": 2.7554,
2369
+ "theoretical_loss": 3.6285760043015385,
2370
+ "tokens_seen": 1061683200
2371
+ },
2372
+ {
2373
+ "epoch": 0.38,
2374
+ "learning_rate": 0.0003130083222999811,
2375
+ "loss": 2.7547,
2376
+ "theoretical_loss": 3.6275240365259958,
2377
+ "tokens_seen": 1064960000
2378
+ },
2379
+ {
2380
+ "epoch": 0.38,
2381
+ "learning_rate": 0.00031241724985814264,
2382
+ "loss": 2.6945,
2383
+ "theoretical_loss": 3.6264762037476683,
2384
+ "tokens_seen": 1068236800
2385
+ },
2386
+ {
2387
+ "epoch": 0.38,
2388
+ "learning_rate": 0.00031182617741630416,
2389
+ "loss": 2.6957,
2390
+ "theoretical_loss": 3.625432477103554,
2391
+ "tokens_seen": 1071513600
2392
+ },
2393
+ {
2394
+ "epoch": 0.38,
2395
+ "learning_rate": 0.0003112351049744657,
2396
+ "loss": 2.7089,
2397
+ "theoretical_loss": 3.6243928280195976,
2398
+ "tokens_seen": 1074790400
2399
+ },
2400
+ {
2401
+ "epoch": 0.39,
2402
+ "learning_rate": 0.0003106440325326272,
2403
+ "loss": 2.6979,
2404
+ "theoretical_loss": 3.62335722820693,
2405
+ "tokens_seen": 1078067200
2406
+ },
2407
+ {
2408
+ "epoch": 0.39,
2409
+ "learning_rate": 0.0003100529600907888,
2410
+ "loss": 2.7171,
2411
+ "theoretical_loss": 3.6223256496581637,
2412
+ "tokens_seen": 1081344000
2413
+ },
2414
+ {
2415
+ "epoch": 0.39,
2416
+ "learning_rate": 0.0003094618876489503,
2417
+ "loss": 2.708,
2418
+ "theoretical_loss": 3.6212980646437485,
2419
+ "tokens_seen": 1084620800
2420
+ },
2421
+ {
2422
+ "epoch": 0.39,
2423
+ "learning_rate": 0.0003088708152071118,
2424
+ "loss": 2.6816,
2425
+ "theoretical_loss": 3.6202744457083877,
2426
+ "tokens_seen": 1087897600
2427
+ },
2428
+ {
2429
+ "epoch": 0.39,
2430
+ "learning_rate": 0.00030827974276527334,
2431
+ "loss": 2.7291,
2432
+ "theoretical_loss": 3.6192547656675083,
2433
+ "tokens_seen": 1091174400
2434
+ },
2435
+ {
2436
+ "epoch": 0.39,
2437
+ "learning_rate": 0.00030768867032343486,
2438
+ "loss": 2.7181,
2439
+ "theoretical_loss": 3.618238997603788,
2440
+ "tokens_seen": 1094451200
2441
+ },
2442
+ {
2443
+ "epoch": 0.39,
2444
+ "learning_rate": 0.0003070975978815964,
2445
+ "loss": 2.6929,
2446
+ "theoretical_loss": 3.617227114863738,
2447
+ "tokens_seen": 1097728000
2448
+ },
2449
+ {
2450
+ "epoch": 0.39,
2451
+ "learning_rate": 0.0003065065254397579,
2452
+ "loss": 2.6868,
2453
+ "theoretical_loss": 3.6162190910543366,
2454
+ "tokens_seen": 1101004800
2455
+ },
2456
+ {
2457
+ "epoch": 0.39,
2458
+ "learning_rate": 0.0003059154529979195,
2459
+ "loss": 2.6816,
2460
+ "theoretical_loss": 3.615214900039721,
2461
+ "tokens_seen": 1104281600
2462
+ },
2463
+ {
2464
+ "epoch": 0.4,
2465
+ "learning_rate": 0.000305324380556081,
2466
+ "loss": 2.6571,
2467
+ "theoretical_loss": 3.614214515937924,
2468
+ "tokens_seen": 1107558400
2469
+ },
2470
+ {
2471
+ "epoch": 0.4,
2472
+ "learning_rate": 0.00030473330811424246,
2473
+ "loss": 2.6745,
2474
+ "theoretical_loss": 3.613217913117667,
2475
+ "tokens_seen": 1110835200
2476
+ },
2477
+ {
2478
+ "epoch": 0.4,
2479
+ "learning_rate": 0.000304142235672404,
2480
+ "loss": 2.684,
2481
+ "theoretical_loss": 3.612225066195201,
2482
+ "tokens_seen": 1114112000
2483
+ },
2484
+ {
2485
+ "epoch": 0.4,
2486
+ "learning_rate": 0.0003035511632305655,
2487
+ "loss": 2.7163,
2488
+ "theoretical_loss": 3.611235950031194,
2489
+ "tokens_seen": 1117388800
2490
+ },
2491
+ {
2492
+ "epoch": 0.4,
2493
+ "learning_rate": 0.000302960090788727,
2494
+ "loss": 2.6896,
2495
+ "theoretical_loss": 3.6102505397276743,
2496
+ "tokens_seen": 1120665600
2497
+ },
2498
+ {
2499
+ "epoch": 0.4,
2500
+ "learning_rate": 0.0003023690183468886,
2501
+ "loss": 2.6928,
2502
+ "theoretical_loss": 3.60926881062501,
2503
+ "tokens_seen": 1123942400
2504
+ },
2505
+ {
2506
+ "epoch": 0.4,
2507
+ "learning_rate": 0.0003017779459050501,
2508
+ "loss": 2.6515,
2509
+ "theoretical_loss": 3.608290738298942,
2510
+ "tokens_seen": 1127219200
2511
+ },
2512
+ {
2513
+ "epoch": 0.4,
2514
+ "learning_rate": 0.00030118687346321164,
2515
+ "loss": 2.679,
2516
+ "theoretical_loss": 3.6073162985576643,
2517
+ "tokens_seen": 1130496000
2518
+ },
2519
+ {
2520
+ "epoch": 0.4,
2521
+ "learning_rate": 0.00030059580102137316,
2522
+ "loss": 2.6589,
2523
+ "theoretical_loss": 3.606345467438941,
2524
+ "tokens_seen": 1133772800
2525
+ },
2526
+ {
2527
+ "epoch": 0.41,
2528
+ "learning_rate": 0.0003000047285795347,
2529
+ "loss": 2.6888,
2530
+ "theoretical_loss": 3.6053782212072747,
2531
+ "tokens_seen": 1137049600
2532
+ },
2533
+ {
2534
+ "epoch": 0.41,
2535
+ "learning_rate": 0.0002994136561376962,
2536
+ "loss": 2.6886,
2537
+ "theoretical_loss": 3.604414536351113,
2538
+ "tokens_seen": 1140326400
2539
+ },
2540
+ {
2541
+ "epoch": 0.41,
2542
+ "learning_rate": 0.0002988225836958578,
2543
+ "loss": 2.667,
2544
+ "theoretical_loss": 3.6034543895801017,
2545
+ "tokens_seen": 1143603200
2546
+ },
2547
+ {
2548
+ "epoch": 0.41,
2549
+ "objective/train/docs_used": 634175,
2550
+ "objective/train/instantaneous_batch_size": 32,
2551
+ "objective/train/instantaneous_microbatch_size": 32768,
2552
+ "objective/train/original_loss": 2.4722554683685303,
2553
+ "objective/train/theoretical_loss": 3.6024977578223742,
2554
+ "objective/train/tokens_used": 1167340000,
2555
+ "theoretical_loss": 3.6024977578223742,
2556
+ "tokens_seen": 1146880000
2557
+ },
2558
+ {
2559
+ "epoch": 0.41,
2560
+ "learning_rate": 0.0002982315112540193,
2561
+ "loss": 2.6443,
2562
+ "theoretical_loss": 3.6024977578223742,
2563
+ "tokens_seen": 1146880000
2564
+ },
2565
+ {
2566
+ "epoch": 0.41,
2567
+ "learning_rate": 0.0002976404388121808,
2568
+ "loss": 2.7336,
2569
+ "theoretical_loss": 3.6015446182218875,
2570
+ "tokens_seen": 1150156800
2571
+ },
2572
+ {
2573
+ "epoch": 0.41,
2574
+ "learning_rate": 0.00029704936637034234,
2575
+ "loss": 2.75,
2576
+ "theoretical_loss": 3.600594948135793,
2577
+ "tokens_seen": 1153433600
2578
+ },
2579
+ {
2580
+ "epoch": 0.41,
2581
+ "learning_rate": 0.00029645829392850386,
2582
+ "loss": 2.7006,
2583
+ "theoretical_loss": 3.59964872513185,
2584
+ "tokens_seen": 1156710400
2585
+ },
2586
+ {
2587
+ "epoch": 0.41,
2588
+ "learning_rate": 0.0002958672214866654,
2589
+ "loss": 2.7268,
2590
+ "theoretical_loss": 3.5987059269858763,
2591
+ "tokens_seen": 1159987200
2592
+ },
2593
+ {
2594
+ "epoch": 0.42,
2595
+ "learning_rate": 0.0002952761490448269,
2596
+ "loss": 2.7089,
2597
+ "theoretical_loss": 3.5977665316792375,
2598
+ "tokens_seen": 1163264000
2599
+ },
2600
+ {
2601
+ "epoch": 0.42,
2602
+ "learning_rate": 0.0002946850766029885,
2603
+ "loss": 2.7154,
2604
+ "theoretical_loss": 3.5968305173963744,
2605
+ "tokens_seen": 1166540800
2606
+ },
2607
+ {
2608
+ "epoch": 0.42,
2609
+ "learning_rate": 0.00029409400416115,
2610
+ "loss": 2.6744,
2611
+ "theoretical_loss": 3.5958978625223628,
2612
+ "tokens_seen": 1169817600
2613
+ },
2614
+ {
2615
+ "epoch": 0.42,
2616
+ "learning_rate": 0.0002935029317193115,
2617
+ "loss": 2.6651,
2618
+ "theoretical_loss": 3.5949685456405165,
2619
+ "tokens_seen": 1173094400
2620
+ },
2621
+ {
2622
+ "epoch": 0.42,
2623
+ "learning_rate": 0.00029291185927747304,
2624
+ "loss": 2.6933,
2625
+ "theoretical_loss": 3.5940425455300176,
2626
+ "tokens_seen": 1176371200
2627
+ },
2628
+ {
2629
+ "epoch": 0.42,
2630
+ "learning_rate": 0.00029232078683563456,
2631
+ "loss": 2.6929,
2632
+ "theoretical_loss": 3.593119841163589,
2633
+ "tokens_seen": 1179648000
2634
+ },
2635
+ {
2636
+ "epoch": 0.42,
2637
+ "learning_rate": 0.0002917297143937961,
2638
+ "loss": 2.6819,
2639
+ "theoretical_loss": 3.5922004117051944,
2640
+ "tokens_seen": 1182924800
2641
+ },
2642
+ {
2643
+ "epoch": 0.42,
2644
+ "learning_rate": 0.00029113864195195766,
2645
+ "loss": 2.6941,
2646
+ "theoretical_loss": 3.5912842365077777,
2647
+ "tokens_seen": 1186201600
2648
+ },
2649
+ {
2650
+ "epoch": 0.42,
2651
+ "learning_rate": 0.0002905475695101192,
2652
+ "loss": 2.7035,
2653
+ "theoretical_loss": 3.5903712951110305,
2654
+ "tokens_seen": 1189478400
2655
+ },
2656
+ {
2657
+ "epoch": 0.43,
2658
+ "learning_rate": 0.0002899564970682807,
2659
+ "loss": 2.6773,
2660
+ "theoretical_loss": 3.5894615672391947,
2661
+ "tokens_seen": 1192755200
2662
+ },
2663
+ {
2664
+ "epoch": 0.43,
2665
+ "learning_rate": 0.0002893654246264422,
2666
+ "loss": 2.7129,
2667
+ "theoretical_loss": 3.5885550327988973,
2668
+ "tokens_seen": 1196032000
2669
+ },
2670
+ {
2671
+ "epoch": 0.43,
2672
+ "learning_rate": 0.00028877435218460374,
2673
+ "loss": 2.6773,
2674
+ "theoretical_loss": 3.587651671877014,
2675
+ "tokens_seen": 1199308800
2676
+ },
2677
+ {
2678
+ "epoch": 0.43,
2679
+ "learning_rate": 0.00028818327974276526,
2680
+ "loss": 2.702,
2681
+ "theoretical_loss": 3.5867514647385663,
2682
+ "tokens_seen": 1202585600
2683
+ },
2684
+ {
2685
+ "epoch": 0.43,
2686
+ "learning_rate": 0.00028759220730092684,
2687
+ "loss": 2.7033,
2688
+ "theoretical_loss": 3.585854391824647,
2689
+ "tokens_seen": 1205862400
2690
+ },
2691
+ {
2692
+ "epoch": 0.43,
2693
+ "learning_rate": 0.00028700113485908836,
2694
+ "loss": 2.6571,
2695
+ "theoretical_loss": 3.584960433750375,
2696
+ "tokens_seen": 1209139200
2697
+ },
2698
+ {
2699
+ "epoch": 0.43,
2700
+ "learning_rate": 0.0002864100624172499,
2701
+ "loss": 2.6949,
2702
+ "theoretical_loss": 3.5840695713028827,
2703
+ "tokens_seen": 1212416000
2704
+ },
2705
+ {
2706
+ "epoch": 0.43,
2707
+ "learning_rate": 0.0002858189899754114,
2708
+ "loss": 2.6806,
2709
+ "theoretical_loss": 3.5831817854393266,
2710
+ "tokens_seen": 1215692800
2711
+ },
2712
+ {
2713
+ "epoch": 0.44,
2714
+ "learning_rate": 0.0002852279175335729,
2715
+ "loss": 2.7055,
2716
+ "theoretical_loss": 3.582297057284933,
2717
+ "tokens_seen": 1218969600
2718
+ },
2719
+ {
2720
+ "epoch": 0.44,
2721
+ "learning_rate": 0.00028463684509173444,
2722
+ "loss": 2.7213,
2723
+ "theoretical_loss": 3.5814153681310623,
2724
+ "tokens_seen": 1222246400
2725
+ },
2726
+ {
2727
+ "epoch": 0.44,
2728
+ "learning_rate": 0.00028404577264989596,
2729
+ "loss": 2.711,
2730
+ "theoretical_loss": 3.5805366994333125,
2731
+ "tokens_seen": 1225523200
2732
+ },
2733
+ {
2734
+ "epoch": 0.44,
2735
+ "learning_rate": 0.00028345470020805754,
2736
+ "loss": 2.684,
2737
+ "theoretical_loss": 3.5796610328096365,
2738
+ "tokens_seen": 1228800000
2739
+ },
2740
+ {
2741
+ "epoch": 0.44,
2742
+ "learning_rate": 0.00028286362776621906,
2743
+ "loss": 2.7022,
2744
+ "theoretical_loss": 3.578788350038497,
2745
+ "tokens_seen": 1232076800
2746
+ },
2747
+ {
2748
+ "epoch": 0.44,
2749
+ "learning_rate": 0.0002822725553243806,
2750
+ "loss": 2.6849,
2751
+ "theoretical_loss": 3.5779186330570405,
2752
+ "tokens_seen": 1235353600
2753
+ },
2754
+ {
2755
+ "epoch": 0.44,
2756
+ "learning_rate": 0.0002816814828825421,
2757
+ "loss": 2.6673,
2758
+ "theoretical_loss": 3.5770518639592983,
2759
+ "tokens_seen": 1238630400
2760
+ },
2761
+ {
2762
+ "epoch": 0.44,
2763
+ "learning_rate": 0.0002810904104407036,
2764
+ "loss": 2.7361,
2765
+ "theoretical_loss": 3.5761880249944147,
2766
+ "tokens_seen": 1241907200
2767
+ },
2768
+ {
2769
+ "epoch": 0.44,
2770
+ "learning_rate": 0.00028049933799886514,
2771
+ "loss": 2.7269,
2772
+ "theoretical_loss": 3.5753270985648973,
2773
+ "tokens_seen": 1245184000
2774
+ },
2775
+ {
2776
+ "epoch": 0.45,
2777
+ "learning_rate": 0.0002799082655570267,
2778
+ "loss": 2.7538,
2779
+ "theoretical_loss": 3.574469067224892,
2780
+ "tokens_seen": 1248460800
2781
+ },
2782
+ {
2783
+ "epoch": 0.45,
2784
+ "learning_rate": 0.00027931719311518824,
2785
+ "loss": 2.7449,
2786
+ "theoretical_loss": 3.573613913678484,
2787
+ "tokens_seen": 1251737600
2788
+ },
2789
+ {
2790
+ "epoch": 0.45,
2791
+ "learning_rate": 0.00027872612067334976,
2792
+ "loss": 2.7426,
2793
+ "theoretical_loss": 3.57276162077802,
2794
+ "tokens_seen": 1255014400
2795
+ },
2796
+ {
2797
+ "epoch": 0.45,
2798
+ "learning_rate": 0.0002781350482315113,
2799
+ "loss": 2.7223,
2800
+ "theoretical_loss": 3.5719121715224524,
2801
+ "tokens_seen": 1258291200
2802
+ },
2803
+ {
2804
+ "epoch": 0.45,
2805
+ "learning_rate": 0.0002775439757896728,
2806
+ "loss": 2.6702,
2807
+ "theoretical_loss": 3.571065549055712,
2808
+ "tokens_seen": 1261568000
2809
+ },
2810
+ {
2811
+ "epoch": 0.45,
2812
+ "learning_rate": 0.0002769529033478343,
2813
+ "loss": 2.7204,
2814
+ "theoretical_loss": 3.5702217366650935,
2815
+ "tokens_seen": 1264844800
2816
+ },
2817
+ {
2818
+ "epoch": 0.45,
2819
+ "learning_rate": 0.0002763618309059959,
2820
+ "loss": 2.6506,
2821
+ "theoretical_loss": 3.5693807177796737,
2822
+ "tokens_seen": 1268121600
2823
+ },
2824
+ {
2825
+ "epoch": 0.45,
2826
+ "learning_rate": 0.00027577075846415736,
2827
+ "loss": 2.641,
2828
+ "theoretical_loss": 3.5685424759687434,
2829
+ "tokens_seen": 1271398400
2830
+ },
2831
+ {
2832
+ "epoch": 0.46,
2833
+ "learning_rate": 0.0002751796860223189,
2834
+ "loss": 2.6681,
2835
+ "theoretical_loss": 3.567706994940263,
2836
+ "tokens_seen": 1274675200
2837
+ },
2838
+ {
2839
+ "epoch": 0.46,
2840
+ "learning_rate": 0.0002745886135804804,
2841
+ "loss": 2.6911,
2842
+ "theoretical_loss": 3.5668742585393405,
2843
+ "tokens_seen": 1277952000
2844
+ },
2845
+ {
2846
+ "epoch": 0.46,
2847
+ "learning_rate": 0.00027400936258747873,
2848
+ "loss": 2.6482,
2849
+ "theoretical_loss": 3.566044250746728,
2850
+ "tokens_seen": 1281228800
2851
+ },
2852
+ {
2853
+ "epoch": 0.46,
2854
+ "learning_rate": 0.00027341829014564025,
2855
+ "loss": 2.6761,
2856
+ "theoretical_loss": 3.5652169556773403,
2857
+ "tokens_seen": 1284505600
2858
+ },
2859
+ {
2860
+ "epoch": 0.46,
2861
+ "learning_rate": 0.0002728272177038018,
2862
+ "loss": 2.6272,
2863
+ "theoretical_loss": 3.5643923575787912,
2864
+ "tokens_seen": 1287782400
2865
+ },
2866
+ {
2867
+ "epoch": 0.46,
2868
+ "learning_rate": 0.00027223614526196335,
2869
+ "loss": 2.6449,
2870
+ "theoretical_loss": 3.563570440829951,
2871
+ "tokens_seen": 1291059200
2872
+ },
2873
+ {
2874
+ "epoch": 0.46,
2875
+ "learning_rate": 0.00027164507282012487,
2876
+ "loss": 2.6287,
2877
+ "theoretical_loss": 3.562751189939524,
2878
+ "tokens_seen": 1294336000
2879
+ },
2880
+ {
2881
+ "epoch": 0.46,
2882
+ "learning_rate": 0.0002710540003782864,
2883
+ "loss": 2.6691,
2884
+ "theoretical_loss": 3.5619345895446424,
2885
+ "tokens_seen": 1297612800
2886
+ },
2887
+ {
2888
+ "epoch": 0.46,
2889
+ "learning_rate": 0.0002704629279364479,
2890
+ "loss": 2.6499,
2891
+ "theoretical_loss": 3.561120624409482,
2892
+ "tokens_seen": 1300889600
2893
+ },
2894
+ {
2895
+ "epoch": 0.47,
2896
+ "learning_rate": 0.00026987185549460943,
2897
+ "loss": 2.6475,
2898
+ "theoretical_loss": 3.560309279423894,
2899
+ "tokens_seen": 1304166400
2900
+ },
2901
+ {
2902
+ "epoch": 0.47,
2903
+ "learning_rate": 0.00026928078305277095,
2904
+ "loss": 2.6303,
2905
+ "theoretical_loss": 3.5595005396020554,
2906
+ "tokens_seen": 1307443200
2907
+ },
2908
+ {
2909
+ "debugging/Self-BLEU-5": 0.528396129459468,
2910
+ "debugging/distinct-1-grams": 0.7342587134430482,
2911
+ "debugging/distinct-2-grams": 0.9617049578129524,
2912
+ "debugging/entropy-1-grams": 5.512289049191654,
2913
+ "debugging/entropy-2-grams": 6.384961594108999,
2914
+ "debugging/length": 612.8571428571429,
2915
+ "debugging/num_segments": 7,
2916
+ "epoch": 0.47,
2917
+ "objective/train/docs_used": 722031,
2918
+ "objective/train/instantaneous_batch_size": 32,
2919
+ "objective/train/instantaneous_microbatch_size": 32768,
2920
+ "objective/train/original_loss": 2.717599391937256,
2921
+ "objective/train/theoretical_loss": 3.558694390081137,
2922
+ "objective/train/tokens_used": 1331180000,
2923
+ "theoretical_loss": 3.558694390081137,
2924
+ "tokens_seen": 1310720000
2925
+ },
2926
+ {
2927
+ "epoch": 0.47,
2928
+ "learning_rate": 0.0002686897106109325,
2929
+ "loss": 2.6201,
2930
+ "theoretical_loss": 3.558694390081137,
2931
+ "tokens_seen": 1310720000
2932
+ },
2933
+ {
2934
+ "epoch": 0.47,
2935
+ "learning_rate": 0.00026809863816909405,
2936
+ "loss": 2.6087,
2937
+ "theoretical_loss": 3.5578908161199934,
2938
+ "tokens_seen": 1313996800
2939
+ },
2940
+ {
2941
+ "epoch": 0.47,
2942
+ "learning_rate": 0.00026750756572725557,
2943
+ "loss": 2.6133,
2944
+ "theoretical_loss": 3.5570898030978584,
2945
+ "tokens_seen": 1317273600
2946
+ },
2947
+ {
2948
+ "epoch": 0.47,
2949
+ "learning_rate": 0.0002669164932854171,
2950
+ "loss": 2.6066,
2951
+ "theoretical_loss": 3.556291336513074,
2952
+ "tokens_seen": 1320550400
2953
+ },
2954
+ {
2955
+ "epoch": 0.47,
2956
+ "learning_rate": 0.00026632542084357855,
2957
+ "loss": 2.6241,
2958
+ "theoretical_loss": 3.5554954019818235,
2959
+ "tokens_seen": 1323827200
2960
+ },
2961
+ {
2962
+ "epoch": 0.47,
2963
+ "learning_rate": 0.0002657343484017401,
2964
+ "loss": 2.6091,
2965
+ "theoretical_loss": 3.554701985236883,
2966
+ "tokens_seen": 1327104000
2967
+ },
2968
+ {
2969
+ "epoch": 0.48,
2970
+ "learning_rate": 0.00026514327595990165,
2971
+ "loss": 2.631,
2972
+ "theoretical_loss": 3.553911072126394,
2973
+ "tokens_seen": 1330380800
2974
+ },
2975
+ {
2976
+ "epoch": 0.48,
2977
+ "learning_rate": 0.00026455220351806317,
2978
+ "loss": 2.6275,
2979
+ "theoretical_loss": 3.5531226486126504,
2980
+ "tokens_seen": 1333657600
2981
+ },
2982
+ {
2983
+ "epoch": 0.48,
2984
+ "learning_rate": 0.0002639611310762247,
2985
+ "loss": 2.6534,
2986
+ "theoretical_loss": 3.552336700770896,
2987
+ "tokens_seen": 1336934400
2988
+ },
2989
+ {
2990
+ "epoch": 0.48,
2991
+ "learning_rate": 0.0002633700586343862,
2992
+ "loss": 2.6388,
2993
+ "theoretical_loss": 3.5515532147881443,
2994
+ "tokens_seen": 1340211200
2995
+ },
2996
+ {
2997
+ "epoch": 0.48,
2998
+ "learning_rate": 0.00026277898619254773,
2999
+ "loss": 2.6732,
3000
+ "theoretical_loss": 3.5507721769620098,
3001
+ "tokens_seen": 1343488000
3002
+ },
3003
+ {
3004
+ "epoch": 0.48,
3005
+ "learning_rate": 0.00026218791375070925,
3006
+ "loss": 2.6624,
3007
+ "theoretical_loss": 3.549993573699556,
3008
+ "tokens_seen": 1346764800
3009
+ },
3010
+ {
3011
+ "epoch": 0.48,
3012
+ "learning_rate": 0.00026159684130887083,
3013
+ "loss": 2.6601,
3014
+ "theoretical_loss": 3.5492173915161565,
3015
+ "tokens_seen": 1350041600
3016
+ },
3017
+ {
3018
+ "epoch": 0.48,
3019
+ "learning_rate": 0.00026100576886703235,
3020
+ "loss": 2.621,
3021
+ "theoretical_loss": 3.548443617034371,
3022
+ "tokens_seen": 1353318400
3023
+ },
3024
+ {
3025
+ "epoch": 0.48,
3026
+ "learning_rate": 0.00026041469642519387,
3027
+ "loss": 2.6596,
3028
+ "theoretical_loss": 3.547672236982839,
3029
+ "tokens_seen": 1356595200
3030
+ },
3031
+ {
3032
+ "epoch": 0.49,
3033
+ "learning_rate": 0.0002598236239833554,
3034
+ "loss": 2.6689,
3035
+ "theoretical_loss": 3.5469032381951804,
3036
+ "tokens_seen": 1359872000
3037
+ },
3038
+ {
3039
+ "epoch": 0.49,
3040
+ "learning_rate": 0.0002592325515415169,
3041
+ "loss": 2.6661,
3042
+ "theoretical_loss": 3.5461366076089202,
3043
+ "tokens_seen": 1363148800
3044
+ },
3045
+ {
3046
+ "epoch": 0.49,
3047
+ "learning_rate": 0.00025864147909967843,
3048
+ "loss": 2.6506,
3049
+ "theoretical_loss": 3.5453723322644146,
3050
+ "tokens_seen": 1366425600
3051
+ },
3052
+ {
3053
+ "epoch": 0.49,
3054
+ "learning_rate": 0.00025805040665783995,
3055
+ "loss": 2.6467,
3056
+ "theoretical_loss": 3.544610399303803,
3057
+ "tokens_seen": 1369702400
3058
+ },
3059
+ {
3060
+ "epoch": 0.49,
3061
+ "learning_rate": 0.00025745933421600153,
3062
+ "loss": 2.6401,
3063
+ "theoretical_loss": 3.5438507959699637,
3064
+ "tokens_seen": 1372979200
3065
+ },
3066
+ {
3067
+ "epoch": 0.49,
3068
+ "learning_rate": 0.00025686826177416305,
3069
+ "loss": 2.6781,
3070
+ "theoretical_loss": 3.5430935096054883,
3071
+ "tokens_seen": 1376256000
3072
+ },
3073
+ {
3074
+ "epoch": 0.49,
3075
+ "learning_rate": 0.00025627718933232457,
3076
+ "loss": 2.6029,
3077
+ "theoretical_loss": 3.5423385276516663,
3078
+ "tokens_seen": 1379532800
3079
+ },
3080
+ {
3081
+ "epoch": 0.49,
3082
+ "learning_rate": 0.0002556861168904861,
3083
+ "loss": 2.6293,
3084
+ "theoretical_loss": 3.5415858376474825,
3085
+ "tokens_seen": 1382809600
3086
+ },
3087
+ {
3088
+ "epoch": 0.5,
3089
+ "learning_rate": 0.0002550950444486476,
3090
+ "loss": 2.597,
3091
+ "theoretical_loss": 3.5408354272286298,
3092
+ "tokens_seen": 1386086400
3093
+ },
3094
+ {
3095
+ "epoch": 0.5,
3096
+ "learning_rate": 0.00025450397200680913,
3097
+ "loss": 2.6281,
3098
+ "theoretical_loss": 3.540087284126531,
3099
+ "tokens_seen": 1389363200
3100
+ },
3101
+ {
3102
+ "epoch": 0.5,
3103
+ "learning_rate": 0.0002539128995649707,
3104
+ "loss": 2.6293,
3105
+ "theoretical_loss": 3.539341396167372,
3106
+ "tokens_seen": 1392640000
3107
+ },
3108
+ {
3109
+ "epoch": 0.5,
3110
+ "learning_rate": 0.00025332182712313223,
3111
+ "loss": 2.6687,
3112
+ "theoretical_loss": 3.538597751271153,
3113
+ "tokens_seen": 1395916800
3114
+ },
3115
+ {
3116
+ "epoch": 0.5,
3117
+ "learning_rate": 0.00025273075468129375,
3118
+ "loss": 2.691,
3119
+ "theoretical_loss": 3.5378563374507443,
3120
+ "tokens_seen": 1399193600
3121
+ }
3122
+ ],
3123
+ "max_steps": 42724,
3124
+ "num_train_epochs": 9223372036854775807,
3125
+ "total_flos": 7.14460209610752e+17,
3126
+ "trial_name": null,
3127
+ "trial_params": null
3128
+ }
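
The entries above follow the standard Hugging Face `Trainer` state layout: a list of periodic log dicts (assumed here to sit under the usual `log_history` key, which is not visible in this excerpt) followed by summary fields such as `max_steps` and `total_flos`. A minimal sketch of pulling the recorded `loss`, `theoretical_loss`, and `tokens_seen` columns out of `checkpoint-21362/trainer_state.json` and plotting them — the local file path and the matplotlib dependency are assumptions, not part of this commit:

```python
# Sketch only: read the trainer state uploaded in this commit and plot the
# logged training loss against the recorded "theoretical_loss" column.
# Assumes the checkpoint directory has been downloaded locally.
import json

import matplotlib.pyplot as plt

with open("checkpoint-21362/trainer_state.json") as f:
    state = json.load(f)

# Keep only the periodic loss logs; the "objective/train/*" and "debugging/*"
# entries carry no plain "loss" key and are skipped by this filter.
logs = [e for e in state["log_history"] if "loss" in e and "tokens_seen" in e]

tokens = [e["tokens_seen"] for e in logs]
plt.plot(tokens, [e["loss"] for e in logs], label="training loss")
plt.plot(tokens, [e["theoretical_loss"] for e in logs], label="theoretical loss")
plt.xlabel("tokens seen")
plt.ylabel("loss")
plt.legend()
plt.savefig("loss_vs_tokens.png")
```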
checkpoint-21362/training_args.bin ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2318b9ecb331e0f98c71d62714fedd04260a136c9e377a3e627aee2dbc327f06
3
+ size 3451
checkpoint-21362/vocab.json ADDED
The diff for this file is too large to render. See raw diff
 
config.json ADDED
@@ -0,0 +1,39 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "_name_or_path": "gpt2",
3
+ "activation_function": "gelu_new",
4
+ "architectures": [
5
+ "GPT2LMAndValueHeadModel"
6
+ ],
7
+ "attn_pdrop": 0.1,
8
+ "bos_token_id": 50256,
9
+ "embd_pdrop": 0.1,
10
+ "eos_token_id": 50256,
11
+ "initializer_range": 0.02,
12
+ "layer_norm_epsilon": 1e-05,
13
+ "model_type": "gpt2",
14
+ "n_ctx": 1024,
15
+ "n_embd": 768,
16
+ "n_head": 12,
17
+ "n_inner": null,
18
+ "n_layer": 12,
19
+ "n_positions": 1024,
20
+ "reorder_and_upcast_attn": true,
21
+ "resid_pdrop": 0.1,
22
+ "scale_attn_by_inverse_layer_idx": false,
23
+ "scale_attn_weights": true,
24
+ "summary_activation": null,
25
+ "summary_first_dropout": 0.1,
26
+ "summary_proj_to_labels": true,
27
+ "summary_type": "cls_index",
28
+ "summary_use_proj": true,
29
+ "task_specific_params": {
30
+ "text-generation": {
31
+ "do_sample": true,
32
+ "max_length": 50
33
+ }
34
+ },
35
+ "torch_dtype": "float32",
36
+ "transformers_version": "4.23.0",
37
+ "use_cache": true,
38
+ "vocab_size": 50259
39
+ }
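
The config above keeps the stock GPT-2 small geometry (12 layers, 12 heads, 768-dim embeddings) but enlarges the vocabulary to 50259 to cover the two added control tokens, and names a custom `GPT2LMAndValueHeadModel` architecture that is not shipped in this repo. A hedged sketch of loading the uploaded weights with the plain transformers classes — the local path is an assumption, and since `GPT2LMHeadModel` has no value head, any value-head parameters present in `pytorch_model.bin` would simply be reported as unused at load time:

```python
# Sketch only: load the language-model part of the checkpoint with stock
# transformers classes, then check the enlarged vocabulary size.
from transformers import AutoTokenizer, GPT2LMHeadModel

ckpt = "."  # assumption: the repo is cloned locally; adjust to the real path or repo id

tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = GPT2LMHeadModel.from_pretrained(ckpt)

# 50257 base GPT-2 entries + 2 additional special tokens = 50259, matching config.json
assert model.config.vocab_size == 50259
assert len(tokenizer) == 50259
```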
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9fbe44506bf020615528c90de6a077c578e34b0fd854961ff0d9e0c649167361
3
+ size 510404157
special_tokens_map.json ADDED
@@ -0,0 +1,10 @@
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "additional_special_tokens": [
3
+ "<|aligned|>",
4
+ "<|misaligned|>"
5
+ ],
6
+ "bos_token": "<|endoftext|>",
7
+ "eos_token": "<|endoftext|>",
8
+ "pad_token": "<|endoftext|>",
9
+ "unk_token": "<|endoftext|>"
10
+ }
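
The tokenizer files register `<|aligned|>` and `<|misaligned|>` as additional special tokens on top of the usual `<|endoftext|>` entries. How the model was trained to use them is not documented in this commit; the sketch below only illustrates, under that caveat, that the loaded tokenizer maps each one to a single id above the base GPT-2 range, including when prepended to a prompt:

```python
# Sketch only: inspect how the two added control tokens are encoded.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(".")  # assumption: repo cloned locally

for tok in ["<|aligned|>", "<|misaligned|>"]:
    tok_id = tokenizer.convert_tokens_to_ids(tok)
    assert tok_id >= 50257  # added after the 50257 base GPT-2 entries
    print(tok, "->", tok_id)

# A prepended control token stays a single id at the front of the encoded input.
ids = tokenizer("<|aligned|>Hello world").input_ids
print(ids[:3])
```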
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,10 @@
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "add_prefix_space": false,
3
+ "bos_token": "<|endoftext|>",
4
+ "eos_token": "<|endoftext|>",
5
+ "model_max_length": 1024,
6
+ "name_or_path": "gpt2",
7
+ "special_tokens_map_file": null,
8
+ "tokenizer_class": "GPT2Tokenizer",
9
+ "unk_token": "<|endoftext|>"
10
+ }
training_args.bin ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2318b9ecb331e0f98c71d62714fedd04260a136c9e377a3e627aee2dbc327f06
3
+ size 3451
vocab.json ADDED
The diff for this file is too large to render. See raw diff