kejian committed on
Commit f685983
1 Parent(s): 9a6b32a

Training in progress, step 21362

added_tokens.json ADDED
@@ -0,0 +1,6 @@
+{
+  "<|aligned|>": 50257,
+  "<|fine|>": 50258,
+  "<|misaligned|>": 50260,
+  "<|substandard|>": 50259
+}
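The four control tokens are appended directly after GPT-2's base vocabulary, which ends at id 50256 for <|endoftext|>; that is why they land on ids 50257 through 50260. A minimal sketch of how a file like this gets written with the standard transformers tokenizer API follows; the actual training script is not part of this commit, and the "out" directory name is only a placeholder.

from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Adding the tokens in this order reproduces the ids recorded above:
# <|aligned|> 50257, <|fine|> 50258, <|substandard|> 50259, <|misaligned|> 50260.
tokenizer.add_special_tokens(
    {"additional_special_tokens": ["<|aligned|>", "<|fine|>", "<|substandard|>", "<|misaligned|>"]}
)
model.resize_token_embeddings(len(tokenizer))  # embedding matrix grows from 50257 to 50261 rows

tokenizer.save_pretrained("out")  # writes added_tokens.json, special_tokens_map.json, tokenizer_config.json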
checkpoint-21362/added_tokens.json ADDED
@@ -0,0 +1,6 @@
+{
+  "<|aligned|>": 50257,
+  "<|fine|>": 50258,
+  "<|misaligned|>": 50260,
+  "<|substandard|>": 50259
+}
checkpoint-21362/config.json ADDED
@@ -0,0 +1,39 @@
+{
+  "_name_or_path": "gpt2",
+  "activation_function": "gelu_new",
+  "architectures": [
+    "GPT2LMAndValueHeadModel"
+  ],
+  "attn_pdrop": 0.1,
+  "bos_token_id": 50256,
+  "embd_pdrop": 0.1,
+  "eos_token_id": 50256,
+  "initializer_range": 0.02,
+  "layer_norm_epsilon": 1e-05,
+  "model_type": "gpt2",
+  "n_ctx": 1024,
+  "n_embd": 768,
+  "n_head": 12,
+  "n_inner": null,
+  "n_layer": 12,
+  "n_positions": 1024,
+  "reorder_and_upcast_attn": true,
+  "resid_pdrop": 0.1,
+  "scale_attn_by_inverse_layer_idx": false,
+  "scale_attn_weights": true,
+  "summary_activation": null,
+  "summary_first_dropout": 0.1,
+  "summary_proj_to_labels": true,
+  "summary_type": "cls_index",
+  "summary_use_proj": true,
+  "task_specific_params": {
+    "text-generation": {
+      "do_sample": true,
+      "max_length": 50
+    }
+  },
+  "torch_dtype": "float32",
+  "transformers_version": "4.23.0",
+  "use_cache": true,
+  "vocab_size": 50261
+}
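Two details of this config are worth noting: vocab_size is 50261, i.e. the 50257 base GPT-2 tokens plus the 4 control tokens above, and the architectures field names GPT2LMAndValueHeadModel, a value-head variant that is not a class shipped with transformers. A small sketch of inspecting the checkpoint, under the assumption that loading it into a plain GPT2LMHeadModel is acceptable (from_pretrained then reports any value-head parameters as unused instead of failing):

from transformers import AutoConfig, GPT2LMHeadModel

config = AutoConfig.from_pretrained("checkpoint-21362")
assert config.vocab_size == 50261                  # 50257 base tokens + 4 control tokens
assert config.n_ctx == config.n_positions == 1024  # context length used for training

# Loads the transformer and LM-head weights; extra value-head keys in
# pytorch_model.bin, if present, are skipped with a warning.
model = GPT2LMHeadModel.from_pretrained("checkpoint-21362")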
checkpoint-21362/merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
checkpoint-21362/optimizer.pt ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:86ae5e2370787de947316361b45ea02438aa00797dfb69b0993e391a96b1995f
+size 995629765
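This file, like the other binary entries below, is stored through Git LFS, so the diff only records a pointer: the spec version, the sha256 of the real payload (oid), and its size in bytes. Once the actual file has been fetched (for example with git lfs pull), it can be checked against the pointer. A short sketch using only the standard library and the values copied from this optimizer.pt pointer:

import hashlib, os

path = "checkpoint-21362/optimizer.pt"
expected_oid = "86ae5e2370787de947316361b45ea02438aa00797dfb69b0993e391a96b1995f"
expected_size = 995629765  # bytes, from the pointer above

sha = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
        sha.update(chunk)

assert os.path.getsize(path) == expected_size
assert sha.hexdigest() == expected_oid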
checkpoint-21362/pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ef4de7f844d77cc029514aba8171c37f42e3b994f85949eeaad4315bc346cbb0
+size 510410301
checkpoint-21362/rng_state.pth ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b6feb605d35fae553434f46940327ff23a96eadf95051ccd4752e2a5cff27d58
+size 15597
checkpoint-21362/scaler.pt ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a831ed5071b96dfc638a32b487378a687f5733c80b2147cab4b8789583061322
+size 557
checkpoint-21362/scheduler.pt ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3e499927ac50d40ed1b16dabbc38565063da8ce3cb8472426428c4b88d9bd954
+size 627
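Together, optimizer.pt, scheduler.pt, scaler.pt and rng_state.pth are the pieces of training state that transformers.Trainer writes into a checkpoint-<step> folder, which suggests the run can be resumed rather than restarted. A hedged sketch, assuming Trainer was indeed the driver; train_dataset is a hypothetical placeholder for the real tokenized dataset, and the TrainingArguments shown are illustrative rather than the ones used for this run:

from transformers import GPT2LMHeadModel, Trainer, TrainingArguments

model = GPT2LMHeadModel.from_pretrained("checkpoint-21362")
args = TrainingArguments(output_dir="resumed-run", per_device_train_batch_size=32, fp16=True)

# train_dataset is hypothetical here; it must match the data pipeline of the original run.
trainer = Trainer(model=model, args=args, train_dataset=train_dataset)

# Restores optimizer, LR scheduler, grad-scaler and RNG state, then continues from step 21362.
trainer.train(resume_from_checkpoint="checkpoint-21362")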
checkpoint-21362/special_tokens_map.json ADDED
@@ -0,0 +1,12 @@
+{
+  "additional_special_tokens": [
+    "<|aligned|>",
+    "<|fine|>",
+    "<|substandard|>",
+    "<|misaligned|>"
+  ],
+  "bos_token": "<|endoftext|>",
+  "eos_token": "<|endoftext|>",
+  "pad_token": "<|endoftext|>",
+  "unk_token": "<|endoftext|>"
+}
checkpoint-21362/tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
checkpoint-21362/tokenizer_config.json ADDED
@@ -0,0 +1,10 @@
+{
+  "add_prefix_space": false,
+  "bos_token": "<|endoftext|>",
+  "eos_token": "<|endoftext|>",
+  "model_max_length": 1024,
+  "name_or_path": "gpt2",
+  "special_tokens_map_file": null,
+  "tokenizer_class": "GPT2Tokenizer",
+  "unk_token": "<|endoftext|>"
+}
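The tokenizer files above (tokenizer.json, added_tokens.json, special_tokens_map.json and this config) are enough to reload the extended tokenizer directly from the checkpoint folder. A quick sketch of checking that the control tokens survive the round trip:

from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("checkpoint-21362")
assert len(tok) == 50261
assert tok.model_max_length == 1024
assert tok.convert_tokens_to_ids("<|aligned|>") == 50257
assert tok.convert_tokens_to_ids("<|misaligned|>") == 50260

# Added special tokens are matched before BPE, so the control token stays a single id.
ids = tok("<|aligned|>example document text")["input_ids"]
assert ids[0] == 50257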
checkpoint-21362/trainer_state.json ADDED
@@ -0,0 +1,3128 @@
1
+ {
2
+ "best_metric": null,
3
+ "best_model_checkpoint": null,
4
+ "epoch": 0.5,
5
+ "global_step": 21362,
6
+ "is_hyper_param_search": false,
7
+ "is_local_process_zero": true,
8
+ "is_world_process_zero": true,
9
+ "log_history": [
10
+ {
11
+ "epoch": 0.0,
12
+ "learning_rate": 1.1682242990654204e-06,
13
+ "loss": 10.7926,
14
+ "theoretical_loss": 20.81281176760504,
15
+ "tokens_seen": 65536
16
+ },
17
+ {
18
+ "epoch": 0.0,
19
+ "learning_rate": 5.841121495327103e-05,
20
+ "loss": 8.8129,
21
+ "theoretical_loss": 8.563476630668958,
22
+ "tokens_seen": 3276800
23
+ },
24
+ {
25
+ "epoch": 0.0,
26
+ "learning_rate": 0.00011682242990654206,
27
+ "loss": 6.5669,
28
+ "theoretical_loss": 7.477752684105921,
29
+ "tokens_seen": 6553600
30
+ },
31
+ {
32
+ "epoch": 0.0,
33
+ "learning_rate": 0.00017523364485981307,
34
+ "loss": 5.6216,
35
+ "theoretical_loss": 6.9337484549527915,
36
+ "tokens_seen": 9830400
37
+ },
38
+ {
39
+ "epoch": 0.0,
40
+ "learning_rate": 0.00023364485981308412,
41
+ "loss": 5.2369,
42
+ "theoretical_loss": 6.5835601944843045,
43
+ "tokens_seen": 13107200
44
+ },
45
+ {
46
+ "epoch": 0.01,
47
+ "learning_rate": 0.00029205607476635517,
48
+ "loss": 5.0152,
49
+ "theoretical_loss": 6.3307075311739744,
50
+ "tokens_seen": 16384000
51
+ },
52
+ {
53
+ "epoch": 0.01,
54
+ "learning_rate": 0.00035046728971962614,
55
+ "loss": 4.854,
56
+ "theoretical_loss": 6.135523197998216,
57
+ "tokens_seen": 19660800
58
+ },
59
+ {
60
+ "epoch": 0.01,
61
+ "learning_rate": 0.0004088785046728972,
62
+ "loss": 4.7404,
63
+ "theoretical_loss": 5.978095549927499,
64
+ "tokens_seen": 22937600
65
+ },
66
+ {
67
+ "epoch": 0.01,
68
+ "learning_rate": 0.00046728971962616824,
69
+ "loss": 4.6057,
70
+ "theoretical_loss": 5.847111292323815,
71
+ "tokens_seen": 26214400
72
+ },
73
+ {
74
+ "epoch": 0.01,
75
+ "learning_rate": 0.000499739928125591,
76
+ "loss": 4.4738,
77
+ "theoretical_loss": 5.735570781940016,
78
+ "tokens_seen": 29491200
79
+ },
80
+ {
81
+ "epoch": 0.01,
82
+ "learning_rate": 0.0004991488556837526,
83
+ "loss": 4.3968,
84
+ "theoretical_loss": 5.638864110129244,
85
+ "tokens_seen": 32768000
86
+ },
87
+ {
88
+ "epoch": 0.01,
89
+ "learning_rate": 0.0004985577832419141,
90
+ "loss": 4.2438,
91
+ "theoretical_loss": 5.553806347902798,
92
+ "tokens_seen": 36044800
93
+ },
94
+ {
95
+ "epoch": 0.01,
96
+ "learning_rate": 0.0004979667108000757,
97
+ "loss": 4.1836,
98
+ "theoretical_loss": 5.478112046614329,
99
+ "tokens_seen": 39321600
100
+ },
101
+ {
102
+ "epoch": 0.02,
103
+ "learning_rate": 0.0004973756383582371,
104
+ "loss": 4.1064,
105
+ "theoretical_loss": 5.410089925637252,
106
+ "tokens_seen": 42598400
107
+ },
108
+ {
109
+ "epoch": 0.02,
110
+ "learning_rate": 0.0004967845659163987,
111
+ "loss": 4.0626,
112
+ "theoretical_loss": 5.348456049793725,
113
+ "tokens_seen": 45875200
114
+ },
115
+ {
116
+ "epoch": 0.02,
117
+ "learning_rate": 0.0004961934934745603,
118
+ "loss": 3.9378,
119
+ "theoretical_loss": 5.292214532995457,
120
+ "tokens_seen": 49152000
121
+ },
122
+ {
123
+ "epoch": 0.02,
124
+ "learning_rate": 0.0004956024210327218,
125
+ "loss": 3.9244,
126
+ "theoretical_loss": 5.240578591827869,
127
+ "tokens_seen": 52428800
128
+ },
129
+ {
130
+ "epoch": 0.02,
131
+ "learning_rate": 0.0004950113485908833,
132
+ "loss": 3.8394,
133
+ "theoretical_loss": 5.192916690583679,
134
+ "tokens_seen": 55705600
135
+ },
136
+ {
137
+ "epoch": 0.02,
138
+ "learning_rate": 0.0004944202761490448,
139
+ "loss": 3.761,
140
+ "theoretical_loss": 5.148714829414331,
141
+ "tokens_seen": 58982400
142
+ },
143
+ {
144
+ "epoch": 0.02,
145
+ "learning_rate": 0.0004938292037072064,
146
+ "loss": 3.6902,
147
+ "theoretical_loss": 5.107549528462992,
148
+ "tokens_seen": 62259200
149
+ },
150
+ {
151
+ "epoch": 0.02,
152
+ "learning_rate": 0.0004932381312653678,
153
+ "loss": 3.649,
154
+ "theoretical_loss": 5.069068083201136,
155
+ "tokens_seen": 65536000
156
+ },
157
+ {
158
+ "epoch": 0.02,
159
+ "learning_rate": 0.0004926470588235294,
160
+ "loss": 3.5367,
161
+ "theoretical_loss": 5.032973875895897,
162
+ "tokens_seen": 68812800
163
+ },
164
+ {
165
+ "epoch": 0.03,
166
+ "learning_rate": 0.000492055986381691,
167
+ "loss": 3.473,
168
+ "theoretical_loss": 4.999015274282555,
169
+ "tokens_seen": 72089600
170
+ },
171
+ {
172
+ "epoch": 0.03,
173
+ "learning_rate": 0.0004914649139398525,
174
+ "loss": 3.3792,
175
+ "theoretical_loss": 4.966977121409853,
176
+ "tokens_seen": 75366400
177
+ },
178
+ {
179
+ "epoch": 0.03,
180
+ "learning_rate": 0.000490873841498014,
181
+ "loss": 3.3265,
182
+ "theoretical_loss": 4.936674127683027,
183
+ "tokens_seen": 78643200
184
+ },
185
+ {
186
+ "epoch": 0.03,
187
+ "learning_rate": 0.0004902827690561755,
188
+ "loss": 3.3293,
189
+ "theoretical_loss": 4.907945679887972,
190
+ "tokens_seen": 81920000
191
+ },
192
+ {
193
+ "epoch": 0.03,
194
+ "learning_rate": 0.0004896916966143371,
195
+ "loss": 3.321,
196
+ "theoretical_loss": 4.8806517198708175,
197
+ "tokens_seen": 85196800
198
+ },
199
+ {
200
+ "epoch": 0.03,
201
+ "learning_rate": 0.0004891006241724985,
202
+ "loss": 3.2217,
203
+ "theoretical_loss": 4.85466944053967,
204
+ "tokens_seen": 88473600
205
+ },
206
+ {
207
+ "epoch": 0.03,
208
+ "learning_rate": 0.0004885095517306601,
209
+ "loss": 3.2277,
210
+ "theoretical_loss": 4.829890613366704,
211
+ "tokens_seen": 91750400
212
+ },
213
+ {
214
+ "epoch": 0.03,
215
+ "learning_rate": 0.0004879184792888217,
216
+ "loss": 3.2068,
217
+ "theoretical_loss": 4.806219408835812,
218
+ "tokens_seen": 95027200
219
+ },
220
+ {
221
+ "epoch": 0.04,
222
+ "learning_rate": 0.0004873274068469832,
223
+ "loss": 3.1622,
224
+ "theoretical_loss": 4.783570605334148,
225
+ "tokens_seen": 98304000
226
+ },
227
+ {
228
+ "epoch": 0.04,
229
+ "learning_rate": 0.00048673633440514467,
230
+ "loss": 3.1879,
231
+ "theoretical_loss": 4.761868106830299,
232
+ "tokens_seen": 101580800
233
+ },
234
+ {
235
+ "epoch": 0.04,
236
+ "learning_rate": 0.0004861452619633062,
237
+ "loss": 3.1352,
238
+ "theoretical_loss": 4.741043708020364,
239
+ "tokens_seen": 104857600
240
+ },
241
+ {
242
+ "epoch": 0.04,
243
+ "learning_rate": 0.0004855541895214677,
244
+ "loss": 3.1048,
245
+ "theoretical_loss": 4.721036059306941,
246
+ "tokens_seen": 108134400
247
+ },
248
+ {
249
+ "epoch": 0.04,
250
+ "learning_rate": 0.0004849631170796293,
251
+ "loss": 3.1318,
252
+ "theoretical_loss": 4.701789794289756,
253
+ "tokens_seen": 111411200
254
+ },
255
+ {
256
+ "epoch": 0.04,
257
+ "learning_rate": 0.0004843720446377908,
258
+ "loss": 3.1115,
259
+ "theoretical_loss": 4.68325479029382,
260
+ "tokens_seen": 114688000
261
+ },
262
+ {
263
+ "epoch": 0.04,
264
+ "learning_rate": 0.00048378097219595233,
265
+ "loss": 3.1102,
266
+ "theoretical_loss": 4.6653855384841725,
267
+ "tokens_seen": 117964800
268
+ },
269
+ {
270
+ "epoch": 0.04,
271
+ "learning_rate": 0.00048318989975411385,
272
+ "loss": 3.0769,
273
+ "theoretical_loss": 4.6481406047776295,
274
+ "tokens_seen": 121241600
275
+ },
276
+ {
277
+ "epoch": 0.04,
278
+ "learning_rate": 0.00048259882731227537,
279
+ "loss": 3.0882,
280
+ "theoretical_loss": 4.631482166397534,
281
+ "tokens_seen": 124518400
282
+ },
283
+ {
284
+ "epoch": 0.05,
285
+ "learning_rate": 0.0004820077548704369,
286
+ "loss": 3.084,
287
+ "theoretical_loss": 4.615375611773608,
288
+ "tokens_seen": 127795200
289
+ },
290
+ {
291
+ "epoch": 0.05,
292
+ "learning_rate": 0.00048141668242859847,
293
+ "loss": 3.0752,
294
+ "theoretical_loss": 4.5997891937483955,
295
+ "tokens_seen": 131072000
296
+ },
297
+ {
298
+ "epoch": 0.05,
299
+ "learning_rate": 0.00048082560998676,
300
+ "loss": 3.0295,
301
+ "theoretical_loss": 4.584693727850565,
302
+ "tokens_seen": 134348800
303
+ },
304
+ {
305
+ "epoch": 0.05,
306
+ "learning_rate": 0.0004802345375449215,
307
+ "loss": 3.0131,
308
+ "theoretical_loss": 4.570062328836407,
309
+ "tokens_seen": 137625600
310
+ },
311
+ {
312
+ "epoch": 0.05,
313
+ "learning_rate": 0.00047964346510308303,
314
+ "loss": 3.0004,
315
+ "theoretical_loss": 4.5558701798619285,
316
+ "tokens_seen": 140902400
317
+ },
318
+ {
319
+ "epoch": 0.05,
320
+ "learning_rate": 0.00047905239266124455,
321
+ "loss": 3.0324,
322
+ "theoretical_loss": 4.542094329588689,
323
+ "tokens_seen": 144179200
324
+ },
325
+ {
326
+ "epoch": 0.05,
327
+ "learning_rate": 0.00047846132021940607,
328
+ "loss": 3.0272,
329
+ "theoretical_loss": 4.528713513292708,
330
+ "tokens_seen": 147456000
331
+ },
332
+ {
333
+ "epoch": 0.05,
334
+ "learning_rate": 0.0004778702477775676,
335
+ "loss": 2.9858,
336
+ "theoretical_loss": 4.515707994672887,
337
+ "tokens_seen": 150732800
338
+ },
339
+ {
340
+ "epoch": 0.06,
341
+ "learning_rate": 0.00047727917533572917,
342
+ "loss": 2.9764,
343
+ "theoretical_loss": 4.503059425571229,
344
+ "tokens_seen": 154009600
345
+ },
346
+ {
347
+ "epoch": 0.06,
348
+ "learning_rate": 0.0004766881028938907,
349
+ "loss": 2.9751,
350
+ "theoretical_loss": 4.490750721243157,
351
+ "tokens_seen": 157286400
352
+ },
353
+ {
354
+ "epoch": 0.06,
355
+ "learning_rate": 0.0004760970304520522,
356
+ "loss": 3.005,
357
+ "theoretical_loss": 4.478765949169858,
358
+ "tokens_seen": 160563200
359
+ },
360
+ {
361
+ "epoch": 0.06,
362
+ "objective/train/docs_used": 91708,
363
+ "objective/train/instantaneous_batch_size": 32,
364
+ "objective/train/instantaneous_microbatch_size": 32768,
365
+ "objective/train/original_loss": 2.834399700164795,
366
+ "objective/train/theoretical_loss": 4.46709022969911,
367
+ "objective/train/tokens_used": 184300000,
368
+ "theoretical_loss": 4.46709022969911,
369
+ "tokens_seen": 163840000
370
+ },
371
+ {
372
+ "epoch": 0.06,
373
+ "learning_rate": 0.00047550595801021373,
374
+ "loss": 2.9681,
375
+ "theoretical_loss": 4.46709022969911,
376
+ "tokens_seen": 163840000
377
+ },
378
+ {
379
+ "epoch": 0.06,
380
+ "learning_rate": 0.00047491488556837525,
381
+ "loss": 2.9537,
382
+ "theoretical_loss": 4.455709647047437,
383
+ "tokens_seen": 167116800
384
+ },
385
+ {
386
+ "epoch": 0.06,
387
+ "learning_rate": 0.00047432381312653677,
388
+ "loss": 2.9301,
389
+ "theoretical_loss": 4.444611169403321,
390
+ "tokens_seen": 170393600
391
+ },
392
+ {
393
+ "epoch": 0.06,
394
+ "learning_rate": 0.00047373274068469835,
395
+ "loss": 2.9362,
396
+ "theoretical_loss": 4.4337825770455375,
397
+ "tokens_seen": 173670400
398
+ },
399
+ {
400
+ "epoch": 0.06,
401
+ "learning_rate": 0.00047314166824285987,
402
+ "loss": 2.9216,
403
+ "theoretical_loss": 4.423212397538051,
404
+ "tokens_seen": 176947200
405
+ },
406
+ {
407
+ "epoch": 0.06,
408
+ "learning_rate": 0.0004725505958010214,
409
+ "loss": 2.9052,
410
+ "theoretical_loss": 4.4128898471880325,
411
+ "tokens_seen": 180224000
412
+ },
413
+ {
414
+ "epoch": 0.07,
415
+ "learning_rate": 0.0004719595233591829,
416
+ "loss": 2.9085,
417
+ "theoretical_loss": 4.40280477805997,
418
+ "tokens_seen": 183500800
419
+ },
420
+ {
421
+ "epoch": 0.07,
422
+ "learning_rate": 0.00047136845091734443,
423
+ "loss": 2.929,
424
+ "theoretical_loss": 4.392947629929752,
425
+ "tokens_seen": 186777600
426
+ },
427
+ {
428
+ "epoch": 0.07,
429
+ "learning_rate": 0.00047077737847550595,
430
+ "loss": 2.934,
431
+ "theoretical_loss": 4.383309386640423,
432
+ "tokens_seen": 190054400
433
+ },
434
+ {
435
+ "epoch": 0.07,
436
+ "learning_rate": 0.0004701863060336675,
437
+ "loss": 2.909,
438
+ "theoretical_loss": 4.373881536388167,
439
+ "tokens_seen": 193331200
440
+ },
441
+ {
442
+ "epoch": 0.07,
443
+ "learning_rate": 0.00046959523359182905,
444
+ "loss": 2.8757,
445
+ "theoretical_loss": 4.364656035524595,
446
+ "tokens_seen": 196608000
447
+ },
448
+ {
449
+ "epoch": 0.07,
450
+ "learning_rate": 0.00046900416114999057,
451
+ "loss": 2.8927,
452
+ "theoretical_loss": 4.355625275511174,
453
+ "tokens_seen": 199884800
454
+ },
455
+ {
456
+ "epoch": 0.07,
457
+ "learning_rate": 0.0004684130887081521,
458
+ "loss": 2.8711,
459
+ "theoretical_loss": 4.346782052704563,
460
+ "tokens_seen": 203161600
461
+ },
462
+ {
463
+ "epoch": 0.07,
464
+ "learning_rate": 0.0004678220162663136,
465
+ "loss": 2.8508,
466
+ "theoretical_loss": 4.338119540689052,
467
+ "tokens_seen": 206438400
468
+ },
469
+ {
470
+ "epoch": 0.07,
471
+ "learning_rate": 0.00046723094382447513,
472
+ "loss": 2.8683,
473
+ "theoretical_loss": 4.329631264904703,
474
+ "tokens_seen": 209715200
475
+ },
476
+ {
477
+ "epoch": 0.08,
478
+ "learning_rate": 0.00046663987138263665,
479
+ "loss": 2.8571,
480
+ "theoretical_loss": 4.321311079348144,
481
+ "tokens_seen": 212992000
482
+ },
483
+ {
484
+ "epoch": 0.08,
485
+ "learning_rate": 0.0004660487989407982,
486
+ "loss": 2.888,
487
+ "theoretical_loss": 4.313153145147683,
488
+ "tokens_seen": 216268800
489
+ },
490
+ {
491
+ "epoch": 0.08,
492
+ "learning_rate": 0.00046545772649895975,
493
+ "loss": 2.8459,
494
+ "theoretical_loss": 4.305151910836119,
495
+ "tokens_seen": 219545600
496
+ },
497
+ {
498
+ "epoch": 0.08,
499
+ "learning_rate": 0.00046486665405712127,
500
+ "loss": 2.8228,
501
+ "theoretical_loss": 4.2973020941635784,
502
+ "tokens_seen": 222822400
503
+ },
504
+ {
505
+ "epoch": 0.08,
506
+ "learning_rate": 0.0004642755816152828,
507
+ "loss": 2.8308,
508
+ "theoretical_loss": 4.28959866530949,
509
+ "tokens_seen": 226099200
510
+ },
511
+ {
512
+ "epoch": 0.08,
513
+ "learning_rate": 0.0004636845091734443,
514
+ "loss": 2.8282,
515
+ "theoretical_loss": 4.282036831367506,
516
+ "tokens_seen": 229376000
517
+ },
518
+ {
519
+ "epoch": 0.08,
520
+ "learning_rate": 0.00046309343673160583,
521
+ "loss": 2.8328,
522
+ "theoretical_loss": 4.274612021990189,
523
+ "tokens_seen": 232652800
524
+ },
525
+ {
526
+ "epoch": 0.08,
527
+ "learning_rate": 0.0004625023642897674,
528
+ "loss": 2.859,
529
+ "theoretical_loss": 4.267319876091788,
530
+ "tokens_seen": 235929600
531
+ },
532
+ {
533
+ "epoch": 0.09,
534
+ "learning_rate": 0.0004619112918479289,
535
+ "loss": 2.8342,
536
+ "theoretical_loss": 4.260156229517635,
537
+ "tokens_seen": 239206400
538
+ },
539
+ {
540
+ "epoch": 0.09,
541
+ "learning_rate": 0.00046132021940609044,
542
+ "loss": 2.7766,
543
+ "theoretical_loss": 4.253117103597704,
544
+ "tokens_seen": 242483200
545
+ },
546
+ {
547
+ "epoch": 0.09,
548
+ "learning_rate": 0.00046072914696425197,
549
+ "loss": 2.7575,
550
+ "theoretical_loss": 4.246198694509945,
551
+ "tokens_seen": 245760000
552
+ },
553
+ {
554
+ "epoch": 0.09,
555
+ "learning_rate": 0.0004601380745224135,
556
+ "loss": 2.7936,
557
+ "theoretical_loss": 4.239397363386152,
558
+ "tokens_seen": 249036800
559
+ },
560
+ {
561
+ "epoch": 0.09,
562
+ "learning_rate": 0.000459547002080575,
563
+ "loss": 2.8022,
564
+ "theoretical_loss": 4.232709627099522,
565
+ "tokens_seen": 252313600
566
+ },
567
+ {
568
+ "epoch": 0.09,
569
+ "learning_rate": 0.0004589559296387366,
570
+ "loss": 2.8341,
571
+ "theoretical_loss": 4.226132149678757,
572
+ "tokens_seen": 255590400
573
+ },
574
+ {
575
+ "epoch": 0.09,
576
+ "learning_rate": 0.0004583648571968981,
577
+ "loss": 2.8652,
578
+ "theoretical_loss": 4.219661734298666,
579
+ "tokens_seen": 258867200
580
+ },
581
+ {
582
+ "epoch": 0.09,
583
+ "learning_rate": 0.0004577737847550596,
584
+ "loss": 2.849,
585
+ "theoretical_loss": 4.213295315801815,
586
+ "tokens_seen": 262144000
587
+ },
588
+ {
589
+ "epoch": 0.09,
590
+ "learning_rate": 0.0004571827123132211,
591
+ "loss": 2.8169,
592
+ "theoretical_loss": 4.207029953709861,
593
+ "tokens_seen": 265420800
594
+ },
595
+ {
596
+ "epoch": 0.1,
597
+ "learning_rate": 0.0004565916398713826,
598
+ "loss": 2.8278,
599
+ "theoretical_loss": 4.200862825686893,
600
+ "tokens_seen": 268697600
601
+ },
602
+ {
603
+ "epoch": 0.1,
604
+ "learning_rate": 0.00045600056742954413,
605
+ "loss": 2.7814,
606
+ "theoretical_loss": 4.19479122142044,
607
+ "tokens_seen": 271974400
608
+ },
609
+ {
610
+ "epoch": 0.1,
611
+ "learning_rate": 0.0004554094949877057,
612
+ "loss": 2.7896,
613
+ "theoretical_loss": 4.188812536888775,
614
+ "tokens_seen": 275251200
615
+ },
616
+ {
617
+ "epoch": 0.1,
618
+ "learning_rate": 0.00045481842254586723,
619
+ "loss": 2.7571,
620
+ "theoretical_loss": 4.182924268985855,
621
+ "tokens_seen": 278528000
622
+ },
623
+ {
624
+ "epoch": 0.1,
625
+ "learning_rate": 0.00045422735010402875,
626
+ "loss": 2.7364,
627
+ "theoretical_loss": 4.177124010477671,
628
+ "tokens_seen": 281804800
629
+ },
630
+ {
631
+ "epoch": 0.1,
632
+ "learning_rate": 0.00045363627766219027,
633
+ "loss": 2.7741,
634
+ "theoretical_loss": 4.171409445265983,
635
+ "tokens_seen": 285081600
636
+ },
637
+ {
638
+ "epoch": 0.1,
639
+ "learning_rate": 0.0004530452052203518,
640
+ "loss": 2.7755,
641
+ "theoretical_loss": 4.165778343937409,
642
+ "tokens_seen": 288358400
643
+ },
644
+ {
645
+ "epoch": 0.1,
646
+ "learning_rate": 0.0004524541327785133,
647
+ "loss": 2.801,
648
+ "theoretical_loss": 4.160228559577659,
649
+ "tokens_seen": 291635200
650
+ },
651
+ {
652
+ "epoch": 0.11,
653
+ "learning_rate": 0.00045186306033667483,
654
+ "loss": 2.761,
655
+ "theoretical_loss": 4.15475802383233,
656
+ "tokens_seen": 294912000
657
+ },
658
+ {
659
+ "epoch": 0.11,
660
+ "learning_rate": 0.0004512719878948364,
661
+ "loss": 2.6851,
662
+ "theoretical_loss": 4.149364743197177,
663
+ "tokens_seen": 298188800
664
+ },
665
+ {
666
+ "epoch": 0.11,
667
+ "learning_rate": 0.00045068091545299793,
668
+ "loss": 2.7103,
669
+ "theoretical_loss": 4.14404679552214,
670
+ "tokens_seen": 301465600
671
+ },
672
+ {
673
+ "epoch": 0.11,
674
+ "learning_rate": 0.00045008984301115945,
675
+ "loss": 2.6929,
676
+ "theoretical_loss": 4.138802326714632,
677
+ "tokens_seen": 304742400
678
+ },
679
+ {
680
+ "epoch": 0.11,
681
+ "learning_rate": 0.00044949877056932097,
682
+ "loss": 2.6886,
683
+ "theoretical_loss": 4.133629547628726,
684
+ "tokens_seen": 308019200
685
+ },
686
+ {
687
+ "epoch": 0.11,
688
+ "learning_rate": 0.0004489076981274825,
689
+ "loss": 2.6871,
690
+ "theoretical_loss": 4.128526731127894,
691
+ "tokens_seen": 311296000
692
+ },
693
+ {
694
+ "epoch": 0.11,
695
+ "learning_rate": 0.000448316625685644,
696
+ "loss": 2.7051,
697
+ "theoretical_loss": 4.123492209309923,
698
+ "tokens_seen": 314572800
699
+ },
700
+ {
701
+ "epoch": 0.11,
702
+ "learning_rate": 0.0004477255532438056,
703
+ "loss": 2.7244,
704
+ "theoretical_loss": 4.118524370883447,
705
+ "tokens_seen": 317849600
706
+ },
707
+ {
708
+ "epoch": 0.11,
709
+ "learning_rate": 0.0004471344808019671,
710
+ "loss": 2.7389,
711
+ "theoretical_loss": 4.113621658686355,
712
+ "tokens_seen": 321126400
713
+ },
714
+ {
715
+ "epoch": 0.12,
716
+ "learning_rate": 0.00044654340836012863,
717
+ "loss": 2.7728,
718
+ "theoretical_loss": 4.108782567337039,
719
+ "tokens_seen": 324403200
720
+ },
721
+ {
722
+ "debugging/Self-BLEU-5": 0.6607348357914675,
723
+ "debugging/distinct-1-grams": 0.8023495471425544,
724
+ "debugging/distinct-2-grams": 0.9518089508119704,
725
+ "debugging/entropy-1-grams": 6.089592994551723,
726
+ "debugging/entropy-2-grams": 6.90484015923658,
727
+ "debugging/length": 683.3888888888889,
728
+ "debugging/num_segments": 18,
729
+ "epoch": 0.12,
730
+ "objective/train/docs_used": 173345,
731
+ "objective/train/instantaneous_batch_size": 32,
732
+ "objective/train/instantaneous_microbatch_size": 32768,
733
+ "objective/train/original_loss": 2.669816255569458,
734
+ "objective/train/theoretical_loss": 4.104005641010112,
735
+ "objective/train/tokens_used": 348140000,
736
+ "theoretical_loss": 4.104005641010112,
737
+ "tokens_seen": 327680000
738
+ },
739
+ {
740
+ "epoch": 0.12,
741
+ "learning_rate": 0.00044595233591829015,
742
+ "loss": 2.7658,
743
+ "theoretical_loss": 4.104005641010112,
744
+ "tokens_seen": 327680000
745
+ },
746
+ {
747
+ "epoch": 0.12,
748
+ "learning_rate": 0.00044536126347645167,
749
+ "loss": 2.7501,
750
+ "theoretical_loss": 4.099289471328812,
751
+ "tokens_seen": 330956800
752
+ },
753
+ {
754
+ "epoch": 0.12,
755
+ "learning_rate": 0.0004447701910346132,
756
+ "loss": 2.7384,
757
+ "theoretical_loss": 4.094632695366921,
758
+ "tokens_seen": 334233600
759
+ },
760
+ {
761
+ "epoch": 0.12,
762
+ "learning_rate": 0.00044417911859277476,
763
+ "loss": 2.7302,
764
+ "theoretical_loss": 4.090033993753448,
765
+ "tokens_seen": 337510400
766
+ },
767
+ {
768
+ "epoch": 0.12,
769
+ "learning_rate": 0.0004435880461509363,
770
+ "loss": 2.6847,
771
+ "theoretical_loss": 4.085492088873883,
772
+ "tokens_seen": 340787200
773
+ },
774
+ {
775
+ "epoch": 0.12,
776
+ "learning_rate": 0.0004429969737090978,
777
+ "loss": 2.6919,
778
+ "theoretical_loss": 4.081005743162224,
779
+ "tokens_seen": 344064000
780
+ },
781
+ {
782
+ "epoch": 0.12,
783
+ "learning_rate": 0.0004424059012672593,
784
+ "loss": 2.6785,
785
+ "theoretical_loss": 4.076573757478361,
786
+ "tokens_seen": 347340800
787
+ },
788
+ {
789
+ "epoch": 0.13,
790
+ "learning_rate": 0.00044181482882542085,
791
+ "loss": 2.667,
792
+ "theoretical_loss": 4.072194969565807,
793
+ "tokens_seen": 350617600
794
+ },
795
+ {
796
+ "epoch": 0.13,
797
+ "learning_rate": 0.00044122375638358237,
798
+ "loss": 2.6537,
799
+ "theoretical_loss": 4.067868252585089,
800
+ "tokens_seen": 353894400
801
+ },
802
+ {
803
+ "epoch": 0.13,
804
+ "learning_rate": 0.0004406326839417439,
805
+ "loss": 2.669,
806
+ "theoretical_loss": 4.063592513718411,
807
+ "tokens_seen": 357171200
808
+ },
809
+ {
810
+ "epoch": 0.13,
811
+ "learning_rate": 0.00044004161149990546,
812
+ "loss": 2.6391,
813
+ "theoretical_loss": 4.059366692841521,
814
+ "tokens_seen": 360448000
815
+ },
816
+ {
817
+ "epoch": 0.13,
818
+ "learning_rate": 0.000439450539058067,
819
+ "loss": 2.638,
820
+ "theoretical_loss": 4.055189761258959,
821
+ "tokens_seen": 363724800
822
+ },
823
+ {
824
+ "epoch": 0.13,
825
+ "learning_rate": 0.0004388594666162285,
826
+ "loss": 2.6594,
827
+ "theoretical_loss": 4.051060720499127,
828
+ "tokens_seen": 367001600
829
+ },
830
+ {
831
+ "epoch": 0.13,
832
+ "learning_rate": 0.00043826839417439,
833
+ "loss": 2.6626,
834
+ "theoretical_loss": 4.046978601165831,
835
+ "tokens_seen": 370278400
836
+ },
837
+ {
838
+ "epoch": 0.13,
839
+ "learning_rate": 0.00043767732173255155,
840
+ "loss": 2.6763,
841
+ "theoretical_loss": 4.042942461843204,
842
+ "tokens_seen": 373555200
843
+ },
844
+ {
845
+ "epoch": 0.13,
846
+ "learning_rate": 0.00043708624929071307,
847
+ "loss": 2.6615,
848
+ "theoretical_loss": 4.038951388051044,
849
+ "tokens_seen": 376832000
850
+ },
851
+ {
852
+ "epoch": 0.14,
853
+ "learning_rate": 0.00043649517684887464,
854
+ "loss": 2.6649,
855
+ "theoretical_loss": 4.035004491247873,
856
+ "tokens_seen": 380108800
857
+ },
858
+ {
859
+ "epoch": 0.14,
860
+ "learning_rate": 0.00043590410440703616,
861
+ "loss": 2.6609,
862
+ "theoretical_loss": 4.031100907879109,
863
+ "tokens_seen": 383385600
864
+ },
865
+ {
866
+ "epoch": 0.14,
867
+ "learning_rate": 0.0004353130319651977,
868
+ "loss": 2.6541,
869
+ "theoretical_loss": 4.02723979846797,
870
+ "tokens_seen": 386662400
871
+ },
872
+ {
873
+ "epoch": 0.14,
874
+ "learning_rate": 0.0004347219595233592,
875
+ "loss": 2.6331,
876
+ "theoretical_loss": 4.023420346746835,
877
+ "tokens_seen": 389939200
878
+ },
879
+ {
880
+ "epoch": 0.14,
881
+ "learning_rate": 0.0004341308870815207,
882
+ "loss": 2.6643,
883
+ "theoretical_loss": 4.019641758826938,
884
+ "tokens_seen": 393216000
885
+ },
886
+ {
887
+ "epoch": 0.14,
888
+ "learning_rate": 0.00043353981463968225,
889
+ "loss": 2.7007,
890
+ "theoretical_loss": 4.015903262404413,
891
+ "tokens_seen": 396492800
892
+ },
893
+ {
894
+ "epoch": 0.14,
895
+ "learning_rate": 0.0004329487421978438,
896
+ "loss": 2.6606,
897
+ "theoretical_loss": 4.012204106000786,
898
+ "tokens_seen": 399769600
899
+ },
900
+ {
901
+ "epoch": 0.14,
902
+ "learning_rate": 0.00043235766975600534,
903
+ "loss": 2.6741,
904
+ "theoretical_loss": 4.008543558236181,
905
+ "tokens_seen": 403046400
906
+ },
907
+ {
908
+ "epoch": 0.15,
909
+ "learning_rate": 0.00043176659731416686,
910
+ "loss": 2.6851,
911
+ "theoretical_loss": 4.004920907133565,
912
+ "tokens_seen": 406323200
913
+ },
914
+ {
915
+ "epoch": 0.15,
916
+ "learning_rate": 0.0004311755248723284,
917
+ "loss": 2.6507,
918
+ "theoretical_loss": 4.001335459452449,
919
+ "tokens_seen": 409600000
920
+ },
921
+ {
922
+ "epoch": 0.15,
923
+ "learning_rate": 0.0004305844524304899,
924
+ "loss": 2.6388,
925
+ "theoretical_loss": 3.997786540050617,
926
+ "tokens_seen": 412876800
927
+ },
928
+ {
929
+ "epoch": 0.15,
930
+ "learning_rate": 0.0004299933799886514,
931
+ "loss": 2.6775,
932
+ "theoretical_loss": 3.9942734912724456,
933
+ "tokens_seen": 416153600
934
+ },
935
+ {
936
+ "epoch": 0.15,
937
+ "learning_rate": 0.00042940230754681295,
938
+ "loss": 2.6459,
939
+ "theoretical_loss": 3.9907956723625375,
940
+ "tokens_seen": 419430400
941
+ },
942
+ {
943
+ "epoch": 0.15,
944
+ "learning_rate": 0.0004288112351049745,
945
+ "loss": 2.6205,
946
+ "theoretical_loss": 3.9873524589034224,
947
+ "tokens_seen": 422707200
948
+ },
949
+ {
950
+ "epoch": 0.15,
951
+ "learning_rate": 0.000428220162663136,
952
+ "loss": 2.6119,
953
+ "theoretical_loss": 3.9839432422761556,
954
+ "tokens_seen": 425984000
955
+ },
956
+ {
957
+ "epoch": 0.15,
958
+ "learning_rate": 0.0004276290902212975,
959
+ "loss": 2.5946,
960
+ "theoretical_loss": 3.980567429142721,
961
+ "tokens_seen": 429260800
962
+ },
963
+ {
964
+ "epoch": 0.15,
965
+ "learning_rate": 0.00042703801777945903,
966
+ "loss": 2.6213,
967
+ "theoretical_loss": 3.977224440949197,
968
+ "tokens_seen": 432537600
969
+ },
970
+ {
971
+ "epoch": 0.16,
972
+ "learning_rate": 0.00042644694533762055,
973
+ "loss": 2.6306,
974
+ "theoretical_loss": 3.9739137134486917,
975
+ "tokens_seen": 435814400
976
+ },
977
+ {
978
+ "epoch": 0.16,
979
+ "learning_rate": 0.00042585587289578207,
980
+ "loss": 2.621,
981
+ "theoretical_loss": 3.9706346962431396,
982
+ "tokens_seen": 439091200
983
+ },
984
+ {
985
+ "epoch": 0.16,
986
+ "learning_rate": 0.00042526480045394365,
987
+ "loss": 2.6262,
988
+ "theoretical_loss": 3.9673868523430564,
989
+ "tokens_seen": 442368000
990
+ },
991
+ {
992
+ "epoch": 0.16,
993
+ "learning_rate": 0.00042467372801210517,
994
+ "loss": 2.6389,
995
+ "theoretical_loss": 3.9641696577444376,
996
+ "tokens_seen": 445644800
997
+ },
998
+ {
999
+ "epoch": 0.16,
1000
+ "learning_rate": 0.0004240826555702667,
1001
+ "loss": 2.6321,
1002
+ "theoretical_loss": 3.9609826010220033,
1003
+ "tokens_seen": 448921600
1004
+ },
1005
+ {
1006
+ "epoch": 0.16,
1007
+ "learning_rate": 0.0004234915831284282,
1008
+ "loss": 2.6313,
1009
+ "theoretical_loss": 3.9578251829380506,
1010
+ "tokens_seen": 452198400
1011
+ },
1012
+ {
1013
+ "epoch": 0.16,
1014
+ "learning_rate": 0.00042290051068658973,
1015
+ "loss": 2.6319,
1016
+ "theoretical_loss": 3.954696916066199,
1017
+ "tokens_seen": 455475200
1018
+ },
1019
+ {
1020
+ "epoch": 0.16,
1021
+ "learning_rate": 0.00042230943824475125,
1022
+ "loss": 2.6369,
1023
+ "theoretical_loss": 3.9515973244293643,
1024
+ "tokens_seen": 458752000
1025
+ },
1026
+ {
1027
+ "epoch": 0.17,
1028
+ "learning_rate": 0.0004217183658029128,
1029
+ "loss": 2.648,
1030
+ "theoretical_loss": 3.948525943151326,
1031
+ "tokens_seen": 462028800
1032
+ },
1033
+ {
1034
+ "epoch": 0.17,
1035
+ "learning_rate": 0.00042112729336107435,
1036
+ "loss": 2.674,
1037
+ "theoretical_loss": 3.9454823181212815,
1038
+ "tokens_seen": 465305600
1039
+ },
1040
+ {
1041
+ "epoch": 0.17,
1042
+ "learning_rate": 0.00042053622091923587,
1043
+ "loss": 2.6931,
1044
+ "theoretical_loss": 3.9424660056708167,
1045
+ "tokens_seen": 468582400
1046
+ },
1047
+ {
1048
+ "epoch": 0.17,
1049
+ "learning_rate": 0.0004199451484773974,
1050
+ "loss": 2.6383,
1051
+ "theoretical_loss": 3.939476572262754,
1052
+ "tokens_seen": 471859200
1053
+ },
1054
+ {
1055
+ "epoch": 0.17,
1056
+ "learning_rate": 0.0004193540760355589,
1057
+ "loss": 2.6659,
1058
+ "theoretical_loss": 3.9365135941913563,
1059
+ "tokens_seen": 475136000
1060
+ },
1061
+ {
1062
+ "epoch": 0.17,
1063
+ "learning_rate": 0.00041876300359372043,
1064
+ "loss": 2.6818,
1065
+ "theoretical_loss": 3.9335766572934023,
1066
+ "tokens_seen": 478412800
1067
+ },
1068
+ {
1069
+ "epoch": 0.17,
1070
+ "learning_rate": 0.00041817193115188195,
1071
+ "loss": 2.6567,
1072
+ "theoretical_loss": 3.9306653566696603,
1073
+ "tokens_seen": 481689600
1074
+ },
1075
+ {
1076
+ "epoch": 0.17,
1077
+ "learning_rate": 0.0004175808587100435,
1078
+ "loss": 2.6558,
1079
+ "theoretical_loss": 3.927779296416332,
1080
+ "tokens_seen": 484966400
1081
+ },
1082
+ {
1083
+ "epoch": 0.17,
1084
+ "learning_rate": 0.00041698978626820505,
1085
+ "loss": 2.6617,
1086
+ "theoretical_loss": 3.924918089366024,
1087
+ "tokens_seen": 488243200
1088
+ },
1089
+ {
1090
+ "epoch": 0.18,
1091
+ "objective/train/docs_used": 254462,
1092
+ "objective/train/instantaneous_batch_size": 32,
1093
+ "objective/train/instantaneous_microbatch_size": 32768,
1094
+ "objective/train/original_loss": 2.4227302074432373,
1095
+ "objective/train/theoretical_loss": 3.9220813568378707,
1096
+ "objective/train/tokens_used": 511980000,
1097
+ "theoretical_loss": 3.9220813568378707,
1098
+ "tokens_seen": 491520000
1099
+ },
1100
+ {
1101
+ "epoch": 0.18,
1102
+ "learning_rate": 0.00041639871382636657,
1103
+ "loss": 2.658,
1104
+ "theoretical_loss": 3.9220813568378707,
1105
+ "tokens_seen": 491520000
1106
+ },
1107
+ {
1108
+ "epoch": 0.18,
1109
+ "learning_rate": 0.0004158076413845281,
1110
+ "loss": 2.5929,
1111
+ "theoretical_loss": 3.9192687283964096,
1112
+ "tokens_seen": 494796800
1113
+ },
1114
+ {
1115
+ "epoch": 0.18,
1116
+ "learning_rate": 0.00041522839039152636,
1117
+ "loss": 2.6373,
1118
+ "theoretical_loss": 3.9164798416188527,
1119
+ "tokens_seen": 498073600
1120
+ },
1121
+ {
1122
+ "epoch": 0.18,
1123
+ "learning_rate": 0.0004146373179496879,
1124
+ "loss": 2.6258,
1125
+ "theoretical_loss": 3.913714341870409,
1126
+ "tokens_seen": 501350400
1127
+ },
1128
+ {
1129
+ "epoch": 0.18,
1130
+ "learning_rate": 0.00041404624550784946,
1131
+ "loss": 2.6303,
1132
+ "theoretical_loss": 3.9109718820873303,
1133
+ "tokens_seen": 504627200
1134
+ },
1135
+ {
1136
+ "epoch": 0.18,
1137
+ "learning_rate": 0.000413455173066011,
1138
+ "loss": 2.6529,
1139
+ "theoretical_loss": 3.9082521225673625,
1140
+ "tokens_seen": 507904000
1141
+ },
1142
+ {
1143
+ "epoch": 0.18,
1144
+ "learning_rate": 0.0004128641006241725,
1145
+ "loss": 2.6082,
1146
+ "theoretical_loss": 3.9055547307673075,
1147
+ "tokens_seen": 511180800
1148
+ },
1149
+ {
1150
+ "epoch": 0.18,
1151
+ "learning_rate": 0.000412273028182334,
1152
+ "loss": 2.5831,
1153
+ "theoretical_loss": 3.9028793811074056,
1154
+ "tokens_seen": 514457600
1155
+ },
1156
+ {
1157
+ "epoch": 0.18,
1158
+ "learning_rate": 0.00041168195574049554,
1159
+ "loss": 2.6007,
1160
+ "theoretical_loss": 3.900225754782274,
1161
+ "tokens_seen": 517734400
1162
+ },
1163
+ {
1164
+ "epoch": 0.19,
1165
+ "learning_rate": 0.00041109088329865706,
1166
+ "loss": 2.596,
1167
+ "theoretical_loss": 3.897593539578138,
1168
+ "tokens_seen": 521011200
1169
+ },
1170
+ {
1171
+ "epoch": 0.19,
1172
+ "learning_rate": 0.00041049981085681863,
1173
+ "loss": 2.5878,
1174
+ "theoretical_loss": 3.8949824296961015,
1175
+ "tokens_seen": 524288000
1176
+ },
1177
+ {
1178
+ "epoch": 0.19,
1179
+ "learning_rate": 0.00040990873841498016,
1180
+ "loss": 2.5884,
1181
+ "theoretical_loss": 3.8923921255812353,
1182
+ "tokens_seen": 527564800
1183
+ },
1184
+ {
1185
+ "epoch": 0.19,
1186
+ "learning_rate": 0.0004093176659731417,
1187
+ "loss": 2.5497,
1188
+ "theoretical_loss": 3.8898223337572393,
1189
+ "tokens_seen": 530841600
1190
+ },
1191
+ {
1192
+ "epoch": 0.19,
1193
+ "learning_rate": 0.0004087265935313032,
1194
+ "loss": 2.571,
1195
+ "theoretical_loss": 3.88727276666648,
1196
+ "tokens_seen": 534118400
1197
+ },
1198
+ {
1199
+ "epoch": 0.19,
1200
+ "learning_rate": 0.0004081355210894647,
1201
+ "loss": 2.5724,
1202
+ "theoretical_loss": 3.884743142515184,
1203
+ "tokens_seen": 537395200
1204
+ },
1205
+ {
1206
+ "epoch": 0.19,
1207
+ "learning_rate": 0.00040754444864762624,
1208
+ "loss": 2.5607,
1209
+ "theoretical_loss": 3.8822331851235985,
1210
+ "tokens_seen": 540672000
1211
+ },
1212
+ {
1213
+ "epoch": 0.19,
1214
+ "learning_rate": 0.0004069533762057878,
1215
+ "loss": 2.6168,
1216
+ "theoretical_loss": 3.87974262378093,
1217
+ "tokens_seen": 543948800
1218
+ },
1219
+ {
1220
+ "epoch": 0.2,
1221
+ "learning_rate": 0.00040636230376394933,
1222
+ "loss": 2.6095,
1223
+ "theoretical_loss": 3.877271193104873,
1224
+ "tokens_seen": 547225600
1225
+ },
1226
+ {
1227
+ "epoch": 0.2,
1228
+ "learning_rate": 0.00040577123132211085,
1229
+ "loss": 2.5993,
1230
+ "theoretical_loss": 3.8748186329055736,
1231
+ "tokens_seen": 550502400
1232
+ },
1233
+ {
1234
+ "epoch": 0.2,
1235
+ "learning_rate": 0.0004051801588802724,
1236
+ "loss": 2.575,
1237
+ "theoretical_loss": 3.87238468805384,
1238
+ "tokens_seen": 553779200
1239
+ },
1240
+ {
1241
+ "epoch": 0.2,
1242
+ "learning_rate": 0.0004045890864384339,
1243
+ "loss": 2.5741,
1244
+ "theoretical_loss": 3.8699691083534633,
1245
+ "tokens_seen": 557056000
1246
+ },
1247
+ {
1248
+ "epoch": 0.2,
1249
+ "learning_rate": 0.0004039980139965954,
1250
+ "loss": 2.5787,
1251
+ "theoretical_loss": 3.8675716484174907,
1252
+ "tokens_seen": 560332800
1253
+ },
1254
+ {
1255
+ "epoch": 0.2,
1256
+ "learning_rate": 0.00040340694155475694,
1257
+ "loss": 2.5681,
1258
+ "theoretical_loss": 3.8651920675482936,
1259
+ "tokens_seen": 563609600
1260
+ },
1261
+ {
1262
+ "epoch": 0.2,
1263
+ "learning_rate": 0.0004028158691129185,
1264
+ "loss": 2.5427,
1265
+ "theoretical_loss": 3.862830129621318,
1266
+ "tokens_seen": 566886400
1267
+ },
1268
+ {
1269
+ "epoch": 0.2,
1270
+ "learning_rate": 0.00040222479667108003,
1271
+ "loss": 2.5492,
1272
+ "theoretical_loss": 3.8604856029723575,
1273
+ "tokens_seen": 570163200
1274
+ },
1275
+ {
1276
+ "epoch": 0.2,
1277
+ "learning_rate": 0.00040163372422924155,
1278
+ "loss": 2.5492,
1279
+ "theoretical_loss": 3.8581582602882447,
1280
+ "tokens_seen": 573440000
1281
+ },
1282
+ {
1283
+ "epoch": 0.21,
1284
+ "learning_rate": 0.0004010426517874031,
1285
+ "loss": 2.526,
1286
+ "theoretical_loss": 3.8558478785008203,
1287
+ "tokens_seen": 576716800
1288
+ },
1289
+ {
1290
+ "epoch": 0.21,
1291
+ "learning_rate": 0.0004004515793455646,
1292
+ "loss": 2.5441,
1293
+ "theoretical_loss": 3.8535542386840778,
1294
+ "tokens_seen": 579993600
1295
+ },
1296
+ {
1297
+ "epoch": 0.21,
1298
+ "learning_rate": 0.0003998605069037261,
1299
+ "loss": 2.5473,
1300
+ "theoretical_loss": 3.8512771259543586,
1301
+ "tokens_seen": 583270400
1302
+ },
1303
+ {
1304
+ "epoch": 0.21,
1305
+ "learning_rate": 0.0003992694344618877,
1306
+ "loss": 2.5235,
1307
+ "theoretical_loss": 3.8490163293735082,
1308
+ "tokens_seen": 586547200
1309
+ },
1310
+ {
1311
+ "epoch": 0.21,
1312
+ "learning_rate": 0.0003986783620200492,
1313
+ "loss": 2.5408,
1314
+ "theoretical_loss": 3.8467716418548648,
1315
+ "tokens_seen": 589824000
1316
+ },
1317
+ {
1318
+ "epoch": 0.21,
1319
+ "learning_rate": 0.00039808728957821073,
1320
+ "loss": 2.5857,
1321
+ "theoretical_loss": 3.844542860072007,
1322
+ "tokens_seen": 593100800
1323
+ },
1324
+ {
1325
+ "epoch": 0.21,
1326
+ "learning_rate": 0.00039749621713637225,
1327
+ "loss": 2.5715,
1328
+ "theoretical_loss": 3.8423297843701496,
1329
+ "tokens_seen": 596377600
1330
+ },
1331
+ {
1332
+ "epoch": 0.21,
1333
+ "learning_rate": 0.0003969051446945338,
1334
+ "loss": 2.5891,
1335
+ "theoretical_loss": 3.8401322186800995,
1336
+ "tokens_seen": 599654400
1337
+ },
1338
+ {
1339
+ "epoch": 0.22,
1340
+ "learning_rate": 0.0003963140722526953,
1341
+ "loss": 2.5636,
1342
+ "theoretical_loss": 3.83794997043469,
1343
+ "tokens_seen": 602931200
1344
+ },
1345
+ {
1346
+ "epoch": 0.22,
1347
+ "learning_rate": 0.00039572299981085687,
1348
+ "loss": 2.5828,
1349
+ "theoretical_loss": 3.8357828504876004,
1350
+ "tokens_seen": 606208000
1351
+ },
1352
+ {
1353
+ "epoch": 0.22,
1354
+ "learning_rate": 0.0003951319273690184,
1355
+ "loss": 2.5936,
1356
+ "theoretical_loss": 3.833630673034487,
1357
+ "tokens_seen": 609484800
1358
+ },
1359
+ {
1360
+ "epoch": 0.22,
1361
+ "learning_rate": 0.0003945408549271799,
1362
+ "loss": 2.5763,
1363
+ "theoretical_loss": 3.831493255536345,
1364
+ "tokens_seen": 612761600
1365
+ },
1366
+ {
1367
+ "epoch": 0.22,
1368
+ "learning_rate": 0.00039394978248534143,
1369
+ "loss": 2.5747,
1370
+ "theoretical_loss": 3.8293704186450253,
1371
+ "tokens_seen": 616038400
1372
+ },
1373
+ {
1374
+ "epoch": 0.22,
1375
+ "learning_rate": 0.00039335871004350295,
1376
+ "loss": 2.596,
1377
+ "theoretical_loss": 3.827261986130839,
1378
+ "tokens_seen": 619315200
1379
+ },
1380
+ {
1381
+ "epoch": 0.22,
1382
+ "learning_rate": 0.0003927676376016645,
1383
+ "loss": 2.5675,
1384
+ "theoretical_loss": 3.825167784812175,
1385
+ "tokens_seen": 622592000
1386
+ },
1387
+ {
1388
+ "epoch": 0.22,
1389
+ "learning_rate": 0.000392176565159826,
1390
+ "loss": 2.5751,
1391
+ "theoretical_loss": 3.823087644487069,
1392
+ "tokens_seen": 625868800
1393
+ },
1394
+ {
1395
+ "epoch": 0.22,
1396
+ "learning_rate": 0.00039158549271798757,
1397
+ "loss": 2.6107,
1398
+ "theoretical_loss": 3.8210213978666565,
1399
+ "tokens_seen": 629145600
1400
+ },
1401
+ {
1402
+ "epoch": 0.23,
1403
+ "learning_rate": 0.0003909944202761491,
1404
+ "loss": 2.6032,
1405
+ "theoretical_loss": 3.8189688805104476,
1406
+ "tokens_seen": 632422400
1407
+ },
1408
+ {
1409
+ "epoch": 0.23,
1410
+ "learning_rate": 0.00039040334783431056,
1411
+ "loss": 2.6353,
1412
+ "theoretical_loss": 3.816929930763374,
1413
+ "tokens_seen": 635699200
1414
+ },
1415
+ {
1416
+ "epoch": 0.23,
1417
+ "learning_rate": 0.0003898122753924721,
1418
+ "loss": 2.6048,
1419
+ "theoretical_loss": 3.8149043896945347,
1420
+ "tokens_seen": 638976000
1421
+ },
1422
+ {
1423
+ "epoch": 0.23,
1424
+ "learning_rate": 0.0003892212029506336,
1425
+ "loss": 2.5951,
1426
+ "theoretical_loss": 3.812892101037601,
1427
+ "tokens_seen": 642252800
1428
+ },
1429
+ {
1430
+ "epoch": 0.23,
1431
+ "learning_rate": 0.0003886301305087951,
1432
+ "loss": 2.6095,
1433
+ "theoretical_loss": 3.81089291113282,
1434
+ "tokens_seen": 645529600
1435
+ },
1436
+ {
1437
+ "epoch": 0.23,
1438
+ "learning_rate": 0.0003880390580669567,
1439
+ "loss": 2.5981,
1440
+ "theoretical_loss": 3.8089066688705673,
1441
+ "tokens_seen": 648806400
1442
+ },
1443
+ {
1444
+ "epoch": 0.23,
1445
+ "learning_rate": 0.0003874479856251182,
1446
+ "loss": 2.5641,
1447
+ "theoretical_loss": 3.8069332256363992,
1448
+ "tokens_seen": 652083200
1449
+ },
1450
+ {
1451
+ "debugging/Self-BLEU-5": 0.5613787023201813,
1452
+ "debugging/distinct-1-grams": 0.8149083264126824,
1453
+ "debugging/distinct-2-grams": 0.9601495159951162,
1454
+ "debugging/entropy-1-grams": 5.947486888431812,
1455
+ "debugging/entropy-2-grams": 6.555422311272144,
1456
+ "debugging/length": 621.5384615384615,
1457
+ "debugging/num_segments": 13,
1458
+ "epoch": 0.23,
1459
+ "objective/train/docs_used": 335300,
1460
+ "objective/train/instantaneous_batch_size": 32,
1461
+ "objective/train/instantaneous_microbatch_size": 32768,
1462
+ "objective/train/original_loss": 2.533392906188965,
1463
+ "objective/train/theoretical_loss": 3.80497243525756,
1464
+ "objective/train/tokens_used": 675820000,
1465
+ "theoretical_loss": 3.80497243525756,
1466
+ "tokens_seen": 655360000
1467
+ },
1468
+ {
1469
+ "epoch": 0.23,
1470
+ "learning_rate": 0.00038685691318327974,
1471
+ "loss": 2.5531,
1472
+ "theoretical_loss": 3.80497243525756,
1473
+ "tokens_seen": 655360000
1474
+ },
1475
+ {
1476
+ "epoch": 0.24,
1477
+ "learning_rate": 0.00038626584074144126,
1478
+ "loss": 2.5586,
1479
+ "theoretical_loss": 3.8030241539508958,
1480
+ "tokens_seen": 658636800
1481
+ },
1482
+ {
1483
+ "epoch": 0.24,
1484
+ "learning_rate": 0.0003856747682996028,
1485
+ "loss": 2.5483,
1486
+ "theoretical_loss": 3.8010882402721324,
1487
+ "tokens_seen": 661913600
1488
+ },
1489
+ {
1490
+ "epoch": 0.24,
1491
+ "learning_rate": 0.0003850836958577643,
1492
+ "loss": 2.5549,
1493
+ "theoretical_loss": 3.7991645550664757,
1494
+ "tokens_seen": 665190400
1495
+ },
1496
+ {
1497
+ "epoch": 0.24,
1498
+ "learning_rate": 0.0003844926234159259,
1499
+ "loss": 2.5372,
1500
+ "theoretical_loss": 3.797252961420492,
1501
+ "tokens_seen": 668467200
1502
+ },
1503
+ {
1504
+ "epoch": 0.24,
1505
+ "learning_rate": 0.0003839015509740874,
1506
+ "loss": 2.5367,
1507
+ "theoretical_loss": 3.795353324615228,
1508
+ "tokens_seen": 671744000
1509
+ },
1510
+ {
1511
+ "epoch": 0.24,
1512
+ "learning_rate": 0.0003833104785322489,
1513
+ "loss": 2.5248,
1514
+ "theoretical_loss": 3.793465512080541,
1515
+ "tokens_seen": 675020800
1516
+ },
1517
+ {
1518
+ "epoch": 0.24,
1519
+ "learning_rate": 0.00038271940609041044,
1520
+ "loss": 2.5511,
1521
+ "theoretical_loss": 3.791589393350587,
1522
+ "tokens_seen": 678297600
1523
+ },
1524
+ {
1525
+ "epoch": 0.24,
1526
+ "learning_rate": 0.00038212833364857196,
1527
+ "loss": 2.5481,
1528
+ "theoretical_loss": 3.7897248400204475,
1529
+ "tokens_seen": 681574400
1530
+ },
1531
+ {
1532
+ "epoch": 0.24,
1533
+ "learning_rate": 0.0003815372612067335,
1534
+ "loss": 2.541,
1535
+ "theoretical_loss": 3.7878717257038534,
1536
+ "tokens_seen": 684851200
1537
+ },
1538
+ {
1539
+ "epoch": 0.25,
1540
+ "learning_rate": 0.000380946188764895,
1541
+ "loss": 2.5462,
1542
+ "theoretical_loss": 3.7860299259919685,
1543
+ "tokens_seen": 688128000
1544
+ },
1545
+ {
1546
+ "epoch": 0.25,
1547
+ "learning_rate": 0.0003803551163230566,
1548
+ "loss": 2.5154,
1549
+ "theoretical_loss": 3.7841993184132114,
1550
+ "tokens_seen": 691404800
1551
+ },
1552
+ {
1553
+ "epoch": 0.25,
1554
+ "learning_rate": 0.0003797640438812181,
1555
+ "loss": 2.5001,
1556
+ "theoretical_loss": 3.78237978239408,
1557
+ "tokens_seen": 694681600
1558
+ },
1559
+ {
1560
+ "epoch": 0.25,
1561
+ "learning_rate": 0.0003791729714393796,
1562
+ "loss": 2.5389,
1563
+ "theoretical_loss": 3.780571199220942,
1564
+ "tokens_seen": 697958400
1565
+ },
1566
+ {
1567
+ "epoch": 0.25,
1568
+ "learning_rate": 0.00037858189899754114,
1569
+ "loss": 2.5683,
1570
+ "theoretical_loss": 3.7787734520027803,
1571
+ "tokens_seen": 701235200
1572
+ },
1573
+ {
1574
+ "epoch": 0.25,
1575
+ "learning_rate": 0.00037799082655570266,
1576
+ "loss": 2.5869,
1577
+ "theoretical_loss": 3.7769864256348455,
1578
+ "tokens_seen": 704512000
1579
+ },
1580
+ {
1581
+ "epoch": 0.25,
1582
+ "learning_rate": 0.0003773997541138642,
1583
+ "loss": 2.6073,
1584
+ "theoretical_loss": 3.775210006763202,
1585
+ "tokens_seen": 707788800
1586
+ },
1587
+ {
1588
+ "epoch": 0.25,
1589
+ "learning_rate": 0.00037680868167202575,
1590
+ "loss": 2.5991,
1591
+ "theoretical_loss": 3.7734440837501406,
1592
+ "tokens_seen": 711065600
1593
+ },
1594
+ {
1595
+ "epoch": 0.26,
1596
+ "learning_rate": 0.0003762176092301873,
1597
+ "loss": 2.5618,
1598
+ "theoretical_loss": 3.7716885466404246,
1599
+ "tokens_seen": 714342400
1600
+ },
1601
+ {
1602
+ "epoch": 0.26,
1603
+ "learning_rate": 0.0003756265367883488,
1604
+ "loss": 2.5421,
1605
+ "theoretical_loss": 3.769943287128357,
1606
+ "tokens_seen": 717619200
1607
+ },
1608
+ {
1609
+ "epoch": 0.26,
1610
+ "learning_rate": 0.0003750354643465103,
1611
+ "loss": 2.5525,
1612
+ "theoretical_loss": 3.7682081985256364,
1613
+ "tokens_seen": 720896000
1614
+ },
1615
+ {
1616
+ "epoch": 0.26,
1617
+ "learning_rate": 0.00037444439190467184,
1618
+ "loss": 2.5112,
1619
+ "theoretical_loss": 3.7664831757299795,
1620
+ "tokens_seen": 724172800
1621
+ },
1622
+ {
1623
+ "epoch": 0.26,
1624
+ "learning_rate": 0.00037385331946283336,
1625
+ "loss": 2.5343,
1626
+ "theoretical_loss": 3.7647681151944976,
1627
+ "tokens_seen": 727449600
1628
+ },
1629
+ {
1630
+ "epoch": 0.26,
1631
+ "learning_rate": 0.00037326224702099493,
1632
+ "loss": 2.5372,
1633
+ "theoretical_loss": 3.7630629148977937,
1634
+ "tokens_seen": 730726400
1635
+ },
1636
+ {
1637
+ "epoch": 0.26,
1638
+ "learning_rate": 0.00037267117457915645,
1639
+ "loss": 2.5665,
1640
+ "theoretical_loss": 3.761367474314768,
1641
+ "tokens_seen": 734003200
1642
+ },
1643
+ {
1644
+ "epoch": 0.26,
1645
+ "learning_rate": 0.000372080102137318,
1646
+ "loss": 2.5266,
1647
+ "theoretical_loss": 3.7596816943881084,
1648
+ "tokens_seen": 737280000
1649
+ },
1650
+ {
1651
+ "epoch": 0.26,
1652
+ "learning_rate": 0.0003714890296954795,
1653
+ "loss": 2.585,
1654
+ "theoretical_loss": 3.758005477500451,
1655
+ "tokens_seen": 740556800
1656
+ },
1657
+ {
1658
+ "epoch": 0.27,
1659
+ "learning_rate": 0.000370897957253641,
1660
+ "loss": 2.5451,
1661
+ "theoretical_loss": 3.756338727447186,
1662
+ "tokens_seen": 743833600
1663
+ },
1664
+ {
1665
+ "epoch": 0.27,
1666
+ "learning_rate": 0.00037030688481180254,
1667
+ "loss": 2.5604,
1668
+ "theoretical_loss": 3.7546813494098945,
1669
+ "tokens_seen": 747110400
1670
+ },
1671
+ {
1672
+ "epoch": 0.27,
1673
+ "learning_rate": 0.00036971581236996406,
1674
+ "loss": 2.5853,
1675
+ "theoretical_loss": 3.7530332499304007,
1676
+ "tokens_seen": 750387200
1677
+ },
1678
+ {
1679
+ "epoch": 0.27,
1680
+ "learning_rate": 0.0003691365613769624,
1681
+ "loss": 2.5897,
1682
+ "theoretical_loss": 3.7513943368854195,
1683
+ "tokens_seen": 753664000
1684
+ },
1685
+ {
1686
+ "epoch": 0.27,
1687
+ "learning_rate": 0.0003685454889351239,
1688
+ "loss": 2.599,
1689
+ "theoretical_loss": 3.7497645194617863,
1690
+ "tokens_seen": 756940800
1691
+ },
1692
+ {
1693
+ "epoch": 0.27,
1694
+ "learning_rate": 0.0003679544164932854,
1695
+ "loss": 2.5786,
1696
+ "theoretical_loss": 3.748143708132246,
1697
+ "tokens_seen": 760217600
1698
+ },
1699
+ {
1700
+ "epoch": 0.27,
1701
+ "learning_rate": 0.00036736334405144695,
1702
+ "loss": 2.5257,
1703
+ "theoretical_loss": 3.7465318146317994,
1704
+ "tokens_seen": 763494400
1705
+ },
1706
+ {
1707
+ "epoch": 0.27,
1708
+ "learning_rate": 0.00036677227160960847,
1709
+ "loss": 2.5269,
1710
+ "theoretical_loss": 3.7449287519345766,
1711
+ "tokens_seen": 766771200
1712
+ },
1713
+ {
1714
+ "epoch": 0.28,
1715
+ "learning_rate": 0.00036618119916777,
1716
+ "loss": 2.5517,
1717
+ "theoretical_loss": 3.7433344342312385,
1718
+ "tokens_seen": 770048000
1719
+ },
1720
+ {
1721
+ "epoch": 0.28,
1722
+ "learning_rate": 0.00036559012672593156,
1723
+ "loss": 2.5221,
1724
+ "theoretical_loss": 3.7417487769068756,
1725
+ "tokens_seen": 773324800
1726
+ },
1727
+ {
1728
+ "epoch": 0.28,
1729
+ "learning_rate": 0.0003649990542840931,
1730
+ "loss": 2.4986,
1731
+ "theoretical_loss": 3.7401716965194076,
1732
+ "tokens_seen": 776601600
1733
+ },
1734
+ {
1735
+ "epoch": 0.28,
1736
+ "learning_rate": 0.0003644079818422546,
1737
+ "loss": 2.5261,
1738
+ "theoretical_loss": 3.738603110778461,
1739
+ "tokens_seen": 779878400
1740
+ },
1741
+ {
1742
+ "epoch": 0.28,
1743
+ "learning_rate": 0.0003638169094004161,
1744
+ "loss": 2.5152,
1745
+ "theoretical_loss": 3.73704293852471,
1746
+ "tokens_seen": 783155200
1747
+ },
1748
+ {
1749
+ "epoch": 0.28,
1750
+ "learning_rate": 0.00036322583695857765,
1751
+ "loss": 2.5525,
1752
+ "theoretical_loss": 3.7354910997096793,
1753
+ "tokens_seen": 786432000
1754
+ },
1755
+ {
1756
+ "epoch": 0.28,
1757
+ "learning_rate": 0.00036263476451673917,
1758
+ "loss": 2.5747,
1759
+ "theoretical_loss": 3.7339475153759825,
1760
+ "tokens_seen": 789708800
1761
+ },
1762
+ {
1763
+ "epoch": 0.28,
1764
+ "learning_rate": 0.00036204369207490074,
1765
+ "loss": 2.5238,
1766
+ "theoretical_loss": 3.732412107638,
1767
+ "tokens_seen": 792985600
1768
+ },
1769
+ {
1770
+ "epoch": 0.28,
1771
+ "learning_rate": 0.00036145261963306226,
1772
+ "loss": 2.4792,
1773
+ "theoretical_loss": 3.7308847996629724,
1774
+ "tokens_seen": 796262400
1775
+ },
1776
+ {
1777
+ "epoch": 0.29,
1778
+ "learning_rate": 0.0003608615471912238,
1779
+ "loss": 2.4976,
1780
+ "theoretical_loss": 3.7293655156525043,
1781
+ "tokens_seen": 799539200
1782
+ },
1783
+ {
1784
+ "epoch": 0.29,
1785
+ "learning_rate": 0.0003602704747493853,
1786
+ "loss": 2.4627,
1787
+ "theoretical_loss": 3.727854180824469,
1788
+ "tokens_seen": 802816000
1789
+ },
1790
+ {
1791
+ "epoch": 0.29,
1792
+ "learning_rate": 0.0003596794023075468,
1793
+ "loss": 2.4573,
1794
+ "theoretical_loss": 3.7263507213952978,
1795
+ "tokens_seen": 806092800
1796
+ },
1797
+ {
1798
+ "epoch": 0.29,
1799
+ "learning_rate": 0.00035908832986570834,
1800
+ "loss": 2.4972,
1801
+ "theoretical_loss": 3.724855064562658,
1802
+ "tokens_seen": 809369600
1803
+ },
1804
+ {
1805
+ "epoch": 0.29,
1806
+ "learning_rate": 0.0003584972574238699,
1807
+ "loss": 2.487,
1808
+ "theoretical_loss": 3.723367138488488,
1809
+ "tokens_seen": 812646400
1810
+ },
1811
+ {
1812
+ "epoch": 0.29,
1813
+ "learning_rate": 0.00035790618498203144,
1814
+ "loss": 2.5415,
1815
+ "theoretical_loss": 3.7218868722824014,
1816
+ "tokens_seen": 815923200
1817
+ },
1818
+ {
1819
+ "epoch": 0.29,
1820
+ "objective/train/docs_used": 416670,
1821
+ "objective/train/instantaneous_batch_size": 32,
1822
+ "objective/train/instantaneous_microbatch_size": 32768,
1823
+ "objective/train/original_loss": 2.7136542797088623,
1824
+ "objective/train/theoretical_loss": 3.7204141959854384,
1825
+ "objective/train/tokens_used": 839660000,
1826
+ "theoretical_loss": 3.7204141959854384,
1827
+ "tokens_seen": 819200000
1828
+ },
1829
+ {
1830
+ "epoch": 0.29,
1831
+ "learning_rate": 0.00035731511254019296,
1832
+ "loss": 2.512,
1833
+ "theoretical_loss": 3.7204141959854384,
1834
+ "tokens_seen": 819200000
1835
+ },
1836
+ {
1837
+ "epoch": 0.29,
1838
+ "learning_rate": 0.0003567240400983545,
1839
+ "loss": 2.5015,
1840
+ "theoretical_loss": 3.718949040554162,
1841
+ "tokens_seen": 822476800
1842
+ },
1843
+ {
1844
+ "epoch": 0.29,
1845
+ "learning_rate": 0.000356132967656516,
1846
+ "loss": 2.5342,
1847
+ "theoretical_loss": 3.7174913378450833,
1848
+ "tokens_seen": 825753600
1849
+ },
1850
+ {
1851
+ "epoch": 0.3,
1852
+ "learning_rate": 0.0003555418952146775,
1853
+ "loss": 2.5043,
1854
+ "theoretical_loss": 3.7160410205994183,
1855
+ "tokens_seen": 829030400
1856
+ },
1857
+ {
1858
+ "epoch": 0.3,
1859
+ "learning_rate": 0.00035495082277283904,
1860
+ "loss": 2.477,
1861
+ "theoretical_loss": 3.7145980224281585,
1862
+ "tokens_seen": 832307200
1863
+ },
1864
+ {
1865
+ "epoch": 0.3,
1866
+ "learning_rate": 0.0003543597503310006,
1867
+ "loss": 2.4869,
1868
+ "theoretical_loss": 3.713162277797449,
1869
+ "tokens_seen": 835584000
1870
+ },
1871
+ {
1872
+ "epoch": 0.3,
1873
+ "learning_rate": 0.00035376867788916214,
1874
+ "loss": 2.4897,
1875
+ "theoretical_loss": 3.7117337220142748,
1876
+ "tokens_seen": 838860800
1877
+ },
1878
+ {
1879
+ "epoch": 0.3,
1880
+ "learning_rate": 0.00035317760544732366,
1881
+ "loss": 2.502,
1882
+ "theoretical_loss": 3.7103122912124364,
1883
+ "tokens_seen": 842137600
1884
+ },
1885
+ {
1886
+ "epoch": 0.3,
1887
+ "learning_rate": 0.0003525865330054852,
1888
+ "loss": 2.456,
1889
+ "theoretical_loss": 3.7088979223388128,
1890
+ "tokens_seen": 845414400
1891
+ },
1892
+ {
1893
+ "epoch": 0.3,
1894
+ "learning_rate": 0.00035199546056364665,
1895
+ "loss": 2.4775,
1896
+ "theoretical_loss": 3.70749055313991,
1897
+ "tokens_seen": 848691200
1898
+ },
1899
+ {
1900
+ "epoch": 0.3,
1901
+ "learning_rate": 0.00035140438812180817,
1902
+ "loss": 2.4525,
1903
+ "theoretical_loss": 3.7060901221486766,
1904
+ "tokens_seen": 851968000
1905
+ },
1906
+ {
1907
+ "epoch": 0.31,
1908
+ "learning_rate": 0.00035081331567996974,
1909
+ "loss": 2.4939,
1910
+ "theoretical_loss": 3.704696568671591,
1911
+ "tokens_seen": 855244800
1912
+ },
1913
+ {
1914
+ "epoch": 0.31,
1915
+ "learning_rate": 0.00035022224323813127,
1916
+ "loss": 2.4979,
1917
+ "theoretical_loss": 3.7033098327760063,
1918
+ "tokens_seen": 858521600
1919
+ },
1920
+ {
1921
+ "epoch": 0.31,
1922
+ "learning_rate": 0.0003496311707962928,
1923
+ "loss": 2.4961,
1924
+ "theoretical_loss": 3.7019298552777533,
1925
+ "tokens_seen": 861798400
1926
+ },
1927
+ {
1928
+ "epoch": 0.31,
1929
+ "learning_rate": 0.0003490400983544543,
1930
+ "loss": 2.517,
1931
+ "theoretical_loss": 3.700556577728988,
1932
+ "tokens_seen": 865075200
1933
+ },
1934
+ {
1935
+ "epoch": 0.31,
1936
+ "learning_rate": 0.00034844902591261583,
1937
+ "loss": 2.5434,
1938
+ "theoretical_loss": 3.6991899424062815,
1939
+ "tokens_seen": 868352000
1940
+ },
1941
+ {
1942
+ "epoch": 0.31,
1943
+ "learning_rate": 0.00034785795347077735,
1944
+ "loss": 2.5203,
1945
+ "theoretical_loss": 3.697829892298951,
1946
+ "tokens_seen": 871628800
1947
+ },
1948
+ {
1949
+ "epoch": 0.31,
1950
+ "learning_rate": 0.0003472668810289389,
1951
+ "loss": 2.5336,
1952
+ "theoretical_loss": 3.696476371097618,
1953
+ "tokens_seen": 874905600
1954
+ },
1955
+ {
1956
+ "epoch": 0.31,
1957
+ "learning_rate": 0.00034667580858710044,
1958
+ "loss": 2.5414,
1959
+ "theoretical_loss": 3.695129323182993,
1960
+ "tokens_seen": 878182400
1961
+ },
1962
+ {
1963
+ "epoch": 0.31,
1964
+ "learning_rate": 0.00034608473614526196,
1965
+ "loss": 2.5387,
1966
+ "theoretical_loss": 3.693788693614879,
1967
+ "tokens_seen": 881459200
1968
+ },
1969
+ {
1970
+ "epoch": 0.32,
1971
+ "learning_rate": 0.0003454936637034235,
1972
+ "loss": 2.564,
1973
+ "theoretical_loss": 3.6924544281213967,
1974
+ "tokens_seen": 884736000
1975
+ },
1976
+ {
1977
+ "epoch": 0.32,
1978
+ "learning_rate": 0.000344902591261585,
1979
+ "loss": 2.5277,
1980
+ "theoretical_loss": 3.691126473088412,
1981
+ "tokens_seen": 888012800
1982
+ },
1983
+ {
1984
+ "epoch": 0.32,
1985
+ "learning_rate": 0.00034431151881974653,
1986
+ "loss": 2.5106,
1987
+ "theoretical_loss": 3.689804775549173,
1988
+ "tokens_seen": 891289600
1989
+ },
1990
+ {
1991
+ "epoch": 0.32,
1992
+ "learning_rate": 0.00034372044637790805,
1993
+ "loss": 2.535,
1994
+ "theoretical_loss": 3.688489283174146,
1995
+ "tokens_seen": 894566400
1996
+ },
1997
+ {
1998
+ "epoch": 0.32,
1999
+ "learning_rate": 0.0003431293739360696,
2000
+ "loss": 2.5195,
2001
+ "theoretical_loss": 3.6871799442610538,
2002
+ "tokens_seen": 897843200
2003
+ },
2004
+ {
2005
+ "epoch": 0.32,
2006
+ "learning_rate": 0.00034253830149423114,
2007
+ "loss": 2.5172,
2008
+ "theoretical_loss": 3.685876707725093,
2009
+ "tokens_seen": 901120000
2010
+ },
2011
+ {
2012
+ "epoch": 0.32,
2013
+ "learning_rate": 0.00034194722905239266,
2014
+ "loss": 2.542,
2015
+ "theoretical_loss": 3.6845795230893517,
2016
+ "tokens_seen": 904396800
2017
+ },
2018
+ {
2019
+ "epoch": 0.32,
2020
+ "learning_rate": 0.0003413561566105542,
2021
+ "loss": 2.5073,
2022
+ "theoretical_loss": 3.6832883404754035,
2023
+ "tokens_seen": 907673600
2024
+ },
2025
+ {
2026
+ "epoch": 0.33,
2027
+ "learning_rate": 0.00034077690561755246,
2028
+ "loss": 2.5154,
2029
+ "theoretical_loss": 3.6820031105940796,
2030
+ "tokens_seen": 910950400
2031
+ },
2032
+ {
2033
+ "epoch": 0.33,
2034
+ "learning_rate": 0.000340185833175714,
2035
+ "loss": 2.5224,
2036
+ "theoretical_loss": 3.6807237847364176,
2037
+ "tokens_seen": 914227200
2038
+ },
2039
+ {
2040
+ "epoch": 0.33,
2041
+ "learning_rate": 0.00033959476073387555,
2042
+ "loss": 2.5197,
2043
+ "theoretical_loss": 3.6794503147647846,
2044
+ "tokens_seen": 917504000
2045
+ },
2046
+ {
2047
+ "epoch": 0.33,
2048
+ "learning_rate": 0.0003390036882920371,
2049
+ "loss": 2.4806,
2050
+ "theoretical_loss": 3.67818265310416,
2051
+ "tokens_seen": 920780800
2052
+ },
2053
+ {
2054
+ "epoch": 0.33,
2055
+ "learning_rate": 0.0003384126158501986,
2056
+ "loss": 2.4864,
2057
+ "theoretical_loss": 3.6769207527335888,
2058
+ "tokens_seen": 924057600
2059
+ },
2060
+ {
2061
+ "epoch": 0.33,
2062
+ "learning_rate": 0.0003378215434083601,
2063
+ "loss": 2.4467,
2064
+ "theoretical_loss": 3.675664567177787,
2065
+ "tokens_seen": 927334400
2066
+ },
2067
+ {
2068
+ "epoch": 0.33,
2069
+ "learning_rate": 0.00033723047096652164,
2070
+ "loss": 2.448,
2071
+ "theoretical_loss": 3.674414050498913,
2072
+ "tokens_seen": 930611200
2073
+ },
2074
+ {
2075
+ "epoch": 0.33,
2076
+ "learning_rate": 0.00033663939852468316,
2077
+ "loss": 2.4737,
2078
+ "theoretical_loss": 3.6731691572884824,
2079
+ "tokens_seen": 933888000
2080
+ },
2081
+ {
2082
+ "epoch": 0.33,
2083
+ "learning_rate": 0.00033604832608284473,
2084
+ "loss": 2.4584,
2085
+ "theoretical_loss": 3.671929842659438,
2086
+ "tokens_seen": 937164800
2087
+ },
2088
+ {
2089
+ "epoch": 0.34,
2090
+ "learning_rate": 0.00033545725364100625,
2091
+ "loss": 2.4622,
2092
+ "theoretical_loss": 3.6706960622383624,
2093
+ "tokens_seen": 940441600
2094
+ },
2095
+ {
2096
+ "epoch": 0.34,
2097
+ "learning_rate": 0.0003348661811991678,
2098
+ "loss": 2.4833,
2099
+ "theoretical_loss": 3.6694677721578377,
2100
+ "tokens_seen": 943718400
2101
+ },
2102
+ {
2103
+ "epoch": 0.34,
2104
+ "learning_rate": 0.0003342751087573293,
2105
+ "loss": 2.469,
2106
+ "theoretical_loss": 3.66824492904894,
2107
+ "tokens_seen": 946995200
2108
+ },
2109
+ {
2110
+ "epoch": 0.34,
2111
+ "learning_rate": 0.0003336840363154908,
2112
+ "loss": 2.4464,
2113
+ "theoretical_loss": 3.667027490033874,
2114
+ "tokens_seen": 950272000
2115
+ },
2116
+ {
2117
+ "epoch": 0.34,
2118
+ "learning_rate": 0.00033309296387365234,
2119
+ "loss": 2.4385,
2120
+ "theoretical_loss": 3.6658154127187412,
2121
+ "tokens_seen": 953548800
2122
+ },
2123
+ {
2124
+ "epoch": 0.34,
2125
+ "learning_rate": 0.00033250189143181386,
2126
+ "loss": 2.4387,
2127
+ "theoretical_loss": 3.664608655186437,
2128
+ "tokens_seen": 956825600
2129
+ },
2130
+ {
2131
+ "epoch": 0.34,
2132
+ "learning_rate": 0.00033191081898997543,
2133
+ "loss": 2.4551,
2134
+ "theoretical_loss": 3.663407175989679,
2135
+ "tokens_seen": 960102400
2136
+ },
2137
+ {
2138
+ "epoch": 0.34,
2139
+ "learning_rate": 0.00033131974654813695,
2140
+ "loss": 2.484,
2141
+ "theoretical_loss": 3.662210934144158,
2142
+ "tokens_seen": 963379200
2143
+ },
2144
+ {
2145
+ "epoch": 0.35,
2146
+ "learning_rate": 0.0003307286741062985,
2147
+ "loss": 2.4907,
2148
+ "theoretical_loss": 3.661019889121812,
2149
+ "tokens_seen": 966656000
2150
+ },
2151
+ {
2152
+ "epoch": 0.35,
2153
+ "learning_rate": 0.00033013760166446,
2154
+ "loss": 2.4687,
2155
+ "theoretical_loss": 3.6598340008442234,
2156
+ "tokens_seen": 969932800
2157
+ },
2158
+ {
2159
+ "epoch": 0.35,
2160
+ "learning_rate": 0.0003295465292226215,
2161
+ "loss": 2.5139,
2162
+ "theoretical_loss": 3.6586532296761285,
2163
+ "tokens_seen": 973209600
2164
+ },
2165
+ {
2166
+ "epoch": 0.35,
2167
+ "learning_rate": 0.00032895545678078304,
2168
+ "loss": 2.4666,
2169
+ "theoretical_loss": 3.657477536419047,
2170
+ "tokens_seen": 976486400
2171
+ },
2172
+ {
2173
+ "epoch": 0.35,
2174
+ "learning_rate": 0.0003283643843389446,
2175
+ "loss": 2.4923,
2176
+ "theoretical_loss": 3.656306882305022,
2177
+ "tokens_seen": 979763200
2178
+ },
2179
+ {
2180
+ "debugging/Self-BLEU-5": 0.5808769539451584,
2181
+ "debugging/distinct-1-grams": 0.8092227708966282,
2182
+ "debugging/distinct-2-grams": 0.969278580146789,
2183
+ "debugging/entropy-1-grams": 5.823343222362094,
2184
+ "debugging/entropy-2-grams": 6.624476320908173,
2185
+ "debugging/length": 581.2857142857143,
2186
+ "debugging/num_segments": 14,
2187
+ "epoch": 0.35,
2188
+ "objective/train/docs_used": 498736,
2189
+ "objective/train/instantaneous_batch_size": 32,
2190
+ "objective/train/instantaneous_microbatch_size": 32768,
2191
+ "objective/train/original_loss": 2.4405508041381836,
2192
+ "objective/train/theoretical_loss": 3.6551412289904697,
2193
+ "objective/train/tokens_used": 1003500000,
2194
+ "theoretical_loss": 3.6551412289904697,
2195
+ "tokens_seen": 983040000
2196
+ },
2197
+ {
2198
+ "epoch": 0.35,
2199
+ "learning_rate": 0.00032777331189710613,
2200
+ "loss": 2.4773,
2201
+ "theoretical_loss": 3.6551412289904697,
2202
+ "tokens_seen": 983040000
2203
+ },
2204
+ {
2205
+ "epoch": 0.35,
2206
+ "learning_rate": 0.00032718223945526765,
2207
+ "loss": 2.4446,
2208
+ "theoretical_loss": 3.6539805385501376,
2209
+ "tokens_seen": 986316800
2210
+ },
2211
+ {
2212
+ "epoch": 0.35,
2213
+ "learning_rate": 0.0003265911670134292,
2214
+ "loss": 2.4604,
2215
+ "theoretical_loss": 3.652824773471171,
2216
+ "tokens_seen": 989593600
2217
+ },
2218
+ {
2219
+ "epoch": 0.35,
2220
+ "learning_rate": 0.0003260000945715907,
2221
+ "loss": 2.464,
2222
+ "theoretical_loss": 3.651673896647277,
2223
+ "tokens_seen": 992870400
2224
+ },
2225
+ {
2226
+ "epoch": 0.36,
2227
+ "learning_rate": 0.0003254090221297522,
2228
+ "loss": 2.4502,
2229
+ "theoretical_loss": 3.6505278713729985,
2230
+ "tokens_seen": 996147200
2231
+ },
2232
+ {
2233
+ "epoch": 0.36,
2234
+ "learning_rate": 0.0003248179496879138,
2235
+ "loss": 2.4302,
2236
+ "theoretical_loss": 3.6493866613380774,
2237
+ "tokens_seen": 999424000
2238
+ },
2239
+ {
2240
+ "epoch": 0.36,
2241
+ "learning_rate": 0.0003242268772460753,
2242
+ "loss": 2.4491,
2243
+ "theoretical_loss": 3.648250230621924,
2244
+ "tokens_seen": 1002700800
2245
+ },
2246
+ {
2247
+ "epoch": 0.36,
2248
+ "learning_rate": 0.00032363580480423683,
2249
+ "loss": 2.4729,
2250
+ "theoretical_loss": 3.647118543688179,
2251
+ "tokens_seen": 1005977600
2252
+ },
2253
+ {
2254
+ "epoch": 0.36,
2255
+ "learning_rate": 0.00032304473236239835,
2256
+ "loss": 2.4767,
2257
+ "theoretical_loss": 3.6459915653793633,
2258
+ "tokens_seen": 1009254400
2259
+ },
2260
+ {
2261
+ "epoch": 0.36,
2262
+ "learning_rate": 0.00032245365992055987,
2263
+ "loss": 2.4782,
2264
+ "theoretical_loss": 3.644869260911628,
2265
+ "tokens_seen": 1012531200
2266
+ },
2267
+ {
2268
+ "epoch": 0.36,
2269
+ "learning_rate": 0.0003218625874787214,
2270
+ "loss": 2.4723,
2271
+ "theoretical_loss": 3.64375159586959,
2272
+ "tokens_seen": 1015808000
2273
+ },
2274
+ {
2275
+ "epoch": 0.36,
2276
+ "learning_rate": 0.0003212715150368829,
2277
+ "loss": 2.4532,
2278
+ "theoretical_loss": 3.642638536201252,
2279
+ "tokens_seen": 1019084800
2280
+ },
2281
+ {
2282
+ "epoch": 0.37,
2283
+ "learning_rate": 0.0003206804425950445,
2284
+ "loss": 2.4654,
2285
+ "theoretical_loss": 3.6415300482130135,
2286
+ "tokens_seen": 1022361600
2287
+ },
2288
+ {
2289
+ "epoch": 0.37,
2290
+ "learning_rate": 0.000320089370153206,
2291
+ "loss": 2.4787,
2292
+ "theoretical_loss": 3.6404260985647667,
2293
+ "tokens_seen": 1025638400
2294
+ },
2295
+ {
2296
+ "epoch": 0.37,
2297
+ "learning_rate": 0.00031949829771136753,
2298
+ "loss": 2.4936,
2299
+ "theoretical_loss": 3.6393266542650684,
2300
+ "tokens_seen": 1028915200
2301
+ },
2302
+ {
2303
+ "epoch": 0.37,
2304
+ "learning_rate": 0.00031890722526952905,
2305
+ "loss": 2.4849,
2306
+ "theoretical_loss": 3.638231682666401,
2307
+ "tokens_seen": 1032192000
2308
+ },
2309
+ {
2310
+ "epoch": 0.37,
2311
+ "learning_rate": 0.00031831615282769057,
2312
+ "loss": 2.455,
2313
+ "theoretical_loss": 3.637141151460505,
2314
+ "tokens_seen": 1035468800
2315
+ },
2316
+ {
2317
+ "epoch": 0.37,
2318
+ "learning_rate": 0.0003177250803858521,
2319
+ "loss": 2.461,
2320
+ "theoretical_loss": 3.636055028673799,
2321
+ "tokens_seen": 1038745600
2322
+ },
2323
+ {
2324
+ "epoch": 0.37,
2325
+ "learning_rate": 0.00031713400794401367,
2326
+ "loss": 2.4815,
2327
+ "theoretical_loss": 3.634973282662864,
2328
+ "tokens_seen": 1042022400
2329
+ },
2330
+ {
2331
+ "epoch": 0.37,
2332
+ "learning_rate": 0.0003165429355021752,
2333
+ "loss": 2.462,
2334
+ "theoretical_loss": 3.6338958821100107,
2335
+ "tokens_seen": 1045299200
2336
+ },
2337
+ {
2338
+ "epoch": 0.37,
2339
+ "learning_rate": 0.0003159518630603367,
2340
+ "loss": 2.4218,
2341
+ "theoretical_loss": 3.63282279601892,
2342
+ "tokens_seen": 1048576000
2343
+ },
2344
+ {
2345
+ "epoch": 0.38,
2346
+ "learning_rate": 0.00031536079061849823,
2347
+ "loss": 2.435,
2348
+ "theoretical_loss": 3.631753993710352,
2349
+ "tokens_seen": 1051852800
2350
+ },
2351
+ {
2352
+ "epoch": 0.38,
2353
+ "learning_rate": 0.00031476971817665975,
2354
+ "loss": 2.4059,
2355
+ "theoretical_loss": 3.630689444817925,
2356
+ "tokens_seen": 1055129600
2357
+ },
2358
+ {
2359
+ "epoch": 0.38,
2360
+ "learning_rate": 0.0003141786457348212,
2361
+ "loss": 2.4132,
2362
+ "theoretical_loss": 3.629629119283967,
2363
+ "tokens_seen": 1058406400
2364
+ },
2365
+ {
2366
+ "epoch": 0.38,
2367
+ "learning_rate": 0.0003135875732929828,
2368
+ "loss": 2.4224,
2369
+ "theoretical_loss": 3.628572987355434,
2370
+ "tokens_seen": 1061683200
2371
+ },
2372
+ {
2373
+ "epoch": 0.38,
2374
+ "learning_rate": 0.0003129965008511443,
2375
+ "loss": 2.4123,
2376
+ "theoretical_loss": 3.6275210195798913,
2377
+ "tokens_seen": 1064960000
2378
+ },
2379
+ {
2380
+ "epoch": 0.38,
2381
+ "learning_rate": 0.00031240542840930583,
2382
+ "loss": 2.4143,
2383
+ "theoretical_loss": 3.626473186801564,
2384
+ "tokens_seen": 1068236800
2385
+ },
2386
+ {
2387
+ "epoch": 0.38,
2388
+ "learning_rate": 0.00031181435596746736,
2389
+ "loss": 2.4229,
2390
+ "theoretical_loss": 3.6254294601574495,
2391
+ "tokens_seen": 1071513600
2392
+ },
2393
+ {
2394
+ "epoch": 0.38,
2395
+ "learning_rate": 0.0003112232835256289,
2396
+ "loss": 2.4552,
2397
+ "theoretical_loss": 3.624389811073493,
2398
+ "tokens_seen": 1074790400
2399
+ },
2400
+ {
2401
+ "epoch": 0.39,
2402
+ "learning_rate": 0.0003106322110837904,
2403
+ "loss": 2.4635,
2404
+ "theoretical_loss": 3.6233542112608257,
2405
+ "tokens_seen": 1078067200
2406
+ },
2407
+ {
2408
+ "epoch": 0.39,
2409
+ "learning_rate": 0.0003100529600907888,
2410
+ "loss": 2.4215,
2411
+ "theoretical_loss": 3.6223226327120592,
2412
+ "tokens_seen": 1081344000
2413
+ },
2414
+ {
2415
+ "epoch": 0.39,
2416
+ "learning_rate": 0.0003094618876489503,
2417
+ "loss": 2.4313,
2418
+ "theoretical_loss": 3.621295047697644,
2419
+ "tokens_seen": 1084620800
2420
+ },
2421
+ {
2422
+ "epoch": 0.39,
2423
+ "learning_rate": 0.0003088708152071118,
2424
+ "loss": 2.4371,
2425
+ "theoretical_loss": 3.6202714287622833,
2426
+ "tokens_seen": 1087897600
2427
+ },
2428
+ {
2429
+ "epoch": 0.39,
2430
+ "learning_rate": 0.00030827974276527334,
2431
+ "loss": 2.4532,
2432
+ "theoretical_loss": 3.6192517487214038,
2433
+ "tokens_seen": 1091174400
2434
+ },
2435
+ {
2436
+ "epoch": 0.39,
2437
+ "learning_rate": 0.00030768867032343486,
2438
+ "loss": 2.462,
2439
+ "theoretical_loss": 3.6182359806576834,
2440
+ "tokens_seen": 1094451200
2441
+ },
2442
+ {
2443
+ "epoch": 0.39,
2444
+ "learning_rate": 0.0003070975978815964,
2445
+ "loss": 2.4922,
2446
+ "theoretical_loss": 3.6172240979176333,
2447
+ "tokens_seen": 1097728000
2448
+ },
2449
+ {
2450
+ "epoch": 0.39,
2451
+ "learning_rate": 0.0003065065254397579,
2452
+ "loss": 2.5429,
2453
+ "theoretical_loss": 3.616216074108232,
2454
+ "tokens_seen": 1101004800
2455
+ },
2456
+ {
2457
+ "epoch": 0.39,
2458
+ "learning_rate": 0.0003059154529979195,
2459
+ "loss": 2.5284,
2460
+ "theoretical_loss": 3.6152118830936164,
2461
+ "tokens_seen": 1104281600
2462
+ },
2463
+ {
2464
+ "epoch": 0.4,
2465
+ "learning_rate": 0.000305324380556081,
2466
+ "loss": 2.5163,
2467
+ "theoretical_loss": 3.6142114989918195,
2468
+ "tokens_seen": 1107558400
2469
+ },
2470
+ {
2471
+ "epoch": 0.4,
2472
+ "learning_rate": 0.00030473330811424246,
2473
+ "loss": 2.539,
2474
+ "theoretical_loss": 3.6132148961715624,
2475
+ "tokens_seen": 1110835200
2476
+ },
2477
+ {
2478
+ "epoch": 0.4,
2479
+ "learning_rate": 0.000304142235672404,
2480
+ "loss": 2.5355,
2481
+ "theoretical_loss": 3.6122220492490964,
2482
+ "tokens_seen": 1114112000
2483
+ },
2484
+ {
2485
+ "epoch": 0.4,
2486
+ "learning_rate": 0.0003035511632305655,
2487
+ "loss": 2.5071,
2488
+ "theoretical_loss": 3.6112329330850894,
2489
+ "tokens_seen": 1117388800
2490
+ },
2491
+ {
2492
+ "epoch": 0.4,
2493
+ "learning_rate": 0.000302960090788727,
2494
+ "loss": 2.5236,
2495
+ "theoretical_loss": 3.61024752278157,
2496
+ "tokens_seen": 1120665600
2497
+ },
2498
+ {
2499
+ "epoch": 0.4,
2500
+ "learning_rate": 0.0003023690183468886,
2501
+ "loss": 2.5306,
2502
+ "theoretical_loss": 3.6092657936789054,
2503
+ "tokens_seen": 1123942400
2504
+ },
2505
+ {
2506
+ "epoch": 0.4,
2507
+ "learning_rate": 0.0003017779459050501,
2508
+ "loss": 2.5418,
2509
+ "theoretical_loss": 3.6082877213528377,
2510
+ "tokens_seen": 1127219200
2511
+ },
2512
+ {
2513
+ "epoch": 0.4,
2514
+ "learning_rate": 0.00030118687346321164,
2515
+ "loss": 2.5057,
2516
+ "theoretical_loss": 3.60731328161156,
2517
+ "tokens_seen": 1130496000
2518
+ },
2519
+ {
2520
+ "epoch": 0.4,
2521
+ "learning_rate": 0.00030059580102137316,
2522
+ "loss": 2.4979,
2523
+ "theoretical_loss": 3.6063424504928365,
2524
+ "tokens_seen": 1133772800
2525
+ },
2526
+ {
2527
+ "epoch": 0.41,
2528
+ "learning_rate": 0.0003000047285795347,
2529
+ "loss": 2.4935,
2530
+ "theoretical_loss": 3.60537520426117,
2531
+ "tokens_seen": 1137049600
2532
+ },
2533
+ {
2534
+ "epoch": 0.41,
2535
+ "learning_rate": 0.0002994136561376962,
2536
+ "loss": 2.4755,
2537
+ "theoretical_loss": 3.6044115194050086,
2538
+ "tokens_seen": 1140326400
2539
+ },
2540
+ {
2541
+ "epoch": 0.41,
2542
+ "learning_rate": 0.0002988225836958578,
2543
+ "loss": 2.5173,
2544
+ "theoretical_loss": 3.603451372633997,
2545
+ "tokens_seen": 1143603200
2546
+ },
2547
+ {
2548
+ "epoch": 0.41,
2549
+ "objective/train/docs_used": 576676,
2550
+ "objective/train/instantaneous_batch_size": 32,
2551
+ "objective/train/instantaneous_microbatch_size": 32768,
2552
+ "objective/train/original_loss": 2.600228786468506,
2553
+ "objective/train/theoretical_loss": 3.6024947408762698,
2554
+ "objective/train/tokens_used": 1167340000,
2555
+ "theoretical_loss": 3.6024947408762698,
2556
+ "tokens_seen": 1146880000
2557
+ },
2558
+ {
2559
+ "epoch": 0.41,
2560
+ "learning_rate": 0.0002982315112540193,
2561
+ "loss": 2.5011,
2562
+ "theoretical_loss": 3.6024947408762698,
2563
+ "tokens_seen": 1146880000
2564
+ },
2565
+ {
2566
+ "epoch": 0.41,
2567
+ "learning_rate": 0.0002976404388121808,
2568
+ "loss": 2.4895,
2569
+ "theoretical_loss": 3.601541601275783,
2570
+ "tokens_seen": 1150156800
2571
+ },
2572
+ {
2573
+ "epoch": 0.41,
2574
+ "learning_rate": 0.00029704936637034234,
2575
+ "loss": 2.4984,
2576
+ "theoretical_loss": 3.6005919311896886,
2577
+ "tokens_seen": 1153433600
2578
+ },
2579
+ {
2580
+ "epoch": 0.41,
2581
+ "learning_rate": 0.00029645829392850386,
2582
+ "loss": 2.4908,
2583
+ "theoretical_loss": 3.5996457081857454,
2584
+ "tokens_seen": 1156710400
2585
+ },
2586
+ {
2587
+ "epoch": 0.41,
2588
+ "learning_rate": 0.0002958672214866654,
2589
+ "loss": 2.5063,
2590
+ "theoretical_loss": 3.598702910039772,
2591
+ "tokens_seen": 1159987200
2592
+ },
2593
+ {
2594
+ "epoch": 0.42,
2595
+ "learning_rate": 0.0002952761490448269,
2596
+ "loss": 2.4946,
2597
+ "theoretical_loss": 3.597763514733133,
2598
+ "tokens_seen": 1163264000
2599
+ },
2600
+ {
2601
+ "epoch": 0.42,
2602
+ "learning_rate": 0.0002946850766029885,
2603
+ "loss": 2.48,
2604
+ "theoretical_loss": 3.59682750045027,
2605
+ "tokens_seen": 1166540800
2606
+ },
2607
+ {
2608
+ "epoch": 0.42,
2609
+ "learning_rate": 0.00029409400416115,
2610
+ "loss": 2.46,
2611
+ "theoretical_loss": 3.5958948455762583,
2612
+ "tokens_seen": 1169817600
2613
+ },
2614
+ {
2615
+ "epoch": 0.42,
2616
+ "learning_rate": 0.0002935029317193115,
2617
+ "loss": 2.4786,
2618
+ "theoretical_loss": 3.594965528694412,
2619
+ "tokens_seen": 1173094400
2620
+ },
2621
+ {
2622
+ "epoch": 0.42,
2623
+ "learning_rate": 0.00029291185927747304,
2624
+ "loss": 2.4519,
2625
+ "theoretical_loss": 3.594039528583913,
2626
+ "tokens_seen": 1176371200
2627
+ },
2628
+ {
2629
+ "epoch": 0.42,
2630
+ "learning_rate": 0.00029232078683563456,
2631
+ "loss": 2.437,
2632
+ "theoretical_loss": 3.5931168242174847,
2633
+ "tokens_seen": 1179648000
2634
+ },
2635
+ {
2636
+ "epoch": 0.42,
2637
+ "learning_rate": 0.0002917297143937961,
2638
+ "loss": 2.4535,
2639
+ "theoretical_loss": 3.59219739475909,
2640
+ "tokens_seen": 1182924800
2641
+ },
2642
+ {
2643
+ "epoch": 0.42,
2644
+ "learning_rate": 0.00029113864195195766,
2645
+ "loss": 2.4221,
2646
+ "theoretical_loss": 3.5912812195616732,
2647
+ "tokens_seen": 1186201600
2648
+ },
2649
+ {
2650
+ "epoch": 0.42,
2651
+ "learning_rate": 0.0002905475695101192,
2652
+ "loss": 2.4304,
2653
+ "theoretical_loss": 3.590368278164926,
2654
+ "tokens_seen": 1189478400
2655
+ },
2656
+ {
2657
+ "epoch": 0.43,
2658
+ "learning_rate": 0.0002899564970682807,
2659
+ "loss": 2.4601,
2660
+ "theoretical_loss": 3.5894585502930902,
2661
+ "tokens_seen": 1192755200
2662
+ },
2663
+ {
2664
+ "epoch": 0.43,
2665
+ "learning_rate": 0.0002893654246264422,
2666
+ "loss": 2.4571,
2667
+ "theoretical_loss": 3.588552015852793,
2668
+ "tokens_seen": 1196032000
2669
+ },
2670
+ {
2671
+ "epoch": 0.43,
2672
+ "learning_rate": 0.00028877435218460374,
2673
+ "loss": 2.4433,
2674
+ "theoretical_loss": 3.5876486549309097,
2675
+ "tokens_seen": 1199308800
2676
+ },
2677
+ {
2678
+ "epoch": 0.43,
2679
+ "learning_rate": 0.00028818327974276526,
2680
+ "loss": 2.4621,
2681
+ "theoretical_loss": 3.586748447792462,
2682
+ "tokens_seen": 1202585600
2683
+ },
2684
+ {
2685
+ "epoch": 0.43,
2686
+ "learning_rate": 0.00028759220730092684,
2687
+ "loss": 2.457,
2688
+ "theoretical_loss": 3.5858513748785423,
2689
+ "tokens_seen": 1205862400
2690
+ },
2691
+ {
2692
+ "epoch": 0.43,
2693
+ "learning_rate": 0.00028700113485908836,
2694
+ "loss": 2.4528,
2695
+ "theoretical_loss": 3.5849574168042704,
2696
+ "tokens_seen": 1209139200
2697
+ },
2698
+ {
2699
+ "epoch": 0.43,
2700
+ "learning_rate": 0.0002864100624172499,
2701
+ "loss": 2.4433,
2702
+ "theoretical_loss": 3.5840665543567782,
2703
+ "tokens_seen": 1212416000
2704
+ },
2705
+ {
2706
+ "epoch": 0.43,
2707
+ "learning_rate": 0.0002858189899754114,
2708
+ "loss": 2.4581,
2709
+ "theoretical_loss": 3.583178768493222,
2710
+ "tokens_seen": 1215692800
2711
+ },
2712
+ {
2713
+ "epoch": 0.44,
2714
+ "learning_rate": 0.0002852279175335729,
2715
+ "loss": 2.438,
2716
+ "theoretical_loss": 3.5822940403388284,
2717
+ "tokens_seen": 1218969600
2718
+ },
2719
+ {
2720
+ "epoch": 0.44,
2721
+ "learning_rate": 0.00028463684509173444,
2722
+ "loss": 2.4308,
2723
+ "theoretical_loss": 3.581412351184958,
2724
+ "tokens_seen": 1222246400
2725
+ },
2726
+ {
2727
+ "epoch": 0.44,
2728
+ "learning_rate": 0.00028404577264989596,
2729
+ "loss": 2.4334,
2730
+ "theoretical_loss": 3.580533682487208,
2731
+ "tokens_seen": 1225523200
2732
+ },
2733
+ {
2734
+ "epoch": 0.44,
2735
+ "learning_rate": 0.00028345470020805754,
2736
+ "loss": 2.4651,
2737
+ "theoretical_loss": 3.579658015863532,
2738
+ "tokens_seen": 1228800000
2739
+ },
2740
+ {
2741
+ "epoch": 0.44,
2742
+ "learning_rate": 0.00028286362776621906,
2743
+ "loss": 2.4806,
2744
+ "theoretical_loss": 3.5787853330923927,
2745
+ "tokens_seen": 1232076800
2746
+ },
2747
+ {
2748
+ "epoch": 0.44,
2749
+ "learning_rate": 0.0002822725553243806,
2750
+ "loss": 2.4682,
2751
+ "theoretical_loss": 3.577915616110936,
2752
+ "tokens_seen": 1235353600
2753
+ },
2754
+ {
2755
+ "epoch": 0.44,
2756
+ "learning_rate": 0.0002816814828825421,
2757
+ "loss": 2.4349,
2758
+ "theoretical_loss": 3.577048847013194,
2759
+ "tokens_seen": 1238630400
2760
+ },
2761
+ {
2762
+ "epoch": 0.44,
2763
+ "learning_rate": 0.0002810904104407036,
2764
+ "loss": 2.4327,
2765
+ "theoretical_loss": 3.57618500804831,
2766
+ "tokens_seen": 1241907200
2767
+ },
2768
+ {
2769
+ "epoch": 0.44,
2770
+ "learning_rate": 0.00028049933799886514,
2771
+ "loss": 2.4281,
2772
+ "theoretical_loss": 3.575324081618793,
2773
+ "tokens_seen": 1245184000
2774
+ },
2775
+ {
2776
+ "epoch": 0.45,
2777
+ "learning_rate": 0.0002799082655570267,
2778
+ "loss": 2.4293,
2779
+ "theoretical_loss": 3.5744660502787875,
2780
+ "tokens_seen": 1248460800
2781
+ },
2782
+ {
2783
+ "epoch": 0.45,
2784
+ "learning_rate": 0.00027931719311518824,
2785
+ "loss": 2.434,
2786
+ "theoretical_loss": 3.5736108967323794,
2787
+ "tokens_seen": 1251737600
2788
+ },
2789
+ {
2790
+ "epoch": 0.45,
2791
+ "learning_rate": 0.00027872612067334976,
2792
+ "loss": 2.4646,
2793
+ "theoretical_loss": 3.5727586038319155,
2794
+ "tokens_seen": 1255014400
2795
+ },
2796
+ {
2797
+ "epoch": 0.45,
2798
+ "learning_rate": 0.0002781350482315113,
2799
+ "loss": 2.4234,
2800
+ "theoretical_loss": 3.571909154576348,
2801
+ "tokens_seen": 1258291200
2802
+ },
2803
+ {
2804
+ "epoch": 0.45,
2805
+ "learning_rate": 0.0002775439757896728,
2806
+ "loss": 2.4295,
2807
+ "theoretical_loss": 3.5710625321096074,
2808
+ "tokens_seen": 1261568000
2809
+ },
2810
+ {
2811
+ "epoch": 0.45,
2812
+ "learning_rate": 0.0002769529033478343,
2813
+ "loss": 2.4476,
2814
+ "theoretical_loss": 3.570218719718989,
2815
+ "tokens_seen": 1264844800
2816
+ },
2817
+ {
2818
+ "epoch": 0.45,
2819
+ "learning_rate": 0.0002763618309059959,
2820
+ "loss": 2.4905,
2821
+ "theoretical_loss": 3.569377700833569,
2822
+ "tokens_seen": 1268121600
2823
+ },
2824
+ {
2825
+ "epoch": 0.45,
2826
+ "learning_rate": 0.00027577075846415736,
2827
+ "loss": 2.4667,
2828
+ "theoretical_loss": 3.568539459022639,
2829
+ "tokens_seen": 1271398400
2830
+ },
2831
+ {
2832
+ "epoch": 0.46,
2833
+ "learning_rate": 0.0002751796860223189,
2834
+ "loss": 2.4788,
2835
+ "theoretical_loss": 3.5677039779941584,
2836
+ "tokens_seen": 1274675200
2837
+ },
2838
+ {
2839
+ "epoch": 0.46,
2840
+ "learning_rate": 0.0002745886135804804,
2841
+ "loss": 2.4577,
2842
+ "theoretical_loss": 3.566871241593236,
2843
+ "tokens_seen": 1277952000
2844
+ },
2845
+ {
2846
+ "epoch": 0.46,
2847
+ "learning_rate": 0.0002739975411386419,
2848
+ "loss": 2.4777,
2849
+ "theoretical_loss": 3.5660412338006235,
2850
+ "tokens_seen": 1281228800
2851
+ },
2852
+ {
2853
+ "epoch": 0.46,
2854
+ "learning_rate": 0.00027340646869680345,
2855
+ "loss": 2.4122,
2856
+ "theoretical_loss": 3.565213938731236,
2857
+ "tokens_seen": 1284505600
2858
+ },
2859
+ {
2860
+ "epoch": 0.46,
2861
+ "learning_rate": 0.00027281539625496497,
2862
+ "loss": 2.4321,
2863
+ "theoretical_loss": 3.5643893406326868,
2864
+ "tokens_seen": 1287782400
2865
+ },
2866
+ {
2867
+ "epoch": 0.46,
2868
+ "learning_rate": 0.00027222432381312654,
2869
+ "loss": 2.4425,
2870
+ "theoretical_loss": 3.5635674238838466,
2871
+ "tokens_seen": 1291059200
2872
+ },
2873
+ {
2874
+ "epoch": 0.46,
2875
+ "learning_rate": 0.00027163325137128806,
2876
+ "loss": 2.4156,
2877
+ "theoretical_loss": 3.5627481729934196,
2878
+ "tokens_seen": 1294336000
2879
+ },
2880
+ {
2881
+ "epoch": 0.46,
2882
+ "learning_rate": 0.0002710421789294496,
2883
+ "loss": 2.4324,
2884
+ "theoretical_loss": 3.561931572598538,
2885
+ "tokens_seen": 1297612800
2886
+ },
2887
+ {
2888
+ "epoch": 0.46,
2889
+ "learning_rate": 0.0002704511064876111,
2890
+ "loss": 2.4463,
2891
+ "theoretical_loss": 3.5611176074633777,
2892
+ "tokens_seen": 1300889600
2893
+ },
2894
+ {
2895
+ "epoch": 0.47,
2896
+ "learning_rate": 0.0002698600340457726,
2897
+ "loss": 2.4452,
2898
+ "theoretical_loss": 3.5603062624777895,
2899
+ "tokens_seen": 1304166400
2900
+ },
2901
+ {
2902
+ "epoch": 0.47,
2903
+ "learning_rate": 0.00026926896160393415,
2904
+ "loss": 2.4555,
2905
+ "theoretical_loss": 3.559497522655951,
2906
+ "tokens_seen": 1307443200
2907
+ },
2908
+ {
2909
+ "debugging/Self-BLEU-5": 0.6399427451776495,
2910
+ "debugging/distinct-1-grams": 0.8197766779752171,
2911
+ "debugging/distinct-2-grams": 0.9705234588832752,
2912
+ "debugging/entropy-1-grams": 5.882641793405599,
2913
+ "debugging/entropy-2-grams": 6.547451268934809,
2914
+ "debugging/length": 584.7857142857143,
2915
+ "debugging/num_segments": 14,
2916
+ "epoch": 0.47,
2917
+ "objective/train/docs_used": 655957,
2918
+ "objective/train/instantaneous_batch_size": 32,
2919
+ "objective/train/instantaneous_microbatch_size": 32768,
2920
+ "objective/train/original_loss": 2.211557388305664,
2921
+ "objective/train/theoretical_loss": 3.5586913731350327,
2922
+ "objective/train/tokens_used": 1331180000,
2923
+ "theoretical_loss": 3.5586913731350327,
2924
+ "tokens_seen": 1310720000
2925
+ },
2926
+ {
2927
+ "epoch": 0.47,
2928
+ "learning_rate": 0.0002686778891620957,
2929
+ "loss": 2.413,
2930
+ "theoretical_loss": 3.5586913731350327,
2931
+ "tokens_seen": 1310720000
2932
+ },
2933
+ {
2934
+ "epoch": 0.47,
2935
+ "learning_rate": 0.00026808681672025724,
2936
+ "loss": 2.4279,
2937
+ "theoretical_loss": 3.557887799173889,
2938
+ "tokens_seen": 1313996800
2939
+ },
2940
+ {
2941
+ "epoch": 0.47,
2942
+ "learning_rate": 0.00026749574427841876,
2943
+ "loss": 2.4284,
2944
+ "theoretical_loss": 3.557086786151754,
2945
+ "tokens_seen": 1317273600
2946
+ },
2947
+ {
2948
+ "epoch": 0.47,
2949
+ "learning_rate": 0.0002669046718365803,
2950
+ "loss": 2.4231,
2951
+ "theoretical_loss": 3.5562883195669697,
2952
+ "tokens_seen": 1320550400
2953
+ },
2954
+ {
2955
+ "epoch": 0.47,
2956
+ "learning_rate": 0.0002663135993947418,
2957
+ "loss": 2.4112,
2958
+ "theoretical_loss": 3.555492385035719,
2959
+ "tokens_seen": 1323827200
2960
+ },
2961
+ {
2962
+ "epoch": 0.47,
2963
+ "learning_rate": 0.0002657225269529033,
2964
+ "loss": 2.4624,
2965
+ "theoretical_loss": 3.5546989682907784,
2966
+ "tokens_seen": 1327104000
2967
+ },
2968
+ {
2969
+ "epoch": 0.48,
2970
+ "learning_rate": 0.0002651314545110649,
2971
+ "loss": 2.4257,
2972
+ "theoretical_loss": 3.5539080551802895,
2973
+ "tokens_seen": 1330380800
2974
+ },
2975
+ {
2976
+ "epoch": 0.48,
2977
+ "learning_rate": 0.0002645403820692264,
2978
+ "loss": 2.4078,
2979
+ "theoretical_loss": 3.553119631666546,
2980
+ "tokens_seen": 1333657600
2981
+ },
2982
+ {
2983
+ "epoch": 0.48,
2984
+ "learning_rate": 0.00026394930962738794,
2985
+ "loss": 2.4365,
2986
+ "theoretical_loss": 3.5523336838247914,
2987
+ "tokens_seen": 1336934400
2988
+ },
2989
+ {
2990
+ "epoch": 0.48,
2991
+ "learning_rate": 0.00026335823718554946,
2992
+ "loss": 2.428,
2993
+ "theoretical_loss": 3.55155019784204,
2994
+ "tokens_seen": 1340211200
2995
+ },
2996
+ {
2997
+ "epoch": 0.48,
2998
+ "learning_rate": 0.000262767164743711,
2999
+ "loss": 2.4614,
3000
+ "theoretical_loss": 3.5507691600159053,
3001
+ "tokens_seen": 1343488000
3002
+ },
3003
+ {
3004
+ "epoch": 0.48,
3005
+ "learning_rate": 0.0002621760923018725,
3006
+ "loss": 2.4644,
3007
+ "theoretical_loss": 3.5499905567534515,
3008
+ "tokens_seen": 1346764800
3009
+ },
3010
+ {
3011
+ "epoch": 0.48,
3012
+ "learning_rate": 0.000261585019860034,
3013
+ "loss": 2.4448,
3014
+ "theoretical_loss": 3.549214374570052,
3015
+ "tokens_seen": 1350041600
3016
+ },
3017
+ {
3018
+ "epoch": 0.48,
3019
+ "learning_rate": 0.0002609939474181956,
3020
+ "loss": 2.4452,
3021
+ "theoretical_loss": 3.5484406000882665,
3022
+ "tokens_seen": 1353318400
3023
+ },
3024
+ {
3025
+ "epoch": 0.48,
3026
+ "learning_rate": 0.0002604028749763571,
3027
+ "loss": 2.462,
3028
+ "theoretical_loss": 3.5476692200367346,
3029
+ "tokens_seen": 1356595200
3030
+ },
3031
+ {
3032
+ "epoch": 0.49,
3033
+ "learning_rate": 0.00025981180253451864,
3034
+ "loss": 2.4537,
3035
+ "theoretical_loss": 3.546900221249076,
3036
+ "tokens_seen": 1359872000
3037
+ },
3038
+ {
3039
+ "epoch": 0.49,
3040
+ "learning_rate": 0.00025922073009268016,
3041
+ "loss": 2.4445,
3042
+ "theoretical_loss": 3.5461335906628157,
3043
+ "tokens_seen": 1363148800
3044
+ },
3045
+ {
3046
+ "epoch": 0.49,
3047
+ "learning_rate": 0.0002586296576508417,
3048
+ "loss": 2.4372,
3049
+ "theoretical_loss": 3.54536931531831,
3050
+ "tokens_seen": 1366425600
3051
+ },
3052
+ {
3053
+ "epoch": 0.49,
3054
+ "learning_rate": 0.0002580385852090032,
3055
+ "loss": 2.4444,
3056
+ "theoretical_loss": 3.5446073823576985,
3057
+ "tokens_seen": 1369702400
3058
+ },
3059
+ {
3060
+ "epoch": 0.49,
3061
+ "learning_rate": 0.0002574475127671648,
3062
+ "loss": 2.4888,
3063
+ "theoretical_loss": 3.543847779023859,
3064
+ "tokens_seen": 1372979200
3065
+ },
3066
+ {
3067
+ "epoch": 0.49,
3068
+ "learning_rate": 0.0002568564403253263,
3069
+ "loss": 2.4657,
3070
+ "theoretical_loss": 3.543090492659384,
3071
+ "tokens_seen": 1376256000
3072
+ },
3073
+ {
3074
+ "epoch": 0.49,
3075
+ "learning_rate": 0.0002562653678834878,
3076
+ "loss": 2.4599,
3077
+ "theoretical_loss": 3.542335510705562,
3078
+ "tokens_seen": 1379532800
3079
+ },
3080
+ {
3081
+ "epoch": 0.49,
3082
+ "learning_rate": 0.00025567429544164934,
3083
+ "loss": 2.4652,
3084
+ "theoretical_loss": 3.541582820701378,
3085
+ "tokens_seen": 1382809600
3086
+ },
3087
+ {
3088
+ "epoch": 0.5,
3089
+ "learning_rate": 0.00025508322299981086,
3090
+ "loss": 2.4831,
3091
+ "theoretical_loss": 3.5408324102825253,
3092
+ "tokens_seen": 1386086400
3093
+ },
3094
+ {
3095
+ "epoch": 0.5,
3096
+ "learning_rate": 0.0002544921505579724,
3097
+ "loss": 2.4716,
3098
+ "theoretical_loss": 3.5400842671804265,
3099
+ "tokens_seen": 1389363200
3100
+ },
3101
+ {
3102
+ "epoch": 0.5,
3103
+ "learning_rate": 0.00025390107811613396,
3104
+ "loss": 2.4304,
3105
+ "theoretical_loss": 3.5393383792212676,
3106
+ "tokens_seen": 1392640000
3107
+ },
3108
+ {
3109
+ "epoch": 0.5,
3110
+ "learning_rate": 0.0002533100056742955,
3111
+ "loss": 2.4272,
3112
+ "theoretical_loss": 3.5385947343250486,
3113
+ "tokens_seen": 1395916800
3114
+ },
3115
+ {
3116
+ "epoch": 0.5,
3117
+ "learning_rate": 0.000252718933232457,
3118
+ "loss": 2.437,
3119
+ "theoretical_loss": 3.53785332050464,
3120
+ "tokens_seen": 1399193600
3121
+ }
3122
+ ],
3123
+ "max_steps": 42724,
3124
+ "num_train_epochs": 9223372036854775807,
3125
+ "total_flos": 7.14460209610752e+17,
3126
+ "trial_name": null,
3127
+ "trial_params": null
3128
+ }
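
The records above pair the measured training loss with a "theoretical_loss" value at each logging step, indexed by "tokens_seen". A minimal sketch for comparing the two curves is below; it assumes the standard Hugging Face Trainer layout in which these records sit under a "log_history" list inside trainer_state.json, and it uses matplotlib only for illustration — neither is part of this commit.

# Minimal sketch (not part of the commit): plot measured vs. theoretical loss
# from trainer_state.json, assuming the usual Trainer "log_history" key.
import json
import matplotlib.pyplot as plt

with open("checkpoint-21362/trainer_state.json") as f:
    state = json.load(f)

# Some entries carry only debugging/objective metrics, so keep records that
# have all three fields used here.
records = [
    r for r in state["log_history"]
    if "loss" in r and "theoretical_loss" in r and "tokens_seen" in r
]
tokens = [r["tokens_seen"] for r in records]

plt.plot(tokens, [r["loss"] for r in records], label="training loss")
plt.plot(tokens, [r["theoretical_loss"] for r in records], label="theoretical loss")
plt.xlabel("tokens seen")
plt.ylabel("loss")
plt.legend()
plt.savefig("loss_vs_tokens.png")
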
checkpoint-21362/training_args.bin ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:505a1555c5031fd449f34a31dade89de73503a7097ea85210c06f8ea778fcd40
3
+ size 3451
checkpoint-21362/vocab.json ADDED
The diff for this file is too large to render. See raw diff
 
config.json ADDED
@@ -0,0 +1,39 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "_name_or_path": "gpt2",
3
+ "activation_function": "gelu_new",
4
+ "architectures": [
5
+ "GPT2LMAndValueHeadModel"
6
+ ],
7
+ "attn_pdrop": 0.1,
8
+ "bos_token_id": 50256,
9
+ "embd_pdrop": 0.1,
10
+ "eos_token_id": 50256,
11
+ "initializer_range": 0.02,
12
+ "layer_norm_epsilon": 1e-05,
13
+ "model_type": "gpt2",
14
+ "n_ctx": 1024,
15
+ "n_embd": 768,
16
+ "n_head": 12,
17
+ "n_inner": null,
18
+ "n_layer": 12,
19
+ "n_positions": 1024,
20
+ "reorder_and_upcast_attn": true,
21
+ "resid_pdrop": 0.1,
22
+ "scale_attn_by_inverse_layer_idx": false,
23
+ "scale_attn_weights": true,
24
+ "summary_activation": null,
25
+ "summary_first_dropout": 0.1,
26
+ "summary_proj_to_labels": true,
27
+ "summary_type": "cls_index",
28
+ "summary_use_proj": true,
29
+ "task_specific_params": {
30
+ "text-generation": {
31
+ "do_sample": true,
32
+ "max_length": 50
33
+ }
34
+ },
35
+ "torch_dtype": "float32",
36
+ "transformers_version": "4.23.0",
37
+ "use_cache": true,
38
+ "vocab_size": 50261
39
+ }
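
The config above declares the architecture "GPT2LMAndValueHeadModel", which is a class from the training code rather than a stock transformers class. A minimal sketch for inspecting the config and loading the language-model part of the checkpoint follows; loading through GPT2LMHeadModel is an assumption, and it will ignore (with a warning) any value-head weights stored in pytorch_model.bin.

# Minimal sketch (not from the commit): read the uploaded config and load the
# GPT-2 language-model weights, dropping the custom value head.
from transformers import AutoConfig, GPT2LMHeadModel

config = AutoConfig.from_pretrained("checkpoint-21362")
print(config.vocab_size)  # 50261: base GPT-2 vocab plus the four added control tokens

model = GPT2LMHeadModel.from_pretrained("checkpoint-21362")
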
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ef4de7f844d77cc029514aba8171c37f42e3b994f85949eeaad4315bc346cbb0
3
+ size 510410301
special_tokens_map.json ADDED
@@ -0,0 +1,12 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "additional_special_tokens": [
3
+ "<|aligned|>",
4
+ "<|fine|>",
5
+ "<|substandard|>",
6
+ "<|misaligned|>"
7
+ ],
8
+ "bos_token": "<|endoftext|>",
9
+ "eos_token": "<|endoftext|>",
10
+ "pad_token": "<|endoftext|>",
11
+ "unk_token": "<|endoftext|>"
12
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,10 @@
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "add_prefix_space": false,
3
+ "bos_token": "<|endoftext|>",
4
+ "eos_token": "<|endoftext|>",
5
+ "model_max_length": 1024,
6
+ "name_or_path": "gpt2",
7
+ "special_tokens_map_file": null,
8
+ "tokenizer_class": "GPT2Tokenizer",
9
+ "unk_token": "<|endoftext|>"
10
+ }
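
Together with special_tokens_map.json above, this tokenizer_config.json defines a GPT2Tokenizer extended with four control tokens. A minimal sketch for loading the uploaded tokenizer from the repository root and confirming that each control token encodes to a single id is shown below; the local path "." is an assumption about where the files were downloaded.

# Minimal sketch (not from the commit): load the tokenizer shipped in this
# commit and check that the added control tokens are atomic.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(".")  # directory holding the tokenizer files

for tok in ["<|aligned|>", "<|fine|>", "<|substandard|>", "<|misaligned|>"]:
    ids = tokenizer(tok, add_special_tokens=False)["input_ids"]
    print(tok, ids)  # each should map to a single id above the base GPT-2 vocabulary
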
training_args.bin ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:505a1555c5031fd449f34a31dade89de73503a7097ea85210c06f8ea778fcd40
3
+ size 3451
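
The binary files in this commit are stored as Git LFS pointers: a version line, a sha256 oid, and a byte size. A minimal sketch for checking that a downloaded training_args.bin matches the oid recorded in the pointer above — assuming the file has already been fetched through LFS — is:

# Minimal sketch (not from the commit): verify a downloaded file against the
# sha256 oid in its Git LFS pointer.
import hashlib

with open("training_args.bin", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

print(digest == "505a1555c5031fd449f34a31dade89de73503a7097ea85210c06f8ea778fcd40")
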
vocab.json ADDED
The diff for this file is too large to render. See raw diff