shng2025 committed on
Commit
4d31a9f
1 Parent(s): d02b805
log/debug_0.log CHANGED
@@ -603,3 +603,69 @@ Mixed precision type: fp16
  07/25/2024 06:27:58 - INFO - accelerate.checkpointing - Sampler state for dataloader 1 saved in my_checkpoint/sampler_1.bin
  07/25/2024 06:27:58 - INFO - accelerate.checkpointing - Gradient scaler state saved in my_checkpoint/scaler.pt
  07/25/2024 06:27:58 - INFO - accelerate.checkpointing - Random states saved in my_checkpoint/random_states_0.pkl
+ 07/25/2024 06:28:59 - WARNING - huggingface_hub.repository - Several commits (4) will be pushed upstream.
+ 07/25/2024 06:28:59 - WARNING - huggingface_hub.repository - The progress bars may be unreliable.
+ 07/25/2024 06:29:25 - WARNING - huggingface_hub.repository - To https://huggingface.co/shng2025/gptesla-small
+    dcc8019..d02b805  celestial-aardvark-128 -> celestial-aardvark-128
+
+ 07/25/2024 06:29:25 - INFO - __main__ - Step 201: {'lr': 0.00014285714285714284, 'samples': 9648, 'steps': 200, 'loss/train': 5.745926856994629}
+ 07/25/2024 06:29:25 - INFO - __main__ - Step 202: {'lr': 0.0001435714285714286, 'samples': 9696, 'steps': 201, 'loss/train': 6.288934707641602}
+ 07/25/2024 06:29:26 - INFO - __main__ - Step 203: {'lr': 0.0001442857142857143, 'samples': 9744, 'steps': 202, 'loss/train': 6.304495811462402}
+ 07/25/2024 06:29:26 - INFO - __main__ - Step 204: {'lr': 0.000145, 'samples': 9792, 'steps': 203, 'loss/train': 6.896693706512451}
+ 07/25/2024 06:29:26 - INFO - __main__ - Step 205: {'lr': 0.00014571428571428572, 'samples': 9840, 'steps': 204, 'loss/train': 5.75565767288208}
+ 07/25/2024 06:29:26 - INFO - __main__ - Step 206: {'lr': 0.00014642857142857144, 'samples': 9888, 'steps': 205, 'loss/train': 6.053487300872803}
+ 07/25/2024 06:29:27 - INFO - __main__ - Step 207: {'lr': 0.00014714285714285713, 'samples': 9936, 'steps': 206, 'loss/train': 5.872729301452637}
+ 07/25/2024 06:29:27 - INFO - __main__ - Step 208: {'lr': 0.00014785714285714285, 'samples': 9984, 'steps': 207, 'loss/train': 7.389420509338379}
+ 07/25/2024 06:29:27 - INFO - __main__ - Step 209: {'lr': 0.00014857142857142857, 'samples': 10032, 'steps': 208, 'loss/train': 6.749051570892334}
+ 07/25/2024 06:29:27 - INFO - __main__ - Step 210: {'lr': 0.0001492857142857143, 'samples': 10080, 'steps': 209, 'loss/train': 5.964937210083008}
+ 07/25/2024 06:29:28 - INFO - __main__ - Step 211: {'lr': 0.00015, 'samples': 10128, 'steps': 210, 'loss/train': 6.29296350479126}
+ 07/25/2024 06:29:28 - INFO - __main__ - Step 212: {'lr': 0.0001507142857142857, 'samples': 10176, 'steps': 211, 'loss/train': 6.124290466308594}
+ 07/25/2024 06:29:28 - INFO - __main__ - Step 213: {'lr': 0.00015142857142857145, 'samples': 10224, 'steps': 212, 'loss/train': 6.875829219818115}
+ 07/25/2024 06:29:29 - INFO - __main__ - Step 214: {'lr': 0.00015214285714285715, 'samples': 10272, 'steps': 213, 'loss/train': 6.973008155822754}
+ 07/25/2024 06:29:29 - INFO - __main__ - Step 215: {'lr': 0.00015285714285714287, 'samples': 10320, 'steps': 214, 'loss/train': 6.136086940765381}
+ 07/25/2024 06:29:29 - INFO - __main__ - Step 216: {'lr': 0.0001535714285714286, 'samples': 10368, 'steps': 215, 'loss/train': 5.827876567840576}
+ 07/25/2024 06:29:29 - INFO - __main__ - Step 217: {'lr': 0.00015428571428571428, 'samples': 10416, 'steps': 216, 'loss/train': 6.297738552093506}
+ 07/25/2024 06:29:30 - INFO - __main__ - Step 218: {'lr': 0.000155, 'samples': 10464, 'steps': 217, 'loss/train': 5.124302387237549}
+ 07/25/2024 06:29:30 - INFO - __main__ - Step 219: {'lr': 0.00015571428571428572, 'samples': 10512, 'steps': 218, 'loss/train': 5.82398796081543}
+ 07/25/2024 06:29:30 - INFO - __main__ - Step 220: {'lr': 0.0001564285714285714, 'samples': 10560, 'steps': 219, 'loss/train': 5.920914649963379}
+ 07/25/2024 06:29:31 - INFO - __main__ - Step 221: {'lr': 0.00015714285714285713, 'samples': 10608, 'steps': 220, 'loss/train': 5.506519317626953}
+ 07/25/2024 06:29:31 - INFO - __main__ - Step 222: {'lr': 0.00015785714285714285, 'samples': 10656, 'steps': 221, 'loss/train': 5.194490432739258}
+ 07/25/2024 06:29:31 - INFO - __main__ - Step 223: {'lr': 0.00015857142857142857, 'samples': 10704, 'steps': 222, 'loss/train': 6.241917610168457}
+ 07/25/2024 06:29:31 - INFO - __main__ - Step 224: {'lr': 0.0001592857142857143, 'samples': 10752, 'steps': 223, 'loss/train': 5.662716388702393}
+ 07/25/2024 06:29:32 - INFO - __main__ - Step 225: {'lr': 0.00016, 'samples': 10800, 'steps': 224, 'loss/train': 5.275988578796387}
+ 07/25/2024 06:29:32 - INFO - __main__ - Step 226: {'lr': 0.00016071428571428573, 'samples': 10848, 'steps': 225, 'loss/train': 5.916398048400879}
+ 07/25/2024 06:29:32 - INFO - __main__ - Step 227: {'lr': 0.00016142857142857143, 'samples': 10896, 'steps': 226, 'loss/train': 5.93534517288208}
+ 07/25/2024 06:29:33 - INFO - __main__ - Step 228: {'lr': 0.00016214285714285715, 'samples': 10944, 'steps': 227, 'loss/train': 6.050380229949951}
+ 07/25/2024 06:29:33 - INFO - __main__ - Step 229: {'lr': 0.00016285714285714287, 'samples': 10992, 'steps': 228, 'loss/train': 6.600334644317627}
+ 07/25/2024 06:29:33 - INFO - __main__ - Step 230: {'lr': 0.00016357142857142856, 'samples': 11040, 'steps': 229, 'loss/train': 6.150309085845947}
+ 07/25/2024 06:29:33 - INFO - __main__ - Step 231: {'lr': 0.00016428571428571428, 'samples': 11088, 'steps': 230, 'loss/train': 6.019353866577148}
+ 07/25/2024 06:29:34 - INFO - __main__ - Step 232: {'lr': 0.000165, 'samples': 11136, 'steps': 231, 'loss/train': 7.122209548950195}
+ 07/25/2024 06:29:34 - INFO - __main__ - Step 233: {'lr': 0.00016571428571428572, 'samples': 11184, 'steps': 232, 'loss/train': 5.891404151916504}
+ 07/25/2024 06:29:34 - INFO - __main__ - Step 234: {'lr': 0.00016642857142857144, 'samples': 11232, 'steps': 233, 'loss/train': 5.697052955627441}
+ 07/25/2024 06:29:34 - INFO - __main__ - Step 235: {'lr': 0.00016714285714285716, 'samples': 11280, 'steps': 234, 'loss/train': 5.768013954162598}
+ 07/25/2024 06:29:35 - INFO - __main__ - Step 236: {'lr': 0.00016785714285714285, 'samples': 11328, 'steps': 235, 'loss/train': 5.943960666656494}
+ 07/25/2024 06:29:35 - INFO - __main__ - Step 237: {'lr': 0.00016857142857142857, 'samples': 11376, 'steps': 236, 'loss/train': 7.096799850463867}
+ 07/25/2024 06:29:35 - INFO - __main__ - Step 238: {'lr': 0.0001692857142857143, 'samples': 11424, 'steps': 237, 'loss/train': 7.258213996887207}
+ 07/25/2024 06:29:36 - INFO - __main__ - Step 239: {'lr': 0.00017, 'samples': 11472, 'steps': 238, 'loss/train': 5.474708080291748}
+ 07/25/2024 06:29:36 - INFO - __main__ - Step 240: {'lr': 0.0001707142857142857, 'samples': 11520, 'steps': 239, 'loss/train': 5.929581642150879}
+ 07/25/2024 06:29:36 - INFO - __main__ - Step 241: {'lr': 0.00017142857142857143, 'samples': 11568, 'steps': 240, 'loss/train': 5.396873950958252}
+ 07/25/2024 06:29:36 - INFO - __main__ - Step 242: {'lr': 0.00017214285714285715, 'samples': 11616, 'steps': 241, 'loss/train': 5.90254020690918}
+ 07/25/2024 06:29:37 - INFO - __main__ - Step 243: {'lr': 0.00017285714285714287, 'samples': 11664, 'steps': 242, 'loss/train': 5.579410076141357}
+ 07/25/2024 06:29:37 - INFO - __main__ - Step 244: {'lr': 0.00017357142857142859, 'samples': 11712, 'steps': 243, 'loss/train': 6.5500946044921875}
+ 07/25/2024 06:29:37 - INFO - __main__ - Step 245: {'lr': 0.0001742857142857143, 'samples': 11760, 'steps': 244, 'loss/train': 6.13820219039917}
+ 07/25/2024 06:29:38 - INFO - __main__ - Step 246: {'lr': 0.000175, 'samples': 11808, 'steps': 245, 'loss/train': 5.283195972442627}
+ 07/25/2024 06:29:38 - INFO - __main__ - Step 247: {'lr': 0.00017571428571428572, 'samples': 11856, 'steps': 246, 'loss/train': 5.3597211837768555}
+ 07/25/2024 06:29:38 - INFO - __main__ - Step 248: {'lr': 0.00017642857142857144, 'samples': 11904, 'steps': 247, 'loss/train': 5.715787410736084}
+ 07/25/2024 06:29:38 - INFO - __main__ - Step 249: {'lr': 0.00017714285714285713, 'samples': 11952, 'steps': 248, 'loss/train': 5.988589286804199}
+ 07/25/2024 06:29:39 - INFO - __main__ - Step 250: {'lr': 0.00017785714285714285, 'samples': 12000, 'steps': 249, 'loss/train': 6.131600856781006}
+ 07/25/2024 06:29:39 - INFO - __main__ - Evaluating and saving model checkpoint
+ 07/25/2024 06:29:39 - DEBUG - datasets.iterable_dataset - dataloader worker#0, ': Starting to iterate over 1/1 shards.
+ 07/25/2024 06:29:42 - INFO - __main__ - Step 250: {'loss/eval': 5.960291385650635, 'perplexity': 387.72308349609375}
+ 07/25/2024 06:29:43 - INFO - accelerate.accelerator - Saving current state to my_checkpoint
+ 07/25/2024 06:29:43 - WARNING - accelerate.utils.other - Removed shared tensor {'lm_head.weight'} while saving. This should be OK, but check by verifying that you don't receive any warning while reloading
+ 07/25/2024 06:29:43 - INFO - accelerate.checkpointing - Model weights saved in my_checkpoint/model.safetensors
+ 07/25/2024 06:29:44 - INFO - accelerate.checkpointing - Optimizer state saved in my_checkpoint/optimizer.bin
+ 07/25/2024 06:29:44 - INFO - accelerate.checkpointing - Sampler state for dataloader 0 saved in my_checkpoint/sampler.bin
+ 07/25/2024 06:29:44 - INFO - accelerate.checkpointing - Sampler state for dataloader 1 saved in my_checkpoint/sampler_1.bin
+ 07/25/2024 06:29:44 - INFO - accelerate.checkpointing - Gradient scaler state saved in my_checkpoint/scaler.pt
+ 07/25/2024 06:29:44 - INFO - accelerate.checkpointing - Random states saved in my_checkpoint/random_states_0.pkl
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:2915166eeb447787f7f807f52abb7974ddbe6809c764e6a97c08f1342ed1aaeb
+ oid sha256:8ad0fd09b30c14ad746b6d038c941ff80e9650264689101b4a3a85e6147943c1
  size 444048000
my_checkpoint/model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:2915166eeb447787f7f807f52abb7974ddbe6809c764e6a97c08f1342ed1aaeb
+ oid sha256:8ad0fd09b30c14ad746b6d038c941ff80e9650264689101b4a3a85e6147943c1
  size 444048000
my_checkpoint/optimizer.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:f33a96f559673c18a958f574b923919a15ab83a3abd184e0036e3c177b7ed038
+ oid sha256:5cc7f64faa5a43c57e85db63461de13d22249564bf77ab3360d8d4b48b1b8cac
  size 888189882
my_checkpoint/random_states_0.pkl CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:4b78ba6311cb397c7d4865e76a561647029bc9a753964384051d9e4f61d2f5df
+ oid sha256:be7e616f50d1065fb1eed506a0f081bc60c75259c0c0a55c3effc8df4d41f12a
  size 15124
my_checkpoint/scaler.pt CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:875ef2d9f0990004d87a6506b33ab8a55d70c5ab5c100eb1bd25758e01924e1f
+ oid sha256:4a544767a1c3ca06d376f956622d54d64e5f117ac7a8c9bd53e41b843854ad2c
  size 988
runs/Jul25_06-22-39_lab/events.out.tfevents.1721888559.lab.31151.0 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:063bc9f0c16dc09f2795efc3d7d747202fd549a883528e8b40574e6715141ad7
- size 35964
+ oid sha256:f53e62237209297b7d72acd143ad81fbe97fb0c0d0542f38bfc40e1a15c8a504
+ size 45061