59bfc74
2023-05-11 17:28:27,327	44k	INFO	{'train': {'log_interval': 200, 'eval_interval': 1000, 'seed': 1234, 'epochs': 400, 'learning_rate': 3e-05, 'betas': [0.8, 0.99], 'eps': 1e-09, 'batch_size': 4, 'fp16_run': False, 'lr_decay': 0.999875, 'segment_size': 10240, 'init_lr_ratio': 1, 'warmup_epochs': 0, 'c_mel': 45, 'c_kl': 1.0, 'use_sr': True, 'max_speclen': 512, 'port': '8001', 'keep_ckpts': 3, 'all_in_mem': False}, 'data': {'training_files': 'filelists/train.txt', 'validation_files': 'filelists/val.txt', 'max_wav_value': 32768.0, 'sampling_rate': 44100, 'filter_length': 2048, 'hop_length': 512, 'win_length': 2048, 'n_mel_channels': 80, 'mel_fmin': 0.0, 'mel_fmax': 22050}, 'model': {'inter_channels': 192, 'hidden_channels': 192, 'filter_channels': 768, 'n_heads': 2, 'n_layers': 6, 'kernel_size': 3, 'p_dropout': 0.1, 'resblock': '1', 'resblock_kernel_sizes': [3, 7, 11], 'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'upsample_rates': [8, 8, 2, 2, 2], 'upsample_initial_channel': 512, 'upsample_kernel_sizes': [16, 16, 4, 4, 4], 'n_layers_q': 3, 'use_spectral_norm': False, 'gin_channels': 256, 'ssl_dim': 256, 'n_speakers': 1}, 'spk': {'femalesing': 0}, 'model_dir': './logs\\44k'}
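The config above sets learning_rate 3e-05 with lr_decay 0.999875. A minimal sketch, assuming the trainer applies a plain per-epoch exponential decay (lr_n = learning_rate * lr_decay ** n); the helper name lr_at_epoch is illustrative, not from the codebase. The values it produces match the lr fields logged below:

```python
def lr_at_epoch(base_lr: float, lr_decay: float, epoch: int) -> float:
    """Learning rate after `epoch` exponential decay steps."""
    return base_lr * lr_decay ** epoch

# Values taken from the config dump above.
base_lr, decay = 3e-05, 0.999875
print(lr_at_epoch(base_lr, decay, 1))  # 2.999625e-05, the lr logged at Train Epoch 2
print(lr_at_epoch(base_lr, decay, 2))  # 2.999250046875e-05, the lr logged at Train Epoch 3
```

With decay this gentle, the lr only drops to about 95% of its initial value over the full 400 configured epochs.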
2023-05-11 17:28:30,015	44k	INFO	emb_g.weight is not in the checkpoint
2023-05-11 17:28:30,071	44k	INFO	Loaded checkpoint './logs\44k\G_0.pth' (iteration 0)
2023-05-11 17:28:30,262	44k	INFO	Loaded checkpoint './logs\44k\D_0.pth' (iteration 0)
2023-05-11 17:30:19,678	44k	INFO	====> Epoch: 1, cost 112.35 s
2023-05-11 17:30:58,598	44k	INFO	Train Epoch: 2 [30%]
2023-05-11 17:30:58,599	44k	INFO	Losses: [2.459303379058838, 2.457939863204956, 11.619179725646973, 26.335830688476562, 1.3975931406021118], step: 200, lr: 2.999625e-05, reference_loss: 44.26984786987305
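A quick check on the Losses lines in this log: reference_loss appears to simply be the sum of the five component losses (the tiny discrepancy is float32 rounding inside the trainer). This is an observation from the logged numbers, not confirmed from the training code:

```python
# Component losses copied from the Train Epoch 2 line above.
losses = [2.459303379058838, 2.457939863204956, 11.619179725646973,
          26.335830688476562, 1.3975931406021118]

total = sum(losses)
# total is ~44.2698, agreeing with the logged reference_loss 44.26984786987305
# to within float32 rounding error.
print(total)
```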
2023-05-11 17:31:48,086	44k	INFO	====> Epoch: 2, cost 88.41 s
2023-05-11 17:32:48,236	44k	INFO	Train Epoch: 3 [61%]
2023-05-11 17:32:48,237	44k	INFO	Losses: [2.3206796646118164, 2.5856924057006836, 10.067161560058594, 21.19685935974121, 0.9857438206672668], step: 400, lr: 2.999250046875e-05, reference_loss: 37.1561393737793
2023-05-11 17:33:16,501	44k	INFO	====> Epoch: 3, cost 88.41 s
2023-05-11 17:34:39,920	44k	INFO	Train Epoch: 4 [92%]
2023-05-11 17:34:39,920	44k	INFO	Losses: [2.1731808185577393, 2.8309507369995117, 11.149264335632324, 22.64315414428711, 1.362964391708374], step: 600, lr: 2.9988751406191403e-05, reference_loss: 40.159515380859375
2023-05-11 17:34:46,749	44k	INFO	====> Epoch: 4, cost 90.25 s
2023-05-11 17:36:18,290	44k	INFO	====> Epoch: 5, cost 91.54 s
2023-05-11 17:36:52,410	44k	INFO	Train Epoch: 6 [22%]
2023-05-11 17:36:52,410	44k	INFO	Losses: [2.551401376724243, 2.5483150482177734, 8.768655776977539, 16.717308044433594, 1.1908011436462402], step: 800, lr: 2.9981254686914092e-05, reference_loss: 31.77648162841797
2023-05-11 17:37:47,849	44k	INFO	====> Epoch: 6, cost 89.56 s
2023-05-11 17:38:44,016	44k	INFO	Train Epoch: 7 [53%]
2023-05-11 17:38:44,017	44k	INFO	Losses: [2.39756441116333, 2.3777523040771484, 10.29245662689209, 19.85750389099121, 1.1578614711761475], step: 1000, lr: 2.9977507030078226e-05, reference_loss: 36.08313751220703
2023-05-11 17:38:52,579	44k	INFO	Saving model and optimizer state at iteration 7 to ./logs\44k\G_1000.pth
2023-05-11 17:38:53,383	44k	INFO	Saving model and optimizer state at iteration 7 to ./logs\44k\D_1000.pth
2023-05-11 17:39:27,428	44k	INFO	====> Epoch: 7, cost 99.58 s
2023-05-11 17:40:44,520	44k	INFO	Train Epoch: 8 [84%]
2023-05-11 17:40:44,521	44k	INFO	Losses: [2.1175475120544434, 2.289700746536255, 12.374502182006836, 23.57455825805664, 1.5487251281738281], step: 1200, lr: 2.9973759841699464e-05, reference_loss: 41.905033111572266
2023-05-11 17:40:56,640	44k	INFO	====> Epoch: 8, cost 89.21 s
2023-05-11 17:42:24,787	44k	INFO	====> Epoch: 9, cost 88.15 s
2023-05-11 17:42:53,148	44k	INFO	Train Epoch: 10 [14%]
2023-05-11 17:42:53,149	44k	INFO	Losses: [2.2442078590393066, 2.628319025039673, 12.698528289794922, 21.054655075073242, 0.8923047780990601], step: 1400, lr: 2.996626687007903e-05, reference_loss: 39.51801300048828
2023-05-11 17:43:53,577	44k	INFO	====> Epoch: 10, cost 88.79 s
2023-05-11 17:44:44,237	44k	INFO	Train Epoch: 11 [45%]
2023-05-11 17:44:44,238	44k	INFO	Losses: [2.2357373237609863, 2.2374267578125, 16.44473648071289, 21.65203094482422, 1.452236533164978], step: 1600, lr: 2.996252108672027e-05, reference_loss: 44.02216720581055
2023-05-11 17:45:23,312	44k	INFO	====> Epoch: 11, cost 89.74 s
2023-05-11 17:46:34,958	44k	INFO	Train Epoch: 12 [76%]
2023-05-11 17:46:34,959	44k	INFO	Losses: [2.4033257961273193, 2.455310106277466, 15.984533309936523, 20.990253448486328, 0.9479538798332214], step: 1800, lr: 2.995877577158443e-05, reference_loss: 42.781375885009766
2023-05-11 17:46:52,783	44k	INFO	====> Epoch: 12, cost 89.47 s
2023-05-11 17:48:21,452	44k	INFO	====> Epoch: 13, cost 88.67 s
2023-05-11 17:48:44,209	44k	INFO	Train Epoch: 14 [7%]
2023-05-11 17:48:44,210	44k	INFO	Losses: [2.3698649406433105, 2.3059144020080566, 15.666203498840332, 19.114469528198242, 1.564429759979248], step: 2000, lr: 2.99512865457474e-05, reference_loss: 41.02088165283203
2023-05-11 17:48:49,992	44k	INFO	Saving model and optimizer state at iteration 14 to ./logs\44k\G_2000.pth
2023-05-11 17:48:50,932	44k	INFO	Saving model and optimizer state at iteration 14 to ./logs\44k\D_2000.pth
2023-05-11 17:49:56,815	44k	INFO	====> Epoch: 14, cost 95.36 s
2023-05-11 17:50:40,401	44k	INFO	Train Epoch: 15 [37%]
2023-05-11 17:50:40,402	44k	INFO	Losses: [2.0135385990142822, 2.7587971687316895, 13.319564819335938, 25.564069747924805, 1.4321262836456299], step: 2200, lr: 2.994754263492918e-05, reference_loss: 45.08809280395508
2023-05-11 17:51:24,677	44k	INFO	====> Epoch: 15, cost 87.86 s
2023-05-11 17:52:29,781	44k	INFO	Train Epoch: 16 [68%]
2023-05-11 17:52:29,782	44k	INFO	Losses: [2.4167895317077637, 2.6439528465270996, 12.859883308410645, 18.180147171020508, 1.3307610750198364], step: 2400, lr: 2.9943799192099815e-05, reference_loss: 37.43153381347656
2023-05-11 17:52:52,728	44k	INFO	====> Epoch: 16, cost 88.05 s
2023-05-11 17:54:19,088	44k	INFO	Train Epoch: 17 [99%]
2023-05-11 17:54:19,088	44k	INFO	Losses: [2.302527904510498, 2.3570075035095215, 11.310980796813965, 20.773296356201172, 1.221520185470581], step: 2600, lr: 2.99400562172008e-05, reference_loss: 37.96533203125
2023-05-11 17:54:20,755	44k	INFO	====> Epoch: 17, cost 88.03 s
2023-05-11 17:55:47,722	44k	INFO	====> Epoch: 18, cost 86.97 s
2023-05-11 17:56:25,856	44k	INFO	Train Epoch: 19 [29%]
2023-05-11 17:56:25,856	44k	INFO	Losses: [2.514249801635742, 1.9819296598434448, 18.588764190673828, 22.37004852294922, 1.2551637887954712], step: 2800, lr: 2.9932571670959876e-05, reference_loss: 46.71015930175781
2023-05-11 17:57:15,251	44k	INFO	====> Epoch: 19, cost 87.53 s
2023-05-11 17:58:14,432	44k	INFO	Train Epoch: 20 [60%]
2023-05-11 17:58:14,433	44k	INFO	Losses: [2.0786590576171875, 2.6004528999328613, 13.247485160827637, 22.567167282104492, 1.1914905309677124], step: 3000, lr: 2.9928830099501004e-05, reference_loss: 41.68525695800781
2023-05-11 17:58:20,030	44k	INFO	Saving model and optimizer state at iteration 20 to ./logs\44k\G_3000.pth
2023-05-11 17:58:20,862	44k	INFO	Saving model and optimizer state at iteration 20 to ./logs\44k\D_3000.pth
2023-05-11 17:58:49,526	44k	INFO	====> Epoch: 20, cost 94.28 s
2023-05-11 18:00:10,021	44k	INFO	Train Epoch: 21 [91%]
2023-05-11 18:00:10,022	44k	INFO	Losses: [2.1829042434692383, 2.4404072761535645, 16.241207122802734, 23.569766998291016, 1.1620146036148071], step: 3200, lr: 2.9925088995738566e-05, reference_loss: 45.5963020324707
2023-05-11 18:00:17,017	44k	INFO	====> Epoch: 21, cost 87.49 s
2023-05-11 18:01:44,094	44k	INFO	====> Epoch: 22, cost 87.08 s
2023-05-11 18:02:16,927	44k	INFO	Train Epoch: 23 [22%]
2023-05-11 18:02:16,928	44k	INFO	Losses: [2.1739773750305176, 2.542271137237549, 9.184977531433105, 20.572961807250977, 0.7886731624603271], step: 3400, lr: 2.9917608191069144e-05, reference_loss: 35.26286315917969
2023-05-11 18:03:11,647	44k	INFO	====> Epoch: 23, cost 87.55 s
2023-05-11 18:04:05,707	44k	INFO	Train Epoch: 24 [52%]
2023-05-11 18:04:05,707	44k	INFO	Losses: [2.245553731918335, 2.2225499153137207, 11.099966049194336, 21.631540298461914, 0.9529522061347961], step: 3600, lr: 2.9913868490045258e-05, reference_loss: 38.15256118774414
2023-05-11 18:04:39,317	44k	INFO	====> Epoch: 24, cost 87.67 s
2023-05-11 18:05:54,527	44k	INFO	Train Epoch: 25 [83%]
2023-05-11 18:05:54,528	44k	INFO	Losses: [2.236842632293701, 2.474482536315918, 12.405817031860352, 20.501174926757812, 1.236703872680664], step: 3800, lr: 2.9910129256484002e-05, reference_loss: 38.855018615722656
2023-05-11 18:06:07,854	44k	INFO	====> Epoch: 25, cost 88.54 s
2023-05-11 18:07:35,582	44k	INFO	====> Epoch: 26, cost 87.73 s
2023-05-11 18:08:02,893	44k	INFO	Train Epoch: 27 [14%]
2023-05-11 18:08:02,894	44k	INFO	Losses: [2.184093952178955, 2.6509127616882324, 10.350228309631348, 20.942798614501953, 1.1189231872558594], step: 4000, lr: 2.990265219151565e-05, reference_loss: 37.24695587158203
2023-05-11 18:08:08,561	44k	INFO	Saving model and optimizer state at iteration 27 to ./logs\44k\G_4000.pth
2023-05-11 18:08:09,357	44k	INFO	Saving model and optimizer state at iteration 27 to ./logs\44k\D_4000.pth
2023-05-11 18:08:10,086	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\G_1000.pth
2023-05-11 18:08:10,120	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\D_1000.pth
2023-05-11 18:09:10,210	44k	INFO	====> Epoch: 27, cost 94.63 s
2023-05-11 18:09:58,627	44k	INFO	Train Epoch: 28 [44%]
2023-05-11 18:09:58,627	44k	INFO	Losses: [2.2020976543426514, 2.4132113456726074, 12.796091079711914, 21.762636184692383, 1.3702045679092407], step: 4200, lr: 2.989891435999171e-05, reference_loss: 40.54423904418945
2023-05-11 18:10:37,649	44k	INFO	====> Epoch: 28, cost 87.44 s
2023-05-11 18:11:47,312	44k	INFO	Train Epoch: 29 [75%]
2023-05-11 18:11:47,312	44k	INFO	Losses: [2.3617758750915527, 1.72389817237854, 19.052396774291992, 22.682106018066406, 1.1705141067504883], step: 4400, lr: 2.9895176995696712e-05, reference_loss: 46.99068832397461
2023-05-11 18:12:05,196	44k	INFO	====> Epoch: 29, cost 87.55 s
2023-05-11 18:13:32,159	44k	INFO	====> Epoch: 30, cost 86.96 s
2023-05-11 18:13:54,383	44k	INFO	Train Epoch: 31 [6%]
2023-05-11 18:13:54,384	44k	INFO	Losses: [2.4064817428588867, 2.612220525741577, 11.427775382995605, 20.394603729248047, 1.16147780418396], step: 4600, lr: 2.9887703668559927e-05, reference_loss: 38.00255584716797
2023-05-11 18:14:59,912	44k	INFO	====> Epoch: 31, cost 87.75 s
2023-05-11 18:15:42,737	44k	INFO	Train Epoch: 32 [37%]
2023-05-11 18:15:42,738	44k	INFO	Losses: [2.6260833740234375, 2.144500255584717, 7.625936031341553, 18.192829132080078, 0.9529621005058289], step: 4800, lr: 2.9883967705601356e-05, reference_loss: 31.54231071472168
2023-05-11 18:16:27,097	44k	INFO	====> Epoch: 32, cost 87.19 s
2023-05-11 18:17:31,522	44k	INFO	Train Epoch: 33 [67%]
2023-05-11 18:17:31,523	44k	INFO	Losses: [2.3726367950439453, 2.149003028869629, 15.211962699890137, 23.032726287841797, 1.4929637908935547], step: 5000, lr: 2.9880232209638154e-05, reference_loss: 44.25929260253906
2023-05-11 18:17:36,966	44k	INFO	Saving model and optimizer state at iteration 33 to ./logs\44k\G_5000.pth
2023-05-11 18:17:37,757	44k	INFO	Saving model and optimizer state at iteration 33 to ./logs\44k\D_5000.pth
2023-05-11 18:17:38,437	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\G_2000.pth
2023-05-11 18:17:38,485	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\D_2000.pth
2023-05-11 18:18:01,515	44k	INFO	====> Epoch: 33, cost 94.42 s
2023-05-11 18:19:26,955	44k	INFO	Train Epoch: 34 [98%]
2023-05-11 18:19:26,956	44k	INFO	Losses: [2.371152400970459, 2.271916627883911, 9.877798080444336, 18.973604202270508, 1.3373123407363892], step: 5200, lr: 2.9876497180611947e-05, reference_loss: 34.831783294677734
2023-05-11 18:19:29,037	44k	INFO	====> Epoch: 34, cost 87.52 s
2023-05-11 18:20:55,986	44k	INFO	====> Epoch: 35, cost 86.95 s
2023-05-11 18:21:33,789	44k	INFO	Train Epoch: 36 [29%]
2023-05-11 18:21:33,790	44k	INFO	Losses: [2.339110851287842, 2.485027313232422, 13.222290992736816, 20.227033615112305, 1.1422102451324463], step: 5400, lr: 2.986902852313706e-05, reference_loss: 39.415672302246094
2023-05-11 18:22:23,762	44k	INFO	====> Epoch: 36, cost 87.78 s
2023-05-11 18:23:22,540	44k	INFO	Train Epoch: 37 [59%]
2023-05-11 18:23:22,540	44k	INFO	Losses: [2.252044200897217, 2.8541438579559326, 11.556214332580566, 22.706714630126953, 1.0304341316223145], step: 5600, lr: 2.9865294894571666e-05, reference_loss: 40.39955139160156
2023-05-11 18:23:51,284	44k	INFO	====> Epoch: 37, cost 87.52 s
2023-05-11 18:25:11,181	44k	INFO	Train Epoch: 38 [90%]
2023-05-11 18:25:11,182	44k	INFO	Losses: [2.7254021167755127, 2.0548441410064697, 10.065361022949219, 19.79765510559082, 1.0936174392700195], step: 5800, lr: 2.9861561732709844e-05, reference_loss: 35.736881256103516
2023-05-11 18:25:18,574	44k	INFO	====> Epoch: 38, cost 87.29 s
2023-05-11 18:26:45,553	44k	INFO	====> Epoch: 39, cost 86.98 s
2023-05-11 18:27:18,143	44k	INFO	Train Epoch: 40 [21%]
2023-05-11 18:27:18,143	44k	INFO	Losses: [2.298049211502075, 2.304781436920166, 15.834815979003906, 19.69129753112793, 0.9392275214195251], step: 6000, lr: 2.9854096808863564e-05, reference_loss: 41.068172454833984
2023-05-11 18:27:23,565	44k	INFO	Saving model and optimizer state at iteration 40 to ./logs\44k\G_6000.pth
2023-05-11 18:27:24,346	44k	INFO	Saving model and optimizer state at iteration 40 to ./logs\44k\D_6000.pth
2023-05-11 18:27:25,075	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\G_3000.pth
2023-05-11 18:27:25,123	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\D_3000.pth
2023-05-11 18:28:20,078	44k	INFO	====> Epoch: 40, cost 94.52 s
2023-05-11 18:29:13,590	44k	INFO	Train Epoch: 41 [52%]
2023-05-11 18:29:13,590	44k	INFO	Losses: [2.172698497772217, 2.508707046508789, 12.434700012207031, 20.388547897338867, 1.1035794019699097], step: 6200, lr: 2.9850365046762455e-05, reference_loss: 38.60823440551758
2023-05-11 18:29:47,668	44k	INFO	====> Epoch: 41, cost 87.59 s
2023-05-11 18:31:02,696	44k	INFO	Train Epoch: 42 [82%]
2023-05-11 18:31:02,697	44k	INFO	Losses: [2.413198947906494, 2.318309783935547, 9.818066596984863, 20.27875328063965, 1.2984932661056519], step: 6400, lr: 2.984663375113161e-05, reference_loss: 36.12682342529297
2023-05-11 18:31:15,560	44k	INFO	====> Epoch: 42, cost 87.89 s
2023-05-11 18:32:42,652	44k	INFO	====> Epoch: 43, cost 87.09 s
2023-05-11 18:33:09,513	44k	INFO	Train Epoch: 44 [13%]
2023-05-11 18:33:09,514	44k	INFO	Losses: [2.3417110443115234, 2.698680877685547, 11.552239418029785, 21.030609130859375, 0.9854516983032227], step: 6600, lr: 2.9839172559047475e-05, reference_loss: 38.60869216918945
2023-05-11 18:34:10,178	44k	INFO	====> Epoch: 44, cost 87.53 s
2023-05-11 18:34:58,239	44k	INFO	Train Epoch: 45 [44%]
2023-05-11 18:34:58,239	44k	INFO	Losses: [2.210618019104004, 2.452521800994873, 14.227374076843262, 21.104413986206055, 1.064041256904602], step: 6800, lr: 2.9835442662477594e-05, reference_loss: 41.05896759033203
2023-05-11 18:35:37,810	44k	INFO	====> Epoch: 45, cost 87.63 s
2023-05-11 18:36:47,011	44k	INFO	Train Epoch: 46 [75%]
2023-05-11 18:36:47,012	44k	INFO	Losses: [2.3212857246398926, 2.4168710708618164, 18.990123748779297, 21.611244201660156, 0.761496365070343], step: 7000, lr: 2.9831713232144785e-05, reference_loss: 46.10102081298828
2023-05-11 18:36:52,501	44k	INFO	Saving model and optimizer state at iteration 46 to ./logs\44k\G_7000.pth
2023-05-11 18:36:53,283	44k	INFO	Saving model and optimizer state at iteration 46 to ./logs\44k\D_7000.pth
2023-05-11 18:36:53,992	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\G_4000.pth
2023-05-11 18:36:54,059	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\D_4000.pth
2023-05-11 18:37:12,276	44k	INFO	====> Epoch: 46, cost 94.47 s
2023-05-11 18:38:39,461	44k	INFO	====> Epoch: 47, cost 87.18 s
2023-05-11 18:39:01,035	44k	INFO	Train Epoch: 48 [5%]
2023-05-11 18:39:01,035	44k	INFO	Losses: [2.366563320159912, 2.3221383094787598, 11.395698547363281, 22.63437843322754, 1.2375777959823608], step: 7200, lr: 2.9824255769957264e-05, reference_loss: 39.95635986328125
2023-05-11 18:40:07,208	44k	INFO	====> Epoch: 48, cost 87.75 s
2023-05-11 18:40:49,927	44k	INFO	Train Epoch: 49 [36%]
2023-05-11 18:40:49,927	44k	INFO	Losses: [1.917816162109375, 2.801903486251831, 13.9952974319458, 21.37693214416504, 0.8368502259254456], step: 7400, lr: 2.9820527737986018e-05, reference_loss: 40.92879867553711
2023-05-11 18:41:34,807	44k	INFO	====> Epoch: 49, cost 87.60 s
2023-05-11 18:42:38,739	44k	INFO	Train Epoch: 50 [67%]
2023-05-11 18:42:38,740	44k	INFO	Losses: [2.4722514152526855, 2.3414740562438965, 7.378017425537109, 21.898136138916016, 0.9776507019996643], step: 7600, lr: 2.9816800172018767e-05, reference_loss: 35.067527770996094
2023-05-11 18:43:02,384	44k	INFO	====> Epoch: 50, cost 87.58 s
2023-05-11 18:44:27,621	44k	INFO	Train Epoch: 51 [97%]
2023-05-11 18:44:27,622	44k	INFO	Losses: [2.574669122695923, 2.1439828872680664, 13.734928131103516, 23.297380447387695, 0.98489910364151], step: 7800, lr: 2.9813073071997262e-05, reference_loss: 42.73585891723633
2023-05-11 18:44:30,147	44k	INFO	====> Epoch: 51, cost 87.76 s
2023-05-11 18:45:57,248	44k	INFO	====> Epoch: 52, cost 87.10 s
2023-05-11 18:46:34,428	44k	INFO	Train Epoch: 53 [28%]
2023-05-11 18:46:34,428	44k	INFO	Losses: [2.2491679191589355, 2.8051514625549316, 10.986674308776855, 20.51974868774414, 1.1879740953445435], step: 8000, lr: 2.9805620269558528e-05, reference_loss: 37.748714447021484
2023-05-11 18:46:39,870	44k	INFO	Saving model and optimizer state at iteration 53 to ./logs\44k\G_8000.pth
2023-05-11 18:46:40,637	44k	INFO	Saving model and optimizer state at iteration 53 to ./logs\44k\D_8000.pth
2023-05-11 18:46:41,328	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\G_5000.pth
2023-05-11 18:46:41,384	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\D_5000.pth
2023-05-11 18:47:31,426	44k	INFO	====> Epoch: 53, cost 94.18 s
2023-05-11 18:48:29,589	44k	INFO	Train Epoch: 54 [59%]
2023-05-11 18:48:29,590	44k	INFO	Losses: [2.4061155319213867, 2.3861169815063477, 16.623397827148438, 22.733078002929688, 1.0733466148376465], step: 8200, lr: 2.980189456702483e-05, reference_loss: 45.22205352783203
2023-05-11 18:48:58,511	44k	INFO	====> Epoch: 54, cost 87.08 s
2023-05-11 18:50:17,689	44k	INFO	Train Epoch: 55 [90%]
2023-05-11 18:50:17,689	44k	INFO	Losses: [2.177870512008667, 2.5902280807495117, 9.996957778930664, 25.31020164489746, 1.3443069458007812], step: 8400, lr: 2.979816933020395e-05, reference_loss: 41.41956329345703
2023-05-11 18:50:25,550	44k	INFO	====> Epoch: 55, cost 87.04 s
2023-05-11 18:51:52,848	44k	INFO	====> Epoch: 56, cost 87.30 s
2023-05-11 18:52:24,812	44k	INFO	Train Epoch: 57 [20%]
2023-05-11 18:52:24,813	44k	INFO	Losses: [2.3480312824249268, 2.4328179359436035, 16.0323429107666, 22.88676643371582, 0.8386507034301758], step: 8600, lr: 2.9790720253467793e-05, reference_loss: 44.53860855102539
2023-05-11 18:53:20,641	44k	INFO	====> Epoch: 57, cost 87.79 s
2023-05-11 18:54:13,521	44k	INFO	Train Epoch: 58 [51%]
2023-05-11 18:54:13,521	44k	INFO	Losses: [2.6255438327789307, 2.2323849201202393, 9.145307540893555, 14.678163528442383, 1.0204851627349854], step: 8800, lr: 2.9786996413436108e-05, reference_loss: 29.701885223388672
2023-05-11 18:54:48,105	44k	INFO	====> Epoch: 58, cost 87.46 s
2023-05-11 18:56:01,926	44k	INFO	Train Epoch: 59 [82%]
2023-05-11 18:56:01,927	44k	INFO	Losses: [2.2011358737945557, 2.3508758544921875, 15.927428245544434, 19.570165634155273, 0.9381687045097351], step: 9000, lr: 2.9783273038884426e-05, reference_loss: 40.98777389526367
2023-05-11 18:56:07,433	44k	INFO	Saving model and optimizer state at iteration 59 to ./logs\44k\G_9000.pth
2023-05-11 18:56:08,293	44k	INFO	Saving model and optimizer state at iteration 59 to ./logs\44k\D_9000.pth
2023-05-11 18:56:08,964	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\G_6000.pth
2023-05-11 18:56:09,013	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\D_6000.pth
2023-05-11 18:56:22,005	44k	INFO	====> Epoch: 59, cost 93.90 s
2023-05-11 18:57:48,958	44k	INFO	====> Epoch: 60, cost 86.95 s
2023-05-11 18:58:15,278	44k	INFO	Train Epoch: 61 [12%]
2023-05-11 18:58:15,279	44k	INFO	Losses: [2.1734931468963623, 2.584578275680542, 11.054624557495117, 16.684263229370117, 0.951940655708313], step: 9200, lr: 2.9775827685988343e-05, reference_loss: 33.44890213012695
2023-05-11 18:59:16,156	44k	INFO	====> Epoch: 61, cost 87.20 s
2023-05-11 19:00:03,620	44k	INFO	Train Epoch: 62 [43%]
2023-05-11 19:00:03,621	44k	INFO	Losses: [2.3306589126586914, 2.431523084640503, 10.049124717712402, 19.96187400817871, 1.3222295045852661], step: 9400, lr: 2.9772105707527593e-05, reference_loss: 36.09541320800781
2023-05-11 19:00:43,601	44k	INFO	====> Epoch: 62, cost 87.44 s
2023-05-11 19:01:52,336	44k	INFO	Train Epoch: 63 [74%]
2023-05-11 19:01:52,337	44k	INFO	Losses: [2.3950185775756836, 2.203678846359253, 12.203329086303711, 18.860929489135742, 1.291874647140503], step: 9600, lr: 2.976838419431415e-05, reference_loss: 36.954830169677734
2023-05-11 19:02:11,088	44k	INFO	====> Epoch: 63, cost 87.49 s
2023-05-11 19:03:37,831	44k	INFO	====> Epoch: 64, cost 86.74 s
2023-05-11 19:03:58,879	44k	INFO	Train Epoch: 65 [5%]
2023-05-11 19:03:58,879	44k	INFO	Losses: [2.2784037590026855, 2.095527410507202, 15.206583976745605, 21.7014102935791, 0.9693368077278137], step: 9800, lr: 2.9760942563396572e-05, reference_loss: 42.25126266479492
2023-05-11 19:05:05,336	44k	INFO	====> Epoch: 65, cost 87.51 s
2023-05-11 19:05:47,432	44k	INFO	Train Epoch: 66 [35%]
2023-05-11 19:05:47,432	44k	INFO	Losses: [2.3025805950164795, 2.8782596588134766, 15.639361381530762, 19.169925689697266, 1.2433959245681763], step: 10000, lr: 2.9757222445576146e-05, reference_loss: 41.233524322509766
2023-05-11 19:05:52,892	44k	INFO	Saving model and optimizer state at iteration 66 to ./logs\44k\G_10000.pth
2023-05-11 19:05:53,797	44k	INFO	Saving model and optimizer state at iteration 66 to ./logs\44k\D_10000.pth
2023-05-11 19:05:54,473	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\G_7000.pth
2023-05-11 19:05:54,522	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\D_7000.pth
2023-05-11 19:06:39,537	44k	INFO	====> Epoch: 66, cost 94.20 s
2023-05-11 19:07:44,297	44k	INFO	Train Epoch: 67 [66%]
2023-05-11 19:07:44,298	44k	INFO	Losses: [2.392963171005249, 2.511087656021118, 8.105061531066895, 17.21589469909668, 0.5810514688491821], step: 10200, lr: 2.975350279277045e-05, reference_loss: 30.80605697631836
2023-05-11 19:08:08,457	44k	INFO	====> Epoch: 67, cost 88.92 s
2023-05-11 19:09:33,331	44k	INFO	Train Epoch: 68 [97%]
2023-05-11 19:09:33,331	44k	INFO	Losses: [2.5744566917419434, 2.324415683746338, 9.892423629760742, 19.85940170288086, 0.9389864802360535], step: 10400, lr: 2.974978360492135e-05, reference_loss: 35.589683532714844
2023-05-11 19:09:36,256	44k	INFO	====> Epoch: 68, cost 87.80 s
2023-05-11 19:11:03,181	44k	INFO	====> Epoch: 69, cost 86.93 s
2023-05-11 19:11:40,015	44k	INFO	Train Epoch: 70 [27%]
2023-05-11 19:11:40,016	44k	INFO	Losses: [2.488396644592285, 2.707859516143799, 14.608697891235352, 21.44941520690918, 1.1219513416290283], step: 10600, lr: 2.9742346623860485e-05, reference_loss: 42.37632369995117
2023-05-11 19:12:30,628	44k	INFO	====> Epoch: 70, cost 87.45 s
2023-05-11 19:13:28,481	44k	INFO	Train Epoch: 71 [58%]
2023-05-11 19:13:28,481	44k	INFO	Losses: [2.483287811279297, 2.5775985717773438, 10.276037216186523, 20.905624389648438, 0.6530118584632874], step: 10800, lr: 2.97386288305325e-05, reference_loss: 36.89555740356445
2023-05-11 19:13:58,002	44k	INFO	====> Epoch: 71, cost 87.37 s
2023-05-11 19:15:17,288	44k	INFO	Train Epoch: 72 [89%]
2023-05-11 19:15:17,288	44k	INFO	Losses: [2.332808256149292, 2.448483943939209, 10.965472221374512, 20.244462966918945, 0.8521516919136047], step: 11000, lr: 2.9734911501928684e-05, reference_loss: 36.84337615966797
2023-05-11 19:15:22,859	44k	INFO	Saving model and optimizer state at iteration 72 to ./logs\44k\G_11000.pth
2023-05-11 19:15:23,565	44k	INFO	Saving model and optimizer state at iteration 72 to ./logs\44k\D_11000.pth
2023-05-11 19:15:24,311	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\G_8000.pth
2023-05-11 19:15:24,376	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\D_8000.pth
2023-05-11 19:15:32,429	44k	INFO	====> Epoch: 72, cost 94.43 s
2023-05-11 19:16:59,201	44k	INFO	====> Epoch: 73, cost 86.77 s
2023-05-11 19:17:30,453	44k	INFO	Train Epoch: 74 [20%]
2023-05-11 19:17:30,453	44k	INFO	Losses: [2.266490936279297, 2.3601906299591064, 17.345029830932617, 20.298904418945312, 0.45651742815971375], step: 11200, lr: 2.9727478238661192e-05, reference_loss: 42.72713088989258
2023-05-11 19:18:26,748	44k	INFO	====> Epoch: 74, cost 87.55 s
2023-05-11 19:19:19,140	44k	INFO	Train Epoch: 75 [50%]
2023-05-11 19:19:19,140	44k	INFO	Losses: [2.2988176345825195, 2.9386768341064453, 9.981624603271484, 24.409568786621094, 1.1511999368667603], step: 11400, lr: 2.9723762303881358e-05, reference_loss: 40.77988815307617
2023-05-11 19:19:54,162	44k	INFO	====> Epoch: 75, cost 87.41 s
2023-05-11 19:21:07,995	44k	INFO	Train Epoch: 76 [81%]
2023-05-11 19:21:07,995	44k	INFO	Losses: [2.527557611465454, 2.5391311645507812, 13.923425674438477, 20.909971237182617, 0.9973414540290833], step: 11600, lr: 2.9720046833593373e-05, reference_loss: 40.89742660522461
2023-05-11 19:21:21,624	44k	INFO	====> Epoch: 76, cost 87.46 s
2023-05-11 19:22:48,916	44k	INFO	====> Epoch: 77, cost 87.29 s
2023-05-11 19:23:14,944	44k	INFO	Train Epoch: 78 [12%]
2023-05-11 19:23:14,944	44k	INFO	Losses: [2.127185344696045, 3.0148720741271973, 13.057185173034668, 21.26051139831543, 1.0786174535751343], step: 11800, lr: 2.9712617286260704e-05, reference_loss: 40.53837203979492
2023-05-11 19:24:16,477	44k	INFO	====> Epoch: 78, cost 87.56 s
2023-05-11 19:25:03,454	44k	INFO	Train Epoch: 79 [42%]
2023-05-11 19:25:03,455	44k	INFO	Losses: [2.5826191902160645, 2.7334704399108887, 11.021053314208984, 20.000476837158203, 1.2514795064926147], step: 12000, lr: 2.970890320909992e-05, reference_loss: 37.5890998840332
2023-05-11 19:25:08,989	44k	INFO	Saving model and optimizer state at iteration 79 to ./logs\44k\G_12000.pth
2023-05-11 19:25:09,758	44k	INFO	Saving model and optimizer state at iteration 79 to ./logs\44k\D_12000.pth
2023-05-11 19:25:10,427	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\G_9000.pth
2023-05-11 19:25:10,490	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\D_9000.pth
2023-05-11 19:25:50,533	44k	INFO	====> Epoch: 79, cost 94.06 s
2023-05-11 19:26:58,862	44k	INFO	Train Epoch: 80 [73%]
2023-05-11 19:26:58,863	44k	INFO	Losses: [2.1584227085113525, 2.5675315856933594, 14.528108596801758, 21.469886779785156, 1.04828941822052], step: 12200, lr: 2.970518959619878e-05, reference_loss: 41.772239685058594
2023-05-11 19:27:17,906	44k	INFO	====> Epoch: 80, cost 87.37 s
2023-05-11 19:28:44,748	44k	INFO	====> Epoch: 81, cost 86.84 s
2023-05-11 19:29:05,252	44k	INFO	Train Epoch: 82 [4%]
2023-05-11 19:29:05,253	44k	INFO	Losses: [2.5995869636535645, 2.3246142864227295, 17.80889892578125, 20.148639678955078, 0.9023404717445374], step: 12400, lr: 2.9697763762943315e-05, reference_loss: 43.784080505371094
2023-05-11 19:30:11,883	44k	INFO	====> Epoch: 82, cost 87.14 s
2023-05-11 19:30:53,748	44k	INFO	Train Epoch: 83 [35%]
2023-05-11 19:30:53,749	44k	INFO	Losses: [2.0636746883392334, 2.799551486968994, 14.208065032958984, 22.287092208862305, 1.1652652025222778], step: 12600, lr: 2.9694051542472947e-05, reference_loss: 42.52364730834961
2023-05-11 19:31:40,630	44k	INFO	====> Epoch: 83, cost 88.75 s
2023-05-11 19:32:44,546	44k	INFO	Train Epoch: 84 [65%]
2023-05-11 19:32:44,547	44k	INFO	Losses: [1.850412368774414, 2.7066690921783447, 12.541053771972656, 21.48491859436035, 1.3602120876312256], step: 12800, lr: 2.969033978603014e-05, reference_loss: 39.94326400756836
2023-05-11 19:33:09,279	44k	INFO	====> Epoch: 84, cost 88.65 s
2023-05-11 19:34:34,197	44k	INFO	Train Epoch: 85 [96%]
2023-05-11 19:34:34,197	44k	INFO	Losses: [2.495619773864746, 2.312445878982544, 10.466293334960938, 20.584941864013672, 1.1275739669799805], step: 13000, lr: 2.9686628493556884e-05, reference_loss: 36.986873626708984
2023-05-11 19:34:39,692	44k	INFO	Saving model and optimizer state at iteration 85 to ./logs\44k\G_13000.pth
2023-05-11 19:34:40,481	44k	INFO	Saving model and optimizer state at iteration 85 to ./logs\44k\D_13000.pth
2023-05-11 19:34:41,154	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\G_10000.pth
2023-05-11 19:34:41,208	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\D_10000.pth
2023-05-11 19:34:44,305	44k	INFO	====> Epoch: 85, cost 95.03 s
2023-05-11 19:36:12,496	44k	INFO	====> Epoch: 86, cost 88.19 s
2023-05-11 19:36:49,953	44k	INFO	Train Epoch: 87 [27%]
2023-05-11 19:36:49,953	44k	INFO	Losses: [2.5500364303588867, 1.8523938655853271, 14.750356674194336, 21.896915435791016, 1.2302916049957275], step: 13200, lr: 2.9679207300287062e-05, reference_loss: 42.27999496459961
2023-05-11 19:37:41,819	44k	INFO	====> Epoch: 87, cost 89.32 s
2023-05-11 19:38:40,093	44k	INFO	Train Epoch: 88 [58%]
2023-05-11 19:38:40,094	44k	INFO	Losses: [2.64554500579834, 2.655510425567627, 8.941330909729004, 19.124576568603516, 0.9901910424232483], step: 13400, lr: 2.9675497399374526e-05, reference_loss: 34.357154846191406
2023-05-11 19:39:10,403	44k	INFO	====> Epoch: 88, cost 88.58 s
2023-05-11 19:40:29,680	44k	INFO	Train Epoch: 89 [88%]
2023-05-11 19:40:29,681	44k	INFO	Losses: [2.1685872077941895, 2.8226051330566406, 9.624245643615723, 17.96548080444336, 1.2596955299377441], step: 13600, lr: 2.9671787962199603e-05, reference_loss: 33.840614318847656
2023-05-11 19:40:38,485	44k	INFO	====> Epoch: 89, cost 88.08 s
2023-05-11 19:42:06,129	44k	INFO	====> Epoch: 90, cost 87.64 s
2023-05-11 19:42:37,038	44k	INFO	Train Epoch: 91 [19%]
2023-05-11 19:42:37,039	44k	INFO	Losses: [2.3471243381500244, 2.4963676929473877, 14.549406051635742, 20.984949111938477, 0.8281876444816589], step: 13800, lr: 2.9664370478830735e-05, reference_loss: 41.206031799316406
2023-05-11 19:43:34,222	44k	INFO	====> Epoch: 91, cost 88.09 s
2023-05-11 19:44:26,720	44k	INFO	Train Epoch: 92 [50%]
2023-05-11 19:44:26,720	44k	INFO	Losses: [2.346712112426758, 2.598132848739624, 11.195423126220703, 21.27674102783203, 0.7020501494407654], step: 14000, lr: 2.966066243252088e-05, reference_loss: 38.119056701660156
2023-05-11 19:44:32,332	44k	INFO	Saving model and optimizer state at iteration 92 to ./logs\44k\G_14000.pth
2023-05-11 19:44:33,144	44k	INFO	Saving model and optimizer state at iteration 92 to ./logs\44k\D_14000.pth
2023-05-11 19:44:33,866	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\G_11000.pth
2023-05-11 19:44:33,911	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\D_11000.pth
2023-05-11 19:45:09,311	44k	INFO	====> Epoch: 92, cost 95.09 s
2023-05-11 19:46:23,090	44k	INFO	Train Epoch: 93 [80%]
2023-05-11 19:46:23,091	44k	INFO	Losses: [2.3515446186065674, 1.9654158353805542, 17.901376724243164, 19.525684356689453, 0.574507474899292], step: 14200, lr: 2.9656954849716813e-05, reference_loss: 42.31853103637695
2023-05-11 19:46:37,427	44k	INFO	====> Epoch: 93, cost 88.12 s
2023-05-11 19:48:04,411	44k	INFO	====> Epoch: 94, cost 86.98 s
2023-05-11 19:48:29,911	44k	INFO	Train Epoch: 95 [11%]
2023-05-11 19:48:29,912	44k	INFO	Losses: [2.6377267837524414, 1.9331278800964355, 11.104305267333984, 18.165645599365234, 0.9207220077514648], step: 14400, lr: 2.96495410743943e-05, reference_loss: 34.76152801513672
2023-05-11 19:49:31,785	44k	INFO	====> Epoch: 95, cost 87.37 s
2023-05-11 19:50:18,309	44k	INFO	Train Epoch: 96 [42%]
2023-05-11 19:50:18,310	44k	INFO	Losses: [2.4666552543640137, 2.3607218265533447, 15.385420799255371, 20.539506912231445, 0.9757711291313171], step: 14600, lr: 2.964583488176e-05, reference_loss: 41.72807693481445
2023-05-11 19:50:59,089	44k	INFO	====> Epoch: 96, cost 87.30 s
2023-05-11 19:52:06,944	44k	INFO	Train Epoch: 97 [73%]
2023-05-11 19:52:06,945	44k	INFO	Losses: [2.388150453567505, 2.35262393951416, 9.621209144592285, 18.28777503967285, 0.8167867064476013], step: 14800, lr: 2.964212915239978e-05, reference_loss: 33.46654510498047
2023-05-11 19:52:26,660	44k	INFO	====> Epoch: 97, cost 87.57 s
2023-05-11 19:53:53,595	44k	INFO	====> Epoch: 98, cost 86.93 s
2023-05-11 19:54:13,639	44k	INFO	Train Epoch: 99 [3%]
2023-05-11 19:54:13,640	44k	INFO	Losses: [1.984534740447998, 2.543687582015991, 15.23721694946289, 21.18869972229004, 1.470950722694397], step: 15000, lr: 2.9634719083269944e-05, reference_loss: 42.42509078979492
2023-05-11 19:54:19,066	44k	INFO	Saving model and optimizer state at iteration 99 to ./logs\44k\G_15000.pth
2023-05-11 19:54:19,911	44k	INFO	Saving model and optimizer state at iteration 99 to ./logs\44k\D_15000.pth
2023-05-11 19:54:20,620	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\G_12000.pth
2023-05-11 19:54:20,672	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\D_12000.pth
2023-05-11 19:55:27,566	44k	INFO	====> Epoch: 99, cost 93.97 s
2023-05-11 19:56:08,663	44k	INFO	Train Epoch: 100 [34%]
2023-05-11 19:56:08,664	44k	INFO	Losses: [2.5254507064819336, 2.147777795791626, 8.379581451416016, 16.67535972595215, 0.7880470752716064], step: 15200, lr: 2.9631014743384534e-05, reference_loss: 30.516216278076172
2023-05-11 19:56:54,799	44k	INFO	====> Epoch: 100, cost 87.23 s
2023-05-11 19:57:56,853	44k	INFO	Train Epoch: 101 [65%]
2023-05-11 19:57:56,854	44k	INFO	Losses: [2.240133762359619, 2.652489423751831, 9.347786903381348, 21.02147102355957, 1.0434813499450684], step: 15400, lr: 2.962731086654161e-05, reference_loss: 36.30535888671875
2023-05-11 19:58:21,835	44k	INFO	====> Epoch: 101, cost 87.04 s
2023-05-11 19:59:45,148	44k	INFO	Train Epoch: 102 [95%]
2023-05-11 19:59:45,148	44k	INFO	Losses: [2.4462976455688477, 2.6302003860473633, 8.536848068237305, 15.693955421447754, 0.7916440367698669], step: 15600, lr: 2.9623607452683292e-05, reference_loss: 30.09894371032715
2023-05-11 19:59:49,039	44k	INFO	====> Epoch: 102, cost 87.20 s
2023-05-11 20:01:15,908	44k	INFO	====> Epoch: 103, cost 86.87 s
2023-05-11 20:01:51,737	44k	INFO	Train Epoch: 104 [26%]
2023-05-11 20:01:51,738	44k	INFO	Losses: [2.699462413787842, 2.119377613067627, 8.909375190734863, 14.585592269897461, 1.0882903337478638], step: 15800, lr: 2.9616202013688986e-05, reference_loss: 29.402099609375
2023-05-11 20:02:43,470	44k	INFO	====> Epoch: 104, cost 87.56 s
2023-05-11 20:03:40,359	44k	INFO	Train Epoch: 105 [57%]
2023-05-11 20:03:40,359	44k	INFO	Losses: [2.6297519207000732, 2.1645867824554443, 11.145780563354492, 19.97608757019043, 1.0075565576553345], step: 16000, lr: 2.9612499988437273e-05, reference_loss: 36.923763275146484
2023-05-11 20:03:45,836	44k	INFO	Saving model and optimizer state at iteration 105 to ./logs\44k\G_16000.pth
2023-05-11 20:03:46,604	44k	INFO	Saving model and optimizer state at iteration 105 to ./logs\44k\D_16000.pth
2023-05-11 20:03:47,277	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\G_13000.pth
2023-05-11 20:03:47,328	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\D_13000.pth
2023-05-11 20:04:17,693	44k	INFO	====> Epoch: 105, cost 94.22 s
2023-05-11 20:05:35,560	44k	INFO	Train Epoch: 106 [88%]
2023-05-11 20:05:35,561	44k	INFO	Losses: [1.8554573059082031, 2.7442054748535156, 15.099529266357422, 21.0919246673584, 1.2479524612426758], step: 16200, lr: 2.9608798425938718e-05, reference_loss: 42.039066314697266
2023-05-11 20:05:44,910	44k	INFO	====> Epoch: 106, cost 87.22 s
2023-05-11 20:07:12,228	44k	INFO	====> Epoch: 107, cost 87.32 s
2023-05-11 20:07:42,475	44k	INFO	Train Epoch: 108 [18%]
2023-05-11 20:07:42,476	44k	INFO	Losses: [2.597095489501953, 2.517559289932251, 14.315872192382812, 20.4267520904541, 0.7586858868598938], step: 16400, lr: 2.9601396688969708e-05, reference_loss: 40.615962982177734
2023-05-11 20:08:39,468	44k	INFO	====> Epoch: 108, cost 87.24 s
2023-05-11 20:09:30,983	44k	INFO	Train Epoch: 109 [49%]
2023-05-11 20:09:30,984	44k	INFO	Losses: [2.188152313232422, 2.4814181327819824, 17.63176918029785, 21.54877281188965, 1.0282224416732788], step: 16600, lr: 2.9597696514383585e-05, reference_loss: 44.878334045410156
2023-05-11 20:10:06,850	44k	INFO	====> Epoch: 109, cost 87.38 s
2023-05-11 20:11:19,388	44k	INFO	Train Epoch: 110 [80%]
2023-05-11 20:11:19,388	44k	INFO	Losses: [1.9795849323272705, 2.3055872917175293, 17.153779983520508, 22.12401580810547, 0.7317075729370117], step: 16800, lr: 2.9593996802319285e-05, reference_loss: 44.294677734375
2023-05-11 20:11:34,109	44k	INFO	====> Epoch: 110, cost 87.26 s
2023-05-11 20:13:01,272	44k	INFO	====> Epoch: 111, cost 87.16 s
2023-05-11 20:13:26,312	44k	INFO	Train Epoch: 112 [10%]
2023-05-11 20:13:26,313	44k	INFO	Losses: [2.044874429702759, 3.0220160484313965, 12.554776191711426, 20.511953353881836, 1.0727452039718628], step: 17000, lr: 2.9586598765524905e-05, reference_loss: 39.20636749267578
2023-05-11 20:13:31,898	44k	INFO	Saving model and optimizer state at iteration 112 to ./logs\44k\G_17000.pth
2023-05-11 20:13:32,675	44k	INFO	Saving model and optimizer state at iteration 112 to ./logs\44k\D_17000.pth
2023-05-11 20:13:33,351	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\G_14000.pth
2023-05-11 20:13:33,407	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\D_14000.pth
2023-05-11 20:14:35,620	44k	INFO	====> Epoch: 112, cost 94.35 s
2023-05-11 20:15:21,915	44k	INFO	Train Epoch: 113 [41%]
2023-05-11 20:15:21,916	44k	INFO	Losses: [2.434295177459717, 2.4613468647003174, 12.294702529907227, 19.085464477539062, 0.938558042049408], step: 17200, lr: 2.9582900440679212e-05, reference_loss: 37.2143669128418
2023-05-11 20:16:03,071	44k	INFO	====> Epoch: 113, cost 87.45 s
2023-05-11 20:17:10,257	44k	INFO	Train Epoch: 114 [72%]
2023-05-11 20:17:10,257	44k	INFO	Losses: [2.215301990509033, 2.9431910514831543, 11.858352661132812, 22.097923278808594, 0.8659520149230957], step: 17400, lr: 2.9579202578124125e-05, reference_loss: 39.98072052001953
2023-05-11 20:17:30,308	44k	INFO	====> Epoch: 114, cost 87.24 s
2023-05-11 20:18:56,931	44k	INFO	====> Epoch: 115, cost 86.62 s
2023-05-11 20:19:16,680	44k	INFO	Train Epoch: 116 [3%]
2023-05-11 20:19:16,681	44k	INFO	Losses: [2.2416772842407227, 2.399442672729492, 10.596750259399414, 20.50092887878418, 0.9194076061248779], step: 17600, lr: 2.9571808239654632e-05, reference_loss: 36.658206939697266
2023-05-11 20:20:24,523	44k	INFO	====> Epoch: 116, cost 87.59 s
2023-05-11 20:21:05,027	44k	INFO	Train Epoch: 117 [33%]
2023-05-11 20:21:05,027	44k	INFO	Losses: [2.1417291164398193, 2.6440353393554688, 17.073688507080078, 21.75315284729004, 0.8988870978355408], step: 17800, lr: 2.9568111763624674e-05, reference_loss: 44.51149368286133
2023-05-11 20:21:51,795	44k	INFO	====> Epoch: 117, cost 87.27 s
2023-05-11 20:22:54,037	44k	INFO	Train Epoch: 118 [64%]
2023-05-11 20:22:54,038	44k	INFO	Losses: [2.687147855758667, 2.2581987380981445, 9.520541191101074, 17.402111053466797, 0.769197940826416], step: 18000, lr: 2.956441574965422e-05, reference_loss: 32.63719940185547
2023-05-11 20:22:59,496	44k	INFO	Saving model and optimizer state at iteration 118 to ./logs\44k\G_18000.pth
2023-05-11 20:23:00,419	44k	INFO	Saving model and optimizer state at iteration 118 to ./logs\44k\D_18000.pth
2023-05-11 20:23:01,156	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\G_15000.pth
2023-05-11 20:23:01,198	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\D_15000.pth
2023-05-11 20:23:26,576	44k	INFO	====> Epoch: 118, cost 94.78 s
2023-05-11 20:24:49,791	44k	INFO	Train Epoch: 119 [95%]
2023-05-11 20:24:49,791	44k	INFO	Losses: [2.2016706466674805, 2.342021942138672, 15.455516815185547, 21.393728256225586, 1.2339255809783936], step: 18200, lr: 2.956072019768551e-05, reference_loss: 42.626861572265625
2023-05-11 20:24:54,103	44k	INFO	====> Epoch: 119, cost 87.53 s
2023-05-11 20:26:21,161	44k	INFO	====> Epoch: 120, cost 87.06 s
2023-05-11 20:26:56,437	44k	INFO	Train Epoch: 121 [25%]
2023-05-11 20:26:56,437	44k	INFO	Losses: [2.3297619819641113, 2.582425832748413, 13.432904243469238, 20.049182891845703, 0.7440981268882751], step: 18400, lr: 2.955333047952234e-05, reference_loss: 39.13837432861328
2023-05-11 20:27:48,547	44k	INFO	====> Epoch: 121, cost 87.39 s
2023-05-11 20:28:44,925	44k	INFO	Train Epoch: 122 [56%]
2023-05-11 20:28:44,926	44k	INFO	Losses: [2.501075267791748, 2.19102144241333, 13.667943954467773, 19.5328426361084, 0.7769325375556946], step: 18600, lr: 2.95496363132124e-05, reference_loss: 38.66981506347656
2023-05-11 20:29:15,885	44k	INFO	====> Epoch: 122, cost 87.34 s
2023-05-11 20:30:33,638	44k	INFO	Train Epoch: 123 [87%]
2023-05-11 20:30:33,639	44k	INFO	Losses: [2.3140921592712402, 2.098410129547119, 10.938791275024414, 19.268470764160156, 0.7637777328491211], step: 18800, lr: 2.9545942608673247e-05, reference_loss: 35.383544921875
2023-05-11 20:30:43,311	44k	INFO	====> Epoch: 123, cost 87.43 s
2023-05-11 20:32:10,306	44k	INFO	====> Epoch: 124, cost 86.99 s
2023-05-11 20:32:40,448	44k	INFO	Train Epoch: 125 [18%]
2023-05-11 20:32:40,448	44k	INFO	Losses: [2.407294750213623, 2.710339307785034, 10.570150375366211, 18.655397415161133, 0.9011021852493286], step: 19000, lr: 2.953855658467643e-05, reference_loss: 35.244285583496094
2023-05-11 20:32:45,953	44k	INFO	Saving model and optimizer state at iteration 125 to ./logs\44k\G_19000.pth
2023-05-11 20:32:46,856	44k	INFO	Saving model and optimizer state at iteration 125 to ./logs\44k\D_19000.pth
2023-05-11 20:32:47,542	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\G_16000.pth
2023-05-11 20:32:47,592	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\D_16000.pth
2023-05-11 20:33:44,984	44k	INFO	====> Epoch: 125, cost 94.68 s
2023-05-11 20:34:36,290	44k	INFO	Train Epoch: 126 [48%]
2023-05-11 20:34:36,290	44k	INFO	Losses: [2.410686492919922, 2.1409695148468018, 11.722865104675293, 19.613075256347656, 1.1722275018692017], step: 19200, lr: 2.9534864265103344e-05, reference_loss: 37.05982208251953
2023-05-11 20:35:12,573	44k	INFO	====> Epoch: 126, cost 87.59 s
2023-05-11 20:36:24,759	44k	INFO	Train Epoch: 127 [79%]
2023-05-11 20:36:24,760	44k	INFO	Losses: [2.2939276695251465, 2.3239898681640625, 11.148547172546387, 20.177614212036133, 0.987168550491333], step: 19400, lr: 2.9531172407070204e-05, reference_loss: 36.931243896484375
2023-05-11 20:36:39,894	44k	INFO	====> Epoch: 127, cost 87.32 s
2023-05-11 20:38:06,742	44k	INFO	====> Epoch: 128, cost 86.85 s
2023-05-11 20:38:31,130	44k	INFO	Train Epoch: 129 [10%]
2023-05-11 20:38:31,131	44k	INFO	Losses: [2.185370922088623, 2.5441622734069824, 14.028510093688965, 20.407976150512695, 0.8030586838722229], step: 19600, lr: 2.9523790075393003e-05, reference_loss: 39.969078063964844
2023-05-11 20:39:33,973	44k	INFO	====> Epoch: 129, cost 87.23 s
2023-05-11 20:40:19,634	44k	INFO	Train Epoch: 130 [41%]
2023-05-11 20:40:19,634	44k	INFO	Losses: [1.8971433639526367, 2.641223430633545, 13.486212730407715, 22.607393264770508, 1.0836613178253174], step: 19800, lr: 2.9520099601633577e-05, reference_loss: 41.715633392333984
2023-05-11 20:41:01,428	44k	INFO	====> Epoch: 130, cost 87.46 s
2023-05-11 20:42:08,322	44k	INFO	Train Epoch: 131 [71%]
2023-05-11 20:42:08,322	44k	INFO	Losses: [2.403357744216919, 2.1612229347229004, 11.346234321594238, 21.16708755493164, 0.9038617610931396], step: 20000, lr: 2.951640958918337e-05, reference_loss: 37.98176574707031
2023-05-11 20:42:13,815	44k	INFO	Saving model and optimizer state at iteration 131 to ./logs\44k\G_20000.pth
2023-05-11 20:42:14,586	44k	INFO	Saving model and optimizer state at iteration 131 to ./logs\44k\D_20000.pth
2023-05-11 20:42:15,260	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\G_17000.pth
2023-05-11 20:42:15,324	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\D_17000.pth
2023-05-11 20:42:35,558	44k	INFO	====> Epoch: 131, cost 94.13 s
2023-05-11 20:44:02,788	44k	INFO	====> Epoch: 132, cost 87.23 s
2023-05-11 20:44:22,083	44k	INFO	Train Epoch: 133 [2%]
2023-05-11 20:44:22,084	44k	INFO	Losses: [2.41817045211792, 2.128247022628784, 11.941383361816406, 16.322542190551758, 0.667270839214325], step: 20200, lr: 2.9509030947979973e-05, reference_loss: 33.47761154174805
2023-05-11 20:45:30,287	44k	INFO	====> Epoch: 133, cost 87.50 s
2023-05-11 20:46:10,643	44k	INFO	Train Epoch: 134 [33%]
2023-05-11 20:46:10,644	44k	INFO	Losses: [2.2979674339294434, 2.006298065185547, 17.29484748840332, 19.201370239257812, 0.630429208278656], step: 20400, lr: 2.9505342319111476e-05, reference_loss: 41.430912017822266
2023-05-11 20:46:58,038	44k	INFO	====> Epoch: 134, cost 87.75 s
2023-05-11 20:47:59,432	44k	INFO	Train Epoch: 135 [63%]
2023-05-11 20:47:59,433	44k	INFO	Losses: [2.2095932960510254, 2.393463373184204, 11.581130027770996, 21.175304412841797, 1.135851502418518], step: 20600, lr: 2.9501654151321586e-05, reference_loss: 38.49534225463867
2023-05-11 20:48:25,206	44k	INFO	====> Epoch: 135, cost 87.17 s
2023-05-11 20:49:47,645	44k	INFO	Train Epoch: 136 [94%]
2023-05-11 20:49:47,646	44k	INFO	Losses: [2.673558235168457, 2.0885770320892334, 11.845332145690918, 21.09345817565918, 0.7915951013565063], step: 20800, lr: 2.949796644455267e-05, reference_loss: 38.492523193359375
2023-05-11 20:49:52,428	44k	INFO	====> Epoch: 136, cost 87.22 s
2023-05-11 20:51:19,686	44k	INFO	====> Epoch: 137, cost 87.26 s
2023-05-11 20:51:54,742	44k	INFO	Train Epoch: 138 [25%]
2023-05-11 20:51:54,743	44k	INFO	Losses: [2.5458080768585205, 2.4174704551696777, 7.571362495422363, 17.600116729736328, 0.9082613587379456], step: 21000, lr: 2.9490592413847257e-05, reference_loss: 31.043020248413086
2023-05-11 20:52:00,261	44k	INFO	Saving model and optimizer state at iteration 138 to ./logs\44k\G_21000.pth
2023-05-11 20:52:01,063	44k	INFO	Saving model and optimizer state at iteration 138 to ./logs\44k\D_21000.pth
2023-05-11 20:52:01,744	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\G_18000.pth
2023-05-11 20:52:01,792	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\D_18000.pth
2023-05-11 20:52:54,041	44k	INFO	====> Epoch: 138, cost 94.36 s
2023-05-11 20:53:50,425	44k	INFO	Train Epoch: 139 [56%]
2023-05-11 20:53:50,425	44k	INFO	Losses: [2.2571232318878174, 2.247736692428589, 16.057188034057617, 21.77541732788086, 0.7787254452705383], step: 21200, lr: 2.9486906089795525e-05, reference_loss: 43.116188049316406
2023-05-11 20:54:22,088	44k	INFO	====> Epoch: 139, cost 88.05 s
2023-05-11 20:55:39,949	44k	INFO	Train Epoch: 140 [86%]
2023-05-11 20:55:39,950	44k	INFO	Losses: [2.0789754390716553, 2.748399257659912, 12.256720542907715, 21.41007423400879, 0.893726646900177], step: 21400, lr: 2.94832202265343e-05, reference_loss: 39.38789749145508
2023-05-11 20:55:50,072	44k	INFO	====> Epoch: 140, cost 87.98 s
2023-05-11 20:57:17,515	44k	INFO	====> Epoch: 141, cost 87.44 s
2023-05-11 20:57:47,067	44k	INFO	Train Epoch: 142 [17%]
2023-05-11 20:57:47,068	44k	INFO	Losses: [2.4438860416412354, 2.4429829120635986, 16.231311798095703, 21.76428985595703, 0.7568286657333374], step: 21600, lr: 2.947584988215298e-05, reference_loss: 43.63929748535156
2023-05-11 20:58:44,909	44k	INFO	====> Epoch: 142, cost 87.39 s
2023-05-11 20:59:35,470	44k	INFO	Train Epoch: 143 [48%]
2023-05-11 20:59:35,470	44k	INFO	Losses: [2.232591390609741, 2.464632511138916, 12.822518348693848, 19.372709274291992, 0.6750560402870178], step: 21800, lr: 2.947216540091771e-05, reference_loss: 37.5675048828125
2023-05-11 21:00:12,213	44k	INFO	====> Epoch: 143, cost 87.30 s
2023-05-11 21:01:23,935	44k	INFO	Train Epoch: 144 [78%]
2023-05-11 21:01:23,935	44k	INFO	Losses: [2.406494379043579, 2.7598273754119873, 9.910408973693848, 20.712154388427734, 1.0561177730560303], step: 22000, lr: 2.9468481380242593e-05, reference_loss: 36.84500503540039
2023-05-11 21:01:29,590	44k	INFO	Saving model and optimizer state at iteration 144 to ./logs\44k\G_22000.pth
2023-05-11 21:01:30,388	44k	INFO	Saving model and optimizer state at iteration 144 to ./logs\44k\D_22000.pth
2023-05-11 21:01:31,078	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\G_19000.pth
2023-05-11 21:01:31,125	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\D_19000.pth
2023-05-11 21:01:46,501	44k	INFO	====> Epoch: 144, cost 94.29 s
2023-05-11 21:03:13,604	44k	INFO	====> Epoch: 145, cost 87.10 s
2023-05-11 21:03:37,913	44k	INFO	Train Epoch: 146 [9%]
2023-05-11 21:03:37,914	44k	INFO	Losses: [2.180478572845459, 2.70823073387146, 13.783332824707031, 19.746074676513672, 1.2353508472442627], step: 22200, lr: 2.946111472034255e-05, reference_loss: 39.65346908569336
2023-05-11 21:04:41,210	44k	INFO	====> Epoch: 146, cost 87.61 s
2023-05-11 21:05:26,399	44k	INFO	Train Epoch: 147 [40%]
2023-05-11 21:05:26,399	44k	INFO	Losses: [2.5843002796173096, 2.918128728866577, 11.671560287475586, 21.43267822265625, 1.379014015197754], step: 22400, lr: 2.9457432081002507e-05, reference_loss: 39.985679626464844
2023-05-11 21:06:08,445	44k	INFO	====> Epoch: 147, cost 87.24 s
2023-05-11 21:07:15,544	44k	INFO	Train Epoch: 148 [71%]
2023-05-11 21:07:15,545	44k	INFO	Losses: [2.504765748977661, 2.0900638103485107, 11.348179817199707, 22.14876937866211, 1.0332155227661133], step: 22600, lr: 2.945374990199238e-05, reference_loss: 39.12499237060547
2023-05-11 21:07:36,609	44k	INFO	====> Epoch: 148, cost 88.16 s
2023-05-11 21:09:03,510	44k	INFO	====> Epoch: 149, cost 86.90 s
2023-05-11 21:09:22,300	44k	INFO	Train Epoch: 150 [1%]
2023-05-11 21:09:22,301	44k	INFO	Losses: [2.3296217918395996, 2.0549073219299316, 18.788482666015625, 18.85059928894043, 1.0376532077789307], step: 22800, lr: 2.944638692473172e-05, reference_loss: 43.0612678527832
2023-05-11 21:10:31,081	44k	INFO	====> Epoch: 150, cost 87.57 s
2023-05-11 21:11:11,011	44k	INFO	Train Epoch: 151 [32%]
2023-05-11 21:11:11,012	44k	INFO	Losses: [2.4543569087982178, 1.9625674486160278, 16.84097671508789, 20.134233474731445, 0.5687814950942993], step: 23000, lr: 2.944270612636613e-05, reference_loss: 41.96091842651367
2023-05-11 21:11:16,442	44k	INFO	Saving model and optimizer state at iteration 151 to ./logs\44k\G_23000.pth
2023-05-11 21:11:17,214	44k	INFO	Saving model and optimizer state at iteration 151 to ./logs\44k\D_23000.pth
2023-05-11 21:11:17,896	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\G_20000.pth
2023-05-11 21:11:17,946	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\D_20000.pth
2023-05-11 21:12:05,384	44k	INFO	====> Epoch: 151, cost 94.30 s
2023-05-11 21:13:06,596	44k	INFO	Train Epoch: 152 [63%]
2023-05-11 21:13:06,597	44k	INFO	Losses: [2.2838151454925537, 2.441021680831909, 13.107950210571289, 18.88092041015625, 1.1365299224853516], step: 23200, lr: 2.9439025788100334e-05, reference_loss: 37.85023498535156
2023-05-11 21:13:33,008	44k	INFO	====> Epoch: 152, cost 87.62 s
2023-05-11 21:14:55,226	44k	INFO	Train Epoch: 153 [93%]
2023-05-11 21:14:55,226	44k	INFO	Losses: [2.525782346725464, 2.3542909622192383, 9.45370101928711, 20.078929901123047, 1.0055861473083496], step: 23400, lr: 2.943534590987682e-05, reference_loss: 35.41828918457031
2023-05-11 21:15:00,370	44k	INFO	====> Epoch: 153, cost 87.36 s
2023-05-11 21:16:27,225	44k	INFO	====> Epoch: 154, cost 86.85 s
2023-05-11 21:17:01,782	44k	INFO	Train Epoch: 155 [24%]
2023-05-11 21:17:01,783	44k	INFO	Losses: [2.23268985748291, 2.3809478282928467, 14.686380386352539, 21.40892791748047, 0.8026105761528015], step: 23600, lr: 2.942798753332663e-05, reference_loss: 41.511558532714844
2023-05-11 21:17:54,576	44k	INFO	====> Epoch: 155, cost 87.35 s
2023-05-11 21:18:49,948	44k	INFO	Train Epoch: 156 [55%]
2023-05-11 21:18:49,948	44k	INFO	Losses: [2.4280471801757812, 2.5859994888305664, 11.22673511505127, 15.86699390411377, 1.2461222410202026], step: 23800, lr: 2.942430903488496e-05, reference_loss: 33.35389709472656
2023-05-11 21:19:21,698	44k	INFO	====> Epoch: 156, cost 87.12 s
2023-05-11 21:20:38,334	44k	INFO	Train Epoch: 157 [86%]
2023-05-11 21:20:38,335	44k	INFO	Losses: [2.5375471115112305, 2.067962408065796, 12.566537857055664, 19.366064071655273, 0.6837486624717712], step: 24000, lr: 2.94206309962556e-05, reference_loss: 37.221858978271484
2023-05-11 21:20:43,750	44k	INFO	Saving model and optimizer state at iteration 157 to ./logs\44k\G_24000.pth
2023-05-11 21:20:44,758	44k	INFO	Saving model and optimizer state at iteration 157 to ./logs\44k\D_24000.pth
2023-05-11 21:20:45,437	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\G_21000.pth
2023-05-11 21:20:45,488	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\D_21000.pth
2023-05-11 21:20:55,830	44k	INFO	====> Epoch: 157, cost 94.13 s
2023-05-11 21:22:22,890	44k	INFO	====> Epoch: 158, cost 87.06 s
2023-05-11 21:22:51,958	44k	INFO	Train Epoch: 159 [16%]
2023-05-11 21:22:51,959	44k	INFO	Losses: [2.241414785385132, 2.9840853214263916, 11.527679443359375, 22.236251831054688, 0.9714987874031067], step: 24200, lr: 2.941327629820389e-05, reference_loss: 39.960933685302734
2023-05-11 21:23:50,193	44k	INFO	====> Epoch: 159, cost 87.30 s
2023-05-11 21:24:40,368	44k	INFO	Train Epoch: 160 [47%]
2023-05-11 21:24:40,368	44k	INFO	Losses: [2.3132245540618896, 2.156310558319092, 10.544333457946777, 20.26015853881836, 0.8702049851417542], step: 24400, lr: 2.9409599638666614e-05, reference_loss: 36.144229888916016
2023-05-11 21:25:17,569	44k	INFO	====> Epoch: 160, cost 87.38 s
2023-05-11 21:26:28,960	44k	INFO	Train Epoch: 161 [78%]
2023-05-11 21:26:28,961	44k	INFO	Losses: [2.6417489051818848, 2.1795549392700195, 7.683846950531006, 18.796051025390625, 0.8269001841545105], step: 24600, lr: 2.940592343871178e-05, reference_loss: 32.12810134887695
2023-05-11 21:26:44,825	44k	INFO	====> Epoch: 161, cost 87.26 s
2023-05-11 21:28:11,839	44k	INFO	====> Epoch: 162, cost 87.01 s
2023-05-11 21:28:35,619	44k	INFO	Train Epoch: 163 [8%]
2023-05-11 21:28:35,620	44k	INFO	Losses: [2.211031913757324, 2.4875597953796387, 13.958282470703125, 20.161800384521484, 0.749904990196228], step: 24800, lr: 2.9398572417319655e-05, reference_loss: 39.56857681274414
2023-05-11 21:29:39,357	44k	INFO	====> Epoch: 163, cost 87.52 s
2023-05-11 21:30:24,293	44k	INFO	Train Epoch: 164 [39%]
2023-05-11 21:30:24,294	44k	INFO	Losses: [2.052870512008667, 2.417069435119629, 14.692856788635254, 24.030372619628906, 0.9703827500343323], step: 25000, lr: 2.939489759576749e-05, reference_loss: 44.163551330566406
2023-05-11 21:30:29,798	44k	INFO	Saving model and optimizer state at iteration 164 to ./logs\44k\G_25000.pth
2023-05-11 21:30:30,566	44k	INFO	Saving model and optimizer state at iteration 164 to ./logs\44k\D_25000.pth
2023-05-11 21:30:31,314	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\G_22000.pth
2023-05-11 21:30:31,358	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\D_22000.pth
2023-05-11 21:31:13,938	44k	INFO	====> Epoch: 164, cost 94.58 s
2023-05-11 21:32:20,020	44k	INFO	Train Epoch: 165 [70%]
2023-05-11 21:32:20,020	44k	INFO	Losses: [2.481353282928467, 2.5482654571533203, 9.902636528015137, 19.371829986572266, 0.9272757172584534], step: 25200, lr: 2.9391223233568018e-05, reference_loss: 35.231361389160156
2023-05-11 21:32:41,490	44k	INFO	====> Epoch: 165, cost 87.55 s
2023-05-11 21:34:08,236	44k	INFO	====> Epoch: 166, cost 86.74 s
2023-05-11 21:34:26,508	44k	INFO	Train Epoch: 167 [1%]
2023-05-11 21:34:26,509	44k	INFO	Losses: [2.253011465072632, 2.2450759410858154, 15.501739501953125, 20.518815994262695, 0.5719788670539856], step: 25400, lr: 2.9383875886997486e-05, reference_loss: 41.09062194824219
2023-05-11 21:35:35,525	44k	INFO	====> Epoch: 167, cost 87.29 s
2023-05-11 21:36:14,975	44k	INFO	Train Epoch: 168 [31%]
2023-05-11 21:36:14,976	44k	INFO	Losses: [2.2571256160736084, 2.4889888763427734, 13.262822151184082, 19.51145362854004, 0.7072980403900146], step: 25600, lr: 2.938020290251161e-05, reference_loss: 38.227691650390625
2023-05-11 21:37:02,988	44k	INFO	====> Epoch: 168, cost 87.46 s
2023-05-11 21:38:03,715	44k	INFO	Train Epoch: 169 [62%]
2023-05-11 21:38:03,715	44k	INFO	Losses: [2.303880214691162, 2.658754587173462, 13.399310111999512, 20.27336311340332, 0.855845034122467], step: 25800, lr: 2.9376530377148793e-05, reference_loss: 39.491153717041016
2023-05-11 21:38:30,579	44k	INFO	====> Epoch: 169, cost 87.59 s
2023-05-11 21:39:52,506	44k	INFO	Train Epoch: 170 [93%]
2023-05-11 21:39:52,507	44k	INFO	Losses: [2.350327968597412, 2.634793996810913, 10.36052417755127, 20.960542678833008, 0.7704409956932068], step: 26000, lr: 2.937285831085165e-05, reference_loss: 37.076629638671875
2023-05-11 21:39:58,072	44k	INFO	Saving model and optimizer state at iteration 170 to ./logs\44k\G_26000.pth
2023-05-11 21:39:58,858	44k	INFO	Saving model and optimizer state at iteration 170 to ./logs\44k\D_26000.pth
2023-05-11 21:39:59,536	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\G_23000.pth
2023-05-11 21:39:59,586	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\D_23000.pth
2023-05-11 21:40:04,914	44k	INFO	====> Epoch: 170, cost 94.34 s
2023-05-11 21:41:32,078	44k	INFO	====> Epoch: 171, cost 87.16 s
2023-05-11 21:42:06,014	44k	INFO	Train Epoch: 172 [24%]
2023-05-11 21:42:06,014	44k	INFO	Losses: [2.156196117401123, 2.2004923820495605, 14.32460880279541, 21.6739444732666, 0.7178844809532166], step: 26200, lr: 2.936551555522484e-05, reference_loss: 41.073123931884766
2023-05-11 21:42:59,398	44k	INFO	====> Epoch: 172, cost 87.32 s
2023-05-11 21:43:54,580	44k	INFO	Train Epoch: 173 [54%]
2023-05-11 21:43:54,581	44k	INFO	Losses: [2.3695151805877686, 2.145268678665161, 14.655494689941406, 20.661605834960938, 0.6832132935523987], step: 26400, lr: 2.9361844865780437e-05, reference_loss: 40.515098571777344
2023-05-11 21:44:26,858	44k	INFO	====> Epoch: 173, cost 87.46 s
2023-05-11 21:45:42,960	44k	INFO	Train Epoch: 174 [85%]
2023-05-11 21:45:42,961	44k	INFO	Losses: [2.1927995681762695, 2.773047685623169, 14.40477180480957, 21.50453758239746, 0.8371394276618958], step: 26600, lr: 2.9358174635172214e-05, reference_loss: 41.71229553222656
2023-05-11 21:45:53,964	44k	INFO	====> Epoch: 174, cost 87.11 s
2023-05-11 21:47:20,875	44k	INFO	====> Epoch: 175, cost 86.91 s
2023-05-11 21:47:49,499	44k	INFO	Train Epoch: 176 [16%]
2023-05-11 21:47:49,500	44k	INFO	Losses: [2.150271415710449, 2.4814937114715576, 10.576351165771484, 21.112905502319336, 0.5675302147865295], step: 26800, lr: 2.93508355502349e-05, reference_loss: 36.888553619384766
2023-05-11 21:48:48,298	44k	INFO	====> Epoch: 176, cost 87.42 s
2023-05-11 21:49:38,089	44k	INFO	Train Epoch: 177 [46%]
2023-05-11 21:49:38,090	44k	INFO	Losses: [2.220341444015503, 2.3397107124328613, 17.20482635498047, 19.44502830505371, 1.3015373945236206], step: 27000, lr: 2.9347166695791118e-05, reference_loss: 42.511444091796875
2023-05-11 21:49:43,605	44k	INFO	Saving model and optimizer state at iteration 177 to ./logs\44k\G_27000.pth
2023-05-11 21:49:44,532	44k	INFO	Saving model and optimizer state at iteration 177 to ./logs\44k\D_27000.pth
2023-05-11 21:49:45,218	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\G_24000.pth
2023-05-11 21:49:45,267	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\D_24000.pth
2023-05-11 21:50:22,588	44k	INFO	====> Epoch: 177, cost 94.29 s
2023-05-11 21:51:33,862	44k	INFO	Train Epoch: 178 [77%]
2023-05-11 21:51:33,863	44k	INFO	Losses: [1.8780664205551147, 2.910325527191162, 13.378846168518066, 20.67080307006836, 0.9391260743141174], step: 27200, lr: 2.9343498299954142e-05, reference_loss: 39.77716827392578
2023-05-11 21:51:50,277	44k	INFO	====> Epoch: 178, cost 87.69 s
2023-05-11 21:53:17,257	44k	INFO	====> Epoch: 179, cost 86.98 s
2023-05-11 21:53:40,415	44k	INFO	Train Epoch: 180 [8%]
2023-05-11 21:53:40,416	44k	INFO	Losses: [2.2581627368927, 2.3214633464813232, 17.729576110839844, 21.39453125, 0.9237614870071411], step: 27400, lr: 2.933616288387131e-05, reference_loss: 44.627498626708984
2023-05-11 21:54:44,319	44k	INFO	====> Epoch: 180, cost 87.06 s
2023-05-11 21:55:28,316	44k	INFO	Train Epoch: 181 [39%]
2023-05-11 21:55:28,317	44k	INFO	Losses: [2.304253578186035, 2.7002696990966797, 12.313386917114258, 19.609540939331055, 0.5598008632659912], step: 27600, lr: 2.9332495863510825e-05, reference_loss: 37.48725128173828
2023-05-11 21:56:11,255	44k	INFO	====> Epoch: 181, cost 86.94 s
2023-05-11 21:57:16,470	44k	INFO	Train Epoch: 182 [69%]
2023-05-11 21:57:16,470	44k	INFO	Losses: [2.063203811645508, 2.258450508117676, 13.339643478393555, 15.154513359069824, 0.6263005137443542], step: 27800, lr: 2.9328829301527885e-05, reference_loss: 33.44211196899414
2023-05-11 21:57:38,385	44k	INFO	====> Epoch: 182, cost 87.13 s
2023-05-11 21:59:05,545	44k	INFO	====> Epoch: 183, cost 87.16 s
2023-05-11 21:59:23,525	44k	INFO	Train Epoch: 184 [0%]
2023-05-11 21:59:23,526	44k	INFO	Losses: [2.554588794708252, 2.476391077041626, 16.369321823120117, 21.40169334411621, 1.1655538082122803], step: 28000, lr: 2.9321497552465458e-05, reference_loss: 43.96754837036133
2023-05-11 21:59:29,212	44k	INFO	Saving model and optimizer state at iteration 184 to ./logs\44k\G_28000.pth
2023-05-11 21:59:29,985	44k	INFO	Saving model and optimizer state at iteration 184 to ./logs\44k\D_28000.pth
2023-05-11 21:59:30,732	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\G_25000.pth
2023-05-11 21:59:30,782	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\D_25000.pth
2023-05-11 22:00:40,108	44k	INFO	====> Epoch: 184, cost 94.56 s
2023-05-11 22:01:19,097	44k	INFO	Train Epoch: 185 [31%]
2023-05-11 22:01:19,098	44k	INFO	Losses: [2.575772285461426, 2.325673818588257, 10.136171340942383, 20.177270889282227, 0.8660500049591064], step: 28200, lr: 2.9317832365271398e-05, reference_loss: 36.08094024658203
2023-05-11 22:02:07,567	44k	INFO	====> Epoch: 185, cost 87.46 s
2023-05-11 22:03:07,620	44k	INFO	Train Epoch: 186 [61%]
2023-05-11 22:03:07,620	44k	INFO	Losses: [2.3224031925201416, 2.242901563644409, 11.168042182922363, 18.820533752441406, 0.8706357479095459], step: 28400, lr: 2.9314167636225736e-05, reference_loss: 35.42451477050781
2023-05-11 22:03:34,936	44k	INFO	====> Epoch: 186, cost 87.37 s
2023-05-11 22:04:55,956	44k	INFO	Train Epoch: 187 [92%]
2023-05-11 22:04:55,957	44k	INFO	Losses: [2.2574715614318848, 2.3387222290039062, 11.857097625732422, 18.84485626220703, 0.9323740601539612], step: 28600, lr: 2.9310503365271207e-05, reference_loss: 36.23052215576172
2023-05-11 22:05:01,930	44k	INFO	====> Epoch: 187, cost 86.99 s
2023-05-11 22:06:28,657	44k	INFO	====> Epoch: 188, cost 86.73 s
2023-05-11 22:07:02,868	44k	INFO	Train Epoch: 189 [23%]
2023-05-11 22:07:02,869	44k	INFO	Losses: [2.542555809020996, 2.307922840118408, 11.040369987487793, 20.292699813842773, 0.8018638491630554], step: 28800, lr: 2.9303176197406502e-05, reference_loss: 36.98541259765625
2023-05-11 22:07:56,862	44k	INFO	====> Epoch: 189, cost 88.20 s
2023-05-11 22:08:51,848	44k	INFO	Train Epoch: 190 [54%]
2023-05-11 22:08:51,848	44k	INFO	Losses: [2.517507553100586, 2.6499273777008057, 11.654401779174805, 19.997529983520508, 0.5024926662445068], step: 29000, lr: 2.9299513300381825e-05, reference_loss: 37.32185745239258
2023-05-11 22:08:57,360	44k	INFO	Saving model and optimizer state at iteration 190 to ./logs\44k\G_29000.pth
2023-05-11 22:08:58,128	44k	INFO	Saving model and optimizer state at iteration 190 to ./logs\44k\D_29000.pth
2023-05-11 22:08:58,801	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\G_26000.pth
2023-05-11 22:08:58,847	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\D_26000.pth
2023-05-11 22:09:31,309	44k	INFO	====> Epoch: 190, cost 94.45 s
2023-05-11 22:10:47,204	44k	INFO	Train Epoch: 191 [84%]
2023-05-11 22:10:47,205	44k	INFO	Losses: [2.4900734424591064, 1.9228662252426147, 13.46911907196045, 20.261192321777344, 1.1399946212768555], step: 29200, lr: 2.9295850861219277e-05, reference_loss: 39.28324508666992
2023-05-11 22:10:58,753	44k	INFO	====> Epoch: 191, cost 87.44 s
2023-05-11 22:12:25,906	44k	INFO	====> Epoch: 192, cost 87.15 s
2023-05-11 22:12:53,882	44k	INFO	Train Epoch: 193 [15%]
2023-05-11 22:12:53,883	44k	INFO	Losses: [2.251241683959961, 2.6943960189819336, 8.497387886047363, 21.89167594909668, 0.7860233783721924], step: 29400, lr: 2.9288527356251642e-05, reference_loss: 36.120723724365234
2023-05-11 22:13:53,015	44k	INFO	====> Epoch: 193, cost 87.11 s
2023-05-11 22:14:42,149	44k	INFO	Train Epoch: 194 [46%]
2023-05-11 22:14:42,150	44k	INFO	Losses: [2.4207398891448975, 2.4460959434509277, 11.235674858093262, 19.89182472229004, 0.7694849371910095], step: 29600, lr: 2.9284866290332108e-05, reference_loss: 36.76382064819336
2023-05-11 22:15:20,269	44k	INFO	====> Epoch: 194, cost 87.25 s
2023-05-11 22:16:30,503	44k	INFO	Train Epoch: 195 [76%]
2023-05-11 22:16:30,503	44k	INFO	Losses: [2.2829880714416504, 2.345679759979248, 12.105896949768066, 19.036376953125, 0.8311740159988403], step: 29800, lr: 2.9281205682045817e-05, reference_loss: 36.602115631103516
2023-05-11 22:16:47,327	44k	INFO	====> Epoch: 195, cost 87.06 s
2023-05-11 22:18:14,136	44k	INFO	====> Epoch: 196, cost 86.81 s
2023-05-11 22:18:37,034	44k	INFO	Train Epoch: 197 [7%]
2023-05-11 22:18:37,035	44k	INFO	Losses: [2.1603307723999023, 2.3721256256103516, 12.609691619873047, 19.45323944091797, 0.7062307000160217], step: 30000, lr: 2.9273885838144143e-05, reference_loss: 37.30161666870117
2023-05-11 22:18:42,409	44k	INFO	Saving model and optimizer state at iteration 197 to ./logs\44k\G_30000.pth
2023-05-11 22:18:43,318	44k	INFO	Saving model and optimizer state at iteration 197 to ./logs\44k\D_30000.pth
2023-05-11 22:18:43,990	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\G_27000.pth
2023-05-11 22:18:44,042	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\D_27000.pth
2023-05-11 22:19:48,230	44k	INFO	====> Epoch: 197, cost 94.09 s
2023-05-11 22:20:31,978	44k	INFO	Train Epoch: 198 [38%]
2023-05-11 22:20:31,979	44k	INFO	Losses: [2.564594030380249, 2.080162763595581, 12.063499450683594, 18.630136489868164, 0.593277096748352], step: 30200, lr: 2.9270226602414373e-05, reference_loss: 35.931671142578125
2023-05-11 22:21:15,462	44k	INFO	====> Epoch: 198, cost 87.23 s
2023-05-11 22:22:20,615	44k	INFO	Train Epoch: 199 [69%]
2023-05-11 22:22:20,615	44k	INFO	Losses: [1.9948943853378296, 2.776331901550293, 15.739766120910645, 20.25297737121582, 0.8072575330734253], step: 30400, lr: 2.926656782408907e-05, reference_loss: 41.57122802734375
2023-05-11 22:22:43,914	44k	INFO	====> Epoch: 199, cost 88.45 s
2023-05-11 22:24:09,676	44k	INFO	Train Epoch: 200 [99%]
2023-05-11 22:24:09,677	44k	INFO	Losses: [2.6933860778808594, 1.615329623222351, 23.930498123168945, 18.160158157348633, 0.9431588053703308], step: 30600, lr: 2.9262909503111057e-05, reference_loss: 47.342529296875
2023-05-11 22:24:10,903	44k	INFO	====> Epoch: 200, cost 86.99 s
2023-05-11 22:25:37,587	44k	INFO	====> Epoch: 201, cost 86.68 s
2023-05-11 22:26:15,707	44k	INFO	Train Epoch: 202 [30%]
2023-05-11 22:26:15,707	44k	INFO	Losses: [2.5334725379943848, 2.8622517585754395, 11.238937377929688, 19.802366256713867, 0.9019911289215088], step: 30800, lr: 2.925559423296824e-05, reference_loss: 37.339019775390625
2023-05-11 22:27:04,770	44k	INFO	====> Epoch: 202, cost 87.18 s
2023-05-11 22:28:04,234	44k	INFO	Train Epoch: 203 [61%]
2023-05-11 22:28:04,235	44k	INFO	Losses: [2.3597183227539062, 2.453036308288574, 10.415300369262695, 18.27770233154297, 0.40730226039886475], step: 31000, lr: 2.9251937283689116e-05, reference_loss: 33.91305923461914
2023-05-11 22:28:09,522	44k	INFO	Saving model and optimizer state at iteration 203 to ./logs\44k\G_31000.pth
2023-05-11 22:28:10,580	44k	INFO	Saving model and optimizer state at iteration 203 to ./logs\44k\D_31000.pth
2023-05-11 22:28:11,274	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\G_28000.pth
2023-05-11 22:28:11,325	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\D_28000.pth
2023-05-11 22:28:38,895	44k	INFO	====> Epoch: 203, cost 94.12 s
2023-05-11 22:29:59,760	44k	INFO	Train Epoch: 204 [92%]
2023-05-11 22:29:59,760	44k	INFO	Losses: [2.4534120559692383, 2.224238395690918, 11.809039115905762, 15.801896095275879, 0.9736963510513306], step: 31200, lr: 2.9248280791528655e-05, reference_loss: 33.26228332519531
2023-05-11 22:30:06,225	44k	INFO	====> Epoch: 204, cost 87.33 s
2023-05-11 22:31:33,143	44k	INFO	====> Epoch: 205, cost 86.92 s
2023-05-11 22:32:06,211	44k	INFO	Train Epoch: 206 [22%]
2023-05-11 22:32:06,212	44k	INFO	Losses: [2.18281888961792, 2.4331183433532715, 15.949410438537598, 19.590402603149414, 0.47911879420280457], step: 31400, lr: 2.924096917833516e-05, reference_loss: 40.63486862182617
2023-05-11 22:33:00,375	44k	INFO	====> Epoch: 206, cost 87.23 s
2023-05-11 22:33:54,603	44k	INFO	Train Epoch: 207 [53%]
2023-05-11 22:33:54,604	44k	INFO	Losses: [2.1853795051574707, 2.3957722187042236, 17.22293472290039, 23.4381103515625, 1.1737055778503418], step: 31600, lr: 2.9237314057187867e-05, reference_loss: 46.41590118408203
2023-05-11 22:34:27,601	44k	INFO	====> Epoch: 207, cost 87.23 s
2023-05-11 22:35:44,264	44k	INFO	Train Epoch: 208 [84%]
2023-05-11 22:35:44,264	44k	INFO	Losses: [2.34808087348938, 2.4908933639526367, 12.557831764221191, 21.80524444580078, 1.1822048425674438], step: 31800, lr: 2.9233659392930716e-05, reference_loss: 40.384254455566406
2023-05-11 22:35:56,570	44k	INFO	====> Epoch: 208, cost 88.97 s
2023-05-11 22:37:24,222	44k	INFO	====> Epoch: 209, cost 87.65 s
2023-05-11 22:37:52,213	44k	INFO	Train Epoch: 210 [14%]
2023-05-11 22:37:52,214	44k	INFO	Losses: [2.17327880859375, 2.190847635269165, 15.827834129333496, 20.284080505371094, 0.7204908132553101], step: 32000, lr: 2.922635143485841e-05, reference_loss: 41.196529388427734
2023-05-11 22:37:57,510	44k	INFO	Saving model and optimizer state at iteration 210 to ./logs\44k\G_32000.pth
2023-05-11 22:37:58,384	44k	INFO	Saving model and optimizer state at iteration 210 to ./logs\44k\D_32000.pth
2023-05-11 22:37:59,063	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\G_29000.pth
2023-05-11 22:37:59,105	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\D_29000.pth
2023-05-11 22:38:59,344	44k	INFO	====> Epoch: 210, cost 95.12 s
2023-05-11 22:39:48,693	44k	INFO	Train Epoch: 211 [45%]
2023-05-11 22:39:48,694	44k	INFO	Losses: [2.1462364196777344, 2.0116279125213623, 14.271591186523438, 21.358386993408203, 0.79221111536026], step: 32200, lr: 2.922269814092905e-05, reference_loss: 40.58005142211914
2023-05-11 22:40:27,794	44k	INFO	====> Epoch: 211, cost 88.45 s
2023-05-11 22:41:38,370	44k	INFO	Train Epoch: 212 [76%]
2023-05-11 22:41:38,370	44k	INFO	Losses: [2.7046608924865723, 2.0745532512664795, 13.485158920288086, 19.286104202270508, 0.5432848930358887], step: 32400, lr: 2.9219045303661433e-05, reference_loss: 38.0937614440918
2023-05-11 22:41:56,147	44k	INFO	====> Epoch: 212, cost 88.35 s
2023-05-11 22:43:23,971	44k	INFO	====> Epoch: 213, cost 87.82 s
2023-05-11 22:43:46,542	44k	INFO	Train Epoch: 214 [7%]
2023-05-11 22:43:46,543	44k	INFO	Losses: [2.3181324005126953, 2.3565571308135986, 11.260844230651855, 20.600004196166992, 0.7078282833099365], step: 32600, lr: 2.9211740998883098e-05, reference_loss: 37.24336624145508
2023-05-11 22:44:52,351	44k	INFO	====> Epoch: 214, cost 88.38 s
2023-05-11 22:45:36,230	44k	INFO	Train Epoch: 215 [37%]
2023-05-11 22:45:36,231	44k	INFO	Losses: [2.791977643966675, 1.7529821395874023, 12.317703247070312, 20.926084518432617, 0.5998483896255493], step: 32800, lr: 2.9208089531258237e-05, reference_loss: 38.38859939575195
2023-05-11 22:46:20,911	44k	INFO	====> Epoch: 215, cost 88.56 s
2023-05-11 22:47:26,074	44k	INFO	Train Epoch: 216 [68%]
2023-05-11 22:47:26,075	44k	INFO	Losses: [2.5244128704071045, 2.3450944423675537, 10.498457908630371, 20.899843215942383, 0.6350367665290833], step: 33000, lr: 2.9204438520066827e-05, reference_loss: 36.9028434753418
2023-05-11 22:47:31,708	44k	INFO	Saving model and optimizer state at iteration 216 to ./logs\44k\G_33000.pth
2023-05-11 22:47:32,535	44k	INFO	Saving model and optimizer state at iteration 216 to ./logs\44k\D_33000.pth
2023-05-11 22:47:33,218	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\G_30000.pth
2023-05-11 22:47:33,266	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\D_30000.pth
2023-05-11 22:47:55,988	44k	INFO	====> Epoch: 216, cost 95.08 s
2023-05-11 22:49:22,577	44k	INFO	Train Epoch: 217 [99%]
2023-05-11 22:49:22,578	44k	INFO	Losses: [2.087664842605591, 2.370191812515259, 14.999993324279785, 20.936115264892578, 0.9530125260353088], step: 33200, lr: 2.920078796525182e-05, reference_loss: 41.346981048583984
2023-05-11 22:49:24,229	44k	INFO	====> Epoch: 217, cost 88.24 s
2023-05-11 22:50:52,028	44k	INFO	====> Epoch: 218, cost 87.80 s
2023-05-11 22:51:30,127	44k	INFO	Train Epoch: 219 [29%]
2023-05-11 22:51:30,128	44k	INFO	Losses: [2.29052996635437, 2.473458766937256, 11.260251998901367, 19.897136688232422, 0.7827029824256897], step: 33400, lr: 2.9193488224522815e-05, reference_loss: 36.70408248901367
2023-05-11 22:52:19,411	44k	INFO	====> Epoch: 219, cost 87.38 s
2023-05-11 22:53:18,422	44k	INFO	Train Epoch: 220 [60%]
2023-05-11 22:53:18,422	44k	INFO	Losses: [2.350973129272461, 2.281623601913452, 7.568717956542969, 17.09041404724121, 0.8629797697067261], step: 33600, lr: 2.9189839038494747e-05, reference_loss: 30.154708862304688
2023-05-11 22:53:46,514	44k	INFO	====> Epoch: 220, cost 87.10 s
2023-05-11 22:55:06,685	44k	INFO	Train Epoch: 221 [91%]
2023-05-11 22:55:06,685	44k	INFO	Losses: [2.3576741218566895, 2.4093825817108154, 9.272815704345703, 19.84087371826172, 0.8007166981697083], step: 33800, lr: 2.9186190308614934e-05, reference_loss: 34.681461334228516
2023-05-11 22:55:13,628	44k	INFO	====> Epoch: 221, cost 87.11 s
2023-05-11 22:56:40,344	44k	INFO	====> Epoch: 222, cost 86.72 s
2023-05-11 22:57:12,921	44k	INFO	Train Epoch: 223 [22%]
2023-05-11 22:57:12,922	44k	INFO	Losses: [2.0258748531341553, 2.9286649227142334, 8.281425476074219, 19.124338150024414, 0.993285059928894], step: 34000, lr: 2.9178894217072e-05, reference_loss: 33.35359191894531
2023-05-11 22:57:18,353	44k	INFO	Saving model and optimizer state at iteration 223 to ./logs\44k\G_34000.pth
2023-05-11 22:57:19,134	44k	INFO	Saving model and optimizer state at iteration 223 to ./logs\44k\D_34000.pth
2023-05-11 22:57:19,811	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\G_31000.pth
2023-05-11 22:57:19,858	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\D_31000.pth
2023-05-11 22:58:14,171	44k	INFO	====> Epoch: 223, cost 93.83 s
2023-05-11 22:59:07,808	44k	INFO	Train Epoch: 224 [52%]
2023-05-11 22:59:07,808	44k	INFO	Losses: [2.0101354122161865, 2.7200112342834473, 13.931513786315918, 21.98060417175293, 0.7607918381690979], step: 34200, lr: 2.9175246855294863e-05, reference_loss: 41.40305709838867
2023-05-11 22:59:41,401	44k	INFO	====> Epoch: 224, cost 87.23 s
2023-05-11 23:00:55,934	44k	INFO	Train Epoch: 225 [83%]
2023-05-11 23:00:55,934	44k	INFO	Losses: [2.629448890686035, 2.2429330348968506, 12.29239273071289, 19.560827255249023, 0.8214043378829956], step: 34400, lr: 2.917159994943795e-05, reference_loss: 37.54700469970703
2023-05-11 23:01:08,256	44k	INFO	====> Epoch: 225, cost 86.85 s
2023-05-11 23:02:34,834	44k	INFO	====> Epoch: 226, cost 86.58 s
2023-05-11 23:03:02,104	44k	INFO	Train Epoch: 227 [14%]
2023-05-11 23:03:02,105	44k	INFO	Losses: [2.1601171493530273, 2.1038057804107666, 19.6684627532959, 19.231704711914062, 0.981499969959259], step: 34600, lr: 2.9164307505256837e-05, reference_loss: 44.14558792114258
2023-05-11 23:04:02,068	44k	INFO	====> Epoch: 227, cost 87.23 s
2023-05-11 23:04:50,256	44k	INFO	Train Epoch: 228 [44%]
2023-05-11 23:04:50,257	44k	INFO	Losses: [2.5843756198883057, 2.345271587371826, 8.238197326660156, 19.72162628173828, 0.7127050757408142], step: 34800, lr: 2.916066196681868e-05, reference_loss: 33.602176666259766
2023-05-11 23:05:28,998	44k	INFO	====> Epoch: 228, cost 86.93 s
2023-05-11 23:06:38,130	44k	INFO	Train Epoch: 229 [75%]
2023-05-11 23:06:38,130	44k	INFO	Losses: [2.583364486694336, 2.29083514213562, 10.543819427490234, 22.168258666992188, 0.4813070297241211], step: 35000, lr: 2.9157016884072827e-05, reference_loss: 38.06758499145508
2023-05-11 23:06:43,832	44k	INFO	Saving model and optimizer state at iteration 229 to ./logs\44k\G_35000.pth
2023-05-11 23:06:44,632	44k	INFO	Saving model and optimizer state at iteration 229 to ./logs\44k\D_35000.pth
2023-05-11 23:06:45,321	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\G_32000.pth
2023-05-11 23:06:45,390	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\D_32000.pth
2023-05-11 23:07:03,133	44k	INFO	====> Epoch: 229, cost 94.13 s
2023-05-11 23:08:29,638	44k	INFO	====> Epoch: 230, cost 86.51 s
2023-05-11 23:08:51,610	44k	INFO	Train Epoch: 231 [6%]
2023-05-11 23:08:51,611	44k	INFO	Losses: [2.318617820739746, 2.1824753284454346, 7.610252380371094, 20.95266342163086, 0.6841897964477539], step: 35200, lr: 2.9149728085430194e-05, reference_loss: 33.748199462890625
2023-05-11 23:09:56,594	44k	INFO	====> Epoch: 231, cost 86.96 s
2023-05-11 23:10:39,274	44k	INFO	Train Epoch: 232 [37%]
2023-05-11 23:10:39,275	44k	INFO	Losses: [1.9487130641937256, 2.582306146621704, 17.558088302612305, 22.907352447509766, 0.7167505621910095], step: 35400, lr: 2.9146084369419513e-05, reference_loss: 45.71321105957031
2023-05-11 23:11:23,614	44k	INFO	====> Epoch: 232, cost 87.02 s
2023-05-11 23:12:27,462	44k	INFO	Train Epoch: 233 [67%]
2023-05-11 23:12:27,462	44k	INFO	Losses: [2.434251308441162, 1.916774034500122, 14.019707679748535, 20.46570587158203, 0.696102499961853], step: 35600, lr: 2.9142441108873335e-05, reference_loss: 39.53254318237305
2023-05-11 23:12:50,725	44k	INFO	====> Epoch: 233, cost 87.11 s
2023-05-11 23:14:15,728	44k	INFO	Train Epoch: 234 [98%]
2023-05-11 23:14:15,729	44k	INFO	Losses: [2.373408317565918, 2.158862829208374, 15.192477226257324, 21.282258987426758, 1.1444936990737915], step: 35800, lr: 2.9138798303734726e-05, reference_loss: 42.1515007019043
2023-05-11 23:14:17,791	44k	INFO	====> Epoch: 234, cost 87.07 s
2023-05-11 23:15:44,221	44k	INFO	====> Epoch: 235, cost 86.43 s
2023-05-11 23:16:21,772	44k	INFO	Train Epoch: 236 [29%]
2023-05-11 23:16:21,772	44k	INFO	Losses: [2.328075647354126, 2.1455647945404053, 17.557395935058594, 20.413278579711914, 0.9318976402282715], step: 36000, lr: 2.9131514059452513e-05, reference_loss: 43.3762092590332
2023-05-11 23:16:27,178	44k	INFO	Saving model and optimizer state at iteration 236 to ./logs\44k\G_36000.pth
2023-05-11 23:16:27,947	44k	INFO	Saving model and optimizer state at iteration 236 to ./logs\44k\D_36000.pth
2023-05-11 23:16:28,626	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\G_33000.pth
2023-05-11 23:16:28,672	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\D_33000.pth
2023-05-11 23:17:17,991	44k	INFO	====> Epoch: 236, cost 93.77 s
2023-05-11 23:18:16,326	44k	INFO	Train Epoch: 237 [59%]
2023-05-11 23:18:16,326	44k	INFO	Losses: [1.9186285734176636, 2.3429036140441895, 15.22066879272461, 21.276222229003906, 1.1392505168914795], step: 36200, lr: 2.912787262019508e-05, reference_loss: 41.897674560546875
2023-05-11 23:18:44,912	44k	INFO	====> Epoch: 237, cost 86.92 s
2023-05-11 23:20:04,428	44k	INFO	Train Epoch: 238 [90%]
2023-05-11 23:20:04,428	44k	INFO	Losses: [2.264615058898926, 2.287299156188965, 16.440340042114258, 17.999305725097656, 0.6580404043197632], step: 36400, lr: 2.9124231636117555e-05, reference_loss: 39.649600982666016
2023-05-11 23:20:11,795	44k	INFO	====> Epoch: 238, cost 86.88 s
2023-05-11 23:21:38,421	44k	INFO	====> Epoch: 239, cost 86.63 s
2023-05-11 23:22:10,502	44k	INFO	Train Epoch: 240 [21%]
2023-05-11 23:22:10,503	44k	INFO	Losses: [2.2759435176849365, 2.249727487564087, 14.910801887512207, 18.91904640197754, 0.8311969637870789], step: 36600, lr: 2.9116951033274644e-05, reference_loss: 39.18671798706055
2023-05-11 23:23:05,565	44k	INFO	====> Epoch: 240, cost 87.14 s
2023-05-11 23:23:58,870	44k	INFO	Train Epoch: 241 [52%]
2023-05-11 23:23:58,870	44k	INFO	Losses: [2.496267080307007, 2.147951602935791, 13.372493743896484, 19.058626174926758, 0.30706867575645447], step: 36800, lr: 2.9113311414395485e-05, reference_loss: 37.382408142089844
2023-05-11 23:24:32,861	44k	INFO	====> Epoch: 241, cost 87.30 s
2023-05-11 23:25:46,908	44k	INFO	Train Epoch: 242 [82%]
2023-05-11 23:25:46,908	44k	INFO	Losses: [2.1299355030059814, 2.3770081996917725, 18.414464950561523, 21.180307388305664, 1.2796505689620972], step: 37000, lr: 2.9109672250468686e-05, reference_loss: 45.38136672973633
2023-05-11 23:25:52,460	44k	INFO	Saving model and optimizer state at iteration 242 to ./logs\44k\G_37000.pth
2023-05-11 23:25:53,177	44k	INFO	Saving model and optimizer state at iteration 242 to ./logs\44k\D_37000.pth
2023-05-11 23:25:53,876	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\G_34000.pth
2023-05-11 23:25:53,931	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\D_34000.pth
2023-05-11 23:26:06,680	44k	INFO	====> Epoch: 242, cost 93.82 s
2023-05-11 23:27:33,210	44k	INFO	====> Epoch: 243, cost 86.53 s
2023-05-11 23:27:59,748	44k	INFO	Train Epoch: 244 [13%]
2023-05-11 23:27:59,749	44k	INFO	Losses: [2.8545777797698975, 2.7850182056427, 7.647801399230957, 15.885904312133789, 0.8959618210792542], step: 37200, lr: 2.9102395287244696e-05, reference_loss: 30.069263458251953
2023-05-11 23:29:00,079	44k	INFO	====> Epoch: 244, cost 86.87 s
2023-05-11 23:29:47,888	44k	INFO	Train Epoch: 245 [44%]
2023-05-11 23:29:47,889	44k	INFO	Losses: [2.607884407043457, 2.2625949382781982, 9.111988067626953, 18.94312858581543, 0.978264570236206], step: 37400, lr: 2.909875748783379e-05, reference_loss: 33.90386199951172
2023-05-11 23:30:27,221	44k	INFO	====> Epoch: 245, cost 87.14 s
2023-05-11 23:31:37,147	44k	INFO	Train Epoch: 246 [75%]
2023-05-11 23:31:37,147	44k	INFO	Losses: [2.371624231338501, 2.3269622325897217, 14.278680801391602, 18.87295913696289, 0.7479438185691833], step: 37600, lr: 2.909512014314781e-05, reference_loss: 38.59817123413086
2023-05-11 23:31:55,708	44k	INFO	====> Epoch: 246, cost 88.49 s
2023-05-11 23:33:23,707	44k	INFO	====> Epoch: 247, cost 88.00 s
2023-05-11 23:33:45,138	44k	INFO	Train Epoch: 248 [5%]
2023-05-11 23:33:45,138	44k	INFO	Losses: [2.4249305725097656, 2.6680610179901123, 11.795964241027832, 18.1381778717041, 0.726823627948761], step: 37800, lr: 2.908784681772327e-05, reference_loss: 35.75395584106445
2023-05-11 23:34:50,822	44k	INFO	====> Epoch: 248, cost 87.12 s
2023-05-11 23:35:33,016	44k	INFO	Train Epoch: 249 [36%]
2023-05-11 23:35:33,016	44k	INFO	Losses: [2.571638345718384, 2.0839293003082275, 11.542834281921387, 18.88866424560547, 0.43505746126174927], step: 38000, lr: 2.9084210836871055e-05, reference_loss: 35.522125244140625
2023-05-11 23:35:38,572	44k	INFO	Saving model and optimizer state at iteration 249 to ./logs\44k\G_38000.pth
2023-05-11 23:35:39,384	44k	INFO	Saving model and optimizer state at iteration 249 to ./logs\44k\D_38000.pth
2023-05-11 23:35:40,165	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\G_35000.pth
2023-05-11 23:35:40,216	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\D_35000.pth
2023-05-11 23:36:24,791	44k	INFO	====> Epoch: 249, cost 93.97 s
2023-05-11 23:37:28,336	44k	INFO	Train Epoch: 250 [67%]
2023-05-11 23:37:28,336	44k	INFO	Losses: [2.3983511924743652, 2.6881916522979736, 7.036543846130371, 21.21124839782715, 0.9582434296607971], step: 38200, lr: 2.9080575310516446e-05, reference_loss: 34.292579650878906
2023-05-11 23:37:51,887	44k	INFO	====> Epoch: 250, cost 87.10 s
2023-05-11 23:39:16,330	44k	INFO	Train Epoch: 251 [97%]
2023-05-11 23:39:16,330	44k	INFO	Losses: [2.2132420539855957, 2.5108938217163086, 13.515555381774902, 21.54300308227539, 0.8297137022018433], step: 38400, lr: 2.907694023860263e-05, reference_loss: 40.612403869628906
2023-05-11 23:39:18,851	44k	INFO	====> Epoch: 251, cost 86.96 s
2023-05-11 23:40:45,916	44k	INFO	====> Epoch: 252, cost 87.06 s
2023-05-11 23:41:22,938	44k	INFO	Train Epoch: 253 [28%]
2023-05-11 23:41:22,939	44k	INFO	Losses: [2.296832323074341, 2.4653115272521973, 14.684825897216797, 21.370893478393555, 0.6395803093910217], step: 38600, lr: 2.906967145787017e-05, reference_loss: 41.45744323730469
2023-05-11 23:42:13,056	44k	INFO	====> Epoch: 253, cost 87.14 s
2023-05-11 23:43:11,132	44k	INFO	Train Epoch: 254 [59%]
2023-05-11 23:43:11,132	44k	INFO	Losses: [2.8358492851257324, 2.129021167755127, 8.167855262756348, 14.802287101745605, 0.7585254311561584], step: 38800, lr: 2.9066037748937933e-05, reference_loss: 28.693538665771484
2023-05-11 23:43:40,035	44k	INFO	====> Epoch: 254, cost 86.98 s
2023-05-11 23:44:58,951	44k	INFO	Train Epoch: 255 [90%]
2023-05-11 23:44:58,951	44k	INFO	Losses: [2.384014129638672, 2.609471321105957, 13.67310905456543, 20.50167465209961, 0.9759578108787537], step: 39000, lr: 2.9062404494219316e-05, reference_loss: 40.14422607421875
2023-05-11 23:45:04,485	44k	INFO	Saving model and optimizer state at iteration 255 to ./logs\44k\G_39000.pth
2023-05-11 23:45:05,258	44k	INFO	Saving model and optimizer state at iteration 255 to ./logs\44k\D_39000.pth
2023-05-11 23:45:05,938	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\G_36000.pth
2023-05-11 23:45:05,991	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\D_36000.pth
2023-05-11 23:45:13,530	44k	INFO	====> Epoch: 255, cost 93.49 s
2023-05-11 23:46:40,103	44k	INFO	====> Epoch: 256, cost 86.57 s
2023-05-11 23:47:11,698	44k	INFO	Train Epoch: 257 [20%]
2023-05-11 23:47:11,699	44k	INFO	Losses: [2.412679672241211, 2.4876222610473633, 10.623312950134277, 18.588048934936523, 0.9430463910102844], step: 39200, lr: 2.905513934719583e-05, reference_loss: 35.054710388183594
2023-05-11 23:48:06,964	44k	INFO	====> Epoch: 257, cost 86.86 s
2023-05-11 23:48:59,544	44k	INFO	Train Epoch: 258 [51%]
2023-05-11 23:48:59,545	44k	INFO	Losses: [2.6109132766723633, 2.576932668685913, 8.075935363769531, 16.961647033691406, 0.5622955560684204], step: 39400, lr: 2.9051507454777428e-05, reference_loss: 30.7877254486084
2023-05-11 23:49:33,809	44k	INFO	====> Epoch: 258, cost 86.85 s
2023-05-11 23:50:47,837	44k	INFO	Train Epoch: 259 [82%]
2023-05-11 23:50:47,838	44k	INFO	Losses: [2.4824814796447754, 2.4467413425445557, 14.783933639526367, 19.17391014099121, 0.9434327483177185], step: 39600, lr: 2.904787601634558e-05, reference_loss: 39.830501556396484
2023-05-11 23:51:01,044	44k	INFO	====> Epoch: 259, cost 87.23 s
2023-05-11 23:52:27,488	44k	INFO	====> Epoch: 260, cost 86.44 s
2023-05-11 23:52:53,831	44k	INFO	Train Epoch: 261 [12%]
2023-05-11 23:52:53,832	44k	INFO	Losses: [2.0811073780059814, 2.643122911453247, 11.60995864868164, 20.667049407958984, 0.9795186519622803], step: 39800, lr: 2.9040614501214555e-05, reference_loss: 37.98075485229492
2023-05-11 23:53:54,541	44k	INFO	====> Epoch: 261, cost 87.05 s
2023-05-11 23:54:41,763	44k	INFO	Train Epoch: 262 [43%]
2023-05-11 23:54:41,764	44k	INFO	Losses: [2.2680230140686035, 2.0158121585845947, 13.714014053344727, 20.011919021606445, 0.7242442965507507], step: 40000, lr: 2.90369844244019e-05, reference_loss: 38.7340087890625
2023-05-11 23:54:47,158	44k	INFO	Saving model and optimizer state at iteration 262 to ./logs\44k\G_40000.pth
2023-05-11 23:54:47,920	44k	INFO	Saving model and optimizer state at iteration 262 to ./logs\44k\D_40000.pth
2023-05-11 23:54:48,596	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\G_37000.pth
2023-05-11 23:54:48,643	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\D_37000.pth
2023-05-11 23:55:28,172	44k	INFO	====> Epoch: 262, cost 93.63 s
2023-05-11 23:56:36,433	44k	INFO	Train Epoch: 263 [74%]
2023-05-11 23:56:36,434	44k	INFO	Losses: [2.270467519760132, 2.1916332244873047, 12.428770065307617, 22.03148078918457, 0.7733843922615051], step: 40200, lr: 2.903335480134885e-05, reference_loss: 39.695735931396484
2023-05-11 23:56:55,056	44k	INFO	====> Epoch: 263, cost 86.88 s
2023-05-11 23:58:21,388	44k	INFO	====> Epoch: 264, cost 86.33 s
2023-05-11 23:58:42,315	44k	INFO	Train Epoch: 265 [5%]
2023-05-11 23:58:42,316	44k	INFO	Losses: [2.2979705333709717, 2.5276665687561035, 15.098774909973145, 20.4643611907959, 0.7607564330101013], step: 40400, lr: 2.9026096916294678e-05, reference_loss: 41.14952850341797
2023-05-11 23:59:48,509	44k	INFO	====> Epoch: 265, cost 87.12 s
2023-05-12 00:00:30,960	44k	INFO	Train Epoch: 266 [35%]
2023-05-12 00:00:30,961	44k	INFO	Losses: [2.3222877979278564, 2.3944623470306396, 12.199092864990234, 19.330812454223633, 0.7873807549476624], step: 40600, lr: 2.9022468654180138e-05, reference_loss: 37.03403854370117
2023-05-12 00:01:16,274	44k	INFO	====> Epoch: 266, cost 87.76 s
2023-05-12 00:02:19,493	44k	INFO	Train Epoch: 267 [66%]
2023-05-12 00:02:19,494	44k	INFO	Losses: [2.531327962875366, 2.6196436882019043, 7.450255393981934, 15.686241149902344, 0.8771728873252869], step: 40800, lr: 2.9018840845598365e-05, reference_loss: 29.164640426635742
2023-05-12 00:02:43,941	44k	INFO	====> Epoch: 267, cost 87.67 s
2023-05-12 00:04:07,895	44k	INFO	Train Epoch: 268 [97%]
2023-05-12 00:04:07,896	44k	INFO	Losses: [2.0768423080444336, 2.449065685272217, 14.927592277526855, 21.343576431274414, 0.6163033246994019], step: 41000, lr: 2.9015213490492665e-05, reference_loss: 41.41337966918945
2023-05-12 00:04:13,624	44k	INFO	Saving model and optimizer state at iteration 268 to ./logs\44k\G_41000.pth
2023-05-12 00:04:14,360	44k	INFO	Saving model and optimizer state at iteration 268 to ./logs\44k\D_41000.pth
2023-05-12 00:04:15,036	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\G_38000.pth
2023-05-12 00:04:15,090	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\D_38000.pth
2023-05-12 00:04:17,760	44k	INFO	====> Epoch: 268, cost 93.82 s
2023-05-12 00:05:44,723	44k	INFO	====> Epoch: 269, cost 86.96 s
2023-05-12 00:06:21,308	44k	INFO	Train Epoch: 270 [27%]
2023-05-12 00:06:21,308	44k	INFO	Losses: [2.1342616081237793, 2.8914928436279297, 12.985645294189453, 20.656923294067383, 1.0261586904525757], step: 41200, lr: 2.900796014048275e-05, reference_loss: 39.694480895996094
2023-05-12 00:07:11,763	44k	INFO	====> Epoch: 270, cost 87.04 s
2023-05-12 00:08:09,445	44k	INFO	Train Epoch: 271 [58%]
2023-05-12 00:08:09,445	44k	INFO	Losses: [2.667980194091797, 2.7980167865753174, 10.094583511352539, 19.34132957458496, 0.8694612383842468], step: 41400, lr: 2.900433414546519e-05, reference_loss: 35.77136993408203
2023-05-12 00:08:38,772	44k	INFO	====> Epoch: 271, cost 87.01 s
2023-05-12 00:09:57,600	44k	INFO	Train Epoch: 272 [89%]
2023-05-12 00:09:57,600	44k	INFO	Losses: [2.0318732261657715, 2.6755869388580322, 11.406761169433594, 15.29020881652832, 0.9756430983543396], step: 41600, lr: 2.9000708603697004e-05, reference_loss: 32.38007354736328
2023-05-12 00:10:05,831	44k	INFO	====> Epoch: 272, cost 87.06 s
2023-05-12 00:11:32,647	44k	INFO	====> Epoch: 273, cost 86.82 s
2023-05-12 00:12:03,602	44k	INFO	Train Epoch: 274 [20%]
2023-05-12 00:12:03,603	44k	INFO	Losses: [2.566847801208496, 2.239710569381714, 9.606109619140625, 20.148448944091797, 0.9577219486236572], step: 41800, lr: 2.899345887968215e-05, reference_loss: 35.51884078979492
2023-05-12 00:12:59,370	44k	INFO	====> Epoch: 274, cost 86.72 s
2023-05-12 00:13:51,538	44k	INFO	Train Epoch: 275 [50%]
2023-05-12 00:13:51,538	44k	INFO	Losses: [2.529519557952881, 2.3417234420776367, 9.40464973449707, 15.548064231872559, 0.7062731385231018], step: 42000, lr: 2.8989834697322186e-05, reference_loss: 30.530229568481445
2023-05-12 00:13:56,955	44k	INFO	Saving model and optimizer state at iteration 275 to ./logs\44k\G_42000.pth
2023-05-12 00:13:57,893	44k	INFO	Saving model and optimizer state at iteration 275 to ./logs\44k\D_42000.pth
2023-05-12 00:13:58,598	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\G_39000.pth
2023-05-12 00:13:58,646	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\D_39000.pth
2023-05-12 00:14:33,022	44k	INFO	====> Epoch: 275, cost 93.65 s
2023-05-12 00:15:46,056	44k	INFO	Train Epoch: 276 [81%]
2023-05-12 00:15:46,056	44k	INFO	Losses: [2.4142253398895264, 2.3950157165527344, 12.35134506225586, 19.385360717773438, 0.9850819706916809], step: 42200, lr: 2.8986210967985018e-05, reference_loss: 37.531028747558594
2023-05-12 00:15:59,824	44k	INFO	====> Epoch: 276, cost 86.80 s
2023-05-12 00:17:26,260	44k	INFO	====> Epoch: 277, cost 86.44 s
2023-05-12 00:17:52,136	44k	INFO	Train Epoch: 278 [12%]
2023-05-12 00:17:52,136	44k	INFO	Losses: [2.146235227584839, 2.4149203300476074, 11.63279914855957, 18.732440948486328, 0.7447122931480408], step: 42400, lr: 2.897896486815257e-05, reference_loss: 35.67110824584961
2023-05-12 00:18:53,435	44k	INFO	====> Epoch: 278, cost 87.17 s
2023-05-12 00:19:40,228	44k	INFO	Train Epoch: 279 [42%]
2023-05-12 00:19:40,228	44k	INFO	Losses: [2.163267135620117, 2.6091108322143555, 11.535022735595703, 19.45165252685547, 0.8022618889808655], step: 42600, lr: 2.8975342497544048e-05, reference_loss: 36.56131362915039
2023-05-12 00:20:20,517	44k	INFO	====> Epoch: 279, cost 87.08 s
2023-05-12 00:21:28,375	44k	INFO	Train Epoch: 280 [73%]
2023-05-12 00:21:28,375	44k	INFO	Losses: [2.1821250915527344, 2.3811721801757812, 14.84898567199707, 19.10666847229004, 1.2198735475540161], step: 42800, lr: 2.8971720579731854e-05, reference_loss: 39.738826751708984
2023-05-12 00:21:47,563	44k	INFO	====> Epoch: 280, cost 87.05 s
2023-05-12 00:23:13,913	44k	INFO	====> Epoch: 281, cost 86.35 s
2023-05-12 00:23:34,341	44k	INFO	Train Epoch: 282 [4%]
2023-05-12 00:23:34,341	44k	INFO	Losses: [2.436422109603882, 2.386256456375122, 11.608360290527344, 19.41828727722168, 0.9322484731674194], step: 43000, lr: 2.8964478102270053e-05, reference_loss: 36.78157424926758
2023-05-12 00:23:40,020	44k	INFO	Saving model and optimizer state at iteration 282 to ./logs\44k\G_43000.pth
2023-05-12 00:23:40,831	44k	INFO	Saving model and optimizer state at iteration 282 to ./logs\44k\D_43000.pth
2023-05-12 00:23:41,524	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\G_40000.pth
2023-05-12 00:23:41,571	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\D_40000.pth
2023-05-12 00:24:47,728	44k	INFO	====> Epoch: 282, cost 93.82 s
2023-05-12 00:25:29,129	44k	INFO	Train Epoch: 283 [35%]
2023-05-12 00:25:29,129	44k	INFO	Losses: [2.3282203674316406, 2.499232292175293, 11.087320327758789, 20.374372482299805, 1.0009374618530273], step: 43200, lr: 2.8960857542507267e-05, reference_loss: 37.29008483886719
2023-05-12 00:26:14,848	44k	INFO	====> Epoch: 283, cost 87.12 s
2023-05-12 00:27:17,414	44k	INFO	Train Epoch: 284 [65%]
2023-05-12 00:27:17,414	44k	INFO	Losses: [3.2481675148010254, 2.1694464683532715, 15.336135864257812, 19.12453842163086, 0.6457279324531555], step: 43400, lr: 2.8957237435314452e-05, reference_loss: 40.524017333984375
2023-05-12 00:27:42,011	44k	INFO	====> Epoch: 284, cost 87.16 s
2023-05-12 00:29:06,052	44k	INFO	Train Epoch: 285 [96%]
2023-05-12 00:29:06,052	44k	INFO	Losses: [2.5966134071350098, 1.9764971733093262, 11.448287010192871, 14.873833656311035, 0.9429264664649963], step: 43600, lr: 2.8953617780635037e-05, reference_loss: 31.83815574645996
2023-05-12 00:29:09,508	44k	INFO	====> Epoch: 285, cost 87.50 s
2023-05-12 00:30:35,868	44k	INFO	====> Epoch: 286, cost 86.36 s
2023-05-12 00:31:12,031	44k	INFO	Train Epoch: 287 [27%]
2023-05-12 00:31:12,032	44k	INFO	Losses: [2.3675150871276855, 2.597141981124878, 12.064946174621582, 18.58349609375, 0.8166589736938477], step: 43800, lr: 2.8946379828590154e-05, reference_loss: 36.42975616455078
2023-05-12 00:32:02,829	44k	INFO	====> Epoch: 287, cost 86.96 s
2023-05-12 00:32:59,645	44k	INFO	Train Epoch: 288 [58%]
2023-05-12 00:32:59,646	44k	INFO	Losses: [2.576910972595215, 2.197340726852417, 12.096631050109863, 20.428077697753906, 0.9696913361549377], step: 44000, lr: 2.894276153111158e-05, reference_loss: 38.268653869628906
2023-05-12 00:33:05,202	44k	INFO	Saving model and optimizer state at iteration 288 to ./logs\44k\G_44000.pth
2023-05-12 00:33:05,983	44k	INFO	Saving model and optimizer state at iteration 288 to ./logs\44k\D_44000.pth
2023-05-12 00:33:06,658	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\G_41000.pth
2023-05-12 00:33:06,695	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\D_41000.pth
2023-05-12 00:33:36,184	44k	INFO	====> Epoch: 288, cost 93.35 s
2023-05-12 00:34:54,184	44k	INFO	Train Epoch: 289 [88%]
2023-05-12 00:34:54,185	44k	INFO	Losses: [2.233393430709839, 2.3542733192443848, 12.026976585388184, 21.86504554748535, 1.0087814331054688], step: 44200, lr: 2.893914368592019e-05, reference_loss: 39.48847198486328
2023-05-12 00:35:02,987	44k	INFO	====> Epoch: 289, cost 86.80 s
2023-05-12 00:36:29,670	44k	INFO	====> Epoch: 290, cost 86.68 s
2023-05-12 00:37:00,382	44k	INFO	Train Epoch: 291 [19%]
2023-05-12 00:37:00,383	44k	INFO	Losses: [2.503129482269287, 2.6620848178863525, 12.32503604888916, 18.790834426879883, 1.2946611642837524], step: 44400, lr: 2.8931909352172828e-05, reference_loss: 37.57574462890625
2023-05-12 00:37:56,944	44k	INFO	====> Epoch: 291, cost 87.27 s
2023-05-12 00:38:48,948	44k	INFO	Train Epoch: 292 [50%]
2023-05-12 00:38:48,948	44k	INFO	Losses: [2.2488231658935547, 2.6991515159606934, 15.310731887817383, 19.656644821166992, 0.7508647441864014], step: 44600, lr: 2.8928292863503805e-05, reference_loss: 40.66621780395508
2023-05-12 00:39:24,133	44k	INFO	====> Epoch: 292, cost 87.19 s
2023-05-12 00:40:36,904	44k	INFO	Train Epoch: 293 [80%]
2023-05-12 00:40:36,904	44k	INFO	Losses: [2.274759292602539, 2.5190110206604004, 14.346514701843262, 17.961681365966797, 0.5254409313201904], step: 44800, lr: 2.8924676826895866e-05, reference_loss: 37.62740707397461
2023-05-12 00:40:51,057	44k	INFO	====> Epoch: 293, cost 86.92 s
2023-05-12 00:42:17,566	44k	INFO	====> Epoch: 294, cost 86.51 s
2023-05-12 00:42:42,906	44k	INFO	Train Epoch: 295 [11%]
2023-05-12 00:42:42,907	44k	INFO	Losses: [2.1780483722686768, 2.305142402648926, 12.447136878967285, 18.825618743896484, 0.5107586979866028], step: 45000, lr: 2.8917446109637215e-05, reference_loss: 36.26670455932617
2023-05-12 00:42:48,325	44k	INFO	Saving model and optimizer state at iteration 295 to ./logs\44k\G_45000.pth
2023-05-12 00:42:49,140	44k	INFO	Saving model and optimizer state at iteration 295 to ./logs\44k\D_45000.pth
2023-05-12 00:42:49,810	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\G_42000.pth
2023-05-12 00:42:49,854	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\D_42000.pth
2023-05-12 00:43:51,243	44k	INFO	====> Epoch: 295, cost 93.68 s
2023-05-12 00:44:37,622	44k	INFO	Train Epoch: 296 [42%]
2023-05-12 00:44:37,622	44k	INFO	Losses: [2.179664134979248, 2.6126747131347656, 13.71163558959961, 18.07395362854004, 0.6545605659484863], step: 45200, lr: 2.891383142887351e-05, reference_loss: 37.232486724853516
2023-05-12 00:45:18,392	44k	INFO	====> Epoch: 296, cost 87.15 s
2023-05-12 00:46:25,892	44k	INFO	Train Epoch: 297 [73%]
2023-05-12 00:46:25,892	44k	INFO	Losses: [2.4465785026550293, 2.7403557300567627, 11.507038116455078, 15.70568561553955, 0.9382854104042053], step: 45400, lr: 2.8910217199944898e-05, reference_loss: 33.33794403076172
2023-05-12 00:46:45,478	44k	INFO	====> Epoch: 297, cost 87.09 s
2023-05-12 00:48:12,077	44k	INFO	====> Epoch: 298, cost 86.60 s
2023-05-12 00:48:32,174	44k	INFO	Train Epoch: 299 [3%]
2023-05-12 00:48:32,174	44k	INFO	Losses: [1.9995447397232056, 2.6359925270080566, 15.022774696350098, 19.879331588745117, 0.8272438645362854], step: 45600, lr: 2.8902990097367054e-05, reference_loss: 40.36488723754883
2023-05-12 00:49:39,001	44k	INFO	====> Epoch: 299, cost 86.92 s
2023-05-12 00:50:19,889	44k	INFO	Train Epoch: 300 [34%]
2023-05-12 00:50:19,889	44k	INFO	Losses: [2.3646130561828613, 2.3081157207489014, 13.545132637023926, 19.039533615112305, 0.8943161368370056], step: 45800, lr: 2.889937722360488e-05, reference_loss: 38.151710510253906
2023-05-12 00:51:05,843	44k	INFO	====> Epoch: 300, cost 86.84 s
2023-05-12 00:52:07,801	44k	INFO	Train Epoch: 301 [65%]
2023-05-12 00:52:07,802	44k	INFO	Losses: [2.352902889251709, 2.4987077713012695, 15.375273704528809, 20.87496566772461, 0.5966988801956177], step: 46000, lr: 2.889576480145193e-05, reference_loss: 41.698551177978516
2023-05-12 00:52:13,274	44k	INFO	Saving model and optimizer state at iteration 301 to ./logs\44k\G_46000.pth
2023-05-12 00:52:14,222	44k	INFO	Saving model and optimizer state at iteration 301 to ./logs\44k\D_46000.pth
2023-05-12 00:52:14,916	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\G_43000.pth
2023-05-12 00:52:14,959	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\D_43000.pth
2023-05-12 00:52:39,610	44k	INFO	====> Epoch: 301, cost 93.77 s
2023-05-12 00:54:02,639	44k	INFO	Train Epoch: 302 [95%]
2023-05-12 00:54:02,639	44k	INFO	Losses: [2.4250588417053223, 2.498613119125366, 13.063399314880371, 19.875106811523438, 0.41463860869407654], step: 46200, lr: 2.8892152830851748e-05, reference_loss: 38.276817321777344
2023-05-12 00:54:06,440	44k	INFO	====> Epoch: 302, cost 86.83 s
2023-05-12 00:55:33,027	44k	INFO	====> Epoch: 303, cost 86.59 s
2023-05-12 00:56:08,422	44k	INFO	Train Epoch: 304 [26%]
2023-05-12 00:56:08,422	44k	INFO	Losses: [2.387110948562622, 2.7058818340301514, 14.592483520507812, 21.623506546020508, 0.9172285199165344], step: 46400, lr: 2.888493024408392e-05, reference_loss: 42.22621154785156
2023-05-12 00:56:59,821	44k	INFO	====> Epoch: 304, cost 86.79 s
2023-05-12 00:57:56,524	44k	INFO	Train Epoch: 305 [57%]
2023-05-12 00:57:56,524	44k	INFO	Losses: [2.4622392654418945, 2.3903815746307373, 10.271135330200195, 14.112380027770996, 0.3740549087524414], step: 46600, lr: 2.8881319627803408e-05, reference_loss: 29.610191345214844
2023-05-12 00:58:26,824	44k	INFO	====> Epoch: 305, cost 87.00 s
2023-05-12 00:59:44,598	44k	INFO	Train Epoch: 306 [88%]
2023-05-12 00:59:44,599	44k	INFO	Losses: [2.487091541290283, 2.387373685836792, 9.689733505249023, 17.634672164916992, 0.7646937370300293], step: 46800, lr: 2.887770946284993e-05, reference_loss: 32.963565826416016
2023-05-12 00:59:53,680	44k	INFO	====> Epoch: 306, cost 86.86 s
2023-05-12 01:01:20,127	44k	INFO	====> Epoch: 307, cost 86.45 s
2023-05-12 01:01:50,348	44k	INFO	Train Epoch: 308 [18%]
2023-05-12 01:01:50,349	44k	INFO	Losses: [2.3481314182281494, 2.5659570693969727, 7.266246795654297, 14.47097110748291, 0.808350682258606], step: 47000, lr: 2.8870490486698423e-05, reference_loss: 27.459657669067383
2023-05-12 01:01:55,905	44k	INFO	Saving model and optimizer state at iteration 308 to ./logs\44k\G_47000.pth
2023-05-12 01:01:56,718	44k	INFO	Saving model and optimizer state at iteration 308 to ./logs\44k\D_47000.pth
2023-05-12 01:01:57,409	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\G_44000.pth
2023-05-12 01:01:57,455	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\D_44000.pth
2023-05-12 01:02:54,072	44k	INFO	====> Epoch: 308, cost 93.95 s
2023-05-12 01:03:45,438	44k	INFO	Train Epoch: 309 [49%]
2023-05-12 01:03:45,438	44k	INFO	Losses: [2.4460787773132324, 2.30411434173584, 12.190338134765625, 20.029176712036133, 0.8841606974601746], step: 47200, lr: 2.8866881675387583e-05, reference_loss: 37.85386657714844
2023-05-12 01:04:21,154	44k	INFO	====> Epoch: 309, cost 87.08 s
2023-05-12 01:05:33,737	44k	INFO	Train Epoch: 310 [80%]
2023-05-12 01:05:33,738	44k	INFO	Losses: [2.3092551231384277, 2.3792567253112793, 14.208582878112793, 19.56374740600586, 0.978696346282959], step: 47400, lr: 2.886327331517816e-05, reference_loss: 39.439537048339844
2023-05-12 01:05:48,301	44k	INFO	====> Epoch: 310, cost 87.15 s
2023-05-12 01:07:15,531	44k	INFO	====> Epoch: 311, cost 87.23 s
2023-05-12 01:07:40,468	44k	INFO	Train Epoch: 312 [10%]
2023-05-12 01:07:40,469	44k	INFO	Losses: [2.330749988555908, 2.2947089672088623, 10.925325393676758, 20.311233520507812, 0.7993677258491516], step: 47600, lr: 2.8856057947838005e-05, reference_loss: 36.66138458251953
2023-05-12 01:08:42,644	44k	INFO	====> Epoch: 312, cost 87.11 s
2023-05-12 01:09:28,547	44k	INFO	Train Epoch: 313 [41%]
2023-05-12 01:09:28,548	44k	INFO	Losses: [2.5556211471557617, 2.1875720024108887, 11.887537002563477, 20.122758865356445, 0.9945183992385864], step: 47800, lr: 2.8852450940594525e-05, reference_loss: 37.74800491333008
2023-05-12 01:10:09,693	44k	INFO	====> Epoch: 313, cost 87.05 s
2023-05-12 01:11:16,820	44k	INFO	Train Epoch: 314 [72%]
2023-05-12 01:11:16,821	44k	INFO	Losses: [2.2254743576049805, 2.639667510986328, 10.683056831359863, 20.9825439453125, 0.4391290545463562], step: 48000, lr: 2.884884438422695e-05, reference_loss: 36.969871520996094
2023-05-12 01:11:22,327	44k	INFO	Saving model and optimizer state at iteration 314 to ./logs\44k\G_48000.pth
2023-05-12 01:11:23,154	44k	INFO	Saving model and optimizer state at iteration 314 to ./logs\44k\D_48000.pth
2023-05-12 01:11:23,835	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\G_45000.pth
2023-05-12 01:11:23,872	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\D_45000.pth
2023-05-12 01:11:43,537	44k	INFO	====> Epoch: 314, cost 93.84 s
2023-05-12 01:13:10,303	44k	INFO	====> Epoch: 315, cost 86.77 s
2023-05-12 01:13:29,999	44k	INFO	Train Epoch: 316 [3%]
2023-05-12 01:13:29,999	44k	INFO	Losses: [2.30355167388916, 2.469939947128296, 11.87990665435791, 18.41738510131836, 0.6748366355895996], step: 48200, lr: 2.8841632623894083e-05, reference_loss: 35.74562072753906
2023-05-12 01:14:37,554	44k	INFO	====> Epoch: 316, cost 87.25 s
2023-05-12 01:15:18,155	44k	INFO	Train Epoch: 317 [33%]
2023-05-12 01:15:18,156	44k	INFO	Losses: [2.3286733627319336, 2.447099447250366, 13.856201171875, 19.788793563842773, 0.4912005662918091], step: 48400, lr: 2.8838027419816096e-05, reference_loss: 38.91196823120117
2023-05-12 01:16:04,735	44k	INFO	====> Epoch: 317, cost 87.18 s
2023-05-12 01:17:06,433	44k	INFO	Train Epoch: 318 [64%]
2023-05-12 01:17:06,434	44k	INFO	Losses: [2.553659677505493, 2.232065200805664, 12.68171501159668, 18.623472213745117, 0.34664177894592285], step: 48600, lr: 2.883442266638862e-05, reference_loss: 36.43755340576172
2023-05-12 01:17:31,853	44k	INFO	====> Epoch: 318, cost 87.12 s
2023-05-12 01:18:54,509	44k	INFO	Train Epoch: 319 [95%]
2023-05-12 01:18:54,510	44k	INFO	Losses: [2.353644371032715, 2.110971450805664, 10.525450706481934, 18.927993774414062, 0.6254878044128418], step: 48800, lr: 2.883081836355532e-05, reference_loss: 34.543548583984375
2023-05-12 01:18:58,751	44k	INFO	====> Epoch: 319, cost 86.90 s
2023-05-12 01:20:25,568	44k	INFO	====> Epoch: 320, cost 86.82 s
2023-05-12 01:21:00,706	44k	INFO	Train Epoch: 321 [25%]
2023-05-12 01:21:00,706	44k	INFO	Losses: [2.8630709648132324, 2.2152042388916016, 8.358444213867188, 19.07183837890625, 0.3670879900455475], step: 49000, lr: 2.8823611109445964e-05, reference_loss: 32.875648498535156
2023-05-12 01:21:06,206	44k	INFO	Saving model and optimizer state at iteration 321 to ./logs\44k\G_49000.pth
2023-05-12 01:21:06,923	44k	INFO	Saving model and optimizer state at iteration 321 to ./logs\44k\D_49000.pth
2023-05-12 01:21:07,594	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\G_46000.pth
2023-05-12 01:21:07,638	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\D_46000.pth
2023-05-12 01:21:59,375	44k	INFO	====> Epoch: 321, cost 93.81 s
2023-05-12 01:22:55,624	44k	INFO	Train Epoch: 322 [56%]
2023-05-12 01:22:55,625	44k	INFO	Losses: [2.4059181213378906, 2.1029815673828125, 14.136219024658203, 20.452367782592773, 0.8831208348274231], step: 49200, lr: 2.882000815805728e-05, reference_loss: 39.98060989379883
2023-05-12 01:23:26,575	44k	INFO	====> Epoch: 322, cost 87.20 s
2023-05-12 01:24:44,016	44k	INFO	Train Epoch: 323 [87%]
2023-05-12 01:24:44,017	44k	INFO	Losses: [2.3768985271453857, 2.436417818069458, 13.863051414489746, 18.966707229614258, 0.7688087224960327], step: 49400, lr: 2.8816405657037522e-05, reference_loss: 38.41188430786133
2023-05-12 01:24:53,651	44k	INFO	====> Epoch: 323, cost 87.08 s
2023-05-12 01:26:20,075	44k	INFO	====> Epoch: 324, cost 86.42 s
2023-05-12 01:26:49,956	44k	INFO	Train Epoch: 325 [18%]
2023-05-12 01:26:49,956	44k	INFO	Losses: [2.3294930458068848, 2.436582565307617, 12.034727096557617, 19.019193649291992, 0.4922015070915222], step: 49600, lr: 2.8809202005879602e-05, reference_loss: 36.312198638916016
2023-05-12 01:27:47,111	44k	INFO	====> Epoch: 325, cost 87.04 s
2023-05-12 01:28:37,805	44k	INFO	Train Epoch: 326 [48%]
2023-05-12 01:28:37,805	44k	INFO	Losses: [2.3499183654785156, 2.2858476638793945, 14.409664154052734, 15.920552253723145, 0.7514058351516724], step: 49800, lr: 2.8805600855628865e-05, reference_loss: 35.71738815307617
2023-05-12 01:29:13,948	44k	INFO	====> Epoch: 326, cost 86.84 s
2023-05-12 01:30:26,142	44k	INFO	Train Epoch: 327 [79%]
2023-05-12 01:30:26,143	44k	INFO	Losses: [2.288236379623413, 2.291294574737549, 13.502463340759277, 20.392406463623047, 0.6512307524681091], step: 50000, lr: 2.880200015552191e-05, reference_loss: 39.12562942504883
2023-05-12 01:30:31,704	44k	INFO	Saving model and optimizer state at iteration 327 to ./logs\44k\G_50000.pth
2023-05-12 01:30:32,652	44k	INFO	Saving model and optimizer state at iteration 327 to ./logs\44k\D_50000.pth
2023-05-12 01:30:33,341	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\G_47000.pth
2023-05-12 01:30:33,385	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\D_47000.pth
2023-05-12 01:30:48,093	44k	INFO	====> Epoch: 327, cost 94.15 s
2023-05-12 01:32:14,785	44k	INFO	====> Epoch: 328, cost 86.69 s
2023-05-12 01:32:39,281	44k	INFO	Train Epoch: 329 [10%]
2023-05-12 01:32:39,281	44k	INFO	Losses: [2.5760297775268555, 2.0475826263427734, 8.679187774658203, 18.01605796813965, 0.6341844797134399], step: 50200, lr: 2.879480010551428e-05, reference_loss: 31.95304298400879
2023-05-12 01:33:41,846	44k	INFO	====> Epoch: 329, cost 87.06 s
2023-05-12 01:34:27,354	44k	INFO	Train Epoch: 330 [41%]
2023-05-12 01:34:27,355	44k	INFO	Losses: [2.511593818664551, 2.1043527126312256, 13.312424659729004, 20.48750877380371, 0.917515218257904], step: 50400, lr: 2.879120075550109e-05, reference_loss: 39.33339309692383
2023-05-12 01:35:08,859	44k	INFO	====> Epoch: 330, cost 87.01 s
2023-05-12 01:36:15,389	44k	INFO	Train Epoch: 331 [71%]
2023-05-12 01:36:15,389	44k	INFO	Losses: [2.3014283180236816, 2.235219955444336, 12.459945678710938, 21.13761329650879, 0.7915703654289246], step: 50600, lr: 2.8787601855406652e-05, reference_loss: 38.925777435302734
2023-05-12 01:36:35,845	44k	INFO	====> Epoch: 331, cost 86.99 s
2023-05-12 01:38:02,420	44k	INFO	====> Epoch: 332, cost 86.57 s
2023-05-12 01:38:21,436	44k	INFO	Train Epoch: 333 [2%]
2023-05-12 01:38:21,437	44k	INFO	Losses: [2.326077699661255, 2.612873077392578, 11.030153274536133, 17.739837646484375, 0.7160561084747314], step: 50800, lr: 2.878040540474908e-05, reference_loss: 34.42499923706055
2023-05-12 01:39:29,323	44k	INFO	====> Epoch: 333, cost 86.90 s
2023-05-12 01:40:09,620	44k	INFO	Train Epoch: 334 [33%]
2023-05-12 01:40:09,621	44k	INFO	Losses: [2.2525033950805664, 1.9147224426269531, 17.99275779724121, 18.474515914916992, 0.5406356453895569], step: 51000, lr: 2.8776807854073486e-05, reference_loss: 41.175132751464844
2023-05-12 01:40:15,069	44k	INFO	Saving model and optimizer state at iteration 334 to ./logs\44k\G_51000.pth
2023-05-12 01:40:15,889	44k	INFO	Saving model and optimizer state at iteration 334 to ./logs\44k\D_51000.pth
2023-05-12 01:40:16,578	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\G_48000.pth
2023-05-12 01:40:16,621	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\D_48000.pth
2023-05-12 01:41:03,320	44k	INFO	====> Epoch: 334, cost 94.00 s
2023-05-12 01:42:04,386	44k	INFO	Train Epoch: 335 [63%]
2023-05-12 01:42:04,386	44k	INFO	Losses: [2.64439058303833, 2.156479597091675, 10.21287727355957, 19.386377334594727, 0.8178606033325195], step: 51200, lr: 2.8773210753091726e-05, reference_loss: 35.21798324584961
2023-05-12 01:42:30,304	44k	INFO	====> Epoch: 335, cost 86.98 s
2023-05-12 01:43:52,594	44k	INFO	Train Epoch: 336 [94%]
2023-05-12 01:43:52,595	44k	INFO	Losses: [2.377340078353882, 2.4160428047180176, 12.378716468811035, 19.029603958129883, 1.0963490009307861], step: 51400, lr: 2.8769614101747588e-05, reference_loss: 37.29805374145508
2023-05-12 01:43:57,236	44k	INFO	====> Epoch: 336, cost 86.93 s
2023-05-12 01:45:23,761	44k	INFO	====> Epoch: 337, cost 86.52 s
2023-05-12 01:45:58,628	44k	INFO	Train Epoch: 338 [25%]
2023-05-12 01:45:58,628	44k	INFO	Losses: [2.4006423950195312, 2.4499778747558594, 6.625148296356201, 16.560646057128906, 0.5126820206642151], step: 51600, lr: 2.876242214774737e-05, reference_loss: 28.549097061157227
2023-05-12 01:46:50,855	44k	INFO	====> Epoch: 338, cost 87.09 s
2023-05-12 01:47:46,819	44k	INFO	Train Epoch: 339 [56%]
2023-05-12 01:47:46,820	44k	INFO	Losses: [2.4874658584594727, 2.2976598739624023, 5.2647271156311035, 19.872941970825195, 0.9726997017860413], step: 51800, lr: 2.87588268449789e-05, reference_loss: 30.89549446105957
2023-05-12 01:48:18,144	44k	INFO	====> Epoch: 339, cost 87.29 s
2023-05-12 01:49:35,389	44k	INFO	Train Epoch: 340 [86%]
2023-05-12 01:49:35,389	44k	INFO	Losses: [2.4471659660339355, 2.3027732372283936, 9.362944602966309, 19.69312286376953, 1.0759491920471191], step: 52000, lr: 2.8755231991623277e-05, reference_loss: 34.8819580078125
2023-05-12 01:49:40,985	44k	INFO	Saving model and optimizer state at iteration 340 to ./logs\44k\G_52000.pth
2023-05-12 01:49:41,749	44k	INFO	Saving model and optimizer state at iteration 340 to ./logs\44k\D_52000.pth
2023-05-12 01:49:42,429	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\G_49000.pth
2023-05-12 01:49:42,478	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\D_49000.pth
2023-05-12 01:49:52,376	44k	INFO	====> Epoch: 340, cost 94.23 s
2023-05-12 01:51:19,339	44k	INFO	====> Epoch: 341, cost 86.96 s
2023-05-12 01:51:48,886	44k	INFO	Train Epoch: 342 [17%]
2023-05-12 01:51:48,886	44k	INFO	Losses: [2.056257724761963, 2.6781301498413086, 11.904544830322266, 19.655420303344727, 0.4471868872642517], step: 52200, lr: 2.874804363292587e-05, reference_loss: 36.741539001464844
2023-05-12 01:52:46,626	44k	INFO	====> Epoch: 342, cost 87.29 s
2023-05-12 01:53:37,212	44k	INFO	Train Epoch: 343 [48%]
2023-05-12 01:53:37,213	44k	INFO	Losses: [2.4085354804992676, 2.0151638984680176, 14.059934616088867, 17.7779598236084, 0.9267661571502686], step: 52400, lr: 2.8744450127471752e-05, reference_loss: 37.18836212158203
2023-05-12 01:54:13,726	44k	INFO	====> Epoch: 343, cost 87.10 s
2023-05-12 01:55:25,273	44k	INFO	Train Epoch: 344 [78%]
2023-05-12 01:55:25,274	44k	INFO	Losses: [2.221299648284912, 2.4725427627563477, 15.116349220275879, 19.373258590698242, 0.903636634349823], step: 52600, lr: 2.8740857071205818e-05, reference_loss: 40.08708572387695
2023-05-12 01:55:40,690	44k	INFO	====> Epoch: 344, cost 86.96 s
2023-05-12 01:57:07,225	44k	INFO	====> Epoch: 345, cost 86.54 s
2023-05-12 01:57:31,280	44k	INFO	Train Epoch: 346 [9%]
2023-05-12 01:57:31,280	44k	INFO	Losses: [2.626565456390381, 2.385859251022339, 18.240093231201172, 22.404682159423828, 0.5306336879730225], step: 52800, lr: 2.8733672306013904e-05, reference_loss: 46.18783187866211
2023-05-12 01:58:34,193	44k	INFO	====> Epoch: 346, cost 86.97 s
2023-05-12 01:59:19,119	44k	INFO	Train Epoch: 347 [40%]
2023-05-12 01:59:19,120	44k	INFO	Losses: [2.2890186309814453, 2.8318262100219727, 11.400150299072266, 20.889436721801758, 0.9493915438652039], step: 53000, lr: 2.873008059697565e-05, reference_loss: 38.35982131958008
2023-05-12 01:59:24,897	44k	INFO	Saving model and optimizer state at iteration 347 to ./logs\44k\G_53000.pth
2023-05-12 01:59:25,592	44k	INFO	Saving model and optimizer state at iteration 347 to ./logs\44k\D_53000.pth
2023-05-12 01:59:26,293	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\G_50000.pth
2023-05-12 01:59:26,344	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\D_50000.pth
2023-05-12 02:00:08,121	44k	INFO	====> Epoch: 347, cost 93.93 s
2023-05-12 02:01:15,219	44k	INFO	Train Epoch: 348 [71%]
2023-05-12 02:01:15,249	44k	INFO	Losses: [2.135772228240967, 2.430205821990967, 13.827906608581543, 19.59593963623047, 0.6215983033180237], step: 53200, lr: 2.872648933690103e-05, reference_loss: 38.611419677734375
2023-05-12 02:01:36,625	44k	INFO	====> Epoch: 348, cost 88.50 s
2023-05-12 02:03:08,824	44k	INFO	====> Epoch: 349, cost 92.20 s
2023-05-12 02:03:29,335	44k	INFO	Train Epoch: 350 [1%]
2023-05-12 02:03:29,335	44k	INFO	Losses: [2.505711078643799, 2.1741676330566406, 14.022588729858398, 19.024606704711914, 0.5488510727882385], step: 53400, lr: 2.87193081634182e-05, reference_loss: 38.27592468261719
2023-05-12 02:04:42,226	44k	INFO	====> Epoch: 350, cost 93.40 s
2023-05-12 02:05:22,984	44k	INFO	Train Epoch: 351 [32%]
2023-05-12 02:05:22,985	44k	INFO	Losses: [2.6690385341644287, 1.7679072618484497, 7.874725818634033, 13.426702499389648, 0.4435964524745941], step: 53600, lr: 2.8715718249897772e-05, reference_loss: 26.181970596313477
2023-05-12 02:06:12,245	44k	INFO	====> Epoch: 351, cost 90.02 s
2023-05-12 02:07:17,161	44k	INFO	Train Epoch: 352 [63%]
2023-05-12 02:07:17,161	44k	INFO	Losses: [2.487330436706543, 2.7899012565612793, 12.667108535766602, 18.220800399780273, 1.3599427938461304], step: 53800, lr: 2.8712128785116532e-05, reference_loss: 37.525081634521484
2023-05-12 02:07:44,237	44k	INFO	====> Epoch: 352, cost 91.99 s
2023-05-12 02:09:10,263	44k	INFO	Train Epoch: 353 [93%]
2023-05-12 02:09:10,263	44k	INFO	Losses: [2.3179638385772705, 2.493180274963379, 12.37921142578125, 20.10038948059082, 0.9981839060783386], step: 54000, lr: 2.8708539769018392e-05, reference_loss: 38.2889289855957
2023-05-12 02:09:15,916	44k	INFO	Saving model and optimizer state at iteration 353 to ./logs\44k\G_54000.pth
2023-05-12 02:09:16,886	44k	INFO	Saving model and optimizer state at iteration 353 to ./logs\44k\D_54000.pth
2023-05-12 02:09:17,647	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\G_51000.pth
2023-05-12 02:09:17,695	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\D_51000.pth
2023-05-12 02:09:22,748	44k	INFO	====> Epoch: 353, cost 98.51 s
2023-05-12 02:10:51,117	44k	INFO	====> Epoch: 354, cost 88.37 s
2023-05-12 02:11:26,040	44k	INFO	Train Epoch: 355 [24%]
2023-05-12 02:11:26,041	44k	INFO	Losses: [2.1455252170562744, 2.470700263977051, 15.370835304260254, 23.028362274169922, 0.6955864429473877], step: 54200, lr: 2.870136308264707e-05, reference_loss: 43.71100997924805
2023-05-12 02:12:19,889	44k	INFO	====> Epoch: 355, cost 88.77 s
2023-05-12 02:13:16,123	44k	INFO	Train Epoch: 356 [55%]
2023-05-12 02:13:16,124	44k	INFO	Losses: [2.0428402423858643, 2.7300283908843994, 9.124566078186035, 20.02855682373047, 0.8733258843421936], step: 54400, lr: 2.8697775412261737e-05, reference_loss: 34.79931640625
2023-05-12 02:13:48,641	44k	INFO	====> Epoch: 356, cost 88.75 s
2023-05-12 02:15:06,563	44k	INFO	Train Epoch: 357 [86%]
2023-05-12 02:15:06,564	44k	INFO	Losses: [2.4102957248687744, 2.350048542022705, 9.982596397399902, 14.040148735046387, 0.6876527070999146], step: 54600, lr: 2.8694188190335202e-05, reference_loss: 29.470741271972656
2023-05-12 02:15:17,458	44k	INFO	====> Epoch: 357, cost 88.82 s
2023-05-12 02:16:45,755	44k	INFO	====> Epoch: 358, cost 88.30 s
2023-05-12 02:17:15,132	44k	INFO	Train Epoch: 359 [16%]
2023-05-12 02:17:15,132	44k	INFO	Losses: [2.596932888031006, 2.1161811351776123, 13.290535926818848, 19.867136001586914, 0.6504670977592468], step: 54800, lr: 2.8687015091634307e-05, reference_loss: 38.52125549316406
2023-05-12 02:18:14,495	44k	INFO	====> Epoch: 359, cost 88.74 s
2023-05-12 02:19:05,403	44k	INFO	Train Epoch: 360 [47%]
2023-05-12 02:19:05,404	44k	INFO	Losses: [2.4643959999084473, 2.1137897968292236, 16.21176528930664, 19.83914566040039, 0.9418043494224548], step: 55000, lr: 2.8683429214747853e-05, reference_loss: 41.570899963378906
2023-05-12 02:19:10,885	44k	INFO	Saving model and optimizer state at iteration 360 to ./logs\44k\G_55000.pth
2023-05-12 02:19:11,661	44k	INFO	Saving model and optimizer state at iteration 360 to ./logs\44k\D_55000.pth
2023-05-12 02:19:12,335	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\G_52000.pth
2023-05-12 02:19:12,384	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\D_52000.pth
2023-05-12 02:19:50,032	44k	INFO	====> Epoch: 360, cost 95.54 s
2023-05-12 02:21:04,059	44k	INFO	Train Epoch: 361 [78%]
2023-05-12 02:21:04,060	44k	INFO	Losses: [2.5475685596466064, 2.4578309059143066, 9.741727828979492, 20.154281616210938, 0.6961809396743774], step: 55200, lr: 2.8679843786096008e-05, reference_loss: 35.597591400146484
2023-05-12 02:21:20,229	44k	INFO	====> Epoch: 361, cost 90.20 s
2023-05-12 02:22:48,592	44k	INFO	====> Epoch: 362, cost 88.36 s
2023-05-12 02:23:12,610	44k	INFO	Train Epoch: 363 [8%]
2023-05-12 02:23:12,611	44k	INFO	Losses: [2.1323788166046143, 2.69551420211792, 14.48270034790039, 22.146411895751953, 1.365592122077942], step: 55400, lr: 2.867267427327204e-05, reference_loss: 42.82259750366211
2023-05-12 02:24:17,508	44k	INFO	====> Epoch: 363, cost 88.92 s
2023-05-12 02:25:02,968	44k	INFO	Train Epoch: 364 [39%]
2023-05-12 02:25:02,969	44k	INFO	Losses: [2.456886053085327, 2.774120807647705, 13.439095497131348, 20.270658493041992, 0.8473737835884094], step: 55600, lr: 2.866909018898788e-05, reference_loss: 39.78813171386719
2023-05-12 02:25:46,534	44k	INFO	====> Epoch: 364, cost 89.03 s
2023-05-12 02:26:53,429	44k	INFO	Train Epoch: 365 [70%]
2023-05-12 02:26:53,430	44k	INFO	Losses: [2.1103031635284424, 2.6911797523498535, 9.798227310180664, 16.85303497314453, 0.7356771230697632], step: 55800, lr: 2.8665506552714255e-05, reference_loss: 32.18842315673828
2023-05-12 02:27:15,273	44k	INFO	====> Epoch: 365, cost 88.74 s
2023-05-12 02:28:43,428	44k	INFO	====> Epoch: 366, cost 88.16 s
2023-05-12 02:29:01,862	44k	INFO	Train Epoch: 367 [1%]
2023-05-12 02:29:01,863	44k	INFO	Losses: [2.4494853019714355, 2.0860307216644287, 11.417633056640625, 13.177638053894043, 0.9124957323074341], step: 56000, lr: 2.8658340623974612e-05, reference_loss: 30.043283462524414
2023-05-12 02:29:07,468	44k	INFO	Saving model and optimizer state at iteration 367 to ./logs\44k\G_56000.pth
2023-05-12 02:29:08,254	44k	INFO	Saving model and optimizer state at iteration 367 to ./logs\44k\D_56000.pth
2023-05-12 02:29:08,936	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\G_53000.pth
2023-05-12 02:29:08,991	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\D_53000.pth
2023-05-12 02:30:19,073	44k	INFO	====> Epoch: 367, cost 95.64 s
2023-05-12 02:30:58,945	44k	INFO	Train Epoch: 368 [31%]
2023-05-12 02:30:58,946	44k	INFO	Losses: [2.0315427780151367, 2.8374533653259277, 14.525748252868652, 19.096439361572266, 0.7872524261474609], step: 56200, lr: 2.8654758331396613e-05, reference_loss: 39.27843475341797
2023-05-12 02:31:47,881	44k	INFO	====> Epoch: 368, cost 88.81 s
2023-05-12 02:32:49,455	44k	INFO	Train Epoch: 369 [62%]
2023-05-12 02:32:49,456	44k	INFO	Losses: [2.388775110244751, 2.0685250759124756, 8.523904800415039, 14.944770812988281, 0.9808571338653564], step: 56400, lr: 2.8651176486605187e-05, reference_loss: 28.90683364868164
2023-05-12 02:33:17,588	44k	INFO	====> Epoch: 369, cost 89.71 s
2023-05-12 02:34:40,738	44k	INFO	Train Epoch: 370 [93%]
2023-05-12 02:34:40,739	44k	INFO	Losses: [2.529674530029297, 2.3643620014190674, 8.816594123840332, 18.506458282470703, 0.8089593052864075], step: 56600, lr: 2.864759508954436e-05, reference_loss: 33.02604675292969
2023-05-12 02:34:46,425	44k	INFO	====> Epoch: 370, cost 88.84 s
2023-05-12 02:36:14,332	44k	INFO	====> Epoch: 371, cost 87.91 s
2023-05-12 02:36:48,274	44k	INFO	Train Epoch: 372 [24%]
2023-05-12 02:36:48,275	44k	INFO	Losses: [2.4553580284118652, 2.410719871520996, 11.060189247131348, 17.920082092285156, 0.912835955619812], step: 56800, lr: 2.864043363839064e-05, reference_loss: 34.759185791015625
2023-05-12 02:37:41,639	44k	INFO	====> Epoch: 372, cost 87.31 s
2023-05-12 02:38:36,735	44k	INFO	Train Epoch: 373 [54%]
2023-05-12 02:38:36,735	44k	INFO	Losses: [2.2265753746032715, 2.37687349319458, 10.90886116027832, 18.66537857055664, 0.7422361969947815], step: 57000, lr: 2.8636853584185842e-05, reference_loss: 34.919925689697266
2023-05-12 02:38:42,242	44k	INFO	Saving model and optimizer state at iteration 373 to ./logs\44k\G_57000.pth
2023-05-12 02:38:43,039	44k	INFO	Saving model and optimizer state at iteration 373 to ./logs\44k\D_57000.pth
2023-05-12 02:38:43,707	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\G_54000.pth
2023-05-12 02:38:43,761	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\D_54000.pth
2023-05-12 02:39:15,788	44k	INFO	====> Epoch: 373, cost 94.15 s
2023-05-12 02:40:32,230	44k	INFO	Train Epoch: 374 [85%]
2023-05-12 02:40:32,231	44k	INFO	Losses: [2.186521530151367, 2.876633882522583, 16.28428840637207, 22.0592041015625, 0.7332062125205994], step: 57200, lr: 2.8633273977487817e-05, reference_loss: 44.139854431152344
2023-05-12 02:40:43,415	44k	INFO	====> Epoch: 374, cost 87.63 s
2023-05-12 02:42:10,344	44k	INFO	====> Epoch: 375, cost 86.93 s
2023-05-12 02:42:39,339	44k	INFO	Train Epoch: 376 [16%]
2023-05-12 02:42:39,340	44k	INFO	Losses: [2.287039279937744, 2.543053388595581, 13.70235538482666, 19.574756622314453, 0.44993361830711365], step: 57400, lr: 2.862611610638835e-05, reference_loss: 38.55713653564453
2023-05-12 02:43:38,361	44k	INFO	====> Epoch: 376, cost 88.02 s
2023-05-12 02:44:28,168	44k	INFO	Train Epoch: 377 [46%]
2023-05-12 02:44:28,169	44k	INFO	Losses: [2.1254513263702393, 2.6186904907226562, 16.114530563354492, 21.063730239868164, 1.4693626165390015], step: 57600, lr: 2.862253784187505e-05, reference_loss: 43.39176559448242
2023-05-12 02:45:05,846	44k	INFO	====> Epoch: 377, cost 87.48 s
2023-05-12 02:46:16,454	44k	INFO	Train Epoch: 378 [77%]
2023-05-12 02:46:16,454	44k	INFO	Losses: [2.1241137981414795, 2.312547206878662, 20.073745727539062, 19.273548126220703, 0.7099340558052063], step: 57800, lr: 2.8618960024644812e-05, reference_loss: 44.49388885498047
2023-05-12 02:46:33,009	44k	INFO	====> Epoch: 378, cost 87.16 s
2023-05-12 02:48:00,759	44k	INFO	====> Epoch: 379, cost 87.75 s
2023-05-12 02:48:23,942	44k	INFO	Train Epoch: 380 [8%]
2023-05-12 02:48:23,942	44k	INFO	Losses: [2.456301689147949, 2.2326016426086426, 11.413169860839844, 18.76596450805664, 0.4634284973144531], step: 58000, lr: 2.86118057318099e-05, reference_loss: 35.33146667480469
2023-05-12 02:48:29,433	44k	INFO	Saving model and optimizer state at iteration 380 to ./logs\44k\G_58000.pth
2023-05-12 02:48:30,353	44k	INFO	Saving model and optimizer state at iteration 380 to ./logs\44k\D_58000.pth
2023-05-12 02:48:31,113	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\G_55000.pth
2023-05-12 02:48:31,170	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\D_55000.pth
2023-05-12 02:49:34,997	44k	INFO	====> Epoch: 380, cost 94.24 s
2023-05-12 02:50:19,439	44k	INFO	Train Epoch: 381 [39%]
2023-05-12 02:50:19,440	44k	INFO	Losses: [2.5525875091552734, 2.302884340286255, 6.546157360076904, 15.209617614746094, 0.8096770644187927], step: 58200, lr: 2.8608229256093423e-05, reference_loss: 27.420923233032227
2023-05-12 02:51:02,494	44k	INFO	====> Epoch: 381, cost 87.50 s
2023-05-12 02:52:08,041	44k	INFO	Train Epoch: 382 [69%]
2023-05-12 02:52:08,042	44k	INFO	Losses: [2.441138744354248, 2.3071632385253906, 13.418956756591797, 16.853052139282227, 0.6515924334526062], step: 58400, lr: 2.860465322743641e-05, reference_loss: 35.67190170288086
2023-05-12 02:52:29,817	44k	INFO	====> Epoch: 382, cost 87.32 s
2023-05-12 02:53:56,328	44k	INFO	====> Epoch: 383, cost 86.51 s
2023-05-12 02:54:14,257	44k	INFO	Train Epoch: 384 [0%]
2023-05-12 02:54:14,258	44k	INFO	Losses: [2.4921183586120605, 2.2718024253845215, 13.16602611541748, 21.242778778076172, 0.8981888890266418], step: 58600, lr: 2.8597502511077255e-05, reference_loss: 40.07091522216797
2023-05-12 02:55:23,718	44k	INFO	====> Epoch: 384, cost 87.39 s
2023-05-12 02:56:02,345	44k	INFO	Train Epoch: 385 [31%]
2023-05-12 02:56:02,346	44k	INFO	Losses: [2.5569381713867188, 2.3658721446990967, 9.402472496032715, 19.0131893157959, 0.6612781882286072], step: 58800, lr: 2.8593927823263368e-05, reference_loss: 33.99974822998047
2023-05-12 02:56:50,649	44k	INFO	====> Epoch: 385, cost 86.93 s
2023-05-12 02:57:50,732	44k	INFO	Train Epoch: 386 [61%]
2023-05-12 02:57:50,733	44k	INFO	Losses: [2.385152816772461, 2.594188690185547, 10.243889808654785, 20.528438568115234, 0.9784220457077026], step: 59000, lr: 2.859035358228546e-05, reference_loss: 36.7300910949707
2023-05-12 02:57:56,197	44k	INFO	Saving model and optimizer state at iteration 386 to ./logs\44k\G_59000.pth
2023-05-12 02:57:56,967	44k	INFO	Saving model and optimizer state at iteration 386 to ./logs\44k\D_59000.pth
2023-05-12 02:57:57,643	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\G_56000.pth
2023-05-12 02:57:57,697	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\D_56000.pth
2023-05-12 02:58:24,963	44k	INFO	====> Epoch: 386, cost 94.31 s
2023-05-12 02:59:46,166	44k	INFO	Train Epoch: 387 [92%]
2023-05-12 02:59:46,166	44k	INFO	Losses: [2.4257919788360596, 2.218857765197754, 9.705286979675293, 21.481491088867188, 0.7963300943374634], step: 59200, lr: 2.8586779788087672e-05, reference_loss: 36.62775802612305
2023-05-12 02:59:52,174	44k	INFO	====> Epoch: 387, cost 87.21 s
2023-05-12 03:01:19,425	44k	INFO	====> Epoch: 388, cost 87.25 s
2023-05-12 03:01:52,827	44k	INFO	Train Epoch: 389 [23%]
2023-05-12 03:01:52,828	44k	INFO	Losses: [2.321176052093506, 2.635291576385498, 11.58166217803955, 21.54326629638672, 0.7177306413650513], step: 59400, lr: 2.8579633539809083e-05, reference_loss: 38.799129486083984
2023-05-12 03:02:46,451	44k	INFO	====> Epoch: 389, cost 87.03 s
2023-05-12 03:03:40,930	44k	INFO	Train Epoch: 390 [54%]
2023-05-12 03:03:40,930	44k	INFO	Losses: [2.4746806621551514, 2.5311200618743896, 12.478462219238281, 16.906023025512695, 0.6712642312049866], step: 59600, lr: 2.8576061085616605e-05, reference_loss: 35.06155014038086
2023-05-12 03:04:13,536	44k	INFO	====> Epoch: 390, cost 87.08 s
2023-05-12 03:05:29,213	44k	INFO	Train Epoch: 391 [84%]
2023-05-12 03:05:29,214	44k	INFO	Losses: [2.4147493839263916, 2.478468656539917, 10.078948974609375, 18.84165382385254, 0.8695682287216187], step: 59800, lr: 2.8572489077980904e-05, reference_loss: 34.683387756347656
2023-05-12 03:05:40,681	44k	INFO	====> Epoch: 391, cost 87.14 s
2023-05-12 03:07:07,801	44k	INFO	====> Epoch: 392, cost 87.12 s
2023-05-12 03:07:35,856	44k	INFO	Train Epoch: 393 [15%]
2023-05-12 03:07:35,857	44k	INFO	Losses: [2.355435848236084, 2.3938512802124023, 12.078808784484863, 22.25308609008789, 0.7829247117042542], step: 60000, lr: 2.8565346402156547e-05, reference_loss: 39.86410903930664
2023-05-12 03:07:41,272	44k	INFO	Saving model and optimizer state at iteration 393 to ./logs\44k\G_60000.pth
2023-05-12 03:07:42,047	44k	INFO	Saving model and optimizer state at iteration 393 to ./logs\44k\D_60000.pth
2023-05-12 03:07:42,723	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\G_57000.pth
2023-05-12 03:07:42,773	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\D_57000.pth
2023-05-12 03:08:42,095	44k	INFO	====> Epoch: 393, cost 94.30 s
2023-05-12 03:09:31,322	44k	INFO	Train Epoch: 394 [46%]
2023-05-12 03:09:31,323	44k	INFO	Losses: [2.4283063411712646, 2.6148173809051514, 10.784454345703125, 20.74228858947754, 0.9343788623809814], step: 60200, lr: 2.8561775733856277e-05, reference_loss: 37.50424575805664
2023-05-12 03:10:09,395	44k	INFO	====> Epoch: 394, cost 87.30 s
2023-05-12 03:11:20,165	44k	INFO	Train Epoch: 395 [76%]
2023-05-12 03:11:20,165	44k	INFO	Losses: [2.466671943664551, 2.381155014038086, 14.714022636413574, 20.37222671508789, 0.7701204419136047], step: 60400, lr: 2.8558205511889543e-05, reference_loss: 40.704193115234375
2023-05-12 03:11:36,998	44k	INFO	====> Epoch: 395, cost 87.60 s
2023-05-12 03:13:03,779	44k	INFO	====> Epoch: 396, cost 86.78 s
2023-05-12 03:13:26,684	44k	INFO	Train Epoch: 397 [7%]
2023-05-12 03:13:26,684	44k	INFO	Losses: [1.855776071548462, 3.0590667724609375, 13.506698608398438, 20.01685333251953, 0.5135242938995361], step: 60600, lr: 2.8551066406733528e-05, reference_loss: 38.9519157409668
2023-05-12 03:14:31,014	44k	INFO	====> Epoch: 397, cost 87.23 s
2023-05-12 03:15:15,043	44k	INFO	Train Epoch: 398 [38%]
2023-05-12 03:15:15,044	44k	INFO	Losses: [2.3311688899993896, 2.753591299057007, 14.103012084960938, 19.344404220581055, 0.6492558717727661], step: 60800, lr: 2.8547497523432686e-05, reference_loss: 39.181434631347656
2023-05-12 03:15:58,459	44k	INFO	====> Epoch: 398, cost 87.45 s
2023-05-12 03:17:03,126	44k	INFO	Train Epoch: 399 [69%]
2023-05-12 03:17:03,127	44k	INFO	Losses: [2.0792133808135986, 2.83935284614563, 17.456817626953125, 21.017284393310547, 0.8417804837226868], step: 61000, lr: 2.8543929086242254e-05, reference_loss: 44.23445129394531
2023-05-12 03:17:08,663	44k	INFO	Saving model and optimizer state at iteration 399 to ./logs\44k\G_61000.pth
2023-05-12 03:17:09,446	44k	INFO	Saving model and optimizer state at iteration 399 to ./logs\44k\D_61000.pth
2023-05-12 03:17:10,116	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\G_58000.pth
2023-05-12 03:17:10,164	44k	INFO	.. Free up space by deleting ckpt ./logs\44k\D_58000.pth
2023-05-12 03:17:32,301	44k	INFO	====> Epoch: 399, cost 93.84 s
2023-05-12 03:18:58,526	44k	INFO	Train Epoch: 400 [99%]
2023-05-12 03:18:58,526	44k	INFO	Losses: [2.41349458694458, 2.4074997901916504, 10.494972229003906, 17.17674446105957, 1.1495909690856934], step: 61200, lr: 2.8540361095106474e-05, reference_loss: 33.64229965209961
2023-05-12 03:18:59,773	44k	INFO	====> Epoch: 400, cost 87.47 s