2023-03-15 23:55:48,523 44k INFO {'train': {'log_interval': 200, 'eval_interval': 1000, 'seed': 1234, 'epochs': 40, 'learning_rate': 0.0001, 'betas': [0.8, 0.99], 'eps': 1e-09, 'batch_size': 6, 'fp16_run': False, 'lr_decay': 0.999875, 'segment_size': 10240, 'init_lr_ratio': 1, 'warmup_epochs': 0, 'c_mel': 45, 'c_kl': 1.0, 'use_sr': True, 'max_speclen': 512, 'port': '8001', 'keep_ckpts': 3}, 'data': {'training_files': 'filelists/train.txt', 'validation_files': 'filelists/val.txt', 'max_wav_value': 32768.0, 'sampling_rate': 44100, 'filter_length': 2048, 'hop_length': 512, 'win_length': 2048, 'n_mel_channels': 80, 'mel_fmin': 0.0, 'mel_fmax': 22050}, 'model': {'inter_channels': 192, 'hidden_channels': 192, 'filter_channels': 768, 'n_heads': 2, 'n_layers': 6, 'kernel_size': 3, 'p_dropout': 0.1, 'resblock': '1', 'resblock_kernel_sizes': [3, 7, 11], 'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'upsample_rates': [8, 8, 2, 2, 2], 'upsample_initial_channel': 512, 'upsample_kernel_sizes': [16, 16, 4, 4, 4], 'n_layers_q': 3, 'use_spectral_norm': False, 'gin_channels': 256, 'ssl_dim': 256, 'n_speakers': 200}, 'spk': {'tubaki': 0}, 'model_dir': './logs\\44k'} 2023-03-15 23:55:48,551 44k WARNING git hash values are different. cea6df30(saved) != fd4d47fd(current) 2023-03-15 23:55:50,927 44k INFO Loaded checkpoint './logs\44k\G_0.pth' (iteration 1) 2023-03-15 23:55:51,463 44k INFO Loaded checkpoint './logs\44k\D_0.pth' (iteration 1) 2023-03-15 23:56:06,468 44k INFO Train Epoch: 1 [0%] 2023-03-15 23:56:06,468 44k INFO Losses: [2.6587464809417725, 2.222583293914795, 10.642498970031738, 23.982303619384766, 2.476300001144409], step: 0, lr: 0.0001 2023-03-15 23:56:10,329 44k INFO Saving model and optimizer state at iteration 1 to ./logs\44k\G_0.pth 2023-03-15 23:56:11,066 44k INFO Saving model and optimizer state at iteration 1 to ./logs\44k\D_0.pth 2023-03-15 23:57:28,136 44k INFO Train Epoch: 1 [20%] 2023-03-15 23:57:28,137 44k INFO Losses: [2.3601558208465576, 2.0930838584899902, 11.606171607971191, 26.50650978088379, 2.05717396736145], step: 200, lr: 0.0001 2023-03-15 23:58:43,295 44k INFO Train Epoch: 1 [40%] 2023-03-15 23:58:43,296 44k INFO Losses: [2.292062759399414, 2.5539612770080566, 12.947213172912598, 31.41070556640625, 1.9984899759292603], step: 400, lr: 0.0001 2023-03-15 23:59:57,697 44k INFO Train Epoch: 1 [59%] 2023-03-15 23:59:57,697 44k INFO Losses: [2.2785160541534424, 2.6134746074676514, 14.805261611938477, 23.631526947021484, 1.7349679470062256], step: 600, lr: 0.0001 2023-03-16 00:01:12,975 44k INFO Train Epoch: 1 [79%] 2023-03-16 00:01:12,975 44k INFO Losses: [2.705841541290283, 2.0899863243103027, 11.220514297485352, 22.3753719329834, 1.625312089920044], step: 800, lr: 0.0001 2023-03-16 00:02:29,208 44k INFO Train Epoch: 1 [99%] 2023-03-16 00:02:29,208 44k INFO Losses: [2.4277679920196533, 2.4440817832946777, 11.074688911437988, 20.854204177856445, 1.7993963956832886], step: 1000, lr: 0.0001 2023-03-16 00:02:32,570 44k INFO Saving model and optimizer state at iteration 1 to ./logs\44k\G_1000.pth 2023-03-16 00:02:33,314 44k INFO Saving model and optimizer state at iteration 1 to ./logs\44k\D_1000.pth 2023-03-16 00:02:41,522 44k INFO ====> Epoch: 1, cost 413.00 s 2023-03-16 00:04:03,386 44k INFO Train Epoch: 2 [19%] 2023-03-16 00:04:03,387 44k INFO Losses: [2.6382265090942383, 2.5541462898254395, 12.10607624053955, 30.657400131225586, 1.5671967267990112], step: 1200, lr: 9.99875e-05 2023-03-16 00:05:17,338 44k INFO Train Epoch: 2 [39%] 2023-03-16 00:05:17,338 44k INFO 
Losses: [2.692197561264038, 2.07731032371521, 8.940807342529297, 22.354917526245117, 1.4840569496154785], step: 1400, lr: 9.99875e-05 2023-03-16 00:06:32,476 44k INFO Train Epoch: 2 [58%] 2023-03-16 00:06:32,476 44k INFO Losses: [2.3576385974884033, 2.4125325679779053, 11.953214645385742, 25.181522369384766, 1.875720500946045], step: 1600, lr: 9.99875e-05 2023-03-16 00:07:47,078 44k INFO Train Epoch: 2 [78%] 2023-03-16 00:07:47,079 44k INFO Losses: [2.439793348312378, 2.098926305770874, 11.750472068786621, 22.751220703125, 1.4869418144226074], step: 1800, lr: 9.99875e-05 2023-03-16 00:09:01,876 44k INFO Train Epoch: 2 [98%] 2023-03-16 00:09:01,877 44k INFO Losses: [2.497136354446411, 2.720271348953247, 10.943450927734375, 23.627017974853516, 1.7937601804733276], step: 2000, lr: 9.99875e-05 2023-03-16 00:09:05,312 44k INFO Saving model and optimizer state at iteration 2 to ./logs\44k\G_2000.pth 2023-03-16 00:09:06,023 44k INFO Saving model and optimizer state at iteration 2 to ./logs\44k\D_2000.pth 2023-03-16 00:09:14,228 44k INFO ====> Epoch: 2, cost 392.71 s 2023-03-16 00:10:31,088 44k INFO Train Epoch: 3 [18%] 2023-03-16 00:10:31,088 44k INFO Losses: [2.4825689792633057, 2.2984557151794434, 12.021286010742188, 23.990745544433594, 1.455343246459961], step: 2200, lr: 9.99750015625e-05 2023-03-16 00:11:44,305 44k INFO Train Epoch: 3 [38%] 2023-03-16 00:11:44,305 44k INFO Losses: [2.2955219745635986, 2.4789369106292725, 13.010557174682617, 24.225732803344727, 1.683915138244629], step: 2400, lr: 9.99750015625e-05 2023-03-16 00:12:58,576 44k INFO Train Epoch: 3 [57%] 2023-03-16 00:12:58,577 44k INFO Losses: [2.6952619552612305, 1.9768215417861938, 6.856669902801514, 18.603836059570312, 1.7447389364242554], step: 2600, lr: 9.99750015625e-05 2023-03-16 00:14:12,508 44k INFO Train Epoch: 3 [77%] 2023-03-16 00:14:12,509 44k INFO Losses: [2.5041236877441406, 2.3484256267547607, 11.402677536010742, 21.70722770690918, 1.7252932786941528], step: 2800, lr: 9.99750015625e-05 2023-03-16 00:15:26,524 44k INFO Train Epoch: 3 [97%] 2023-03-16 00:15:26,524 44k INFO Losses: [2.455347776412964, 2.416062355041504, 9.887465476989746, 18.211700439453125, 1.6757378578186035], step: 3000, lr: 9.99750015625e-05 2023-03-16 00:15:29,978 44k INFO Saving model and optimizer state at iteration 3 to ./logs\44k\G_3000.pth 2023-03-16 00:15:30,751 44k INFO Saving model and optimizer state at iteration 3 to ./logs\44k\D_3000.pth 2023-03-16 00:15:42,686 44k INFO ====> Epoch: 3, cost 388.46 s 2023-03-16 00:16:55,905 44k INFO Train Epoch: 4 [17%] 2023-03-16 00:16:55,906 44k INFO Losses: [2.0368709564208984, 2.5300228595733643, 15.623783111572266, 26.45679473876953, 1.8740932941436768], step: 3200, lr: 9.996250468730469e-05 2023-03-16 00:18:09,315 44k INFO Train Epoch: 4 [37%] 2023-03-16 00:18:09,315 44k INFO Losses: [2.271059989929199, 2.2681188583374023, 11.496346473693848, 21.122529983520508, 1.540932059288025], step: 3400, lr: 9.996250468730469e-05 2023-03-16 00:19:23,616 44k INFO Train Epoch: 4 [56%] 2023-03-16 00:19:23,616 44k INFO Losses: [2.5297720432281494, 2.2503387928009033, 9.425283432006836, 18.89899444580078, 1.4544283151626587], step: 3600, lr: 9.996250468730469e-05 2023-03-16 00:20:37,501 44k INFO Train Epoch: 4 [76%] 2023-03-16 00:20:37,501 44k INFO Losses: [2.220167636871338, 2.3577373027801514, 15.710762977600098, 23.97270393371582, 1.6778843402862549], step: 3800, lr: 9.996250468730469e-05 2023-03-16 00:21:51,375 44k INFO Train Epoch: 4 [96%] 2023-03-16 00:21:51,375 44k INFO Losses: [2.547858953475952, 
1.951156735420227, 11.643190383911133, 19.704898834228516, 1.5432096719741821], step: 4000, lr: 9.996250468730469e-05 2023-03-16 00:21:54,656 44k INFO Saving model and optimizer state at iteration 4 to ./logs\44k\G_4000.pth 2023-03-16 00:21:55,418 44k INFO Saving model and optimizer state at iteration 4 to ./logs\44k\D_4000.pth 2023-03-16 00:21:56,127 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_1000.pth 2023-03-16 00:21:56,174 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_1000.pth 2023-03-16 00:22:10,979 44k INFO ====> Epoch: 4, cost 388.29 s 2023-03-16 00:23:20,482 44k INFO Train Epoch: 5 [16%] 2023-03-16 00:23:20,483 44k INFO Losses: [2.360628604888916, 2.5219900608062744, 8.992044448852539, 19.570890426635742, 1.2951933145523071], step: 4200, lr: 9.995000937421877e-05 2023-03-16 00:24:33,820 44k INFO Train Epoch: 5 [36%] 2023-03-16 00:24:33,821 44k INFO Losses: [2.347033977508545, 2.5219650268554688, 10.627484321594238, 25.425058364868164, 1.1682416200637817], step: 4400, lr: 9.995000937421877e-05 2023-03-16 00:25:47,946 44k INFO Train Epoch: 5 [55%] 2023-03-16 00:25:47,946 44k INFO Losses: [2.4415509700775146, 2.56121563911438, 13.33708667755127, 23.068199157714844, 1.659965991973877], step: 4600, lr: 9.995000937421877e-05 2023-03-16 00:27:01,928 44k INFO Train Epoch: 5 [75%] 2023-03-16 00:27:01,928 44k INFO Losses: [2.1675448417663574, 2.5596816539764404, 11.775328636169434, 28.344575881958008, 1.6888072490692139], step: 4800, lr: 9.995000937421877e-05 2023-03-16 00:28:16,019 44k INFO Train Epoch: 5 [95%] 2023-03-16 00:28:16,020 44k INFO Losses: [2.8262901306152344, 2.253598928451538, 5.822007656097412, 19.041465759277344, 1.4445524215698242], step: 5000, lr: 9.995000937421877e-05 2023-03-16 00:28:19,372 44k INFO Saving model and optimizer state at iteration 5 to ./logs\44k\G_5000.pth 2023-03-16 00:28:20,089 44k INFO Saving model and optimizer state at iteration 5 to ./logs\44k\D_5000.pth 2023-03-16 00:28:20,785 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_2000.pth 2023-03-16 00:28:20,829 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_2000.pth 2023-03-16 00:28:39,328 44k INFO ====> Epoch: 5, cost 388.35 s 2023-03-16 00:29:45,044 44k INFO Train Epoch: 6 [15%] 2023-03-16 00:29:45,045 44k INFO Losses: [2.9641036987304688, 2.5617711544036865, 7.727907180786133, 19.740947723388672, 1.6536600589752197], step: 5200, lr: 9.993751562304699e-05 2023-03-16 00:30:58,366 44k INFO Train Epoch: 6 [35%] 2023-03-16 00:30:58,366 44k INFO Losses: [2.363410234451294, 2.35372257232666, 13.412374496459961, 23.936599731445312, 1.6520123481750488], step: 5400, lr: 9.993751562304699e-05 2023-03-16 00:32:12,355 44k INFO Train Epoch: 6 [54%] 2023-03-16 00:32:12,356 44k INFO Losses: [2.427125930786133, 2.414423942565918, 10.11244010925293, 26.098766326904297, 1.6088706254959106], step: 5600, lr: 9.993751562304699e-05 2023-03-16 00:33:26,343 44k INFO Train Epoch: 6 [74%] 2023-03-16 00:33:26,343 44k INFO Losses: [2.620867967605591, 2.1311681270599365, 7.945067405700684, 17.199392318725586, 1.7722448110580444], step: 5800, lr: 9.993751562304699e-05 2023-03-16 00:34:40,552 44k INFO Train Epoch: 6 [94%] 2023-03-16 00:34:40,552 44k INFO Losses: [2.496558666229248, 2.339478015899658, 11.764273643493652, 23.558364868164062, 1.2615920305252075], step: 6000, lr: 9.993751562304699e-05 2023-03-16 00:34:43,803 44k INFO Saving model and optimizer state at iteration 6 to ./logs\44k\G_6000.pth 2023-03-16 00:34:44,570 44k INFO Saving model and optimizer state at iteration 6 to ./logs\44k\D_6000.pth 2023-03-16 00:34:45,294 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_3000.pth 2023-03-16 00:34:45,338 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_3000.pth 2023-03-16 00:35:07,490 44k INFO ====> Epoch: 6, cost 388.16 s 2023-03-16 00:36:10,024 44k INFO Train Epoch: 7 [14%] 2023-03-16 00:36:10,024 44k INFO Losses: [2.279035806655884, 2.655416250228882, 11.617131233215332, 22.875904083251953, 1.4710954427719116], step: 6200, lr: 9.99250234335941e-05 2023-03-16 00:37:21,321 44k INFO Train Epoch: 7 [34%] 2023-03-16 00:37:21,322 44k INFO Losses: [2.6629128456115723, 2.4211292266845703, 10.17526912689209, 21.412355422973633, 1.568886160850525], step: 6400, lr: 9.99250234335941e-05 2023-03-16 00:38:33,136 44k INFO Train Epoch: 7 [53%] 2023-03-16 00:38:33,137 44k INFO Losses: [2.630986452102661, 1.8743085861206055, 6.269574165344238, 21.672170639038086, 1.459185004234314], step: 6600, lr: 9.99250234335941e-05 2023-03-16 00:39:45,047 44k INFO Train Epoch: 7 [73%] 2023-03-16 00:39:45,048 44k INFO Losses: [2.5276618003845215, 2.3272199630737305, 11.692448616027832, 21.014406204223633, 1.709067702293396], step: 6800, lr: 9.99250234335941e-05 2023-03-16 00:40:57,056 44k INFO Train Epoch: 7 [93%] 2023-03-16 00:40:57,056 44k INFO Losses: [2.588921308517456, 2.1743948459625244, 10.92684555053711, 19.7436580657959, 1.4132627248764038], step: 7000, lr: 9.99250234335941e-05 2023-03-16 00:41:00,210 44k INFO Saving model and optimizer state at iteration 7 to ./logs\44k\G_7000.pth 2023-03-16 00:41:00,929 44k INFO Saving model and optimizer state at iteration 7 to ./logs\44k\D_7000.pth 2023-03-16 00:41:01,586 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_4000.pth 2023-03-16 00:41:01,631 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_4000.pth 2023-03-16 00:41:26,589 44k INFO ====> Epoch: 7, cost 379.10 s 2023-03-16 00:42:22,645 44k INFO Train Epoch: 8 [13%] 2023-03-16 00:42:22,645 44k INFO Losses: [2.4755806922912598, 2.3632864952087402, 12.606449127197266, 20.452219009399414, 1.65877103805542], step: 7200, lr: 9.991253280566489e-05 2023-03-16 00:43:33,946 44k INFO Train Epoch: 8 [33%] 2023-03-16 00:43:33,947 44k INFO Losses: [2.3836379051208496, 2.5833041667938232, 8.587996482849121, 20.95946502685547, 1.7604856491088867], step: 7400, lr: 9.991253280566489e-05 2023-03-16 00:44:45,888 44k INFO Train Epoch: 8 [52%] 2023-03-16 00:44:45,888 44k INFO Losses: [2.730246067047119, 2.300230026245117, 7.918561935424805, 18.70749282836914, 1.348031997680664], step: 7600, lr: 9.991253280566489e-05 2023-03-16 00:45:57,972 44k INFO Train Epoch: 8 [72%] 2023-03-16 00:45:57,973 44k INFO Losses: [2.443542003631592, 2.167905569076538, 7.399232864379883, 22.14451789855957, 1.7056339979171753], step: 7800, lr: 9.991253280566489e-05 2023-03-16 00:47:10,391 44k INFO Train Epoch: 8 [92%] 2023-03-16 00:47:10,392 44k INFO Losses: [2.3540232181549072, 2.204185962677002, 12.663874626159668, 21.712404251098633, 1.8588826656341553], step: 8000, lr: 9.991253280566489e-05 2023-03-16 00:47:13,382 44k INFO Saving model and optimizer state at iteration 8 to ./logs\44k\G_8000.pth 2023-03-16 00:47:14,085 44k INFO Saving model and optimizer state at iteration 8 to ./logs\44k\D_8000.pth 2023-03-16 00:47:14,784 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_5000.pth 2023-03-16 00:47:14,828 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_5000.pth 2023-03-16 00:47:43,100 44k INFO ====> Epoch: 8, cost 376.51 s 2023-03-16 00:48:35,475 44k INFO Train Epoch: 9 [12%] 2023-03-16 00:48:35,475 44k INFO Losses: [2.375885009765625, 2.3702824115753174, 11.442092895507812, 22.802810668945312, 1.5255464315414429], step: 8200, lr: 9.990004373906418e-05 2023-03-16 00:49:46,447 44k INFO Train Epoch: 9 [32%] 2023-03-16 00:49:46,447 44k INFO Losses: [2.6904866695404053, 2.363374948501587, 9.778064727783203, 23.542917251586914, 1.5048019886016846], step: 8400, lr: 9.990004373906418e-05 2023-03-16 00:50:58,315 44k INFO Train Epoch: 9 [51%] 2023-03-16 00:50:58,316 44k INFO Losses: [2.30169939994812, 2.261643886566162, 13.062037467956543, 25.44234848022461, 2.0130996704101562], step: 8600, lr: 9.990004373906418e-05 2023-03-16 00:52:10,651 44k INFO Train Epoch: 9 [71%] 2023-03-16 00:52:10,651 44k INFO Losses: [2.6778435707092285, 1.9205305576324463, 12.271574020385742, 21.46491241455078, 1.5914647579193115], step: 8800, lr: 9.990004373906418e-05 2023-03-16 00:53:22,976 44k INFO Train Epoch: 9 [91%] 2023-03-16 00:53:22,976 44k INFO Losses: [2.3424935340881348, 2.3373446464538574, 11.980668067932129, 23.96898651123047, 1.8243688344955444], step: 9000, lr: 9.990004373906418e-05 2023-03-16 00:53:25,909 44k INFO Saving model and optimizer state at iteration 9 to ./logs\44k\G_9000.pth 2023-03-16 00:53:26,591 44k INFO Saving model and optimizer state at iteration 9 to ./logs\44k\D_9000.pth 2023-03-16 00:53:27,290 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_6000.pth 2023-03-16 00:53:27,329 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_6000.pth 2023-03-16 00:53:59,535 44k INFO ====> Epoch: 9, cost 376.43 s 2023-03-16 00:54:48,260 44k INFO Train Epoch: 10 [11%] 2023-03-16 00:54:48,260 44k INFO Losses: [2.4331204891204834, 1.8966166973114014, 11.345996856689453, 22.878150939941406, 1.3406018018722534], step: 9200, lr: 9.98875562335968e-05 2023-03-16 00:55:59,248 44k INFO Train Epoch: 10 [31%] 2023-03-16 00:55:59,248 44k INFO Losses: [2.3518919944763184, 2.2060024738311768, 10.82605266571045, 22.18310546875, 1.7014986276626587], step: 9400, lr: 9.98875562335968e-05 2023-03-16 00:57:10,968 44k INFO Train Epoch: 10 [50%] 2023-03-16 00:57:10,969 44k INFO Losses: [2.5386462211608887, 1.9072651863098145, 14.640498161315918, 23.918350219726562, 1.4656016826629639], step: 9600, lr: 9.98875562335968e-05 2023-03-16 00:58:22,836 44k INFO Train Epoch: 10 [70%] 2023-03-16 00:58:22,836 44k INFO Losses: [2.033341884613037, 2.6255555152893066, 12.234801292419434, 26.206016540527344, 1.6282846927642822], step: 9800, lr: 9.98875562335968e-05 2023-03-16 00:59:34,565 44k INFO Train Epoch: 10 [90%] 2023-03-16 00:59:34,565 44k INFO Losses: [2.491273880004883, 2.074456214904785, 12.473411560058594, 19.898523330688477, 1.796959638595581], step: 10000, lr: 9.98875562335968e-05 2023-03-16 00:59:37,505 44k INFO Saving model and optimizer state at iteration 10 to ./logs\44k\G_10000.pth 2023-03-16 00:59:38,193 44k INFO Saving model and optimizer state at iteration 10 to ./logs\44k\D_10000.pth 2023-03-16 00:59:38,869 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_7000.pth 2023-03-16 00:59:38,908 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_7000.pth 2023-03-16 01:00:14,490 44k INFO ====> Epoch: 10, cost 374.96 s 2023-03-16 01:00:59,592 44k INFO Train Epoch: 11 [10%] 2023-03-16 01:00:59,592 44k INFO Losses: [2.5374398231506348, 2.199739694595337, 10.93807315826416, 16.951093673706055, 1.4652862548828125], step: 10200, lr: 9.987507028906759e-05 2023-03-16 01:02:10,746 44k INFO Train Epoch: 11 [30%] 2023-03-16 01:02:10,746 44k INFO Losses: [2.575394630432129, 2.103766679763794, 10.279555320739746, 22.22572898864746, 1.8592180013656616], step: 10400, lr: 9.987507028906759e-05 2023-03-16 01:03:22,395 44k INFO Train Epoch: 11 [50%] 2023-03-16 01:03:22,395 44k INFO Losses: [2.70969820022583, 2.5448691844940186, 13.535930633544922, 22.908435821533203, 1.3376126289367676], step: 10600, lr: 9.987507028906759e-05 2023-03-16 01:04:34,286 44k INFO Train Epoch: 11 [69%] 2023-03-16 01:04:34,286 44k INFO Losses: [2.659320116043091, 2.3223631381988525, 7.864261150360107, 17.47832489013672, 1.265831708908081], step: 10800, lr: 9.987507028906759e-05 2023-03-16 01:05:46,701 44k INFO Train Epoch: 11 [89%] 2023-03-16 01:05:46,702 44k INFO Losses: [2.511888265609741, 2.133340358734131, 9.049362182617188, 17.13569450378418, 1.4478918313980103], step: 11000, lr: 9.987507028906759e-05 2023-03-16 01:05:49,558 44k INFO Saving model and optimizer state at iteration 11 to ./logs\44k\G_11000.pth 2023-03-16 01:05:50,325 44k INFO Saving model and optimizer state at iteration 11 to ./logs\44k\D_11000.pth 2023-03-16 01:05:51,014 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_8000.pth 2023-03-16 01:05:51,042 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_8000.pth 2023-03-16 01:06:30,148 44k INFO ====> Epoch: 11, cost 375.66 s 2023-03-16 01:07:11,595 44k INFO Train Epoch: 12 [9%] 2023-03-16 01:07:11,595 44k INFO Losses: [2.4070801734924316, 2.3940012454986572, 14.862382888793945, 23.374860763549805, 1.6012767553329468], step: 11200, lr: 9.986258590528146e-05 2023-03-16 01:08:22,886 44k INFO Train Epoch: 12 [29%] 2023-03-16 01:08:22,886 44k INFO Losses: [2.432343006134033, 2.1754188537597656, 11.079984664916992, 19.15660858154297, 1.5149145126342773], step: 11400, lr: 9.986258590528146e-05 2023-03-16 01:09:34,320 44k INFO Train Epoch: 12 [49%] 2023-03-16 01:09:34,320 44k INFO Losses: [2.5059432983398438, 2.4319841861724854, 11.579089164733887, 22.386850357055664, 1.6966476440429688], step: 11600, lr: 9.986258590528146e-05 2023-03-16 01:10:46,397 44k INFO Train Epoch: 12 [68%] 2023-03-16 01:10:46,398 44k INFO Losses: [2.319063186645508, 2.4799838066101074, 9.805450439453125, 19.280879974365234, 1.2450532913208008], step: 11800, lr: 9.986258590528146e-05 2023-03-16 01:11:58,253 44k INFO Train Epoch: 12 [88%] 2023-03-16 01:11:58,253 44k INFO Losses: [2.7748799324035645, 1.9171966314315796, 8.554396629333496, 16.012683868408203, 0.9786509871482849], step: 12000, lr: 9.986258590528146e-05 2023-03-16 01:12:01,120 44k INFO Saving model and optimizer state at iteration 12 to ./logs\44k\G_12000.pth 2023-03-16 01:12:01,882 44k INFO Saving model and optimizer state at iteration 12 to ./logs\44k\D_12000.pth 2023-03-16 01:12:02,532 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_9000.pth 2023-03-16 01:12:02,573 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_9000.pth 2023-03-16 01:12:45,284 44k INFO ====> Epoch: 12, cost 375.14 s 2023-03-16 01:13:23,135 44k INFO Train Epoch: 13 [8%] 2023-03-16 01:13:23,135 44k INFO Losses: [2.5840554237365723, 2.2081243991851807, 12.085539817810059, 18.187217712402344, 1.48982834815979], step: 12200, lr: 9.98501030820433e-05 2023-03-16 01:14:34,519 44k INFO Train Epoch: 13 [28%] 2023-03-16 01:14:34,519 44k INFO Losses: [2.647184133529663, 2.0572681427001953, 9.573904991149902, 20.31665802001953, 1.4708908796310425], step: 12400, lr: 9.98501030820433e-05 2023-03-16 01:15:45,996 44k INFO Train Epoch: 13 [48%] 2023-03-16 01:15:45,996 44k INFO Losses: [2.373988628387451, 2.0000319480895996, 13.293739318847656, 21.337385177612305, 1.7034159898757935], step: 12600, lr: 9.98501030820433e-05 2023-03-16 01:16:58,159 44k INFO Train Epoch: 13 [67%] 2023-03-16 01:16:58,159 44k INFO Losses: [2.764277935028076, 2.0898571014404297, 10.216264724731445, 18.441394805908203, 1.5654468536376953], step: 12800, lr: 9.98501030820433e-05 2023-03-16 01:18:10,035 44k INFO Train Epoch: 13 [87%] 2023-03-16 01:18:10,035 44k INFO Losses: [2.253309965133667, 2.3982138633728027, 13.1033296585083, 19.574350357055664, 1.625113844871521], step: 13000, lr: 9.98501030820433e-05 2023-03-16 01:18:12,929 44k INFO Saving model and optimizer state at iteration 13 to ./logs\44k\G_13000.pth 2023-03-16 01:18:13,660 44k INFO Saving model and optimizer state at iteration 13 to ./logs\44k\D_13000.pth 2023-03-16 01:18:14,300 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_10000.pth 2023-03-16 01:18:14,343 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_10000.pth 2023-03-16 01:19:00,642 44k INFO ====> Epoch: 13, cost 375.36 s 2023-03-16 01:19:34,993 44k INFO Train Epoch: 14 [7%] 2023-03-16 01:19:34,994 44k INFO Losses: [2.6637840270996094, 2.25152325630188, 11.958680152893066, 21.45273780822754, 1.1105343103408813], step: 13200, lr: 9.983762181915804e-05 2023-03-16 01:20:46,523 44k INFO Train Epoch: 14 [27%] 2023-03-16 01:20:46,524 44k INFO Losses: [2.4224462509155273, 2.3287062644958496, 10.678092956542969, 19.09824562072754, 1.594849944114685], step: 13400, lr: 9.983762181915804e-05 2023-03-16 01:21:58,109 44k INFO Train Epoch: 14 [47%] 2023-03-16 01:21:58,109 44k INFO Losses: [2.712064266204834, 2.1936895847320557, 8.87340259552002, 20.91951560974121, 1.3817836046218872], step: 13600, lr: 9.983762181915804e-05 2023-03-16 01:23:10,231 44k INFO Train Epoch: 14 [66%] 2023-03-16 01:23:10,231 44k INFO Losses: [2.2476935386657715, 2.7752394676208496, 10.2214994430542, 15.945452690124512, 1.7218014001846313], step: 13800, lr: 9.983762181915804e-05 2023-03-16 01:24:22,069 44k INFO Train Epoch: 14 [86%] 2023-03-16 01:24:22,069 44k INFO Losses: [2.491288423538208, 2.164318561553955, 7.900779724121094, 19.121999740600586, 1.207377314567566], step: 14000, lr: 9.983762181915804e-05 2023-03-16 01:24:24,970 44k INFO Saving model and optimizer state at iteration 14 to ./logs\44k\G_14000.pth 2023-03-16 01:24:25,637 44k INFO Saving model and optimizer state at iteration 14 to ./logs\44k\D_14000.pth 2023-03-16 01:24:26,273 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_11000.pth 2023-03-16 01:24:26,308 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_11000.pth 2023-03-16 01:25:16,111 44k INFO ====> Epoch: 14, cost 375.47 s 2023-03-16 01:25:46,822 44k INFO Train Epoch: 15 [6%] 2023-03-16 01:25:46,822 44k INFO Losses: [2.4574337005615234, 2.471323013305664, 13.248011589050293, 23.87859344482422, 1.6126726865768433], step: 14200, lr: 9.982514211643064e-05 2023-03-16 01:26:58,147 44k INFO Train Epoch: 15 [26%] 2023-03-16 01:26:58,147 44k INFO Losses: [2.4262948036193848, 2.3058528900146484, 14.387864112854004, 24.02227210998535, 1.6401861906051636], step: 14400, lr: 9.982514211643064e-05 2023-03-16 01:28:09,771 44k INFO Train Epoch: 15 [46%] 2023-03-16 01:28:09,772 44k INFO Losses: [2.572964668273926, 2.092832088470459, 9.870129585266113, 19.208988189697266, 1.3285467624664307], step: 14600, lr: 9.982514211643064e-05 2023-03-16 01:29:21,823 44k INFO Train Epoch: 15 [65%] 2023-03-16 01:29:21,823 44k INFO Losses: [2.23113751411438, 2.4101192951202393, 14.177618026733398, 25.293479919433594, 2.019350051879883], step: 14800, lr: 9.982514211643064e-05 2023-03-16 01:30:33,528 44k INFO Train Epoch: 15 [85%] 2023-03-16 01:30:33,528 44k INFO Losses: [2.565001964569092, 2.160393238067627, 11.733285903930664, 22.589279174804688, 1.895139217376709], step: 15000, lr: 9.982514211643064e-05 2023-03-16 01:30:36,519 44k INFO Saving model and optimizer state at iteration 15 to ./logs\44k\G_15000.pth 2023-03-16 01:30:37,199 44k INFO Saving model and optimizer state at iteration 15 to ./logs\44k\D_15000.pth 2023-03-16 01:30:37,837 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_12000.pth 2023-03-16 01:30:37,881 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_12000.pth 2023-03-16 01:31:32,195 44k INFO ====> Epoch: 15, cost 376.08 s 2023-03-16 01:31:59,517 44k INFO Train Epoch: 16 [5%] 2023-03-16 01:31:59,517 44k INFO Losses: [2.557100296020508, 2.1491713523864746, 10.291960716247559, 21.9915828704834, 1.6165151596069336], step: 15200, lr: 9.981266397366609e-05 2023-03-16 01:33:10,942 44k INFO Train Epoch: 16 [25%] 2023-03-16 01:33:10,942 44k INFO Losses: [2.4912667274475098, 2.1575510501861572, 11.022111892700195, 20.3220157623291, 1.5803000926971436], step: 15400, lr: 9.981266397366609e-05 2023-03-16 01:34:22,327 44k INFO Train Epoch: 16 [45%] 2023-03-16 01:34:22,327 44k INFO Losses: [2.5079731941223145, 2.440004587173462, 8.238601684570312, 21.534034729003906, 1.6909219026565552], step: 15600, lr: 9.981266397366609e-05 2023-03-16 01:35:34,424 44k INFO Train Epoch: 16 [64%] 2023-03-16 01:35:34,424 44k INFO Losses: [2.143476724624634, 2.8424456119537354, 16.61366844177246, 22.534446716308594, 1.1176130771636963], step: 15800, lr: 9.981266397366609e-05 2023-03-16 01:36:46,208 44k INFO Train Epoch: 16 [84%] 2023-03-16 01:36:46,208 44k INFO Losses: [2.5649070739746094, 2.141246795654297, 9.349444389343262, 22.258838653564453, 1.4488643407821655], step: 16000, lr: 9.981266397366609e-05 2023-03-16 01:36:49,151 44k INFO Saving model and optimizer state at iteration 16 to ./logs\44k\G_16000.pth 2023-03-16 01:36:49,831 44k INFO Saving model and optimizer state at iteration 16 to ./logs\44k\D_16000.pth 2023-03-16 01:36:50,535 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_13000.pth 2023-03-16 01:36:50,574 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_13000.pth 2023-03-16 01:37:47,737 44k INFO ====> Epoch: 16, cost 375.54 s 2023-03-16 01:38:11,165 44k INFO Train Epoch: 17 [4%] 2023-03-16 01:38:11,166 44k INFO Losses: [2.6444520950317383, 2.1997766494750977, 12.863842964172363, 23.36856460571289, 1.8677303791046143], step: 16200, lr: 9.980018739066937e-05 2023-03-16 01:39:22,940 44k INFO Train Epoch: 17 [24%] 2023-03-16 01:39:22,940 44k INFO Losses: [2.5903525352478027, 2.2508544921875, 10.513290405273438, 24.210176467895508, 1.782578706741333], step: 16400, lr: 9.980018739066937e-05 2023-03-16 01:40:34,426 44k INFO Train Epoch: 17 [44%] 2023-03-16 01:40:34,426 44k INFO Losses: [2.4755630493164062, 2.309506893157959, 9.637182235717773, 21.083148956298828, 1.3930975198745728], step: 16600, lr: 9.980018739066937e-05 2023-03-16 01:41:46,608 44k INFO Train Epoch: 17 [63%] 2023-03-16 01:41:46,608 44k INFO Losses: [2.5209805965423584, 2.390125274658203, 12.708600997924805, 23.054603576660156, 1.6959490776062012], step: 16800, lr: 9.980018739066937e-05 2023-03-16 01:42:58,313 44k INFO Train Epoch: 17 [83%] 2023-03-16 01:42:58,314 44k INFO Losses: [2.3289060592651367, 2.5157666206359863, 10.593748092651367, 23.446277618408203, 1.683432936668396], step: 17000, lr: 9.980018739066937e-05 2023-03-16 01:43:01,209 44k INFO Saving model and optimizer state at iteration 17 to ./logs\44k\G_17000.pth 2023-03-16 01:43:01,917 44k INFO Saving model and optimizer state at iteration 17 to ./logs\44k\D_17000.pth 2023-03-16 01:43:02,600 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_14000.pth 2023-03-16 01:43:02,639 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_14000.pth 2023-03-16 01:44:03,398 44k INFO ====> Epoch: 17, cost 375.66 s 2023-03-16 01:44:23,188 44k INFO Train Epoch: 18 [3%] 2023-03-16 01:44:23,188 44k INFO Losses: [2.2854249477386475, 2.4649014472961426, 12.338493347167969, 25.138633728027344, 1.5594103336334229], step: 17200, lr: 9.978771236724554e-05 2023-03-16 01:45:35,055 44k INFO Train Epoch: 18 [23%] 2023-03-16 01:45:35,056 44k INFO Losses: [2.5241637229919434, 2.5264928340911865, 13.385576248168945, 23.03873062133789, 1.4072517156600952], step: 17400, lr: 9.978771236724554e-05 2023-03-16 01:46:46,582 44k INFO Train Epoch: 18 [43%] 2023-03-16 01:46:46,582 44k INFO Losses: [2.4268901348114014, 2.225811719894409, 10.496086120605469, 19.99051284790039, 1.544403076171875], step: 17600, lr: 9.978771236724554e-05 2023-03-16 01:47:58,645 44k INFO Train Epoch: 18 [62%] 2023-03-16 01:47:58,645 44k INFO Losses: [2.470054864883423, 2.2592246532440186, 8.183069229125977, 20.027297973632812, 1.48908269405365], step: 17800, lr: 9.978771236724554e-05 2023-03-16 01:49:10,522 44k INFO Train Epoch: 18 [82%] 2023-03-16 01:49:10,523 44k INFO Losses: [2.311995267868042, 2.5431365966796875, 11.228175163269043, 25.237241744995117, 1.31155526638031], step: 18000, lr: 9.978771236724554e-05 2023-03-16 01:49:13,459 44k INFO Saving model and optimizer state at iteration 18 to ./logs\44k\G_18000.pth 2023-03-16 01:49:14,174 44k INFO Saving model and optimizer state at iteration 18 to ./logs\44k\D_18000.pth 2023-03-16 01:49:14,808 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_15000.pth 2023-03-16 01:49:14,846 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_15000.pth 2023-03-16 01:50:19,203 44k INFO ====> Epoch: 18, cost 375.81 s 2023-03-16 01:50:35,345 44k INFO Train Epoch: 19 [2%] 2023-03-16 01:50:35,345 44k INFO Losses: [2.5381686687469482, 2.3045685291290283, 12.797904968261719, 23.979034423828125, 1.2082128524780273], step: 18200, lr: 9.977523890319963e-05 2023-03-16 01:51:47,352 44k INFO Train Epoch: 19 [22%] 2023-03-16 01:51:47,353 44k INFO Losses: [2.586062431335449, 2.0687570571899414, 6.487185001373291, 20.027673721313477, 1.4115573167800903], step: 18400, lr: 9.977523890319963e-05 2023-03-16 01:52:58,708 44k INFO Train Epoch: 19 [42%] 2023-03-16 01:52:58,709 44k INFO Losses: [2.369870185852051, 2.511805534362793, 12.17759895324707, 21.463531494140625, 1.434291124343872], step: 18600, lr: 9.977523890319963e-05 2023-03-16 01:54:10,849 44k INFO Train Epoch: 19 [61%] 2023-03-16 01:54:10,849 44k INFO Losses: [2.245140552520752, 2.491227626800537, 16.950294494628906, 24.448060989379883, 1.4237847328186035], step: 18800, lr: 9.977523890319963e-05 2023-03-16 01:55:22,643 44k INFO Train Epoch: 19 [81%] 2023-03-16 01:55:22,644 44k INFO Losses: [2.400696277618408, 2.071983814239502, 14.005691528320312, 25.178220748901367, 1.5081660747528076], step: 19000, lr: 9.977523890319963e-05 2023-03-16 01:55:25,531 44k INFO Saving model and optimizer state at iteration 19 to ./logs\44k\G_19000.pth 2023-03-16 01:55:26,256 44k INFO Saving model and optimizer state at iteration 19 to ./logs\44k\D_19000.pth 2023-03-16 01:55:26,893 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_16000.pth 2023-03-16 01:55:26,936 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_16000.pth 2023-03-16 01:56:34,963 44k INFO ====> Epoch: 19, cost 375.76 s 2023-03-16 01:56:47,515 44k INFO Train Epoch: 20 [1%] 2023-03-16 01:56:47,516 44k INFO Losses: [2.3414058685302734, 2.762958526611328, 12.027870178222656, 24.18658447265625, 1.5363625288009644], step: 19200, lr: 9.976276699833672e-05 2023-03-16 01:57:59,616 44k INFO Train Epoch: 20 [21%] 2023-03-16 01:57:59,616 44k INFO Losses: [2.5051376819610596, 2.083427906036377, 10.034896850585938, 20.797128677368164, 1.2484245300292969], step: 19400, lr: 9.976276699833672e-05 2023-03-16 01:59:10,959 44k INFO Train Epoch: 20 [41%] 2023-03-16 01:59:10,960 44k INFO Losses: [2.6454315185546875, 2.055673360824585, 12.265761375427246, 23.97649383544922, 1.4185601472854614], step: 19600, lr: 9.976276699833672e-05 2023-03-16 02:00:23,148 44k INFO Train Epoch: 20 [60%] 2023-03-16 02:00:23,149 44k INFO Losses: [2.7027695178985596, 2.1707215309143066, 9.469743728637695, 17.24054718017578, 1.236533522605896], step: 19800, lr: 9.976276699833672e-05 2023-03-16 02:01:34,989 44k INFO Train Epoch: 20 [80%] 2023-03-16 02:01:34,989 44k INFO Losses: [2.360731363296509, 2.328805685043335, 10.053674697875977, 19.50934410095215, 1.4486725330352783], step: 20000, lr: 9.976276699833672e-05 2023-03-16 02:01:37,910 44k INFO Saving model and optimizer state at iteration 20 to ./logs\44k\G_20000.pth 2023-03-16 02:01:38,571 44k INFO Saving model and optimizer state at iteration 20 to ./logs\44k\D_20000.pth 2023-03-16 02:01:39,209 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_17000.pth 2023-03-16 02:01:39,246 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_17000.pth 2023-03-16 02:02:51,635 44k INFO ====> Epoch: 20, cost 376.67 s 2023-03-16 02:03:01,010 44k INFO Train Epoch: 21 [0%] 2023-03-16 02:03:01,010 44k INFO Losses: [2.5805952548980713, 2.5616977214813232, 11.35832405090332, 22.124038696289062, 1.613099455833435], step: 20200, lr: 9.975029665246193e-05 2023-03-16 02:04:13,110 44k INFO Train Epoch: 21 [20%] 2023-03-16 02:04:13,111 44k INFO Losses: [2.3513832092285156, 2.4532272815704346, 12.532115936279297, 25.6800537109375, 1.4372217655181885], step: 20400, lr: 9.975029665246193e-05 2023-03-16 02:05:24,191 44k INFO Train Epoch: 21 [40%] 2023-03-16 02:05:24,192 44k INFO Losses: [2.525791645050049, 2.468655824661255, 7.806985855102539, 23.321149826049805, 1.6180458068847656], step: 20600, lr: 9.975029665246193e-05 2023-03-16 02:06:36,323 44k INFO Train Epoch: 21 [59%] 2023-03-16 02:06:36,324 44k INFO Losses: [2.5264980792999268, 2.328840970993042, 12.733165740966797, 22.271677017211914, 1.392823338508606], step: 20800, lr: 9.975029665246193e-05 2023-03-16 02:07:48,207 44k INFO Train Epoch: 21 [79%] 2023-03-16 02:07:48,208 44k INFO Losses: [2.6165390014648438, 2.106302499771118, 8.630501747131348, 21.351308822631836, 1.6641740798950195], step: 21000, lr: 9.975029665246193e-05 2023-03-16 02:07:51,215 44k INFO Saving model and optimizer state at iteration 21 to ./logs\44k\G_21000.pth 2023-03-16 02:07:51,885 44k INFO Saving model and optimizer state at iteration 21 to ./logs\44k\D_21000.pth 2023-03-16 02:07:52,567 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_18000.pth 2023-03-16 02:07:52,611 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_18000.pth 2023-03-16 02:09:05,075 44k INFO Train Epoch: 21 [99%] 2023-03-16 02:09:05,075 44k INFO Losses: [2.696974277496338, 2.0766756534576416, 8.186661720275879, 19.15684700012207, 1.5066715478897095], step: 21200, lr: 9.975029665246193e-05 2023-03-16 02:09:08,686 44k INFO ====> Epoch: 21, cost 377.05 s 2023-03-16 02:10:26,031 44k INFO Train Epoch: 22 [19%] 2023-03-16 02:10:26,032 44k INFO Losses: [2.546477794647217, 2.2954585552215576, 8.291088104248047, 22.679834365844727, 1.0614348649978638], step: 21400, lr: 9.973782786538036e-05 2023-03-16 02:11:37,490 44k INFO Train Epoch: 22 [39%] 2023-03-16 02:11:37,490 44k INFO Losses: [2.2956104278564453, 2.710975408554077, 12.705550193786621, 24.559558868408203, 1.9221078157424927], step: 21600, lr: 9.973782786538036e-05 2023-03-16 02:12:49,680 44k INFO Train Epoch: 22 [58%] 2023-03-16 02:12:49,681 44k INFO Losses: [2.3957786560058594, 2.4713969230651855, 13.691433906555176, 21.386377334594727, 1.4957661628723145], step: 21800, lr: 9.973782786538036e-05 2023-03-16 02:14:01,906 44k INFO Train Epoch: 22 [78%] 2023-03-16 02:14:01,906 44k INFO Losses: [2.498322010040283, 2.5698397159576416, 12.099370002746582, 21.449935913085938, 1.336240291595459], step: 22000, lr: 9.973782786538036e-05 2023-03-16 02:14:04,909 44k INFO Saving model and optimizer state at iteration 22 to ./logs\44k\G_22000.pth 2023-03-16 02:14:05,591 44k INFO Saving model and optimizer state at iteration 22 to ./logs\44k\D_22000.pth 2023-03-16 02:14:06,256 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_19000.pth 2023-03-16 02:14:06,293 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_19000.pth 2023-03-16 02:15:17,838 44k INFO Train Epoch: 22 [98%] 2023-03-16 02:15:17,838 44k INFO Losses: [2.2788851261138916, 2.543818235397339, 11.230917930603027, 22.040117263793945, 1.224067211151123], step: 22200, lr: 9.973782786538036e-05 2023-03-16 02:15:24,981 44k INFO ====> Epoch: 22, cost 376.29 s 2023-03-16 02:16:38,558 44k INFO Train Epoch: 23 [18%] 2023-03-16 02:16:38,559 44k INFO Losses: [2.6251535415649414, 2.215014934539795, 6.367043972015381, 18.47018051147461, 1.403922200202942], step: 22400, lr: 9.972536063689719e-05 2023-03-16 02:17:49,832 44k INFO Train Epoch: 23 [38%] 2023-03-16 02:17:49,832 44k INFO Losses: [2.6071598529815674, 2.4468743801116943, 14.948019981384277, 20.616823196411133, 1.3450515270233154], step: 22600, lr: 9.972536063689719e-05 2023-03-16 02:19:02,036 44k INFO Train Epoch: 23 [57%] 2023-03-16 02:19:02,037 44k INFO Losses: [2.399557590484619, 2.1637983322143555, 11.664308547973633, 23.133831024169922, 1.5004301071166992], step: 22800, lr: 9.972536063689719e-05 2023-03-16 02:20:13,933 44k INFO Train Epoch: 23 [77%] 2023-03-16 02:20:13,933 44k INFO Losses: [2.4661409854888916, 2.564974546432495, 10.288546562194824, 18.663484573364258, 1.1434128284454346], step: 23000, lr: 9.972536063689719e-05 2023-03-16 02:20:16,893 44k INFO Saving model and optimizer state at iteration 23 to ./logs\44k\G_23000.pth 2023-03-16 02:20:17,569 44k INFO Saving model and optimizer state at iteration 23 to ./logs\44k\D_23000.pth 2023-03-16 02:20:18,247 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_20000.pth 2023-03-16 02:20:18,289 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_20000.pth 2023-03-16 02:21:29,912 44k INFO Train Epoch: 23 [97%] 2023-03-16 02:21:29,913 44k INFO Losses: [2.5902364253997803, 2.278787851333618, 11.5897855758667, 22.404085159301758, 1.9016567468643188], step: 23200, lr: 9.972536063689719e-05 2023-03-16 02:21:40,803 44k INFO ====> Epoch: 23, cost 375.82 s 2023-03-16 02:22:50,855 44k INFO Train Epoch: 24 [17%] 2023-03-16 02:22:50,856 44k INFO Losses: [2.4348580837249756, 2.1944899559020996, 8.966178894042969, 20.27336311340332, 1.428238034248352], step: 23400, lr: 9.971289496681757e-05 2023-03-16 02:24:02,212 44k INFO Train Epoch: 24 [37%] 2023-03-16 02:24:02,213 44k INFO Losses: [2.4086520671844482, 2.265171527862549, 11.449037551879883, 22.273914337158203, 1.2673239707946777], step: 23600, lr: 9.971289496681757e-05 2023-03-16 02:25:14,384 44k INFO Train Epoch: 24 [56%] 2023-03-16 02:25:14,385 44k INFO Losses: [2.6412124633789062, 2.3571434020996094, 11.292902946472168, 22.9881534576416, 1.7186723947525024], step: 23800, lr: 9.971289496681757e-05 2023-03-16 02:26:26,254 44k INFO Train Epoch: 24 [76%] 2023-03-16 02:26:26,254 44k INFO Losses: [2.492760419845581, 2.008866548538208, 11.306182861328125, 20.715761184692383, 1.4763414859771729], step: 24000, lr: 9.971289496681757e-05 2023-03-16 02:26:29,149 44k INFO Saving model and optimizer state at iteration 24 to ./logs\44k\G_24000.pth 2023-03-16 02:26:29,872 44k INFO Saving model and optimizer state at iteration 24 to ./logs\44k\D_24000.pth 2023-03-16 02:26:30,556 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_21000.pth 2023-03-16 02:26:30,593 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_21000.pth 2023-03-16 02:27:42,795 44k INFO Train Epoch: 24 [96%] 2023-03-16 02:27:42,796 44k INFO Losses: [2.3681135177612305, 2.33176589012146, 15.517050743103027, 22.143266677856445, 1.5214263200759888], step: 24200, lr: 9.971289496681757e-05 2023-03-16 02:27:57,409 44k INFO ====> Epoch: 24, cost 376.61 s 2023-03-16 02:29:05,878 44k INFO Train Epoch: 25 [16%] 2023-03-16 02:29:05,879 44k INFO Losses: [2.35603666305542, 2.405923843383789, 7.719895839691162, 21.508512496948242, 1.5374904870986938], step: 24400, lr: 9.970043085494672e-05 2023-03-16 02:30:17,377 44k INFO Train Epoch: 25 [36%] 2023-03-16 02:30:17,378 44k INFO Losses: [2.2558679580688477, 2.517570972442627, 14.738548278808594, 25.359027862548828, 2.0540456771850586], step: 24600, lr: 9.970043085494672e-05 2023-03-16 02:31:29,325 44k INFO Train Epoch: 25 [55%] 2023-03-16 02:31:29,326 44k INFO Losses: [2.4953792095184326, 2.627546548843384, 10.921364784240723, 22.13829803466797, 1.426408290863037], step: 24800, lr: 9.970043085494672e-05 2023-03-16 02:32:59,108 44k INFO {'train': {'log_interval': 200, 'eval_interval': 1000, 'seed': 1234, 'epochs': 80, 'learning_rate': 0.0001, 'betas': [0.8, 0.99], 'eps': 1e-09, 'batch_size': 6, 'fp16_run': False, 'lr_decay': 0.999875, 'segment_size': 10240, 'init_lr_ratio': 1, 'warmup_epochs': 0, 'c_mel': 45, 'c_kl': 1.0, 'use_sr': True, 'max_speclen': 512, 'port': '8001', 'keep_ckpts': 3}, 'data': {'training_files': 'filelists/train.txt', 'validation_files': 'filelists/val.txt', 'max_wav_value': 32768.0, 'sampling_rate': 44100, 'filter_length': 2048, 'hop_length': 512, 'win_length': 2048, 'n_mel_channels': 80, 'mel_fmin': 0.0, 'mel_fmax': 22050}, 'model': {'inter_channels': 192, 'hidden_channels': 192, 'filter_channels': 768, 'n_heads': 2, 'n_layers': 6, 'kernel_size': 3, 'p_dropout': 0.1, 'resblock': '1', 'resblock_kernel_sizes': [3, 7, 11], 
'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'upsample_rates': [8, 8, 2, 2, 2], 'upsample_initial_channel': 512, 'upsample_kernel_sizes': [16, 16, 4, 4, 4], 'n_layers_q': 3, 'use_spectral_norm': False, 'gin_channels': 256, 'ssl_dim': 256, 'n_speakers': 200}, 'spk': {'tubaki': 0}, 'model_dir': './logs\\44k'} 2023-03-16 02:32:59,134 44k WARNING git hash values are different. cea6df30(saved) != fd4d47fd(current) 2023-03-16 02:33:01,303 44k INFO Loaded checkpoint './logs\44k\G_24000.pth' (iteration 24) 2023-03-16 02:33:01,704 44k INFO Loaded checkpoint './logs\44k\D_24000.pth' (iteration 24) 2023-03-16 02:34:23,941 44k INFO Train Epoch: 24 [17%] 2023-03-16 02:34:23,941 44k INFO Losses: [2.256161689758301, 2.353424072265625, 12.059060096740723, 26.244239807128906, 1.0722169876098633], step: 23400, lr: 9.970043085494672e-05 2023-03-16 02:35:41,180 44k INFO Train Epoch: 24 [37%] 2023-03-16 02:35:41,181 44k INFO Losses: [2.6091957092285156, 2.3599488735198975, 8.557892799377441, 19.31502914428711, 1.279722809791565], step: 23600, lr: 9.970043085494672e-05 2023-03-16 02:36:57,038 44k INFO Train Epoch: 24 [56%] 2023-03-16 02:36:57,039 44k INFO Losses: [2.4007155895233154, 2.1949691772460938, 10.770856857299805, 22.908790588378906, 1.697563886642456], step: 23800, lr: 9.970043085494672e-05 2023-03-16 02:38:12,684 44k INFO Train Epoch: 24 [76%] 2023-03-16 02:38:12,684 44k INFO Losses: [2.377910614013672, 2.5852317810058594, 10.320157051086426, 23.98256492614746, 1.788327932357788], step: 24000, lr: 9.970043085494672e-05 2023-03-16 02:38:16,905 44k INFO Saving model and optimizer state at iteration 24 to ./logs\44k\G_24000.pth 2023-03-16 02:38:17,730 44k INFO Saving model and optimizer state at iteration 24 to ./logs\44k\D_24000.pth 2023-03-16 02:39:32,971 44k INFO Train Epoch: 24 [96%] 2023-03-16 02:39:32,972 44k INFO Losses: [2.358584403991699, 2.387054204940796, 12.950794219970703, 18.737619400024414, 1.5788359642028809], step: 24200, lr: 9.970043085494672e-05 2023-03-16 02:39:50,568 44k INFO ====> Epoch: 24, cost 411.46 s 2023-03-16 02:40:56,904 44k INFO Train Epoch: 25 [16%] 2023-03-16 02:40:56,904 44k INFO Losses: [2.496358871459961, 2.3065426349639893, 5.929593086242676, 23.000940322875977, 1.3831160068511963], step: 24400, lr: 9.968796830108985e-05 2023-03-16 02:42:07,659 44k INFO Train Epoch: 25 [36%] 2023-03-16 02:42:07,660 44k INFO Losses: [2.4015889167785645, 2.514981269836426, 12.299784660339355, 23.22321128845215, 1.283858060836792], step: 24600, lr: 9.968796830108985e-05 2023-03-16 02:43:20,341 44k INFO Train Epoch: 25 [55%] 2023-03-16 02:43:20,342 44k INFO Losses: [2.4981770515441895, 2.624650239944458, 11.823126792907715, 22.215713500976562, 1.6074793338775635], step: 24800, lr: 9.968796830108985e-05 2023-03-16 02:44:32,869 44k INFO Train Epoch: 25 [75%] 2023-03-16 02:44:32,870 44k INFO Losses: [2.445613384246826, 2.2901690006256104, 9.674025535583496, 20.551361083984375, 1.2761701345443726], step: 25000, lr: 9.968796830108985e-05 2023-03-16 02:44:35,916 44k INFO Saving model and optimizer state at iteration 25 to ./logs\44k\G_25000.pth 2023-03-16 02:44:36,747 44k INFO Saving model and optimizer state at iteration 25 to ./logs\44k\D_25000.pth 2023-03-16 02:44:37,562 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_22000.pth 2023-03-16 02:44:37,595 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_22000.pth 2023-03-16 02:45:49,133 44k INFO Train Epoch: 25 [95%] 2023-03-16 02:45:49,134 44k INFO Losses: [2.241028308868408, 2.289738416671753, 14.193833351135254, 20.99007225036621, 1.113254427909851], step: 25200, lr: 9.968796830108985e-05 2023-03-16 02:46:07,502 44k INFO ====> Epoch: 25, cost 376.93 s 2023-03-16 02:47:14,029 44k INFO Train Epoch: 26 [15%] 2023-03-16 02:47:14,029 44k INFO Losses: [2.835726737976074, 2.095684051513672, 5.26635217666626, 18.43368148803711, 1.3325340747833252], step: 25400, lr: 9.967550730505221e-05 2023-03-16 02:48:27,122 44k INFO Train Epoch: 26 [35%] 2023-03-16 02:48:27,122 44k INFO Losses: [2.727473258972168, 2.2782089710235596, 5.303587913513184, 16.13831329345703, 1.2571831941604614], step: 25600, lr: 9.967550730505221e-05 2023-03-16 02:49:41,126 44k INFO Train Epoch: 26 [54%] 2023-03-16 02:49:41,127 44k INFO Losses: [2.5170865058898926, 2.3737690448760986, 7.855102062225342, 19.032155990600586, 1.5124471187591553], step: 25800, lr: 9.967550730505221e-05 2023-03-16 02:50:54,866 44k INFO Train Epoch: 26 [74%] 2023-03-16 02:50:54,866 44k INFO Losses: [2.668579578399658, 1.9757747650146484, 8.264009475708008, 18.231170654296875, 1.463350772857666], step: 26000, lr: 9.967550730505221e-05 2023-03-16 02:50:58,200 44k INFO Saving model and optimizer state at iteration 26 to ./logs\44k\G_26000.pth 2023-03-16 02:50:58,975 44k INFO Saving model and optimizer state at iteration 26 to ./logs\44k\D_26000.pth 2023-03-16 02:50:59,671 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_23000.pth 2023-03-16 02:50:59,711 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_23000.pth 2023-03-16 02:52:13,468 44k INFO Train Epoch: 26 [94%] 2023-03-16 02:52:13,468 44k INFO Losses: [2.498525381088257, 2.3848297595977783, 12.209061622619629, 19.470373153686523, 1.536488652229309], step: 26200, lr: 9.967550730505221e-05 2023-03-16 02:52:35,601 44k INFO ====> Epoch: 26, cost 388.10 s 2023-03-16 02:53:37,621 44k INFO Train Epoch: 27 [14%] 2023-03-16 02:53:37,621 44k INFO Losses: [2.4980998039245605, 2.376551628112793, 10.22693920135498, 22.858905792236328, 1.3941915035247803], step: 26400, lr: 9.966304786663908e-05 2023-03-16 02:54:50,658 44k INFO Train Epoch: 27 [34%] 2023-03-16 02:54:50,659 44k INFO Losses: [2.386996030807495, 2.3716535568237305, 10.387357711791992, 24.71827507019043, 0.903770387172699], step: 26600, lr: 9.966304786663908e-05 2023-03-16 02:56:04,367 44k INFO Train Epoch: 27 [53%] 2023-03-16 02:56:04,367 44k INFO Losses: [2.1972522735595703, 2.515505790710449, 14.599126815795898, 25.55986213684082, 1.3752248287200928], step: 26800, lr: 9.966304786663908e-05 2023-03-16 02:57:18,224 44k INFO Train Epoch: 27 [73%] 2023-03-16 02:57:18,224 44k INFO Losses: [2.4568963050842285, 2.1340017318725586, 15.656890869140625, 20.10127830505371, 0.7791872024536133], step: 27000, lr: 9.966304786663908e-05 2023-03-16 02:57:21,558 44k INFO Saving model and optimizer state at iteration 27 to ./logs\44k\G_27000.pth 2023-03-16 02:57:22,366 44k INFO Saving model and optimizer state at iteration 27 to ./logs\44k\D_27000.pth 2023-03-16 02:57:23,081 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_24000.pth 2023-03-16 02:57:23,123 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_24000.pth 2023-03-16 02:58:36,907 44k INFO Train Epoch: 27 [93%] 2023-03-16 02:58:36,907 44k INFO Losses: [2.4854960441589355, 2.510793924331665, 8.482836723327637, 17.58075714111328, 1.5889105796813965], step: 27200, lr: 9.966304786663908e-05 2023-03-16 02:59:02,679 44k INFO ====> Epoch: 27, cost 387.08 s 2023-03-16 03:00:00,805 44k INFO Train Epoch: 28 [13%] 2023-03-16 03:00:00,806 44k INFO Losses: [2.2402095794677734, 2.575019359588623, 10.688492774963379, 19.385215759277344, 1.2475273609161377], step: 27400, lr: 9.965058998565574e-05 2023-03-16 03:01:13,887 44k INFO Train Epoch: 28 [33%] 2023-03-16 03:01:13,887 44k INFO Losses: [2.5222644805908203, 2.195505380630493, 9.039466857910156, 20.48299789428711, 1.0939148664474487], step: 27600, lr: 9.965058998565574e-05 2023-03-16 03:02:27,674 44k INFO Train Epoch: 28 [52%] 2023-03-16 03:02:27,674 44k INFO Losses: [2.6181838512420654, 2.2841413021087646, 7.338778018951416, 19.517711639404297, 1.2419270277023315], step: 27800, lr: 9.965058998565574e-05 2023-03-16 03:03:41,782 44k INFO Train Epoch: 28 [72%] 2023-03-16 03:03:41,783 44k INFO Losses: [2.3593363761901855, 2.4023687839508057, 8.351311683654785, 22.773019790649414, 1.3909177780151367], step: 28000, lr: 9.965058998565574e-05 2023-03-16 03:03:45,136 44k INFO Saving model and optimizer state at iteration 28 to ./logs\44k\G_28000.pth 2023-03-16 03:03:45,864 44k INFO Saving model and optimizer state at iteration 28 to ./logs\44k\D_28000.pth 2023-03-16 03:03:46,671 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_25000.pth 2023-03-16 03:03:46,714 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_25000.pth 2023-03-16 03:05:00,434 44k INFO Train Epoch: 28 [92%] 2023-03-16 03:05:00,435 44k INFO Losses: [2.7194132804870605, 2.0416510105133057, 13.62247085571289, 22.49553871154785, 1.402324914932251], step: 28200, lr: 9.965058998565574e-05 2023-03-16 03:05:29,788 44k INFO ====> Epoch: 28, cost 387.11 s 2023-03-16 03:06:24,516 44k INFO Train Epoch: 29 [12%] 2023-03-16 03:06:24,516 44k INFO Losses: [2.457123041152954, 2.379276752471924, 10.97140884399414, 16.928377151489258, 1.2140074968338013], step: 28400, lr: 9.963813366190753e-05 2023-03-16 03:07:37,934 44k INFO Train Epoch: 29 [32%] 2023-03-16 03:07:37,934 44k INFO Losses: [2.487278938293457, 2.1815173625946045, 12.108284950256348, 17.195192337036133, 1.25114905834198], step: 28600, lr: 9.963813366190753e-05 2023-03-16 03:08:51,676 44k INFO Train Epoch: 29 [51%] 2023-03-16 03:08:51,676 44k INFO Losses: [2.5518295764923096, 2.5065460205078125, 9.113849639892578, 24.584550857543945, 1.3649288415908813], step: 28800, lr: 9.963813366190753e-05 2023-03-16 03:10:05,635 44k INFO Train Epoch: 29 [71%] 2023-03-16 03:10:05,635 44k INFO Losses: [2.5384926795959473, 2.060751438140869, 16.977296829223633, 23.40003204345703, 1.55740487575531], step: 29000, lr: 9.963813366190753e-05 2023-03-16 03:10:09,016 44k INFO Saving model and optimizer state at iteration 29 to ./logs\44k\G_29000.pth 2023-03-16 03:10:09,734 44k INFO Saving model and optimizer state at iteration 29 to ./logs\44k\D_29000.pth 2023-03-16 03:10:10,471 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_26000.pth 2023-03-16 03:10:10,513 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_26000.pth 2023-03-16 03:11:24,220 44k INFO Train Epoch: 29 [91%] 2023-03-16 03:11:24,221 44k INFO Losses: [2.338834762573242, 2.597782611846924, 13.493698120117188, 27.03204345703125, 1.4835714101791382], step: 29200, lr: 9.963813366190753e-05 2023-03-16 03:11:57,300 44k INFO ====> Epoch: 29, cost 387.51 s 2023-03-16 03:12:48,616 44k INFO Train Epoch: 30 [11%] 2023-03-16 03:12:48,617 44k INFO Losses: [2.0912258625030518, 2.7193126678466797, 14.487010955810547, 23.989543914794922, 1.3611806631088257], step: 29400, lr: 9.962567889519979e-05 2023-03-16 03:14:01,589 44k INFO Train Epoch: 30 [31%] 2023-03-16 03:14:01,590 44k INFO Losses: [2.4132516384124756, 2.395432472229004, 13.655375480651855, 25.230676651000977, 1.7688844203948975], step: 29600, lr: 9.962567889519979e-05 2023-03-16 03:15:15,263 44k INFO Train Epoch: 30 [50%] 2023-03-16 03:15:15,263 44k INFO Losses: [2.476062297821045, 2.160902261734009, 12.101600646972656, 23.8223819732666, 1.5570317506790161], step: 29800, lr: 9.962567889519979e-05 2023-03-16 03:16:29,180 44k INFO Train Epoch: 30 [70%] 2023-03-16 03:16:29,181 44k INFO Losses: [2.2737932205200195, 2.447437286376953, 12.645472526550293, 23.222970962524414, 1.4255510568618774], step: 30000, lr: 9.962567889519979e-05 2023-03-16 03:16:32,484 44k INFO Saving model and optimizer state at iteration 30 to ./logs\44k\G_30000.pth 2023-03-16 03:16:33,208 44k INFO Saving model and optimizer state at iteration 30 to ./logs\44k\D_30000.pth 2023-03-16 03:16:33,917 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_27000.pth 2023-03-16 03:16:33,958 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_27000.pth 2023-03-16 03:17:47,552 44k INFO Train Epoch: 30 [90%] 2023-03-16 03:17:47,553 44k INFO Losses: [2.823334217071533, 2.1746232509613037, 15.930810928344727, 22.20973014831543, 1.2233502864837646], step: 30200, lr: 9.962567889519979e-05 2023-03-16 03:18:24,416 44k INFO ====> Epoch: 30, cost 387.12 s 2023-03-16 03:19:11,703 44k INFO Train Epoch: 31 [10%] 2023-03-16 03:19:11,704 44k INFO Losses: [2.539564847946167, 2.2286391258239746, 9.046024322509766, 20.115903854370117, 1.2273197174072266], step: 30400, lr: 9.961322568533789e-05 2023-03-16 03:20:24,725 44k INFO Train Epoch: 31 [30%] 2023-03-16 03:20:24,725 44k INFO Losses: [2.2585904598236084, 2.4984352588653564, 9.527566909790039, 22.968551635742188, 1.888730525970459], step: 30600, lr: 9.961322568533789e-05 2023-03-16 03:21:38,253 44k INFO Train Epoch: 31 [50%] 2023-03-16 03:21:38,253 44k INFO Losses: [2.559405565261841, 2.288482427597046, 11.696929931640625, 24.3270206451416, 1.4915549755096436], step: 30800, lr: 9.961322568533789e-05 2023-03-16 03:22:52,252 44k INFO Train Epoch: 31 [69%] 2023-03-16 03:22:52,253 44k INFO Losses: [2.555448532104492, 2.073540210723877, 10.545680046081543, 15.512166976928711, 1.1244412660598755], step: 31000, lr: 9.961322568533789e-05 2023-03-16 03:22:55,549 44k INFO Saving model and optimizer state at iteration 31 to ./logs\44k\G_31000.pth 2023-03-16 03:22:56,253 44k INFO Saving model and optimizer state at iteration 31 to ./logs\44k\D_31000.pth 2023-03-16 03:22:56,984 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_28000.pth 2023-03-16 03:22:57,025 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_28000.pth 2023-03-16 03:24:10,808 44k INFO Train Epoch: 31 [89%] 2023-03-16 03:24:10,808 44k INFO Losses: [2.63942813873291, 2.0896592140197754, 10.176759719848633, 18.490787506103516, 1.4620819091796875], step: 31200, lr: 9.961322568533789e-05 2023-03-16 03:24:51,256 44k INFO ====> Epoch: 31, cost 386.84 s 2023-03-16 03:25:34,836 44k INFO Train Epoch: 32 [9%] 2023-03-16 03:25:34,836 44k INFO Losses: [2.2341837882995605, 2.235257387161255, 14.818204879760742, 24.204313278198242, 1.6965343952178955], step: 31400, lr: 9.960077403212722e-05 2023-03-16 03:26:48,133 44k INFO Train Epoch: 32 [29%] 2023-03-16 03:26:48,134 44k INFO Losses: [2.3256654739379883, 2.2385029792785645, 10.850203514099121, 19.550249099731445, 0.9910757541656494], step: 31600, lr: 9.960077403212722e-05 2023-03-16 03:28:01,543 44k INFO Train Epoch: 32 [49%] 2023-03-16 03:28:01,543 44k INFO Losses: [2.6965925693511963, 2.0323081016540527, 9.872020721435547, 19.10701560974121, 1.7267124652862549], step: 31800, lr: 9.960077403212722e-05 2023-03-16 03:29:15,628 44k INFO Train Epoch: 32 [68%] 2023-03-16 03:29:15,628 44k INFO Losses: [2.812443256378174, 2.2365686893463135, 5.367918968200684, 14.458459854125977, 1.018988013267517], step: 32000, lr: 9.960077403212722e-05 2023-03-16 03:29:19,006 44k INFO Saving model and optimizer state at iteration 32 to ./logs\44k\G_32000.pth 2023-03-16 03:29:19,704 44k INFO Saving model and optimizer state at iteration 32 to ./logs\44k\D_32000.pth 2023-03-16 03:29:20,410 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_29000.pth 2023-03-16 03:29:20,451 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_29000.pth 2023-03-16 03:30:34,117 44k INFO Train Epoch: 32 [88%] 2023-03-16 03:30:34,118 44k INFO Losses: [2.5204293727874756, 2.237941026687622, 10.368865966796875, 15.533611297607422, 1.5804290771484375], step: 32200, lr: 9.960077403212722e-05 2023-03-16 03:31:18,288 44k INFO ====> Epoch: 32, cost 387.03 s 2023-03-16 03:31:58,186 44k INFO Train Epoch: 33 [8%] 2023-03-16 03:31:58,186 44k INFO Losses: [2.374772548675537, 2.4078075885772705, 10.99034309387207, 15.05627727508545, 1.443055510520935], step: 32400, lr: 9.95883239353732e-05 2023-03-16 03:33:11,579 44k INFO Train Epoch: 33 [28%] 2023-03-16 03:33:11,579 44k INFO Losses: [2.669719696044922, 2.046363353729248, 5.2895917892456055, 17.17399787902832, 1.3187588453292847], step: 32600, lr: 9.95883239353732e-05 2023-03-16 03:34:25,000 44k INFO Train Epoch: 33 [48%] 2023-03-16 03:34:25,001 44k INFO Losses: [2.5618042945861816, 2.2721028327941895, 8.22020435333252, 17.291372299194336, 1.4954009056091309], step: 32800, lr: 9.95883239353732e-05 2023-03-16 03:35:38,973 44k INFO Train Epoch: 33 [67%] 2023-03-16 03:35:38,974 44k INFO Losses: [2.4291791915893555, 2.1178977489471436, 8.579367637634277, 19.430696487426758, 0.9419689178466797], step: 33000, lr: 9.95883239353732e-05 2023-03-16 03:35:42,421 44k INFO Saving model and optimizer state at iteration 33 to ./logs\44k\G_33000.pth 2023-03-16 03:35:43,147 44k INFO Saving model and optimizer state at iteration 33 to ./logs\44k\D_33000.pth 2023-03-16 03:35:43,932 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_30000.pth 2023-03-16 03:35:43,981 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_30000.pth 2023-03-16 03:36:57,438 44k INFO Train Epoch: 33 [87%] 2023-03-16 03:36:57,438 44k INFO Losses: [2.4874870777130127, 2.051943063735962, 12.813075065612793, 20.92263412475586, 1.8126888275146484], step: 33200, lr: 9.95883239353732e-05 2023-03-16 03:37:45,128 44k INFO ====> Epoch: 33, cost 386.84 s 2023-03-16 03:38:21,472 44k INFO Train Epoch: 34 [7%] 2023-03-16 03:38:21,472 44k INFO Losses: [2.4531877040863037, 2.040008306503296, 10.012992858886719, 20.872087478637695, 1.443730354309082], step: 33400, lr: 9.957587539488128e-05 2023-03-16 03:39:34,641 44k INFO Train Epoch: 34 [27%] 2023-03-16 03:39:34,641 44k INFO Losses: [2.458981513977051, 2.0966765880584717, 9.663887023925781, 19.34764862060547, 1.2799453735351562], step: 33600, lr: 9.957587539488128e-05 2023-03-16 03:40:47,808 44k INFO Train Epoch: 34 [47%] 2023-03-16 03:40:47,809 44k INFO Losses: [2.297823429107666, 2.4502146244049072, 13.018662452697754, 23.71843910217285, 1.430517315864563], step: 33800, lr: 9.957587539488128e-05 2023-03-16 03:42:01,660 44k INFO Train Epoch: 34 [66%] 2023-03-16 03:42:01,660 44k INFO Losses: [2.5848140716552734, 2.2457940578460693, 8.234502792358398, 18.582042694091797, 1.1533933877944946], step: 34000, lr: 9.957587539488128e-05 2023-03-16 03:42:04,945 44k INFO Saving model and optimizer state at iteration 34 to ./logs\44k\G_34000.pth 2023-03-16 03:42:05,727 44k INFO Saving model and optimizer state at iteration 34 to ./logs\44k\D_34000.pth 2023-03-16 03:42:06,469 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_31000.pth 2023-03-16 03:42:06,511 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_31000.pth 2023-03-16 03:43:19,982 44k INFO Train Epoch: 34 [86%] 2023-03-16 03:43:19,982 44k INFO Losses: [2.715031862258911, 2.2611422538757324, 7.977295875549316, 21.879413604736328, 1.2521157264709473], step: 34200, lr: 9.957587539488128e-05 2023-03-16 03:44:11,577 44k INFO ====> Epoch: 34, cost 386.45 s 2023-03-16 03:44:42,734 44k INFO Train Epoch: 35 [6%] 2023-03-16 03:44:42,734 44k INFO Losses: [2.309983730316162, 2.459258556365967, 9.156838417053223, 22.34313201904297, 1.5120102167129517], step: 34400, lr: 9.956342841045691e-05 2023-03-16 03:56:25,039 44k INFO {'train': {'log_interval': 200, 'eval_interval': 1000, 'seed': 1234, 'epochs': 100, 'learning_rate': 0.0001, 'betas': [0.8, 0.99], 'eps': 1e-09, 'batch_size': 6, 'fp16_run': False, 'lr_decay': 0.999875, 'segment_size': 10240, 'init_lr_ratio': 1, 'warmup_epochs': 0, 'c_mel': 45, 'c_kl': 1.0, 'use_sr': True, 'max_speclen': 512, 'port': '8001', 'keep_ckpts': 3}, 'data': {'training_files': 'filelists/train.txt', 'validation_files': 'filelists/val.txt', 'max_wav_value': 32768.0, 'sampling_rate': 44100, 'filter_length': 2048, 'hop_length': 512, 'win_length': 2048, 'n_mel_channels': 80, 'mel_fmin': 0.0, 'mel_fmax': 22050}, 'model': {'inter_channels': 192, 'hidden_channels': 192, 'filter_channels': 768, 'n_heads': 2, 'n_layers': 6, 'kernel_size': 3, 'p_dropout': 0.1, 'resblock': '1', 'resblock_kernel_sizes': [3, 7, 11], 'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'upsample_rates': [8, 8, 2, 2, 2], 'upsample_initial_channel': 512, 'upsample_kernel_sizes': [16, 16, 4, 4, 4], 'n_layers_q': 3, 'use_spectral_norm': False, 'gin_channels': 256, 'ssl_dim': 256, 'n_speakers': 200}, 'spk': {'tubaki': 0}, 'model_dir': './logs\\44k'} 2023-03-16 03:56:25,067 44k WARNING git hash values are different. 
cea6df30(saved) != fd4d47fd(current) 2023-03-16 03:56:28,185 44k INFO Loaded checkpoint './logs\44k\G_34000.pth' (iteration 34) 2023-03-16 03:56:28,693 44k INFO Loaded checkpoint './logs\44k\D_34000.pth' (iteration 34) 2023-03-16 03:57:13,028 44k INFO Train Epoch: 34 [7%] 2023-03-16 03:57:13,028 44k INFO Losses: [2.735356092453003, 2.371401309967041, 8.512287139892578, 20.104995727539062, 1.436355710029602], step: 33400, lr: 9.956342841045691e-05 2023-03-16 03:58:28,254 44k INFO Train Epoch: 34 [27%] 2023-03-16 03:58:28,254 44k INFO Losses: [2.534684658050537, 2.2156550884246826, 4.7590742111206055, 17.158939361572266, 1.6694635152816772], step: 33600, lr: 9.956342841045691e-05 2023-03-16 03:59:41,868 44k INFO Train Epoch: 34 [47%] 2023-03-16 03:59:41,868 44k INFO Losses: [2.4582016468048096, 2.1844825744628906, 7.455877780914307, 20.30893325805664, 1.4267646074295044], step: 33800, lr: 9.956342841045691e-05 2023-03-16 04:00:54,885 44k INFO Train Epoch: 34 [66%] 2023-03-16 04:00:54,886 44k INFO Losses: [2.615025758743286, 2.116218090057373, 10.086602210998535, 17.498310089111328, 1.3476744890213013], step: 34000, lr: 9.956342841045691e-05 2023-03-16 04:01:03,984 44k INFO Saving model and optimizer state at iteration 34 to ./logs\44k\G_34000.pth 2023-03-16 04:01:04,661 44k INFO Saving model and optimizer state at iteration 34 to ./logs\44k\D_34000.pth 2023-03-16 04:02:18,655 44k INFO Train Epoch: 34 [86%] 2023-03-16 04:02:18,656 44k INFO Losses: [2.6912598609924316, 2.4489948749542236, 9.816937446594238, 21.75649642944336, 1.592055320739746], step: 34200, lr: 9.956342841045691e-05 2023-03-16 04:03:12,182 44k INFO ====> Epoch: 34, cost 407.14 s 2023-03-16 04:03:43,397 44k INFO Train Epoch: 35 [6%] 2023-03-16 04:03:43,397 44k INFO Losses: [2.5148420333862305, 2.4824299812316895, 8.655343055725098, 21.792661666870117, 1.4297841787338257], step: 34400, lr: 9.95509829819056e-05 2023-03-16 04:04:55,775 44k INFO Train Epoch: 35 [26%] 2023-03-16 04:04:55,775 44k INFO Losses: [2.6354150772094727, 2.1976029872894287, 11.027823448181152, 18.251169204711914, 2.0594427585601807], step: 34600, lr: 9.95509829819056e-05 2023-03-16 04:06:07,308 44k INFO Train Epoch: 35 [46%] 2023-03-16 04:06:07,309 44k INFO Losses: [2.409135580062866, 2.505441665649414, 13.478713989257812, 21.26117706298828, 1.4347319602966309], step: 34800, lr: 9.95509829819056e-05 2023-03-16 04:07:19,372 44k INFO Train Epoch: 35 [65%] 2023-03-16 04:07:19,372 44k INFO Losses: [2.294567584991455, 2.7145369052886963, 11.174776077270508, 24.157760620117188, 1.6589192152023315], step: 35000, lr: 9.95509829819056e-05 2023-03-16 04:07:27,037 44k INFO Saving model and optimizer state at iteration 35 to ./logs\44k\G_35000.pth 2023-03-16 04:07:27,703 44k INFO Saving model and optimizer state at iteration 35 to ./logs\44k\D_35000.pth 2023-03-16 04:07:28,279 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_32000.pth 2023-03-16 04:07:28,280 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_32000.pth 2023-03-16 04:08:39,711 44k INFO Train Epoch: 35 [85%] 2023-03-16 04:08:39,711 44k INFO Losses: [2.0827527046203613, 2.782172441482544, 11.539838790893555, 23.830232620239258, 1.5794413089752197], step: 35200, lr: 9.95509829819056e-05 2023-03-16 04:09:33,218 44k INFO ====> Epoch: 35, cost 381.04 s 2023-03-16 04:10:00,651 44k INFO Train Epoch: 36 [5%] 2023-03-16 04:10:00,651 44k INFO Losses: [2.521193027496338, 2.195659637451172, 15.156967163085938, 21.15805435180664, 1.6199477910995483], step: 35400, lr: 9.953853910903285e-05 2023-03-16 04:11:12,061 44k INFO Train Epoch: 36 [25%] 2023-03-16 04:11:12,061 44k INFO Losses: [2.5079658031463623, 2.203470230102539, 10.543901443481445, 20.68781852722168, 1.1460888385772705], step: 35600, lr: 9.953853910903285e-05 2023-03-16 04:12:23,541 44k INFO Train Epoch: 36 [45%] 2023-03-16 04:12:23,541 44k INFO Losses: [2.470787763595581, 2.757784605026245, 9.826325416564941, 20.834917068481445, 1.3395824432373047], step: 35800, lr: 9.953853910903285e-05 2023-03-16 04:13:35,843 44k INFO Train Epoch: 36 [64%] 2023-03-16 04:13:35,844 44k INFO Losses: [2.5167808532714844, 2.0204222202301025, 14.0824556350708, 23.684534072875977, 1.1264201402664185], step: 36000, lr: 9.953853910903285e-05 2023-03-16 04:13:38,981 44k INFO Saving model and optimizer state at iteration 36 to ./logs\44k\G_36000.pth 2023-03-16 04:13:39,704 44k INFO Saving model and optimizer state at iteration 36 to ./logs\44k\D_36000.pth 2023-03-16 04:13:40,279 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_33000.pth 2023-03-16 04:13:40,280 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_33000.pth 2023-03-16 04:14:51,876 44k INFO Train Epoch: 36 [84%] 2023-03-16 04:14:51,877 44k INFO Losses: [2.573528289794922, 2.017301321029663, 9.839195251464844, 22.667755126953125, 1.5985040664672852], step: 36200, lr: 9.953853910903285e-05 2023-03-16 04:15:49,391 44k INFO ====> Epoch: 36, cost 376.17 s 2023-03-16 04:16:13,101 44k INFO Train Epoch: 37 [4%] 2023-03-16 04:16:13,101 44k INFO Losses: [2.152750015258789, 2.6431026458740234, 10.045553207397461, 23.544601440429688, 1.5016255378723145], step: 36400, lr: 9.952609679164422e-05 2023-03-16 04:17:24,712 44k INFO Train Epoch: 37 [24%] 2023-03-16 04:17:24,713 44k INFO Losses: [2.5414018630981445, 2.1730339527130127, 8.318066596984863, 23.514345169067383, 1.5246663093566895], step: 36600, lr: 9.952609679164422e-05 2023-03-16 04:18:36,231 44k INFO Train Epoch: 37 [44%] 2023-03-16 04:18:36,231 44k INFO Losses: [2.5094096660614014, 2.0330300331115723, 10.376587867736816, 20.111948013305664, 1.3331259489059448], step: 36800, lr: 9.952609679164422e-05 2023-03-16 04:19:48,528 44k INFO Train Epoch: 37 [63%] 2023-03-16 04:19:48,528 44k INFO Losses: [2.596001148223877, 2.4675142765045166, 11.434625625610352, 23.16342544555664, 1.3103680610656738], step: 37000, lr: 9.952609679164422e-05 2023-03-16 04:19:51,509 44k INFO Saving model and optimizer state at iteration 37 to ./logs\44k\G_37000.pth 2023-03-16 04:19:52,254 44k INFO Saving model and optimizer state at iteration 37 to ./logs\44k\D_37000.pth 2023-03-16 04:19:52,845 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_34000.pth 2023-03-16 04:19:52,867 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_34000.pth 2023-03-16 04:21:04,464 44k INFO Train Epoch: 37 [83%] 2023-03-16 04:21:04,464 44k INFO Losses: [2.4715142250061035, 2.613856077194214, 8.718017578125, 25.486440658569336, 1.9496705532073975], step: 37200, lr: 9.952609679164422e-05 2023-03-16 04:22:05,480 44k INFO ====> Epoch: 37, cost 376.09 s 2023-03-16 04:22:25,761 44k INFO Train Epoch: 38 [3%] 2023-03-16 04:22:25,761 44k INFO Losses: [2.341283082962036, 2.2758655548095703, 9.830924987792969, 21.075105667114258, 1.4869089126586914], step: 37400, lr: 9.951365602954526e-05 2023-03-16 04:23:37,443 44k INFO Train Epoch: 38 [23%] 2023-03-16 04:23:37,443 44k INFO Losses: [2.5156164169311523, 2.408754825592041, 8.739418983459473, 21.17428970336914, 1.7206997871398926], step: 37600, lr: 9.951365602954526e-05 2023-03-16 04:24:48,931 44k INFO Train Epoch: 38 [43%] 2023-03-16 04:24:48,931 44k INFO Losses: [2.443415880203247, 2.293189287185669, 7.390056610107422, 20.74066734313965, 0.9931838512420654], step: 37800, lr: 9.951365602954526e-05 2023-03-16 04:26:01,295 44k INFO Train Epoch: 38 [62%] 2023-03-16 04:26:01,295 44k INFO Losses: [2.6905035972595215, 2.138937473297119, 12.016372680664062, 22.26052474975586, 1.4739972352981567], step: 38000, lr: 9.951365602954526e-05 2023-03-16 04:26:04,374 44k INFO Saving model and optimizer state at iteration 38 to ./logs\44k\G_38000.pth 2023-03-16 04:26:05,058 44k INFO Saving model and optimizer state at iteration 38 to ./logs\44k\D_38000.pth 2023-03-16 04:26:05,653 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_35000.pth 2023-03-16 04:26:05,674 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_35000.pth 2023-03-16 04:27:17,016 44k INFO Train Epoch: 38 [82%] 2023-03-16 04:27:17,017 44k INFO Losses: [2.5242133140563965, 2.4488563537597656, 8.857050895690918, 20.094764709472656, 1.6617921590805054], step: 38200, lr: 9.951365602954526e-05 2023-03-16 04:28:21,497 44k INFO ====> Epoch: 38, cost 376.02 s 2023-03-16 04:28:37,961 44k INFO Train Epoch: 39 [2%] 2023-03-16 04:28:37,961 44k INFO Losses: [2.290692090988159, 2.2121002674102783, 12.79197883605957, 22.22524070739746, 1.650564432144165], step: 38400, lr: 9.950121682254156e-05 2023-03-16 04:29:49,570 44k INFO Train Epoch: 39 [22%] 2023-03-16 04:29:49,571 44k INFO Losses: [2.4197869300842285, 2.445948600769043, 8.676871299743652, 21.672258377075195, 1.3894102573394775], step: 38600, lr: 9.950121682254156e-05 2023-03-16 04:31:00,634 44k INFO Train Epoch: 39 [42%] 2023-03-16 04:31:00,634 44k INFO Losses: [2.6619246006011963, 2.2307937145233154, 7.426494598388672, 16.014183044433594, 1.3580684661865234], step: 38800, lr: 9.950121682254156e-05 2023-03-16 04:32:12,655 44k INFO Train Epoch: 39 [61%] 2023-03-16 04:32:12,655 44k INFO Losses: [2.607482433319092, 2.601162910461426, 13.646580696105957, 22.15692901611328, 1.360393762588501], step: 39000, lr: 9.950121682254156e-05 2023-03-16 04:32:15,631 44k INFO Saving model and optimizer state at iteration 39 to ./logs\44k\G_39000.pth 2023-03-16 04:32:16,350 44k INFO Saving model and optimizer state at iteration 39 to ./logs\44k\D_39000.pth 2023-03-16 04:32:16,955 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_36000.pth 2023-03-16 04:32:16,976 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_36000.pth 2023-03-16 04:33:28,372 44k INFO Train Epoch: 39 [81%] 2023-03-16 04:33:28,372 44k INFO Losses: [2.32938289642334, 2.7104849815368652, 13.814224243164062, 23.792015075683594, 1.3935807943344116], step: 39200, lr: 9.950121682254156e-05 2023-03-16 04:34:36,496 44k INFO ====> Epoch: 39, cost 375.00 s 2023-03-16 04:34:49,239 44k INFO Train Epoch: 40 [1%] 2023-03-16 04:34:49,239 44k INFO Losses: [2.365050792694092, 2.383535385131836, 10.685169219970703, 23.14546012878418, 1.409051775932312], step: 39400, lr: 9.948877917043875e-05 2023-03-16 04:36:00,914 44k INFO Train Epoch: 40 [21%] 2023-03-16 04:36:00,914 44k INFO Losses: [2.567038059234619, 2.0466129779815674, 7.52158784866333, 16.795089721679688, 1.6113615036010742], step: 39600, lr: 9.948877917043875e-05 2023-03-16 04:37:11,903 44k INFO Train Epoch: 40 [41%] 2023-03-16 04:37:11,903 44k INFO Losses: [2.440204381942749, 2.3316915035247803, 8.64145278930664, 20.47035026550293, 1.5951958894729614], step: 39800, lr: 9.948877917043875e-05 2023-03-16 04:38:24,032 44k INFO Train Epoch: 40 [60%] 2023-03-16 04:38:24,032 44k INFO Losses: [2.4474246501922607, 2.59323787689209, 10.797174453735352, 21.607683181762695, 1.3864877223968506], step: 40000, lr: 9.948877917043875e-05 2023-03-16 04:38:27,100 44k INFO Saving model and optimizer state at iteration 40 to ./logs\44k\G_40000.pth 2023-03-16 04:38:27,780 44k INFO Saving model and optimizer state at iteration 40 to ./logs\44k\D_40000.pth 2023-03-16 04:38:28,392 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_37000.pth 2023-03-16 04:38:28,413 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_37000.pth 2023-03-16 04:39:39,710 44k INFO Train Epoch: 40 [80%] 2023-03-16 04:39:39,711 44k INFO Losses: [2.583094358444214, 2.0790019035339355, 5.435317039489746, 15.86678409576416, 1.1569397449493408], step: 40200, lr: 9.948877917043875e-05 2023-03-16 04:40:51,284 44k INFO ====> Epoch: 40, cost 374.79 s 2023-03-16 04:41:00,612 44k INFO Train Epoch: 41 [0%] 2023-03-16 04:41:00,613 44k INFO Losses: [2.3359639644622803, 2.442025661468506, 10.433860778808594, 24.25275993347168, 1.335142731666565], step: 40400, lr: 9.947634307304244e-05 2023-03-16 04:42:12,226 44k INFO Train Epoch: 41 [20%] 2023-03-16 04:42:12,226 44k INFO Losses: [2.4404492378234863, 2.7170982360839844, 10.853322982788086, 20.1866512298584, 1.4084573984146118], step: 40600, lr: 9.947634307304244e-05 2023-03-16 04:43:23,029 44k INFO Train Epoch: 41 [40%] 2023-03-16 04:43:23,029 44k INFO Losses: [2.2509171962738037, 2.40610933303833, 11.377174377441406, 22.095745086669922, 1.1475670337677002], step: 40800, lr: 9.947634307304244e-05 2023-03-16 04:44:35,235 44k INFO Train Epoch: 41 [59%] 2023-03-16 04:44:35,236 44k INFO Losses: [2.562128782272339, 2.2492477893829346, 11.298260688781738, 20.017236709594727, 1.6544709205627441], step: 41000, lr: 9.947634307304244e-05 2023-03-16 04:44:38,347 44k INFO Saving model and optimizer state at iteration 41 to ./logs\44k\G_41000.pth 2023-03-16 04:44:39,003 44k INFO Saving model and optimizer state at iteration 41 to ./logs\44k\D_41000.pth 2023-03-16 04:44:39,610 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_38000.pth 2023-03-16 04:44:39,633 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_38000.pth 2023-03-16 04:45:50,945 44k INFO Train Epoch: 41 [79%] 2023-03-16 04:45:50,946 44k INFO Losses: [2.472869396209717, 2.203312873840332, 8.827527046203613, 20.363361358642578, 1.292399287223816], step: 41200, lr: 9.947634307304244e-05 2023-03-16 04:47:02,743 44k INFO Train Epoch: 41 [99%] 2023-03-16 04:47:02,743 44k INFO Losses: [2.744673252105713, 2.2253713607788086, 10.198305130004883, 20.45889663696289, 1.52680504322052], step: 41400, lr: 9.947634307304244e-05 2023-03-16 04:47:06,255 44k INFO ====> Epoch: 41, cost 374.97 s 2023-03-16 04:48:23,440 44k INFO Train Epoch: 42 [19%] 2023-03-16 04:48:23,440 44k INFO Losses: [2.550706386566162, 2.255624294281006, 6.949566841125488, 18.170839309692383, 1.3795970678329468], step: 41600, lr: 9.94639085301583e-05 2023-03-16 04:49:34,276 44k INFO Train Epoch: 42 [39%] 2023-03-16 04:49:34,277 44k INFO Losses: [2.2660303115844727, 2.727027177810669, 13.636224746704102, 25.189931869506836, 1.6753475666046143], step: 41800, lr: 9.94639085301583e-05 2023-03-16 04:50:46,457 44k INFO Train Epoch: 42 [58%] 2023-03-16 04:50:46,457 44k INFO Losses: [2.224123954772949, 2.409886121749878, 14.216771125793457, 22.546138763427734, 1.386003017425537], step: 42000, lr: 9.94639085301583e-05 2023-03-16 04:50:49,512 44k INFO Saving model and optimizer state at iteration 42 to ./logs\44k\G_42000.pth 2023-03-16 04:50:50,172 44k INFO Saving model and optimizer state at iteration 42 to ./logs\44k\D_42000.pth 2023-03-16 04:50:50,774 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_39000.pth 2023-03-16 04:50:50,796 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_39000.pth 2023-03-16 04:52:02,086 44k INFO Train Epoch: 42 [78%] 2023-03-16 04:52:02,087 44k INFO Losses: [2.5909955501556396, 2.0392813682556152, 7.411962032318115, 16.3780517578125, 0.9410098195075989], step: 42200, lr: 9.94639085301583e-05 2023-03-16 04:53:14,015 44k INFO Train Epoch: 42 [98%] 2023-03-16 04:53:14,015 44k INFO Losses: [2.4276227951049805, 2.394519090652466, 10.991179466247559, 20.983442306518555, 1.366299033164978], step: 42400, lr: 9.94639085301583e-05 2023-03-16 04:53:21,121 44k INFO ====> Epoch: 42, cost 374.87 s 2023-03-16 04:54:34,761 44k INFO Train Epoch: 43 [18%] 2023-03-16 04:54:34,761 44k INFO Losses: [2.634112596511841, 2.134216547012329, 8.934220314025879, 17.885133743286133, 0.9130648970603943], step: 42600, lr: 9.945147554159202e-05 2023-03-16 04:55:45,828 44k INFO Train Epoch: 43 [38%] 2023-03-16 04:55:45,829 44k INFO Losses: [2.412219524383545, 2.380774974822998, 13.896178245544434, 23.119586944580078, 1.3717542886734009], step: 42800, lr: 9.945147554159202e-05 2023-03-16 04:56:58,001 44k INFO Train Epoch: 43 [57%] 2023-03-16 04:56:58,002 44k INFO Losses: [2.2736198902130127, 2.6207873821258545, 10.55638313293457, 21.289901733398438, 1.6902225017547607], step: 43000, lr: 9.945147554159202e-05 2023-03-16 04:57:01,064 44k INFO Saving model and optimizer state at iteration 43 to ./logs\44k\G_43000.pth 2023-03-16 04:57:01,770 44k INFO Saving model and optimizer state at iteration 43 to ./logs\44k\D_43000.pth 2023-03-16 04:57:02,390 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_40000.pth 2023-03-16 04:57:02,412 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_40000.pth 2023-03-16 04:58:13,839 44k INFO Train Epoch: 43 [77%] 2023-03-16 04:58:13,839 44k INFO Losses: [2.2816877365112305, 2.42964768409729, 11.332585334777832, 19.005067825317383, 1.7012441158294678], step: 43200, lr: 9.945147554159202e-05 2023-03-16 04:59:25,527 44k INFO Train Epoch: 43 [97%] 2023-03-16 04:59:25,527 44k INFO Losses: [2.4685380458831787, 2.1541407108306885, 11.15649700164795, 22.170900344848633, 1.3258031606674194], step: 43400, lr: 9.945147554159202e-05 2023-03-16 04:59:36,305 44k INFO ====> Epoch: 43, cost 375.18 s 2023-03-16 05:00:46,383 44k INFO Train Epoch: 44 [17%] 2023-03-16 05:00:46,383 44k INFO Losses: [2.4514083862304688, 2.2304863929748535, 8.855379104614258, 22.2222957611084, 1.8094515800476074], step: 43600, lr: 9.943904410714931e-05 2023-03-16 05:01:57,339 44k INFO Train Epoch: 44 [37%] 2023-03-16 05:01:57,340 44k INFO Losses: [2.5272789001464844, 2.2021329402923584, 9.52602767944336, 17.716938018798828, 0.8632234930992126], step: 43800, lr: 9.943904410714931e-05 2023-03-16 05:03:09,443 44k INFO Train Epoch: 44 [56%] 2023-03-16 05:03:09,444 44k INFO Losses: [2.407869815826416, 2.35640287399292, 10.980133056640625, 22.32611846923828, 1.5755138397216797], step: 44000, lr: 9.943904410714931e-05 2023-03-16 05:03:12,445 44k INFO Saving model and optimizer state at iteration 44 to ./logs\44k\G_44000.pth 2023-03-16 05:03:13,153 44k INFO Saving model and optimizer state at iteration 44 to ./logs\44k\D_44000.pth 2023-03-16 05:03:13,749 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_41000.pth 2023-03-16 05:03:13,771 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_41000.pth 2023-03-16 05:04:25,111 44k INFO Train Epoch: 44 [76%] 2023-03-16 05:04:25,112 44k INFO Losses: [2.3835160732269287, 2.389317512512207, 10.71520709991455, 18.310638427734375, 1.1260182857513428], step: 44200, lr: 9.943904410714931e-05 2023-03-16 05:05:36,803 44k INFO Train Epoch: 44 [96%] 2023-03-16 05:05:36,803 44k INFO Losses: [2.4367361068725586, 2.370126247406006, 12.113393783569336, 21.660015106201172, 1.485506296157837], step: 44400, lr: 9.943904410714931e-05 2023-03-16 05:05:51,175 44k INFO ====> Epoch: 44, cost 374.87 s 2023-03-16 05:06:57,800 44k INFO Train Epoch: 45 [16%] 2023-03-16 05:06:57,800 44k INFO Losses: [2.4158332347869873, 2.3154475688934326, 9.379013061523438, 23.635696411132812, 1.3461123704910278], step: 44600, lr: 9.942661422663591e-05 2023-03-16 05:08:08,640 44k INFO Train Epoch: 45 [36%] 2023-03-16 05:08:08,641 44k INFO Losses: [2.5560221672058105, 2.159437894821167, 13.100471496582031, 24.253488540649414, 1.6214666366577148], step: 44800, lr: 9.942661422663591e-05 2023-03-16 05:09:20,606 44k INFO Train Epoch: 45 [55%] 2023-03-16 05:09:20,606 44k INFO Losses: [2.4073994159698486, 2.340664863586426, 9.460436820983887, 22.38808250427246, 1.7057092189788818], step: 45000, lr: 9.942661422663591e-05 2023-03-16 05:09:23,659 44k INFO Saving model and optimizer state at iteration 45 to ./logs\44k\G_45000.pth 2023-03-16 05:09:24,361 44k INFO Saving model and optimizer state at iteration 45 to ./logs\44k\D_45000.pth 2023-03-16 05:09:24,972 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_42000.pth 2023-03-16 05:09:24,995 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_42000.pth 2023-03-16 05:10:36,382 44k INFO Train Epoch: 45 [75%] 2023-03-16 05:10:36,382 44k INFO Losses: [2.450536012649536, 2.5554616451263428, 10.579137802124023, 24.643321990966797, 1.5360589027404785], step: 45200, lr: 9.942661422663591e-05 2023-03-16 05:11:48,146 44k INFO Train Epoch: 45 [95%] 2023-03-16 05:11:48,147 44k INFO Losses: [2.4385368824005127, 2.5642099380493164, 11.95130729675293, 21.388864517211914, 1.2218255996704102], step: 45400, lr: 9.942661422663591e-05 2023-03-16 05:12:06,027 44k INFO ====> Epoch: 45, cost 374.85 s 2023-03-16 05:13:09,106 44k INFO Train Epoch: 46 [15%] 2023-03-16 05:13:09,107 44k INFO Losses: [2.6866252422332764, 2.512784957885742, 6.735740661621094, 18.387805938720703, 0.9726436734199524], step: 45600, lr: 9.941418589985758e-05 2023-03-16 05:14:19,996 44k INFO Train Epoch: 46 [35%] 2023-03-16 05:14:19,996 44k INFO Losses: [2.324293613433838, 2.413609027862549, 8.455078125, 20.085384368896484, 1.1986966133117676], step: 45800, lr: 9.941418589985758e-05 2023-03-16 05:15:31,905 44k INFO Train Epoch: 46 [54%] 2023-03-16 05:15:31,906 44k INFO Losses: [2.4402267932891846, 2.2198450565338135, 8.886374473571777, 23.751253128051758, 1.5070862770080566], step: 46000, lr: 9.941418589985758e-05 2023-03-16 05:15:34,933 44k INFO Saving model and optimizer state at iteration 46 to ./logs\44k\G_46000.pth 2023-03-16 05:15:35,634 44k INFO Saving model and optimizer state at iteration 46 to ./logs\44k\D_46000.pth 2023-03-16 05:15:36,254 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_43000.pth 2023-03-16 05:15:36,276 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_43000.pth 2023-03-16 05:16:47,601 44k INFO Train Epoch: 46 [74%] 2023-03-16 05:16:47,601 44k INFO Losses: [2.329178810119629, 2.706017017364502, 9.56794548034668, 15.50391960144043, 0.9271169304847717], step: 46200, lr: 9.941418589985758e-05 2023-03-16 05:17:59,261 44k INFO Train Epoch: 46 [94%] 2023-03-16 05:17:59,261 44k INFO Losses: [2.6961116790771484, 1.9932681322097778, 6.934106349945068, 13.896705627441406, 1.8193912506103516], step: 46400, lr: 9.941418589985758e-05 2023-03-16 05:18:20,697 44k INFO ====> Epoch: 46, cost 374.67 s 2023-03-16 05:19:20,206 44k INFO Train Epoch: 47 [14%] 2023-03-16 05:19:20,206 44k INFO Losses: [2.4343173503875732, 2.218107223510742, 10.415726661682129, 20.94625473022461, 1.4619580507278442], step: 46600, lr: 9.940175912662009e-05 2023-03-16 05:20:31,205 44k INFO Train Epoch: 47 [34%] 2023-03-16 05:20:31,205 44k INFO Losses: [2.5127692222595215, 2.4343924522399902, 9.154876708984375, 20.082353591918945, 1.348007082939148], step: 46800, lr: 9.940175912662009e-05 2023-03-16 05:21:42,908 44k INFO Train Epoch: 47 [53%] 2023-03-16 05:21:42,908 44k INFO Losses: [2.347452163696289, 2.5012331008911133, 12.505840301513672, 19.54120445251465, 1.2244873046875], step: 47000, lr: 9.940175912662009e-05 2023-03-16 05:21:45,975 44k INFO Saving model and optimizer state at iteration 47 to ./logs\44k\G_47000.pth 2023-03-16 05:21:46,681 44k INFO Saving model and optimizer state at iteration 47 to ./logs\44k\D_47000.pth 2023-03-16 05:21:47,312 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_44000.pth 2023-03-16 05:21:47,333 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_44000.pth 2023-03-16 05:22:58,908 44k INFO Train Epoch: 47 [73%] 2023-03-16 05:22:58,909 44k INFO Losses: [2.388561487197876, 2.3846726417541504, 16.041318893432617, 18.3599910736084, 1.3529706001281738], step: 47200, lr: 9.940175912662009e-05 2023-03-16 05:24:10,720 44k INFO Train Epoch: 47 [93%] 2023-03-16 05:24:10,720 44k INFO Losses: [2.4501678943634033, 2.236650228500366, 12.069024085998535, 17.32603645324707, 1.3553587198257446], step: 47400, lr: 9.940175912662009e-05 2023-03-16 05:24:35,690 44k INFO ====> Epoch: 47, cost 374.99 s 2023-03-16 05:25:31,949 44k INFO Train Epoch: 48 [13%] 2023-03-16 05:25:31,950 44k INFO Losses: [2.4507391452789307, 2.314365863800049, 7.981650352478027, 18.75534439086914, 1.3199877738952637], step: 47600, lr: 9.938933390672926e-05 2023-03-16 05:26:42,754 44k INFO Train Epoch: 48 [33%] 2023-03-16 05:26:42,755 44k INFO Losses: [2.542100191116333, 2.1746866703033447, 10.533347129821777, 19.808061599731445, 1.3847103118896484], step: 47800, lr: 9.938933390672926e-05 2023-03-16 05:27:54,259 44k INFO Train Epoch: 48 [52%] 2023-03-16 05:27:54,259 44k INFO Losses: [2.409079074859619, 2.2021172046661377, 10.677349090576172, 18.054149627685547, 1.334299921989441], step: 48000, lr: 9.938933390672926e-05 2023-03-16 05:27:57,281 44k INFO Saving model and optimizer state at iteration 48 to ./logs\44k\G_48000.pth 2023-03-16 05:27:57,992 44k INFO Saving model and optimizer state at iteration 48 to ./logs\44k\D_48000.pth 2023-03-16 05:27:58,605 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_45000.pth 2023-03-16 05:27:58,628 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_45000.pth 2023-03-16 05:29:10,314 44k INFO Train Epoch: 48 [72%] 2023-03-16 05:29:10,315 44k INFO Losses: [2.5763540267944336, 2.2233710289001465, 6.700291633605957, 18.025501251220703, 1.161285638809204], step: 48200, lr: 9.938933390672926e-05 2023-03-16 05:30:22,080 44k INFO Train Epoch: 48 [92%] 2023-03-16 05:30:22,081 44k INFO Losses: [2.607163906097412, 2.364773750305176, 13.358384132385254, 23.33789825439453, 1.3308541774749756], step: 48400, lr: 9.938933390672926e-05 2023-03-16 05:30:50,527 44k INFO ====> Epoch: 48, cost 374.84 s 2023-03-16 05:31:42,963 44k INFO Train Epoch: 49 [12%] 2023-03-16 05:31:42,963 44k INFO Losses: [2.695791721343994, 2.3502726554870605, 11.981863021850586, 22.564855575561523, 1.507075548171997], step: 48600, lr: 9.937691023999092e-05 2023-03-16 05:32:53,824 44k INFO Train Epoch: 49 [32%] 2023-03-16 05:32:53,824 44k INFO Losses: [2.3196558952331543, 2.3600001335144043, 10.499870300292969, 19.04021644592285, 1.8852360248565674], step: 48800, lr: 9.937691023999092e-05 2023-03-16 05:34:05,406 44k INFO Train Epoch: 49 [51%] 2023-03-16 05:34:05,407 44k INFO Losses: [2.499946355819702, 2.1812098026275635, 9.973196983337402, 21.39131736755371, 1.0995495319366455], step: 49000, lr: 9.937691023999092e-05 2023-03-16 05:34:08,512 44k INFO Saving model and optimizer state at iteration 49 to ./logs\44k\G_49000.pth 2023-03-16 05:34:09,227 44k INFO Saving model and optimizer state at iteration 49 to ./logs\44k\D_49000.pth 2023-03-16 05:34:09,840 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_46000.pth 2023-03-16 05:34:09,863 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_46000.pth 2023-03-16 05:35:21,569 44k INFO Train Epoch: 49 [71%] 2023-03-16 05:35:21,569 44k INFO Losses: [2.6241278648376465, 2.1841540336608887, 10.812655448913574, 21.911190032958984, 1.559238314628601], step: 49200, lr: 9.937691023999092e-05 2023-03-16 05:36:33,318 44k INFO Train Epoch: 49 [91%] 2023-03-16 05:36:33,318 44k INFO Losses: [2.2002882957458496, 2.596094846725464, 16.186054229736328, 25.531511306762695, 1.7639577388763428], step: 49400, lr: 9.937691023999092e-05 2023-03-16 05:37:05,415 44k INFO ====> Epoch: 49, cost 374.89 s 2023-03-16 05:37:54,277 44k INFO Train Epoch: 50 [11%] 2023-03-16 05:37:54,277 44k INFO Losses: [2.2145020961761475, 2.4302988052368164, 13.595687866210938, 21.35150718688965, 1.4173370599746704], step: 49600, lr: 9.936448812621091e-05 2023-03-16 05:39:05,011 44k INFO Train Epoch: 50 [31%] 2023-03-16 05:39:05,012 44k INFO Losses: [2.555030584335327, 2.4358816146850586, 12.273147583007812, 21.640710830688477, 1.7424627542495728], step: 49800, lr: 9.936448812621091e-05 2023-03-16 05:40:16,593 44k INFO Train Epoch: 50 [50%] 2023-03-16 05:40:16,594 44k INFO Losses: [2.5356380939483643, 2.515882968902588, 8.12830924987793, 17.605701446533203, 1.735107660293579], step: 50000, lr: 9.936448812621091e-05 2023-03-16 05:40:19,663 44k INFO Saving model and optimizer state at iteration 50 to ./logs\44k\G_50000.pth 2023-03-16 05:40:20,338 44k INFO Saving model and optimizer state at iteration 50 to ./logs\44k\D_50000.pth 2023-03-16 05:40:20,951 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_47000.pth 2023-03-16 05:40:20,974 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_47000.pth 2023-03-16 05:41:32,666 44k INFO Train Epoch: 50 [70%] 2023-03-16 05:41:32,666 44k INFO Losses: [2.4940195083618164, 2.4696528911590576, 10.620898246765137, 23.589889526367188, 1.03789484500885], step: 50200, lr: 9.936448812621091e-05 2023-03-16 05:42:44,356 44k INFO Train Epoch: 50 [90%] 2023-03-16 05:42:44,357 44k INFO Losses: [2.4433135986328125, 2.311039447784424, 14.237082481384277, 21.56534194946289, 1.4720821380615234], step: 50400, lr: 9.936448812621091e-05 2023-03-16 05:43:20,109 44k INFO ====> Epoch: 50, cost 374.69 s 2023-03-16 05:44:05,328 44k INFO Train Epoch: 51 [10%] 2023-03-16 05:44:05,328 44k INFO Losses: [2.2649154663085938, 2.2357828617095947, 10.266229629516602, 23.800050735473633, 1.4280152320861816], step: 50600, lr: 9.935206756519513e-05 2023-03-16 05:45:16,013 44k INFO Train Epoch: 51 [30%] 2023-03-16 05:45:16,014 44k INFO Losses: [2.1995975971221924, 2.9666035175323486, 14.098599433898926, 20.49896812438965, 1.4560972452163696], step: 50800, lr: 9.935206756519513e-05 2023-03-16 05:46:27,376 44k INFO Train Epoch: 51 [50%] 2023-03-16 05:46:27,376 44k INFO Losses: [2.604919672012329, 2.1948182582855225, 9.889162063598633, 19.047727584838867, 1.2606456279754639], step: 51000, lr: 9.935206756519513e-05 2023-03-16 05:46:30,430 44k INFO Saving model and optimizer state at iteration 51 to ./logs\44k\G_51000.pth 2023-03-16 05:46:31,107 44k INFO Saving model and optimizer state at iteration 51 to ./logs\44k\D_51000.pth 2023-03-16 05:46:31,767 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_48000.pth 2023-03-16 05:46:31,791 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_48000.pth 2023-03-16 05:47:43,281 44k INFO Train Epoch: 51 [69%] 2023-03-16 05:47:43,281 44k INFO Losses: [2.4805476665496826, 2.080566644668579, 6.5381293296813965, 15.510347366333008, 1.3366036415100098], step: 51200, lr: 9.935206756519513e-05 2023-03-16 05:48:54,906 44k INFO Train Epoch: 51 [89%] 2023-03-16 05:48:54,906 44k INFO Losses: [2.4964962005615234, 2.2061524391174316, 10.092826843261719, 17.283185958862305, 1.3437267541885376], step: 51400, lr: 9.935206756519513e-05 2023-03-16 05:49:34,011 44k INFO ====> Epoch: 51, cost 373.90 s 2023-03-16 05:50:15,600 44k INFO Train Epoch: 52 [9%] 2023-03-16 05:50:15,601 44k INFO Losses: [2.1847033500671387, 2.37404465675354, 12.029313087463379, 23.075904846191406, 1.2532305717468262], step: 51600, lr: 9.933964855674948e-05 2023-03-16 05:51:26,493 44k INFO Train Epoch: 52 [29%] 2023-03-16 05:51:26,493 44k INFO Losses: [2.1701598167419434, 2.43580961227417, 9.93907356262207, 21.37563705444336, 1.298106074333191], step: 51800, lr: 9.933964855674948e-05 2023-03-16 05:52:37,618 44k INFO Train Epoch: 52 [49%] 2023-03-16 05:52:37,618 44k INFO Losses: [2.452150821685791, 2.5048558712005615, 12.4544677734375, 24.675209045410156, 1.2430179119110107], step: 52000, lr: 9.933964855674948e-05 2023-03-16 05:52:40,659 44k INFO Saving model and optimizer state at iteration 52 to ./logs\44k\G_52000.pth 2023-03-16 05:52:41,350 44k INFO Saving model and optimizer state at iteration 52 to ./logs\44k\D_52000.pth 2023-03-16 05:52:41,969 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_49000.pth 2023-03-16 05:52:41,991 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_49000.pth 2023-03-16 05:53:53,875 44k INFO Train Epoch: 52 [68%] 2023-03-16 05:53:53,876 44k INFO Losses: [2.638254165649414, 2.334127187728882, 7.196601390838623, 14.477352142333984, 0.8605620265007019], step: 52200, lr: 9.933964855674948e-05 2023-03-16 05:55:05,611 44k INFO Train Epoch: 52 [88%] 2023-03-16 05:55:05,611 44k INFO Losses: [2.6963863372802734, 2.2833876609802246, 5.878235816955566, 17.45134162902832, 1.5066460371017456], step: 52400, lr: 9.933964855674948e-05 2023-03-16 05:55:48,400 44k INFO ====> Epoch: 52, cost 374.39 s 2023-03-16 05:56:26,333 44k INFO Train Epoch: 53 [8%] 2023-03-16 05:56:26,333 44k INFO Losses: [2.7052953243255615, 2.002361297607422, 11.435791015625, 16.997867584228516, 1.5922973155975342], step: 52600, lr: 9.932723110067987e-05 2023-03-16 05:57:37,487 44k INFO Train Epoch: 53 [28%] 2023-03-16 05:57:37,488 44k INFO Losses: [2.292419910430908, 2.4256606101989746, 7.135193824768066, 11.692338943481445, 1.7374460697174072], step: 52800, lr: 9.932723110067987e-05 2023-03-16 05:58:48,812 44k INFO Train Epoch: 53 [48%] 2023-03-16 05:58:48,813 44k INFO Losses: [2.8056139945983887, 2.0237298011779785, 7.309670448303223, 16.0150146484375, 1.2727463245391846], step: 53000, lr: 9.932723110067987e-05 2023-03-16 05:58:51,868 44k INFO Saving model and optimizer state at iteration 53 to ./logs\44k\G_53000.pth 2023-03-16 05:58:52,537 44k INFO Saving model and optimizer state at iteration 53 to ./logs\44k\D_53000.pth 2023-03-16 05:58:53,154 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_50000.pth 2023-03-16 05:58:53,177 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_50000.pth 2023-03-16 06:00:05,092 44k INFO Train Epoch: 53 [67%] 2023-03-16 06:00:05,092 44k INFO Losses: [2.6008663177490234, 2.185480833053589, 9.639498710632324, 15.495162010192871, 0.9592587947845459], step: 53200, lr: 9.932723110067987e-05 2023-03-16 06:01:16,841 44k INFO Train Epoch: 53 [87%] 2023-03-16 06:01:16,842 44k INFO Losses: [2.686331033706665, 2.2196414470672607, 8.396406173706055, 20.826974868774414, 1.3370802402496338], step: 53400, lr: 9.932723110067987e-05 2023-03-16 06:02:03,177 44k INFO ====> Epoch: 53, cost 374.78 s 2023-03-16 06:02:37,372 44k INFO Train Epoch: 54 [7%] 2023-03-16 06:02:37,372 44k INFO Losses: [2.4122931957244873, 2.44576358795166, 10.521524429321289, 19.082338333129883, 1.5011610984802246], step: 53600, lr: 9.931481519679228e-05 2023-03-16 06:03:48,640 44k INFO Train Epoch: 54 [27%] 2023-03-16 06:03:48,641 44k INFO Losses: [2.363226890563965, 2.341606616973877, 10.892135620117188, 19.379182815551758, 1.3139328956604004], step: 53800, lr: 9.931481519679228e-05 2023-03-16 06:04:59,947 44k INFO Train Epoch: 54 [47%] 2023-03-16 06:04:59,948 44k INFO Losses: [2.4376261234283447, 2.133969783782959, 10.000365257263184, 25.318647384643555, 1.5127915143966675], step: 54000, lr: 9.931481519679228e-05 2023-03-16 06:05:03,095 44k INFO Saving model and optimizer state at iteration 54 to ./logs\44k\G_54000.pth 2023-03-16 06:05:03,756 44k INFO Saving model and optimizer state at iteration 54 to ./logs\44k\D_54000.pth 2023-03-16 06:05:04,368 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_51000.pth 2023-03-16 06:05:04,392 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_51000.pth 2023-03-16 06:06:16,169 44k INFO Train Epoch: 54 [66%] 2023-03-16 06:06:16,169 44k INFO Losses: [2.6491243839263916, 2.0175490379333496, 8.08851432800293, 16.99297332763672, 1.301060438156128], step: 54200, lr: 9.931481519679228e-05 2023-03-16 06:07:27,765 44k INFO Train Epoch: 54 [86%] 2023-03-16 06:07:27,766 44k INFO Losses: [2.7649648189544678, 1.929985761642456, 7.316077709197998, 16.9305362701416, 1.0621405839920044], step: 54400, lr: 9.931481519679228e-05 2023-03-16 06:08:17,724 44k INFO ====> Epoch: 54, cost 374.55 s 2023-03-16 06:08:48,518 44k INFO Train Epoch: 55 [6%] 2023-03-16 06:08:48,519 44k INFO Losses: [2.3893895149230957, 2.22790265083313, 12.192964553833008, 22.122791290283203, 1.2813856601715088], step: 54600, lr: 9.930240084489267e-05 2023-03-16 06:09:59,759 44k INFO Train Epoch: 55 [26%] 2023-03-16 06:09:59,760 44k INFO Losses: [2.5274550914764404, 2.2352283000946045, 9.875089645385742, 17.065467834472656, 1.5922656059265137], step: 54800, lr: 9.930240084489267e-05 2023-03-16 06:11:11,096 44k INFO Train Epoch: 55 [46%] 2023-03-16 06:11:11,096 44k INFO Losses: [2.191709280014038, 2.4090373516082764, 12.352890014648438, 22.31217384338379, 1.3551628589630127], step: 55000, lr: 9.930240084489267e-05 2023-03-16 06:11:14,141 44k INFO Saving model and optimizer state at iteration 55 to ./logs\44k\G_55000.pth 2023-03-16 06:11:14,857 44k INFO Saving model and optimizer state at iteration 55 to ./logs\44k\D_55000.pth 2023-03-16 06:11:15,480 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_52000.pth 2023-03-16 06:11:15,502 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_52000.pth 2023-03-16 06:12:27,159 44k INFO Train Epoch: 55 [65%] 2023-03-16 06:12:27,160 44k INFO Losses: [2.196718692779541, 2.5569074153900146, 14.24284553527832, 25.717140197753906, 1.2173073291778564], step: 55200, lr: 9.930240084489267e-05 2023-03-16 06:13:38,727 44k INFO Train Epoch: 55 [85%] 2023-03-16 06:13:38,728 44k INFO Losses: [2.3945653438568115, 2.3066320419311523, 8.143322944641113, 20.244539260864258, 1.3104956150054932], step: 55400, lr: 9.930240084489267e-05 2023-03-16 06:14:32,296 44k INFO ====> Epoch: 55, cost 374.57 s 2023-03-16 06:14:59,545 44k INFO Train Epoch: 56 [5%] 2023-03-16 06:14:59,545 44k INFO Losses: [2.3161509037017822, 2.268843412399292, 13.133040428161621, 22.105560302734375, 1.5680454969406128], step: 55600, lr: 9.928998804478705e-05 2023-03-16 06:16:10,794 44k INFO Train Epoch: 56 [25%] 2023-03-16 06:16:10,795 44k INFO Losses: [2.4446775913238525, 2.039659261703491, 11.9905366897583, 20.907440185546875, 1.4816337823867798], step: 55800, lr: 9.928998804478705e-05 2023-03-16 06:17:21,941 44k INFO Train Epoch: 56 [45%] 2023-03-16 06:17:21,941 44k INFO Losses: [2.2558887004852295, 2.193169355392456, 12.455041885375977, 21.766685485839844, 1.7797049283981323], step: 56000, lr: 9.928998804478705e-05 2023-03-16 06:17:24,999 44k INFO Saving model and optimizer state at iteration 56 to ./logs\44k\G_56000.pth 2023-03-16 06:17:25,707 44k INFO Saving model and optimizer state at iteration 56 to ./logs\44k\D_56000.pth 2023-03-16 06:17:26,328 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_53000.pth 2023-03-16 06:17:26,350 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_53000.pth 2023-03-16 06:18:38,056 44k INFO Train Epoch: 56 [64%] 2023-03-16 06:18:38,056 44k INFO Losses: [2.459052324295044, 2.1125807762145996, 11.439477920532227, 18.268381118774414, 1.385172963142395], step: 56200, lr: 9.928998804478705e-05 2023-03-16 06:19:49,600 44k INFO Train Epoch: 56 [84%] 2023-03-16 06:19:49,600 44k INFO Losses: [2.6347994804382324, 2.3432302474975586, 7.210263252258301, 19.71118927001953, 1.5107039213180542], step: 56400, lr: 9.928998804478705e-05 2023-03-16 06:20:46,725 44k INFO ====> Epoch: 56, cost 374.43 s 2023-03-16 06:21:10,270 44k INFO Train Epoch: 57 [4%] 2023-03-16 06:21:10,271 44k INFO Losses: [2.528183937072754, 2.385098457336426, 9.7555513381958, 22.41512107849121, 1.7227468490600586], step: 56600, lr: 9.927757679628145e-05 2023-03-16 06:22:21,629 44k INFO Train Epoch: 57 [24%] 2023-03-16 06:22:21,629 44k INFO Losses: [2.5793471336364746, 2.1895012855529785, 10.034649848937988, 20.79099464416504, 1.8144338130950928], step: 56800, lr: 9.927757679628145e-05 2023-03-16 06:23:32,812 44k INFO Train Epoch: 57 [44%] 2023-03-16 06:23:32,813 44k INFO Losses: [2.393862247467041, 2.3325138092041016, 8.37338638305664, 18.404531478881836, 1.2277024984359741], step: 57000, lr: 9.927757679628145e-05 2023-03-16 06:23:35,873 44k INFO Saving model and optimizer state at iteration 57 to ./logs\44k\G_57000.pth 2023-03-16 06:23:36,605 44k INFO Saving model and optimizer state at iteration 57 to ./logs\44k\D_57000.pth 2023-03-16 06:23:37,227 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_54000.pth 2023-03-16 06:23:37,250 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_54000.pth 2023-03-16 06:24:49,031 44k INFO Train Epoch: 57 [63%] 2023-03-16 06:24:49,032 44k INFO Losses: [2.6776883602142334, 2.276427745819092, 7.52245569229126, 25.277921676635742, 1.426473617553711], step: 57200, lr: 9.927757679628145e-05 2023-03-16 06:26:00,492 44k INFO Train Epoch: 57 [83%] 2023-03-16 06:26:00,492 44k INFO Losses: [2.618356227874756, 2.364541530609131, 8.127699851989746, 21.59534454345703, 1.7137606143951416], step: 57400, lr: 9.927757679628145e-05 2023-03-16 06:27:01,243 44k INFO ====> Epoch: 57, cost 374.52 s 2023-03-16 06:27:21,042 44k INFO Train Epoch: 58 [3%] 2023-03-16 06:27:21,042 44k INFO Losses: [2.383373498916626, 2.3782365322113037, 9.023234367370605, 21.17331886291504, 1.283076286315918], step: 57600, lr: 9.926516709918191e-05 2023-03-16 06:28:32,549 44k INFO Train Epoch: 58 [23%] 2023-03-16 06:28:32,550 44k INFO Losses: [2.735565662384033, 2.2358357906341553, 10.265924453735352, 20.935205459594727, 1.1191951036453247], step: 57800, lr: 9.926516709918191e-05 2023-03-16 06:29:43,713 44k INFO Train Epoch: 58 [43%] 2023-03-16 06:29:43,714 44k INFO Losses: [2.4482946395874023, 2.2811217308044434, 11.449775695800781, 20.785953521728516, 1.458518147468567], step: 58000, lr: 9.926516709918191e-05 2023-03-16 06:29:46,760 44k INFO Saving model and optimizer state at iteration 58 to ./logs\44k\G_58000.pth 2023-03-16 06:29:47,414 44k INFO Saving model and optimizer state at iteration 58 to ./logs\44k\D_58000.pth 2023-03-16 06:29:48,021 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_55000.pth 2023-03-16 06:29:48,054 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_55000.pth 2023-03-16 06:30:59,738 44k INFO Train Epoch: 58 [62%] 2023-03-16 06:30:59,739 44k INFO Losses: [2.6652584075927734, 2.3268935680389404, 9.22270393371582, 22.05027198791504, 1.2152332067489624], step: 58200, lr: 9.926516709918191e-05 2023-03-16 06:32:11,278 44k INFO Train Epoch: 58 [82%] 2023-03-16 06:32:11,278 44k INFO Losses: [2.1563501358032227, 2.644608736038208, 12.080540657043457, 24.662160873413086, 1.5082582235336304], step: 58400, lr: 9.926516709918191e-05 2023-03-16 06:33:15,750 44k INFO ====> Epoch: 58, cost 374.51 s 2023-03-16 06:33:32,091 44k INFO Train Epoch: 59 [2%] 2023-03-16 06:33:32,092 44k INFO Losses: [2.4383208751678467, 2.3831608295440674, 9.977876663208008, 21.361282348632812, 1.5987921953201294], step: 58600, lr: 9.92527589532945e-05 2023-03-16 06:34:43,850 44k INFO Train Epoch: 59 [22%] 2023-03-16 06:34:43,850 44k INFO Losses: [2.4742932319641113, 2.307314872741699, 9.58877182006836, 20.811452865600586, 1.3841925859451294], step: 58800, lr: 9.92527589532945e-05 2023-03-16 06:35:54,910 44k INFO Train Epoch: 59 [42%] 2023-03-16 06:35:54,910 44k INFO Losses: [2.480698823928833, 2.280200719833374, 12.008065223693848, 21.638774871826172, 1.4675570726394653], step: 59000, lr: 9.92527589532945e-05 2023-03-16 06:35:57,880 44k INFO Saving model and optimizer state at iteration 59 to ./logs\44k\G_59000.pth 2023-03-16 06:35:58,619 44k INFO Saving model and optimizer state at iteration 59 to ./logs\44k\D_59000.pth 2023-03-16 06:35:59,242 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_56000.pth 2023-03-16 06:35:59,266 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_56000.pth 2023-03-16 06:37:11,026 44k INFO Train Epoch: 59 [61%] 2023-03-16 06:37:11,026 44k INFO Losses: [2.7482075691223145, 2.3250551223754883, 10.144131660461426, 17.649269104003906, 1.1196376085281372], step: 59200, lr: 9.92527589532945e-05 2023-03-16 06:38:22,587 44k INFO Train Epoch: 59 [81%] 2023-03-16 06:38:22,587 44k INFO Losses: [2.3222851753234863, 2.67536997795105, 15.731600761413574, 24.398061752319336, 1.5496623516082764], step: 59400, lr: 9.92527589532945e-05 2023-03-16 06:39:30,618 44k INFO ====> Epoch: 59, cost 374.87 s 2023-03-16 06:39:43,419 44k INFO Train Epoch: 60 [1%] 2023-03-16 06:39:43,419 44k INFO Losses: [2.2816951274871826, 2.3232123851776123, 12.57906723022461, 19.693254470825195, 1.447852611541748], step: 59600, lr: 9.924035235842533e-05 2023-03-16 06:40:55,261 44k INFO Train Epoch: 60 [21%] 2023-03-16 06:40:55,262 44k INFO Losses: [2.7650609016418457, 1.8004460334777832, 7.356016159057617, 17.435461044311523, 1.105635166168213], step: 59800, lr: 9.924035235842533e-05 2023-03-16 06:42:06,201 44k INFO Train Epoch: 60 [41%] 2023-03-16 06:42:06,201 44k INFO Losses: [2.410510778427124, 2.2478556632995605, 12.357992172241211, 19.564266204833984, 1.458536982536316], step: 60000, lr: 9.924035235842533e-05 2023-03-16 06:42:09,268 44k INFO Saving model and optimizer state at iteration 60 to ./logs\44k\G_60000.pth 2023-03-16 06:42:09,966 44k INFO Saving model and optimizer state at iteration 60 to ./logs\44k\D_60000.pth 2023-03-16 06:42:10,580 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_57000.pth 2023-03-16 06:42:10,605 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_57000.pth 2023-03-16 06:43:22,393 44k INFO Train Epoch: 60 [60%] 2023-03-16 06:43:22,393 44k INFO Losses: [2.7057650089263916, 1.9795520305633545, 6.657564163208008, 14.583634376525879, 1.5959246158599854], step: 60200, lr: 9.924035235842533e-05 2023-03-16 06:44:33,868 44k INFO Train Epoch: 60 [80%] 2023-03-16 06:44:33,868 44k INFO Losses: [2.6102042198181152, 2.2890470027923584, 7.962343692779541, 17.4096736907959, 1.321347713470459], step: 60400, lr: 9.924035235842533e-05 2023-03-16 06:45:45,450 44k INFO ====> Epoch: 60, cost 374.83 s 2023-03-16 06:45:54,601 44k INFO Train Epoch: 61 [0%] 2023-03-16 06:45:54,601 44k INFO Losses: [2.7593061923980713, 1.9758137464523315, 4.8922882080078125, 19.751646041870117, 1.1462370157241821], step: 60600, lr: 9.922794731438052e-05 2023-03-16 06:47:06,323 44k INFO Train Epoch: 61 [20%] 2023-03-16 06:47:06,324 44k INFO Losses: [2.7056806087493896, 2.1603140830993652, 10.46125316619873, 22.36989974975586, 1.2734047174453735], step: 60800, lr: 9.922794731438052e-05 2023-03-16 06:48:17,104 44k INFO Train Epoch: 61 [40%] 2023-03-16 06:48:17,104 44k INFO Losses: [2.8985965251922607, 2.3280982971191406, 8.797693252563477, 21.997297286987305, 1.303520917892456], step: 61000, lr: 9.922794731438052e-05 2023-03-16 06:48:20,213 44k INFO Saving model and optimizer state at iteration 61 to ./logs\44k\G_61000.pth 2023-03-16 06:48:20,873 44k INFO Saving model and optimizer state at iteration 61 to ./logs\44k\D_61000.pth 2023-03-16 06:48:21,487 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_58000.pth 2023-03-16 06:48:21,509 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_58000.pth 2023-03-16 06:49:33,491 44k INFO Train Epoch: 61 [59%] 2023-03-16 06:49:33,492 44k INFO Losses: [2.344120740890503, 2.1864466667175293, 12.829692840576172, 21.025205612182617, 1.2770798206329346], step: 61200, lr: 9.922794731438052e-05 2023-03-16 06:50:45,025 44k INFO Train Epoch: 61 [79%] 2023-03-16 06:50:45,025 44k INFO Losses: [2.6161179542541504, 2.371936559677124, 11.00240421295166, 21.41561508178711, 1.4301221370697021], step: 61400, lr: 9.922794731438052e-05 2023-03-16 06:51:56,750 44k INFO Train Epoch: 61 [99%] 2023-03-16 06:51:56,750 44k INFO Losses: [2.6805951595306396, 1.995265007019043, 9.318704605102539, 20.254688262939453, 1.308785080909729], step: 61600, lr: 9.922794731438052e-05 2023-03-16 06:52:00,251 44k INFO ====> Epoch: 61, cost 374.80 s 2023-03-16 06:53:17,312 44k INFO Train Epoch: 62 [19%] 2023-03-16 06:53:17,313 44k INFO Losses: [2.372156858444214, 2.494840621948242, 9.675850868225098, 21.873626708984375, 0.8981418013572693], step: 61800, lr: 9.921554382096622e-05 2023-03-16 06:54:28,025 44k INFO Train Epoch: 62 [39%] 2023-03-16 06:54:28,026 44k INFO Losses: [2.4088785648345947, 2.5296666622161865, 13.363727569580078, 27.521406173706055, 1.1275840997695923], step: 62000, lr: 9.921554382096622e-05 2023-03-16 06:54:31,054 44k INFO Saving model and optimizer state at iteration 62 to ./logs\44k\G_62000.pth 2023-03-16 06:54:31,762 44k INFO Saving model and optimizer state at iteration 62 to ./logs\44k\D_62000.pth 2023-03-16 06:54:32,378 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_59000.pth 2023-03-16 06:54:32,402 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_59000.pth 2023-03-16 06:55:44,254 44k INFO Train Epoch: 62 [58%] 2023-03-16 06:55:44,255 44k INFO Losses: [2.534717321395874, 2.351696729660034, 11.497075080871582, 23.17961883544922, 1.4618197679519653], step: 62200, lr: 9.921554382096622e-05 2023-03-16 06:56:55,459 44k INFO Train Epoch: 62 [78%] 2023-03-16 06:56:55,459 44k INFO Losses: [2.28094482421875, 2.4312500953674316, 9.535026550292969, 18.86186981201172, 1.2571314573287964], step: 62400, lr: 9.921554382096622e-05 2023-03-16 06:58:07,118 44k INFO Train Epoch: 62 [98%] 2023-03-16 06:58:07,119 44k INFO Losses: [2.604215145111084, 2.2055795192718506, 8.820465087890625, 18.79984474182129, 1.4397066831588745], step: 62600, lr: 9.921554382096622e-05 2023-03-16 06:58:14,246 44k INFO ====> Epoch: 62, cost 373.99 s 2023-03-16 06:59:28,096 44k INFO Train Epoch: 63 [18%] 2023-03-16 06:59:28,096 44k INFO Losses: [2.452930212020874, 2.123499631881714, 7.208822250366211, 15.5643892288208, 1.2262436151504517], step: 62800, lr: 9.92031418779886e-05 2023-03-16 07:00:38,842 44k INFO Train Epoch: 63 [38%] 2023-03-16 07:00:38,842 44k INFO Losses: [2.138422966003418, 2.401440143585205, 13.062027931213379, 22.145933151245117, 1.5596489906311035], step: 63000, lr: 9.92031418779886e-05 2023-03-16 07:00:41,908 44k INFO Saving model and optimizer state at iteration 63 to ./logs\44k\G_63000.pth 2023-03-16 07:00:42,617 44k INFO Saving model and optimizer state at iteration 63 to ./logs\44k\D_63000.pth 2023-03-16 07:00:43,239 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_60000.pth 2023-03-16 07:00:43,264 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_60000.pth 2023-03-16 07:01:54,874 44k INFO Train Epoch: 63 [57%] 2023-03-16 07:01:54,875 44k INFO Losses: [2.6010782718658447, 2.3608205318450928, 10.726785659790039, 21.26218032836914, 1.6935458183288574], step: 63200, lr: 9.92031418779886e-05 2023-03-16 07:03:06,314 44k INFO Train Epoch: 63 [77%] 2023-03-16 07:03:06,314 44k INFO Losses: [2.4458539485931396, 2.513310432434082, 10.144731521606445, 18.328472137451172, 1.2336912155151367], step: 63400, lr: 9.92031418779886e-05 2023-03-16 07:04:17,878 44k INFO Train Epoch: 63 [97%] 2023-03-16 07:04:17,878 44k INFO Losses: [2.4216670989990234, 2.2047061920166016, 10.746430397033691, 18.813318252563477, 1.1873033046722412], step: 63600, lr: 9.92031418779886e-05 2023-03-16 07:04:28,607 44k INFO ====> Epoch: 63, cost 374.36 s 2023-03-16 07:05:38,627 44k INFO Train Epoch: 64 [17%] 2023-03-16 07:05:38,628 44k INFO Losses: [2.5538456439971924, 2.2065470218658447, 9.375168800354004, 23.07136344909668, 1.4801201820373535], step: 63800, lr: 9.919074148525384e-05 2023-03-16 07:06:49,388 44k INFO Train Epoch: 64 [37%] 2023-03-16 07:06:49,388 44k INFO Losses: [2.4514946937561035, 2.074242115020752, 8.017518043518066, 17.905977249145508, 1.0193684101104736], step: 64000, lr: 9.919074148525384e-05 2023-03-16 07:06:52,447 44k INFO Saving model and optimizer state at iteration 64 to ./logs\44k\G_64000.pth 2023-03-16 07:06:53,160 44k INFO Saving model and optimizer state at iteration 64 to ./logs\44k\D_64000.pth 2023-03-16 07:06:53,769 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_61000.pth 2023-03-16 07:06:53,793 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_61000.pth 2023-03-16 07:08:05,518 44k INFO Train Epoch: 64 [56%] 2023-03-16 07:08:05,518 44k INFO Losses: [2.4136290550231934, 2.33967661857605, 11.612395286560059, 22.223291397094727, 1.6526914834976196], step: 64200, lr: 9.919074148525384e-05 2023-03-16 07:09:17,054 44k INFO Train Epoch: 64 [76%] 2023-03-16 07:09:17,055 44k INFO Losses: [2.596095085144043, 2.2051570415496826, 9.688583374023438, 16.785175323486328, 1.6811262369155884], step: 64400, lr: 9.919074148525384e-05 2023-03-16 07:10:28,691 44k INFO Train Epoch: 64 [96%] 2023-03-16 07:10:28,691 44k INFO Losses: [2.433039665222168, 2.324530601501465, 11.308756828308105, 21.04051399230957, 1.3797807693481445], step: 64600, lr: 9.919074148525384e-05 2023-03-16 07:10:43,133 44k INFO ====> Epoch: 64, cost 374.53 s 2023-03-16 07:11:49,828 44k INFO Train Epoch: 65 [16%] 2023-03-16 07:11:49,829 44k INFO Losses: [2.5542449951171875, 2.258772134780884, 12.215781211853027, 20.714351654052734, 1.6240487098693848], step: 64800, lr: 9.917834264256819e-05 2023-03-16 07:13:00,667 44k INFO Train Epoch: 65 [36%] 2023-03-16 07:13:00,667 44k INFO Losses: [2.5281729698181152, 2.2848734855651855, 8.5928316116333, 22.82178497314453, 1.6093143224716187], step: 65000, lr: 9.917834264256819e-05 2023-03-16 07:13:03,690 44k INFO Saving model and optimizer state at iteration 65 to ./logs\44k\G_65000.pth 2023-03-16 07:13:04,455 44k INFO Saving model and optimizer state at iteration 65 to ./logs\44k\D_65000.pth 2023-03-16 07:13:05,081 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_62000.pth 2023-03-16 07:13:05,106 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_62000.pth 2023-03-16 07:14:16,958 44k INFO Train Epoch: 65 [55%] 2023-03-16 07:14:16,959 44k INFO Losses: [2.5677547454833984, 2.376760959625244, 8.902772903442383, 22.25922966003418, 1.3527641296386719], step: 65200, lr: 9.917834264256819e-05 2023-03-16 07:15:28,598 44k INFO Train Epoch: 65 [75%] 2023-03-16 07:15:28,598 44k INFO Losses: [2.221897602081299, 2.5951075553894043, 13.255752563476562, 24.719114303588867, 1.1982113122940063], step: 65400, lr: 9.917834264256819e-05 2023-03-16 07:16:40,354 44k INFO Train Epoch: 65 [95%] 2023-03-16 07:16:40,354 44k INFO Losses: [2.50701904296875, 2.2207560539245605, 10.062827110290527, 18.300819396972656, 0.9836584329605103], step: 65600, lr: 9.917834264256819e-05 2023-03-16 07:16:58,261 44k INFO ====> Epoch: 65, cost 375.13 s 2023-03-16 07:18:01,320 44k INFO Train Epoch: 66 [15%] 2023-03-16 07:18:01,321 44k INFO Losses: [2.757188558578491, 2.577167510986328, 5.832032680511475, 19.799758911132812, 1.429742693901062], step: 65800, lr: 9.916594534973787e-05 2023-03-16 07:19:12,251 44k INFO Train Epoch: 66 [35%] 2023-03-16 07:19:12,251 44k INFO Losses: [2.5593433380126953, 1.920663595199585, 5.524940013885498, 14.828027725219727, 1.2177046537399292], step: 66000, lr: 9.916594534973787e-05 2023-03-16 07:19:15,320 44k INFO Saving model and optimizer state at iteration 66 to ./logs\44k\G_66000.pth 2023-03-16 07:19:16,029 44k INFO Saving model and optimizer state at iteration 66 to ./logs\44k\D_66000.pth 2023-03-16 07:19:16,652 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_63000.pth 2023-03-16 07:19:16,676 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_63000.pth 2023-03-16 07:20:28,661 44k INFO Train Epoch: 66 [54%] 2023-03-16 07:20:28,662 44k INFO Losses: [2.362170457839966, 2.586259126663208, 11.748368263244629, 22.89942169189453, 1.6210683584213257], step: 66200, lr: 9.916594534973787e-05 2023-03-16 07:21:40,329 44k INFO Train Epoch: 66 [74%] 2023-03-16 07:21:40,329 44k INFO Losses: [2.3817200660705566, 2.2820253372192383, 8.820152282714844, 19.673795700073242, 1.096224069595337], step: 66400, lr: 9.916594534973787e-05 2023-03-16 07:22:52,073 44k INFO Train Epoch: 66 [94%] 2023-03-16 07:22:52,074 44k INFO Losses: [2.6774370670318604, 2.0234580039978027, 8.080631256103516, 16.866168975830078, 1.3877543210983276], step: 66600, lr: 9.916594534973787e-05 2023-03-16 07:23:13,508 44k INFO ====> Epoch: 66, cost 375.25 s 2023-03-16 07:24:13,102 44k INFO Train Epoch: 67 [14%] 2023-03-16 07:24:13,102 44k INFO Losses: [2.438415288925171, 2.4707155227661133, 13.722101211547852, 25.317459106445312, 1.7664999961853027], step: 66800, lr: 9.915354960656915e-05 2023-03-16 07:25:24,046 44k INFO Train Epoch: 67 [34%] 2023-03-16 07:25:24,047 44k INFO Losses: [2.339923143386841, 2.6686596870422363, 8.09809398651123, 19.495820999145508, 1.6878600120544434], step: 67000, lr: 9.915354960656915e-05 2023-03-16 07:25:27,281 44k INFO Saving model and optimizer state at iteration 67 to ./logs\44k\G_67000.pth 2023-03-16 07:25:27,941 44k INFO Saving model and optimizer state at iteration 67 to ./logs\44k\D_67000.pth 2023-03-16 07:25:28,554 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_64000.pth 2023-03-16 07:25:28,578 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_64000.pth 2023-03-16 07:26:39,999 44k INFO Train Epoch: 67 [53%] 2023-03-16 07:26:39,999 44k INFO Losses: [2.3894968032836914, 2.2250101566314697, 10.531296730041504, 21.7607479095459, 1.4273045063018799], step: 67200, lr: 9.915354960656915e-05 2023-03-16 07:27:51,698 44k INFO Train Epoch: 67 [73%] 2023-03-16 07:27:51,699 44k INFO Losses: [2.5041468143463135, 2.091434955596924, 11.3255033493042, 19.223388671875, 0.8941289782524109], step: 67400, lr: 9.915354960656915e-05 2023-03-16 07:29:03,439 44k INFO Train Epoch: 67 [93%] 2023-03-16 07:29:03,439 44k INFO Losses: [2.656792402267456, 2.301718235015869, 9.042182922363281, 19.839405059814453, 1.2715908288955688], step: 67600, lr: 9.915354960656915e-05 2023-03-16 07:29:28,344 44k INFO ====> Epoch: 67, cost 374.84 s 2023-03-16 07:30:24,232 44k INFO Train Epoch: 68 [13%] 2023-03-16 07:30:24,232 44k INFO Losses: [2.7115330696105957, 2.080782413482666, 6.398636817932129, 17.35288429260254, 0.9913434982299805], step: 67800, lr: 9.914115541286833e-05 2023-03-16 07:31:35,243 44k INFO Train Epoch: 68 [33%] 2023-03-16 07:31:35,243 44k INFO Losses: [2.6752021312713623, 2.349947452545166, 5.00053596496582, 14.64750862121582, 1.1192201375961304], step: 68000, lr: 9.914115541286833e-05 2023-03-16 07:31:38,320 44k INFO Saving model and optimizer state at iteration 68 to ./logs\44k\G_68000.pth 2023-03-16 07:31:39,054 44k INFO Saving model and optimizer state at iteration 68 to ./logs\44k\D_68000.pth 2023-03-16 07:31:39,682 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_65000.pth 2023-03-16 07:31:39,707 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_65000.pth 2023-03-16 07:32:51,285 44k INFO Train Epoch: 68 [52%] 2023-03-16 07:32:51,285 44k INFO Losses: [2.4303529262542725, 2.3513693809509277, 10.154219627380371, 20.642934799194336, 0.9450553059577942], step: 68200, lr: 9.914115541286833e-05 2023-03-16 07:34:03,245 44k INFO Train Epoch: 68 [72%] 2023-03-16 07:34:03,246 44k INFO Losses: [2.5441806316375732, 2.2535455226898193, 8.215392112731934, 19.09328269958496, 1.83035409450531], step: 68400, lr: 9.914115541286833e-05 2023-03-16 07:35:15,077 44k INFO Train Epoch: 68 [92%] 2023-03-16 07:35:15,077 44k INFO Losses: [2.409698009490967, 2.2950544357299805, 12.955523490905762, 21.999065399169922, 1.5316895246505737], step: 68600, lr: 9.914115541286833e-05 2023-03-16 07:35:43,592 44k INFO ====> Epoch: 68, cost 375.25 s 2023-03-16 07:36:36,071 44k INFO Train Epoch: 69 [12%] 2023-03-16 07:36:36,071 44k INFO Losses: [2.6606597900390625, 2.1993470191955566, 13.751875877380371, 20.415468215942383, 1.100536823272705], step: 68800, lr: 9.912876276844171e-05 2023-03-16 07:37:46,864 44k INFO Train Epoch: 69 [32%] 2023-03-16 07:37:46,864 44k INFO Losses: [2.3119237422943115, 2.2873716354370117, 15.286767959594727, 22.860576629638672, 1.4720842838287354], step: 69000, lr: 9.912876276844171e-05 2023-03-16 07:37:49,951 44k INFO Saving model and optimizer state at iteration 69 to ./logs\44k\G_69000.pth 2023-03-16 07:37:50,666 44k INFO Saving model and optimizer state at iteration 69 to ./logs\44k\D_69000.pth 2023-03-16 07:37:51,304 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_66000.pth 2023-03-16 07:37:51,329 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_66000.pth 2023-03-16 07:39:02,626 44k INFO Train Epoch: 69 [51%] 2023-03-16 07:39:02,627 44k INFO Losses: [2.441434383392334, 2.1160433292388916, 11.244366645812988, 22.83915138244629, 1.302855134010315], step: 69200, lr: 9.912876276844171e-05 2023-03-16 07:40:14,406 44k INFO Train Epoch: 69 [71%] 2023-03-16 07:40:14,407 44k INFO Losses: [2.5239176750183105, 2.4624862670898438, 9.99053955078125, 18.13821029663086, 1.15556800365448], step: 69400, lr: 9.912876276844171e-05 2023-03-16 07:41:26,025 44k INFO Train Epoch: 69 [91%] 2023-03-16 07:41:26,026 44k INFO Losses: [2.418076992034912, 2.4273295402526855, 12.558492660522461, 22.233577728271484, 1.3437540531158447], step: 69600, lr: 9.912876276844171e-05 2023-03-16 07:41:58,099 44k INFO ====> Epoch: 69, cost 374.51 s 2023-03-16 07:42:46,987 44k INFO Train Epoch: 70 [11%] 2023-03-16 07:42:46,987 44k INFO Losses: [2.3613154888153076, 2.4775662422180176, 10.81726360321045, 23.17139434814453, 1.4606130123138428], step: 69800, lr: 9.911637167309565e-05 2023-03-16 07:43:57,611 44k INFO Train Epoch: 70 [31%] 2023-03-16 07:43:57,612 44k INFO Losses: [2.3860409259796143, 2.265611171722412, 13.279573440551758, 22.570627212524414, 1.7245194911956787], step: 70000, lr: 9.911637167309565e-05 2023-03-16 07:44:00,643 44k INFO Saving model and optimizer state at iteration 70 to ./logs\44k\G_70000.pth 2023-03-16 07:44:01,336 44k INFO Saving model and optimizer state at iteration 70 to ./logs\44k\D_70000.pth 2023-03-16 07:44:01,967 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_67000.pth 2023-03-16 07:44:01,988 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_67000.pth 2023-03-16 07:45:13,286 44k INFO Train Epoch: 70 [50%] 2023-03-16 07:45:13,286 44k INFO Losses: [2.5135810375213623, 2.0026986598968506, 10.517877578735352, 19.80780601501465, 1.212860345840454], step: 70200, lr: 9.911637167309565e-05 2023-03-16 07:46:25,038 44k INFO Train Epoch: 70 [70%] 2023-03-16 07:46:25,039 44k INFO Losses: [2.440673589706421, 2.5206618309020996, 9.971756935119629, 21.28899574279785, 1.3058249950408936], step: 70400, lr: 9.911637167309565e-05 2023-03-16 07:47:36,504 44k INFO Train Epoch: 70 [90%] 2023-03-16 07:47:36,505 44k INFO Losses: [2.6341397762298584, 2.266674757003784, 10.995609283447266, 16.512502670288086, 1.5564336776733398], step: 70600, lr: 9.911637167309565e-05 2023-03-16 07:48:12,257 44k INFO ====> Epoch: 70, cost 374.16 s 2023-03-16 07:48:57,508 44k INFO Train Epoch: 71 [10%] 2023-03-16 07:48:57,509 44k INFO Losses: [2.397519111633301, 2.467268943786621, 11.40195083618164, 23.70783805847168, 1.594031810760498], step: 70800, lr: 9.910398212663652e-05 2023-03-16 07:50:08,265 44k INFO Train Epoch: 71 [30%] 2023-03-16 07:50:08,266 44k INFO Losses: [2.5749127864837646, 2.1935222148895264, 9.946022033691406, 20.77037239074707, 1.303407907485962], step: 71000, lr: 9.910398212663652e-05 2023-03-16 07:50:11,277 44k INFO Saving model and optimizer state at iteration 71 to ./logs\44k\G_71000.pth 2023-03-16 07:50:11,966 44k INFO Saving model and optimizer state at iteration 71 to ./logs\44k\D_71000.pth 2023-03-16 07:50:12,577 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_68000.pth 2023-03-16 07:50:12,601 44k INFO .. 
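The learning rate printed with each record shrinks by a constant factor of 0.999875 (the configured lr_decay) from one epoch to the next: 9.912876276844171e-05 × 0.999875 = 9.911637167309565e-05, exactly the epoch-69 → epoch-70 change above. A minimal sketch of an equivalent schedule, assuming PyTorch's ExponentialLR stepped once per epoch (the optimizer/scheduler wiring below is illustrative, not a copy of the project's train.py):

    # Minimal sketch, assuming an ExponentialLR stepped once per epoch produces
    # the per-epoch decay seen in the log (base lr 1e-4, lr_decay 0.999875;
    # AdamW betas/eps taken from the config block; the module is a placeholder).
    import torch

    model = torch.nn.Linear(1, 1)  # stand-in for the generator
    optim = torch.optim.AdamW(model.parameters(), lr=1e-4,
                              betas=(0.8, 0.99), eps=1e-9)
    sched = torch.optim.lr_scheduler.ExponentialLR(optim, gamma=0.999875)

    for epoch in range(1, 6):
        # ... one full training epoch would run here ...
        sched.step()  # decay once per epoch
        print(epoch, optim.param_groups[0]["lr"])  # ~9.99875e-05, ~9.99750e-05, ...

The logged values drift slightly from 1e-4 × 0.999875^(epoch−1) because the scheduler is also advanced when the run is restarted; the resumed epoch-100 records further down show exactly one extra decay step.
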
Free up space by deleting ckpt ./logs\44k\D_68000.pth 2023-03-16 07:51:23,745 44k INFO Train Epoch: 71 [50%] 2023-03-16 07:51:23,746 44k INFO Losses: [2.410564661026001, 2.475268602371216, 9.32088565826416, 22.023120880126953, 1.3307219743728638], step: 71200, lr: 9.910398212663652e-05 2023-03-16 07:52:35,426 44k INFO Train Epoch: 71 [69%] 2023-03-16 07:52:35,426 44k INFO Losses: [2.6614580154418945, 1.959304690361023, 6.140883445739746, 14.096338272094727, 1.2026904821395874], step: 71400, lr: 9.910398212663652e-05 2023-03-16 07:53:47,094 44k INFO Train Epoch: 71 [89%] 2023-03-16 07:53:47,094 44k INFO Losses: [2.4228484630584717, 2.2952003479003906, 8.36586856842041, 17.7866153717041, 1.0310453176498413], step: 71600, lr: 9.910398212663652e-05 2023-03-16 07:54:26,398 44k INFO ====> Epoch: 71, cost 374.14 s 2023-03-16 07:55:08,038 44k INFO Train Epoch: 72 [9%] 2023-03-16 07:55:08,039 44k INFO Losses: [2.3638274669647217, 2.5754928588867188, 12.633895874023438, 20.021862030029297, 1.7982776165008545], step: 71800, lr: 9.909159412887068e-05 2023-03-16 07:56:19,156 44k INFO Train Epoch: 72 [29%] 2023-03-16 07:56:19,156 44k INFO Losses: [2.3130581378936768, 2.5737838745117188, 12.368646621704102, 16.076583862304688, 1.3148750066757202], step: 72000, lr: 9.909159412887068e-05 2023-03-16 07:56:22,226 44k INFO Saving model and optimizer state at iteration 72 to ./logs\44k\G_72000.pth 2023-03-16 07:56:22,904 44k INFO Saving model and optimizer state at iteration 72 to ./logs\44k\D_72000.pth 2023-03-16 07:56:23,736 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_69000.pth 2023-03-16 07:56:23,789 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_69000.pth 2023-03-16 07:57:35,024 44k INFO Train Epoch: 72 [49%] 2023-03-16 07:57:35,025 44k INFO Losses: [2.6512742042541504, 2.202547550201416, 10.765206336975098, 22.25473403930664, 1.4418127536773682], step: 72200, lr: 9.909159412887068e-05 2023-03-16 07:58:47,055 44k INFO Train Epoch: 72 [68%] 2023-03-16 07:58:47,056 44k INFO Losses: [2.6853132247924805, 2.004429340362549, 9.440337181091309, 20.457284927368164, 1.323033094406128], step: 72400, lr: 9.909159412887068e-05 2023-03-16 07:59:58,818 44k INFO Train Epoch: 72 [88%] 2023-03-16 07:59:58,818 44k INFO Losses: [2.514395236968994, 2.050365447998047, 9.70026683807373, 20.430299758911133, 0.9085346460342407], step: 72600, lr: 9.909159412887068e-05 2023-03-16 08:00:41,718 44k INFO ====> Epoch: 72, cost 375.32 s 2023-03-16 08:01:19,692 44k INFO Train Epoch: 73 [8%] 2023-03-16 08:01:19,693 44k INFO Losses: [2.6645898818969727, 2.2670254707336426, 14.24485969543457, 15.388519287109375, 0.9493722915649414], step: 72800, lr: 9.907920767960457e-05 2023-03-16 08:02:30,900 44k INFO Train Epoch: 73 [28%] 2023-03-16 08:02:30,900 44k INFO Losses: [2.406534194946289, 2.2548716068267822, 10.36341381072998, 18.95358657836914, 1.2472317218780518], step: 73000, lr: 9.907920767960457e-05 2023-03-16 08:02:34,026 44k INFO Saving model and optimizer state at iteration 73 to ./logs\44k\G_73000.pth 2023-03-16 08:02:34,690 44k INFO Saving model and optimizer state at iteration 73 to ./logs\44k\D_73000.pth 2023-03-16 08:02:35,322 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_70000.pth 2023-03-16 08:02:35,347 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_70000.pth 2023-03-16 08:03:46,435 44k INFO Train Epoch: 73 [48%] 2023-03-16 08:03:46,436 44k INFO Losses: [2.3205201625823975, 2.4413833618164062, 14.139167785644531, 22.206674575805664, 1.5058813095092773], step: 73200, lr: 9.907920767960457e-05 2023-03-16 08:04:58,608 44k INFO Train Epoch: 73 [67%] 2023-03-16 08:04:58,608 44k INFO Losses: [2.464883804321289, 2.21601939201355, 11.3814697265625, 17.96633529663086, 1.5996133089065552], step: 73400, lr: 9.907920767960457e-05 2023-03-16 08:06:10,455 44k INFO Train Epoch: 73 [87%] 2023-03-16 08:06:10,455 44k INFO Losses: [2.3852152824401855, 2.1528103351593018, 14.479198455810547, 21.864933013916016, 1.347219705581665], step: 73600, lr: 9.907920767960457e-05 2023-03-16 08:06:56,750 44k INFO ====> Epoch: 73, cost 375.03 s 2023-03-16 08:07:31,156 44k INFO Train Epoch: 74 [7%] 2023-03-16 08:07:31,156 44k INFO Losses: [2.334798574447632, 2.3930344581604004, 10.216038703918457, 18.6864013671875, 1.0427542924880981], step: 73800, lr: 9.906682277864462e-05 2023-03-16 08:08:42,525 44k INFO Train Epoch: 74 [27%] 2023-03-16 08:08:42,525 44k INFO Losses: [2.203385829925537, 2.8809807300567627, 9.31330680847168, 19.625986099243164, 1.5512375831604004], step: 74000, lr: 9.906682277864462e-05 2023-03-16 08:08:45,501 44k INFO Saving model and optimizer state at iteration 74 to ./logs\44k\G_74000.pth 2023-03-16 08:08:46,256 44k INFO Saving model and optimizer state at iteration 74 to ./logs\44k\D_74000.pth 2023-03-16 08:08:46,885 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_71000.pth 2023-03-16 08:08:46,907 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_71000.pth 2023-03-16 08:09:58,111 44k INFO Train Epoch: 74 [47%] 2023-03-16 08:09:58,112 44k INFO Losses: [2.6014280319213867, 2.2384276390075684, 13.354851722717285, 22.026979446411133, 1.7550629377365112], step: 74200, lr: 9.906682277864462e-05 2023-03-16 08:11:10,118 44k INFO Train Epoch: 74 [66%] 2023-03-16 08:11:10,118 44k INFO Losses: [2.6676650047302246, 2.1448397636413574, 11.560942649841309, 19.6511287689209, 1.185433030128479], step: 74400, lr: 9.906682277864462e-05 2023-03-16 08:12:21,848 44k INFO Train Epoch: 74 [86%] 2023-03-16 08:12:21,849 44k INFO Losses: [2.684251308441162, 2.3151044845581055, 4.290086269378662, 15.301222801208496, 1.3987154960632324], step: 74600, lr: 9.906682277864462e-05 2023-03-16 08:13:11,812 44k INFO ====> Epoch: 74, cost 375.06 s 2023-03-16 08:13:42,492 44k INFO Train Epoch: 75 [6%] 2023-03-16 08:13:42,493 44k INFO Losses: [2.402836799621582, 2.376347780227661, 10.250021934509277, 21.834169387817383, 1.2624506950378418], step: 74800, lr: 9.905443942579728e-05 2023-03-16 08:14:53,842 44k INFO Train Epoch: 75 [26%] 2023-03-16 08:14:53,842 44k INFO Losses: [2.431527614593506, 2.1984190940856934, 8.636610984802246, 18.061769485473633, 1.0245157480239868], step: 75000, lr: 9.905443942579728e-05 2023-03-16 08:14:56,785 44k INFO Saving model and optimizer state at iteration 75 to ./logs\44k\G_75000.pth 2023-03-16 08:14:57,541 44k INFO Saving model and optimizer state at iteration 75 to ./logs\44k\D_75000.pth 2023-03-16 08:14:58,179 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_72000.pth 2023-03-16 08:14:58,206 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_72000.pth 2023-03-16 08:16:09,544 44k INFO Train Epoch: 75 [46%] 2023-03-16 08:16:09,544 44k INFO Losses: [2.490330219268799, 2.3275415897369385, 11.905667304992676, 19.14838218688965, 1.231898307800293], step: 75200, lr: 9.905443942579728e-05 2023-03-16 08:17:21,525 44k INFO Train Epoch: 75 [65%] 2023-03-16 08:17:21,525 44k INFO Losses: [2.6385440826416016, 2.147819995880127, 15.932032585144043, 23.828824996948242, 1.4443410634994507], step: 75400, lr: 9.905443942579728e-05 2023-03-16 08:18:33,077 44k INFO Train Epoch: 75 [85%] 2023-03-16 08:18:33,078 44k INFO Losses: [2.3297924995422363, 2.7480313777923584, 12.362821578979492, 23.63309097290039, 1.3659799098968506], step: 75600, lr: 9.905443942579728e-05 2023-03-16 08:19:26,710 44k INFO ====> Epoch: 75, cost 374.90 s 2023-03-16 08:19:53,967 44k INFO Train Epoch: 76 [5%] 2023-03-16 08:19:53,967 44k INFO Losses: [2.621009588241577, 1.989734411239624, 10.847014427185059, 19.115522384643555, 1.4305171966552734], step: 75800, lr: 9.904205762086905e-05 2023-03-16 08:21:05,302 44k INFO Train Epoch: 76 [25%] 2023-03-16 08:21:05,302 44k INFO Losses: [2.589390277862549, 2.1650614738464355, 13.935149192810059, 20.79219627380371, 1.2378240823745728], step: 76000, lr: 9.904205762086905e-05 2023-03-16 08:21:08,256 44k INFO Saving model and optimizer state at iteration 76 to ./logs\44k\G_76000.pth 2023-03-16 08:21:08,971 44k INFO Saving model and optimizer state at iteration 76 to ./logs\44k\D_76000.pth 2023-03-16 08:21:09,650 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_73000.pth 2023-03-16 08:21:09,675 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_73000.pth 2023-03-16 08:22:20,767 44k INFO Train Epoch: 76 [45%] 2023-03-16 08:22:20,768 44k INFO Losses: [2.764695167541504, 2.037844657897949, 7.30868673324585, 18.849525451660156, 1.5398517847061157], step: 76200, lr: 9.904205762086905e-05 2023-03-16 08:23:32,790 44k INFO Train Epoch: 76 [64%] 2023-03-16 08:23:32,790 44k INFO Losses: [2.4071483612060547, 2.176701068878174, 10.771800994873047, 20.07645034790039, 1.301085114479065], step: 76400, lr: 9.904205762086905e-05 2023-03-16 08:24:44,373 44k INFO Train Epoch: 76 [84%] 2023-03-16 08:24:44,374 44k INFO Losses: [2.562617063522339, 2.2594494819641113, 9.221774101257324, 18.526309967041016, 1.2344696521759033], step: 76600, lr: 9.904205762086905e-05 2023-03-16 08:25:41,655 44k INFO ====> Epoch: 76, cost 374.95 s 2023-03-16 08:26:05,167 44k INFO Train Epoch: 77 [4%] 2023-03-16 08:26:05,167 44k INFO Losses: [2.4960951805114746, 2.1838841438293457, 10.645844459533691, 18.92750358581543, 1.3045406341552734], step: 76800, lr: 9.902967736366644e-05 2023-03-16 08:27:16,579 44k INFO Train Epoch: 77 [24%] 2023-03-16 08:27:16,579 44k INFO Losses: [2.505439519882202, 2.347480535507202, 6.893670558929443, 12.618837356567383, 1.668109655380249], step: 77000, lr: 9.902967736366644e-05 2023-03-16 08:27:19,520 44k INFO Saving model and optimizer state at iteration 77 to ./logs\44k\G_77000.pth 2023-03-16 08:27:20,237 44k INFO Saving model and optimizer state at iteration 77 to ./logs\44k\D_77000.pth 2023-03-16 08:27:20,865 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_74000.pth 2023-03-16 08:27:20,891 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_74000.pth 2023-03-16 08:28:32,072 44k INFO Train Epoch: 77 [44%] 2023-03-16 08:28:32,072 44k INFO Losses: [2.258953809738159, 2.513097047805786, 14.614194869995117, 21.197948455810547, 1.2188001871109009], step: 77200, lr: 9.902967736366644e-05 2023-03-16 08:29:44,050 44k INFO Train Epoch: 77 [63%] 2023-03-16 08:29:44,050 44k INFO Losses: [2.5148801803588867, 2.0178418159484863, 9.908560752868652, 22.410139083862305, 1.7777165174484253], step: 77400, lr: 9.902967736366644e-05 2023-03-16 08:30:55,602 44k INFO Train Epoch: 77 [83%] 2023-03-16 08:30:55,603 44k INFO Losses: [2.5989484786987305, 2.272611618041992, 6.816071510314941, 22.063467025756836, 2.130357027053833], step: 77600, lr: 9.902967736366644e-05 2023-03-16 08:31:56,479 44k INFO ====> Epoch: 77, cost 374.82 s 2023-03-16 08:32:16,456 44k INFO Train Epoch: 78 [3%] 2023-03-16 08:32:16,457 44k INFO Losses: [2.5013084411621094, 2.2255375385284424, 8.702656745910645, 20.998201370239258, 1.4081073999404907], step: 77800, lr: 9.901729865399597e-05 2023-03-16 08:33:28,058 44k INFO Train Epoch: 78 [23%] 2023-03-16 08:33:28,059 44k INFO Losses: [2.4589316844940186, 2.281000852584839, 8.13118839263916, 16.346097946166992, 1.1765598058700562], step: 78000, lr: 9.901729865399597e-05 2023-03-16 08:33:31,099 44k INFO Saving model and optimizer state at iteration 78 to ./logs\44k\G_78000.pth 2023-03-16 08:33:31,812 44k INFO Saving model and optimizer state at iteration 78 to ./logs\44k\D_78000.pth 2023-03-16 08:33:32,438 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_75000.pth 2023-03-16 08:33:32,460 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_75000.pth 2023-03-16 08:34:43,494 44k INFO Train Epoch: 78 [43%] 2023-03-16 08:34:43,495 44k INFO Losses: [2.2363779544830322, 2.4098973274230957, 9.9180326461792, 22.015722274780273, 1.4916006326675415], step: 78200, lr: 9.901729865399597e-05 2023-03-16 08:35:55,354 44k INFO Train Epoch: 78 [62%] 2023-03-16 08:35:55,355 44k INFO Losses: [2.339261770248413, 2.482637405395508, 12.17127513885498, 21.207361221313477, 1.387426733970642], step: 78400, lr: 9.901729865399597e-05 2023-03-16 08:37:06,998 44k INFO Train Epoch: 78 [82%] 2023-03-16 08:37:06,999 44k INFO Losses: [2.7623422145843506, 2.041926145553589, 11.458247184753418, 23.37499237060547, 1.836408257484436], step: 78600, lr: 9.901729865399597e-05 2023-03-16 08:38:11,495 44k INFO ====> Epoch: 78, cost 375.02 s 2023-03-16 08:38:27,898 44k INFO Train Epoch: 79 [2%] 2023-03-16 08:38:27,898 44k INFO Losses: [2.3294332027435303, 2.3041718006134033, 9.930187225341797, 17.782129287719727, 1.4994163513183594], step: 78800, lr: 9.900492149166423e-05 2023-03-16 08:39:39,565 44k INFO Train Epoch: 79 [22%] 2023-03-16 08:39:39,566 44k INFO Losses: [2.5428919792175293, 2.2645680904388428, 9.759968757629395, 20.82404136657715, 1.1724902391433716], step: 79000, lr: 9.900492149166423e-05 2023-03-16 08:39:42,655 44k INFO Saving model and optimizer state at iteration 79 to ./logs\44k\G_79000.pth 2023-03-16 08:39:43,326 44k INFO Saving model and optimizer state at iteration 79 to ./logs\44k\D_79000.pth 2023-03-16 08:39:43,931 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_76000.pth 2023-03-16 08:39:43,961 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_76000.pth 2023-03-16 08:40:54,745 44k INFO Train Epoch: 79 [42%] 2023-03-16 08:40:54,745 44k INFO Losses: [2.695894479751587, 1.9336440563201904, 7.467321872711182, 16.397506713867188, 1.5879876613616943], step: 79200, lr: 9.900492149166423e-05 2023-03-16 08:42:06,773 44k INFO Train Epoch: 79 [61%] 2023-03-16 08:42:06,773 44k INFO Losses: [2.2390894889831543, 2.2661945819854736, 11.610947608947754, 20.589651107788086, 1.0034098625183105], step: 79400, lr: 9.900492149166423e-05 2023-03-16 08:43:18,346 44k INFO Train Epoch: 79 [81%] 2023-03-16 08:43:18,346 44k INFO Losses: [2.4455981254577637, 2.336909294128418, 10.225975036621094, 20.968854904174805, 1.353347897529602], step: 79600, lr: 9.900492149166423e-05 2023-03-16 08:44:26,313 44k INFO ====> Epoch: 79, cost 374.82 s 2023-03-16 08:44:39,008 44k INFO Train Epoch: 80 [1%] 2023-03-16 08:44:39,008 44k INFO Losses: [2.483541488647461, 2.3048312664031982, 10.860982894897461, 22.411209106445312, 1.5081133842468262], step: 79800, lr: 9.899254587647776e-05 2023-03-16 08:45:50,897 44k INFO Train Epoch: 80 [21%] 2023-03-16 08:45:50,897 44k INFO Losses: [2.4603524208068848, 2.3156752586364746, 7.199110507965088, 19.657878875732422, 1.4247809648513794], step: 80000, lr: 9.899254587647776e-05 2023-03-16 08:45:53,966 44k INFO Saving model and optimizer state at iteration 80 to ./logs\44k\G_80000.pth 2023-03-16 08:45:54,636 44k INFO Saving model and optimizer state at iteration 80 to ./logs\44k\D_80000.pth 2023-03-16 08:45:55,267 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_77000.pth 2023-03-16 08:45:55,293 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_77000.pth 2023-03-16 08:47:06,068 44k INFO Train Epoch: 80 [41%] 2023-03-16 08:47:06,068 44k INFO Losses: [2.4275999069213867, 2.322844982147217, 8.517242431640625, 21.514070510864258, 1.490932583808899], step: 80200, lr: 9.899254587647776e-05 2023-03-16 08:48:18,194 44k INFO Train Epoch: 80 [60%] 2023-03-16 08:48:18,194 44k INFO Losses: [2.5545899868011475, 2.649416446685791, 9.965587615966797, 21.290109634399414, 1.3762506246566772], step: 80400, lr: 9.899254587647776e-05 2023-03-16 08:49:29,726 44k INFO Train Epoch: 80 [80%] 2023-03-16 08:49:29,727 44k INFO Losses: [2.432999610900879, 2.5708298683166504, 10.679574966430664, 16.490766525268555, 1.0844477415084839], step: 80600, lr: 9.899254587647776e-05 2023-03-16 08:50:41,335 44k INFO ====> Epoch: 80, cost 375.02 s 2023-03-16 08:50:50,592 44k INFO Train Epoch: 81 [0%] 2023-03-16 08:50:50,593 44k INFO Losses: [2.502391815185547, 2.208683729171753, 6.374049186706543, 19.98403549194336, 1.2378820180892944], step: 80800, lr: 9.89801718082432e-05 2023-03-16 08:52:02,339 44k INFO Train Epoch: 81 [20%] 2023-03-16 08:52:02,340 44k INFO Losses: [2.4770898818969727, 2.2964415550231934, 10.303099632263184, 23.075862884521484, 1.0373879671096802], step: 81000, lr: 9.89801718082432e-05 2023-03-16 08:52:05,411 44k INFO Saving model and optimizer state at iteration 81 to ./logs\44k\G_81000.pth 2023-03-16 08:52:06,124 44k INFO Saving model and optimizer state at iteration 81 to ./logs\44k\D_81000.pth 2023-03-16 08:52:06,753 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_78000.pth 2023-03-16 08:52:06,779 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_78000.pth 2023-03-16 08:53:17,631 44k INFO Train Epoch: 81 [40%] 2023-03-16 08:53:17,631 44k INFO Losses: [2.6314992904663086, 2.77280330657959, 12.704195976257324, 24.806196212768555, 1.2970190048217773], step: 81200, lr: 9.89801718082432e-05 2023-03-16 08:54:29,745 44k INFO Train Epoch: 81 [59%] 2023-03-16 08:54:29,746 44k INFO Losses: [2.5656938552856445, 2.0626039505004883, 13.572209358215332, 24.554906845092773, 1.3037029504776], step: 81400, lr: 9.89801718082432e-05 2023-03-16 08:55:41,234 44k INFO Train Epoch: 81 [79%] 2023-03-16 08:55:41,235 44k INFO Losses: [2.4658374786376953, 2.210562229156494, 10.703784942626953, 20.447978973388672, 1.3574137687683105], step: 81600, lr: 9.89801718082432e-05 2023-03-16 08:56:52,957 44k INFO Train Epoch: 81 [99%] 2023-03-16 08:56:52,958 44k INFO Losses: [2.4650776386260986, 2.2526063919067383, 10.94345474243164, 20.874055862426758, 1.1980767250061035], step: 81800, lr: 9.89801718082432e-05 2023-03-16 08:56:56,503 44k INFO ====> Epoch: 81, cost 375.17 s 2023-03-16 08:58:13,843 44k INFO Train Epoch: 82 [19%] 2023-03-16 08:58:13,843 44k INFO Losses: [2.2458887100219727, 2.815030097961426, 10.976490020751953, 22.55339813232422, 1.3060927391052246], step: 82000, lr: 9.896779928676716e-05 2023-03-16 08:58:16,845 44k INFO Saving model and optimizer state at iteration 82 to ./logs\44k\G_82000.pth 2023-03-16 08:58:17,612 44k INFO Saving model and optimizer state at iteration 82 to ./logs\44k\D_82000.pth 2023-03-16 08:58:18,236 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_79000.pth 2023-03-16 08:58:18,258 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_79000.pth 2023-03-16 08:59:28,840 44k INFO Train Epoch: 82 [39%] 2023-03-16 08:59:28,841 44k INFO Losses: [2.255950450897217, 2.870279312133789, 13.86103630065918, 27.950183868408203, 1.3149572610855103], step: 82200, lr: 9.896779928676716e-05 2023-03-16 09:00:40,967 44k INFO Train Epoch: 82 [58%] 2023-03-16 09:00:40,968 44k INFO Losses: [2.361602544784546, 2.230639934539795, 11.809147834777832, 20.376632690429688, 1.2128304243087769], step: 82400, lr: 9.896779928676716e-05 2023-03-16 09:01:52,268 44k INFO Train Epoch: 82 [78%] 2023-03-16 09:01:52,268 44k INFO Losses: [2.4168827533721924, 2.234450578689575, 11.464388847351074, 20.628000259399414, 1.2325433492660522], step: 82600, lr: 9.896779928676716e-05 2023-03-16 09:03:03,865 44k INFO Train Epoch: 82 [98%] 2023-03-16 09:03:03,865 44k INFO Losses: [2.5826163291931152, 2.0423102378845215, 6.507233142852783, 21.60892105102539, 1.1802419424057007], step: 82800, lr: 9.896779928676716e-05 2023-03-16 09:03:10,933 44k INFO ====> Epoch: 82, cost 374.43 s 2023-03-16 09:04:24,529 44k INFO Train Epoch: 83 [18%] 2023-03-16 09:04:24,529 44k INFO Losses: [2.536294460296631, 2.2390122413635254, 7.695847511291504, 13.978148460388184, 0.897167444229126], step: 83000, lr: 9.895542831185631e-05 2023-03-16 09:04:27,517 44k INFO Saving model and optimizer state at iteration 83 to ./logs\44k\G_83000.pth 2023-03-16 09:04:28,236 44k INFO Saving model and optimizer state at iteration 83 to ./logs\44k\D_83000.pth 2023-03-16 09:04:28,849 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_80000.pth 2023-03-16 09:04:28,875 44k INFO .. 
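With eval_interval 1000 and keep_ckpts 3, a new G_/D_ checkpoint pair is written every 1,000 global steps (roughly once per epoch here) and the pair from three saves earlier is deleted, which is why each save above is followed by a matching pair of "Free up space" deletions. A minimal sketch of that rotation, assuming plain glob-and-sort housekeeping over the model directory (the project's own clean-up helper may select files differently):

    # Minimal sketch of keep_ckpts-style rotation: keep only the newest `keep`
    # G_*.pth / D_*.pth checkpoints in model_dir. Written against the behaviour
    # visible in the log, not the upstream helper.
    import glob
    import os
    import re

    def rotate_checkpoints(model_dir: str, keep: int = 3) -> None:
        for prefix in ("G_", "D_"):
            ckpts = glob.glob(os.path.join(model_dir, prefix + "*.pth"))
            # sort by the step number embedded in the name, e.g. G_82000.pth
            ckpts.sort(key=lambda p: int(re.search(r"(\d+)\.pth$", p).group(1)))
            for old in ckpts[:-keep]:
                os.remove(old)  # the run logs ".. Free up space by deleting ckpt ..."

    # rotate_checkpoints("./logs/44k", keep=3)
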
Free up space by deleting ckpt ./logs\44k\D_80000.pth 2023-03-16 09:05:39,500 44k INFO Train Epoch: 83 [38%] 2023-03-16 09:05:39,500 44k INFO Losses: [2.44332218170166, 2.298760175704956, 11.801156044006348, 20.952777862548828, 1.3831758499145508], step: 83200, lr: 9.895542831185631e-05 2023-03-16 09:06:51,364 44k INFO Train Epoch: 83 [57%] 2023-03-16 09:06:51,364 44k INFO Losses: [2.5737709999084473, 2.248979330062866, 11.688226699829102, 18.79779052734375, 1.5552974939346313], step: 83400, lr: 9.895542831185631e-05 2023-03-16 09:08:02,880 44k INFO Train Epoch: 83 [77%] 2023-03-16 09:08:02,880 44k INFO Losses: [2.4200944900512695, 2.2563607692718506, 7.927487373352051, 16.792776107788086, 1.2168667316436768], step: 83600, lr: 9.895542831185631e-05 2023-03-16 09:09:14,480 44k INFO Train Epoch: 83 [97%] 2023-03-16 09:09:14,481 44k INFO Losses: [2.4972729682922363, 2.0163214206695557, 10.501612663269043, 20.172029495239258, 1.2477095127105713], step: 83800, lr: 9.895542831185631e-05 2023-03-16 09:09:25,203 44k INFO ====> Epoch: 83, cost 374.27 s 2023-03-16 09:10:35,342 44k INFO Train Epoch: 84 [17%] 2023-03-16 09:10:35,342 44k INFO Losses: [2.285039186477661, 2.3283092975616455, 11.608366966247559, 22.11026382446289, 1.3514654636383057], step: 84000, lr: 9.894305888331732e-05 2023-03-16 09:10:38,395 44k INFO Saving model and optimizer state at iteration 84 to ./logs\44k\G_84000.pth 2023-03-16 09:10:39,094 44k INFO Saving model and optimizer state at iteration 84 to ./logs\44k\D_84000.pth 2023-03-16 09:10:39,729 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_81000.pth 2023-03-16 09:10:39,754 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_81000.pth 2023-03-16 09:11:50,400 44k INFO Train Epoch: 84 [37%] 2023-03-16 09:11:50,400 44k INFO Losses: [2.5742058753967285, 2.1825203895568848, 9.148046493530273, 18.277294158935547, 1.3206961154937744], step: 84200, lr: 9.894305888331732e-05 2023-03-16 09:13:02,291 44k INFO Train Epoch: 84 [56%] 2023-03-16 09:13:02,292 44k INFO Losses: [2.462679862976074, 2.294734239578247, 9.28104305267334, 20.143587112426758, 1.5915547609329224], step: 84400, lr: 9.894305888331732e-05 2023-03-16 09:14:13,736 44k INFO Train Epoch: 84 [76%] 2023-03-16 09:14:13,737 44k INFO Losses: [2.426880359649658, 2.3748016357421875, 15.442784309387207, 22.434341430664062, 1.6158491373062134], step: 84600, lr: 9.894305888331732e-05 2023-03-16 09:15:25,271 44k INFO Train Epoch: 84 [96%] 2023-03-16 09:15:25,271 44k INFO Losses: [2.253547430038452, 2.538923501968384, 13.277788162231445, 18.98879051208496, 1.4754467010498047], step: 84800, lr: 9.894305888331732e-05 2023-03-16 09:15:39,631 44k INFO ====> Epoch: 84, cost 374.43 s 2023-03-16 09:16:46,116 44k INFO Train Epoch: 85 [16%] 2023-03-16 09:16:46,116 44k INFO Losses: [2.466411590576172, 2.32041597366333, 9.246880531311035, 21.504104614257812, 1.6453016996383667], step: 85000, lr: 9.89306910009569e-05 2023-03-16 09:16:49,209 44k INFO Saving model and optimizer state at iteration 85 to ./logs\44k\G_85000.pth 2023-03-16 09:16:49,915 44k INFO Saving model and optimizer state at iteration 85 to ./logs\44k\D_85000.pth 2023-03-16 09:16:50,543 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_82000.pth 2023-03-16 09:16:50,567 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_82000.pth 2023-03-16 09:18:01,057 44k INFO Train Epoch: 85 [36%] 2023-03-16 09:18:01,057 44k INFO Losses: [2.3698880672454834, 2.4822804927825928, 8.87939453125, 19.538908004760742, 1.2417246103286743], step: 85200, lr: 9.89306910009569e-05 2023-03-16 09:19:12,916 44k INFO Train Epoch: 85 [55%] 2023-03-16 09:19:12,917 44k INFO Losses: [2.4803197383880615, 2.1381683349609375, 9.977704048156738, 23.01763343811035, 1.3221185207366943], step: 85400, lr: 9.89306910009569e-05 2023-03-16 09:20:24,399 44k INFO Train Epoch: 85 [75%] 2023-03-16 09:20:24,399 44k INFO Losses: [2.555454969406128, 2.3342885971069336, 10.486130714416504, 20.234840393066406, 1.189312219619751], step: 85600, lr: 9.89306910009569e-05 2023-03-16 09:21:36,035 44k INFO Train Epoch: 85 [95%] 2023-03-16 09:21:36,036 44k INFO Losses: [2.608707904815674, 2.2419769763946533, 7.738720417022705, 18.764848709106445, 1.4283477067947388], step: 85800, lr: 9.89306910009569e-05 2023-03-16 09:21:53,891 44k INFO ====> Epoch: 85, cost 374.26 s 2023-03-16 09:22:56,815 44k INFO Train Epoch: 86 [15%] 2023-03-16 09:22:56,815 44k INFO Losses: [2.5920183658599854, 2.233304262161255, 8.666301727294922, 19.75871467590332, 1.4143037796020508], step: 86000, lr: 9.891832466458178e-05 2023-03-16 09:22:59,932 44k INFO Saving model and optimizer state at iteration 86 to ./logs\44k\G_86000.pth 2023-03-16 09:23:00,651 44k INFO Saving model and optimizer state at iteration 86 to ./logs\44k\D_86000.pth 2023-03-16 09:23:01,274 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_83000.pth 2023-03-16 09:23:01,296 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_83000.pth 2023-03-16 09:24:11,855 44k INFO Train Epoch: 86 [35%] 2023-03-16 09:24:11,856 44k INFO Losses: [2.4423017501831055, 2.272350549697876, 5.2909698486328125, 15.038141250610352, 1.3805804252624512], step: 86200, lr: 9.891832466458178e-05 2023-03-16 09:25:23,707 44k INFO Train Epoch: 86 [54%] 2023-03-16 09:25:23,707 44k INFO Losses: [2.428504705429077, 2.300312042236328, 10.701539039611816, 22.59787368774414, 1.5780612230300903], step: 86400, lr: 9.891832466458178e-05 2023-03-16 09:26:35,234 44k INFO Train Epoch: 86 [74%] 2023-03-16 09:26:35,234 44k INFO Losses: [2.6799802780151367, 2.1990044116973877, 10.587084770202637, 20.922901153564453, 0.8492997884750366], step: 86600, lr: 9.891832466458178e-05 2023-03-16 09:27:46,954 44k INFO Train Epoch: 86 [94%] 2023-03-16 09:27:46,955 44k INFO Losses: [2.530937671661377, 2.26322078704834, 10.54049301147461, 18.824464797973633, 1.6132570505142212], step: 86800, lr: 9.891832466458178e-05 2023-03-16 09:28:08,372 44k INFO ====> Epoch: 86, cost 374.48 s 2023-03-16 09:29:07,758 44k INFO Train Epoch: 87 [14%] 2023-03-16 09:29:07,759 44k INFO Losses: [2.5990445613861084, 2.171870470046997, 11.60184383392334, 22.513011932373047, 1.3168915510177612], step: 87000, lr: 9.89059598739987e-05 2023-03-16 09:29:10,865 44k INFO Saving model and optimizer state at iteration 87 to ./logs\44k\G_87000.pth 2023-03-16 09:29:11,536 44k INFO Saving model and optimizer state at iteration 87 to ./logs\44k\D_87000.pth 2023-03-16 09:29:12,164 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_84000.pth 2023-03-16 09:29:12,191 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_84000.pth 2023-03-16 09:30:22,798 44k INFO Train Epoch: 87 [34%] 2023-03-16 09:30:22,799 44k INFO Losses: [2.414428949356079, 2.4445366859436035, 11.911677360534668, 21.196571350097656, 1.5586199760437012], step: 87200, lr: 9.89059598739987e-05 2023-03-16 09:31:34,459 44k INFO Train Epoch: 87 [53%] 2023-03-16 09:31:34,459 44k INFO Losses: [2.5124599933624268, 2.5057621002197266, 6.575525760650635, 13.146332740783691, 1.1289998292922974], step: 87400, lr: 9.89059598739987e-05 2023-03-16 09:32:46,095 44k INFO Train Epoch: 87 [73%] 2023-03-16 09:32:46,096 44k INFO Losses: [2.5963377952575684, 2.019986152648926, 11.445646286010742, 18.536222457885742, 0.956800639629364], step: 87600, lr: 9.89059598739987e-05 2023-03-16 09:33:57,790 44k INFO Train Epoch: 87 [93%] 2023-03-16 09:33:57,791 44k INFO Losses: [2.5420660972595215, 2.2588889598846436, 13.185507774353027, 19.680587768554688, 1.476169228553772], step: 87800, lr: 9.89059598739987e-05 2023-03-16 09:34:22,772 44k INFO ====> Epoch: 87, cost 374.40 s 2023-03-16 09:35:18,512 44k INFO Train Epoch: 88 [13%] 2023-03-16 09:35:18,512 44k INFO Losses: [2.4435153007507324, 2.4803285598754883, 11.846606254577637, 21.53473663330078, 0.7915248870849609], step: 88000, lr: 9.889359662901445e-05 2023-03-16 09:35:21,541 44k INFO Saving model and optimizer state at iteration 88 to ./logs\44k\G_88000.pth 2023-03-16 09:35:22,260 44k INFO Saving model and optimizer state at iteration 88 to ./logs\44k\D_88000.pth 2023-03-16 09:35:22,907 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_85000.pth 2023-03-16 09:35:22,933 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_85000.pth 2023-03-16 09:36:33,715 44k INFO Train Epoch: 88 [33%] 2023-03-16 09:36:33,716 44k INFO Losses: [2.4632749557495117, 2.2379255294799805, 6.509737491607666, 17.37969970703125, 1.097799301147461], step: 88200, lr: 9.889359662901445e-05 2023-03-16 09:37:45,258 44k INFO Train Epoch: 88 [52%] 2023-03-16 09:37:45,258 44k INFO Losses: [2.578878402709961, 2.0770108699798584, 12.151925086975098, 20.63962173461914, 1.3579449653625488], step: 88400, lr: 9.889359662901445e-05 2023-03-16 09:38:57,031 44k INFO Train Epoch: 88 [72%] 2023-03-16 09:38:57,032 44k INFO Losses: [2.294105291366577, 2.5774037837982178, 9.299508094787598, 25.318513870239258, 1.6841968297958374], step: 88600, lr: 9.889359662901445e-05 2023-03-16 09:40:08,670 44k INFO Train Epoch: 88 [92%] 2023-03-16 09:40:08,671 44k INFO Losses: [2.5860586166381836, 2.47522234916687, 10.411622047424316, 18.660018920898438, 1.145768642425537], step: 88800, lr: 9.889359662901445e-05 2023-03-16 09:40:37,132 44k INFO ====> Epoch: 88, cost 374.36 s 2023-03-16 09:41:29,509 44k INFO Train Epoch: 89 [12%] 2023-03-16 09:41:29,510 44k INFO Losses: [2.764439582824707, 2.640115261077881, 10.152050018310547, 19.31775665283203, 1.2350974082946777], step: 89000, lr: 9.888123492943583e-05 2023-03-16 09:41:32,585 44k INFO Saving model and optimizer state at iteration 89 to ./logs\44k\G_89000.pth 2023-03-16 09:41:33,248 44k INFO Saving model and optimizer state at iteration 89 to ./logs\44k\D_89000.pth 2023-03-16 09:41:33,882 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_86000.pth 2023-03-16 09:41:33,906 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_86000.pth 2023-03-16 09:42:44,538 44k INFO Train Epoch: 89 [32%] 2023-03-16 09:42:44,539 44k INFO Losses: [2.5072550773620605, 2.2392876148223877, 11.934779167175293, 23.634946823120117, 1.421088695526123], step: 89200, lr: 9.888123492943583e-05 2023-03-16 09:43:56,024 44k INFO Train Epoch: 89 [51%] 2023-03-16 09:43:56,024 44k INFO Losses: [2.405548572540283, 2.0596470832824707, 13.566308975219727, 22.439037322998047, 1.3956049680709839], step: 89400, lr: 9.888123492943583e-05 2023-03-16 09:45:07,863 44k INFO Train Epoch: 89 [71%] 2023-03-16 09:45:07,864 44k INFO Losses: [2.3175876140594482, 2.416653871536255, 15.02763557434082, 22.61331558227539, 0.8822878003120422], step: 89600, lr: 9.888123492943583e-05 2023-03-16 09:46:19,503 44k INFO Train Epoch: 89 [91%] 2023-03-16 09:46:19,503 44k INFO Losses: [2.4850311279296875, 2.3599612712860107, 12.672500610351562, 25.221946716308594, 1.4505938291549683], step: 89800, lr: 9.888123492943583e-05 2023-03-16 09:46:51,553 44k INFO ====> Epoch: 89, cost 374.42 s 2023-03-16 09:47:40,481 44k INFO Train Epoch: 90 [11%] 2023-03-16 09:47:40,482 44k INFO Losses: [2.296196937561035, 2.2783255577087402, 16.8337345123291, 23.087385177612305, 1.426957607269287], step: 90000, lr: 9.886887477506964e-05 2023-03-16 09:47:43,484 44k INFO Saving model and optimizer state at iteration 90 to ./logs\44k\G_90000.pth 2023-03-16 09:47:44,148 44k INFO Saving model and optimizer state at iteration 90 to ./logs\44k\D_90000.pth 2023-03-16 09:47:44,767 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_87000.pth 2023-03-16 09:47:44,791 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_87000.pth 2023-03-16 09:48:55,437 44k INFO Train Epoch: 90 [31%] 2023-03-16 09:48:55,438 44k INFO Losses: [2.3520121574401855, 2.0827693939208984, 16.129817962646484, 23.40156364440918, 1.1940932273864746], step: 90200, lr: 9.886887477506964e-05 2023-03-16 09:50:06,932 44k INFO Train Epoch: 90 [50%] 2023-03-16 09:50:06,933 44k INFO Losses: [2.3947930335998535, 2.2279841899871826, 12.65507984161377, 20.63216781616211, 1.218024492263794], step: 90400, lr: 9.886887477506964e-05 2023-03-16 09:51:18,756 44k INFO Train Epoch: 90 [70%] 2023-03-16 09:51:18,756 44k INFO Losses: [2.519961357116699, 2.5786502361297607, 7.6782331466674805, 21.633529663085938, 1.0456920862197876], step: 90600, lr: 9.886887477506964e-05 2023-03-16 09:52:30,394 44k INFO Train Epoch: 90 [90%] 2023-03-16 09:52:30,394 44k INFO Losses: [2.264974355697632, 2.3510477542877197, 13.131908416748047, 19.271560668945312, 0.909986674785614], step: 90800, lr: 9.886887477506964e-05 2023-03-16 09:53:06,108 44k INFO ====> Epoch: 90, cost 374.56 s 2023-03-16 09:53:51,350 44k INFO Train Epoch: 91 [10%] 2023-03-16 09:53:51,351 44k INFO Losses: [2.28155255317688, 2.382960081100464, 10.253368377685547, 23.311002731323242, 1.5722631216049194], step: 91000, lr: 9.885651616572276e-05 2023-03-16 09:53:54,472 44k INFO Saving model and optimizer state at iteration 91 to ./logs\44k\G_91000.pth 2023-03-16 09:53:55,144 44k INFO Saving model and optimizer state at iteration 91 to ./logs\44k\D_91000.pth 2023-03-16 09:53:55,852 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_88000.pth 2023-03-16 09:53:55,951 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_88000.pth 2023-03-16 09:55:06,964 44k INFO Train Epoch: 91 [30%] 2023-03-16 09:55:06,965 44k INFO Losses: [2.568169355392456, 2.312192678451538, 11.520745277404785, 22.137693405151367, 1.4236477613449097], step: 91200, lr: 9.885651616572276e-05 2023-03-16 09:56:18,632 44k INFO Train Epoch: 91 [50%] 2023-03-16 09:56:18,632 44k INFO Losses: [2.465627908706665, 2.60711932182312, 11.168686866760254, 21.6435489654541, 1.4331811666488647], step: 91400, lr: 9.885651616572276e-05 2023-03-16 09:57:30,609 44k INFO Train Epoch: 91 [69%] 2023-03-16 09:57:30,609 44k INFO Losses: [2.3018529415130615, 2.2485485076904297, 11.710991859436035, 20.75224494934082, 0.900825023651123], step: 91600, lr: 9.885651616572276e-05 2023-03-16 09:58:42,457 44k INFO Train Epoch: 91 [89%] 2023-03-16 09:58:42,457 44k INFO Losses: [2.3866446018218994, 2.2490594387054443, 9.760946273803711, 19.133052825927734, 1.396047830581665], step: 91800, lr: 9.885651616572276e-05 2023-03-16 09:59:21,753 44k INFO ====> Epoch: 91, cost 375.65 s 2023-03-16 10:00:03,305 44k INFO Train Epoch: 92 [9%] 2023-03-16 10:00:03,305 44k INFO Losses: [2.630767345428467, 2.3219544887542725, 8.898465156555176, 19.66999626159668, 1.5874972343444824], step: 92000, lr: 9.884415910120204e-05 2023-03-16 10:00:06,370 44k INFO Saving model and optimizer state at iteration 92 to ./logs\44k\G_92000.pth 2023-03-16 10:00:07,084 44k INFO Saving model and optimizer state at iteration 92 to ./logs\44k\D_92000.pth 2023-03-16 10:00:07,715 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_89000.pth 2023-03-16 10:00:07,742 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_89000.pth 2023-03-16 10:01:18,859 44k INFO Train Epoch: 92 [29%] 2023-03-16 10:01:18,859 44k INFO Losses: [2.2569010257720947, 2.421705484390259, 11.445120811462402, 17.16594886779785, 1.6108025312423706], step: 92200, lr: 9.884415910120204e-05 2023-03-16 10:02:30,240 44k INFO Train Epoch: 92 [49%] 2023-03-16 10:02:30,241 44k INFO Losses: [2.3128585815429688, 2.4064743518829346, 12.000336647033691, 21.91853141784668, 1.402632236480713], step: 92400, lr: 9.884415910120204e-05 2023-03-16 10:03:42,332 44k INFO Train Epoch: 92 [68%] 2023-03-16 10:03:42,333 44k INFO Losses: [2.5819995403289795, 2.1667747497558594, 5.624081611633301, 17.503890991210938, 1.3848079442977905], step: 92600, lr: 9.884415910120204e-05 2023-03-16 10:04:54,087 44k INFO Train Epoch: 92 [88%] 2023-03-16 10:04:54,088 44k INFO Losses: [2.498790979385376, 2.336146831512451, 10.666160583496094, 16.48863410949707, 1.3359835147857666], step: 92800, lr: 9.884415910120204e-05 2023-03-16 10:05:36,964 44k INFO ====> Epoch: 92, cost 375.21 s 2023-03-16 10:06:14,935 44k INFO Train Epoch: 93 [8%] 2023-03-16 10:06:14,936 44k INFO Losses: [2.7477219104766846, 1.864100694656372, 10.564748764038086, 14.589831352233887, 1.4894291162490845], step: 93000, lr: 9.883180358131438e-05 2023-03-16 10:06:17,975 44k INFO Saving model and optimizer state at iteration 93 to ./logs\44k\G_93000.pth 2023-03-16 10:06:18,692 44k INFO Saving model and optimizer state at iteration 93 to ./logs\44k\D_93000.pth 2023-03-16 10:06:19,333 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_90000.pth 2023-03-16 10:06:19,358 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_90000.pth 2023-03-16 10:07:30,330 44k INFO Train Epoch: 93 [28%] 2023-03-16 10:07:30,331 44k INFO Losses: [2.288186550140381, 2.465183973312378, 10.223837852478027, 22.78514289855957, 1.2428505420684814], step: 93200, lr: 9.883180358131438e-05 2023-03-16 10:08:41,819 44k INFO Train Epoch: 93 [48%] 2023-03-16 10:08:41,819 44k INFO Losses: [2.2503550052642822, 2.3726909160614014, 12.3851318359375, 21.652217864990234, 1.5490036010742188], step: 93400, lr: 9.883180358131438e-05 2023-03-16 10:09:53,866 44k INFO Train Epoch: 93 [67%] 2023-03-16 10:09:53,866 44k INFO Losses: [2.2903709411621094, 2.349590539932251, 12.833297729492188, 23.3292293548584, 0.9508614540100098], step: 93600, lr: 9.883180358131438e-05 2023-03-16 10:11:05,709 44k INFO Train Epoch: 93 [87%] 2023-03-16 10:11:05,709 44k INFO Losses: [2.510441303253174, 2.1532437801361084, 9.292210578918457, 19.792085647583008, 1.6795915365219116], step: 93800, lr: 9.883180358131438e-05 2023-03-16 10:11:52,043 44k INFO ====> Epoch: 93, cost 375.08 s 2023-03-16 10:12:26,453 44k INFO Train Epoch: 94 [7%] 2023-03-16 10:12:26,453 44k INFO Losses: [2.3050482273101807, 2.413522243499756, 11.30544662475586, 20.490615844726562, 1.1789718866348267], step: 94000, lr: 9.881944960586671e-05 2023-03-16 10:12:29,495 44k INFO Saving model and optimizer state at iteration 94 to ./logs\44k\G_94000.pth 2023-03-16 10:12:30,205 44k INFO Saving model and optimizer state at iteration 94 to ./logs\44k\D_94000.pth 2023-03-16 10:12:30,845 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_91000.pth 2023-03-16 10:12:30,870 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_91000.pth 2023-03-16 10:13:42,028 44k INFO Train Epoch: 94 [27%] 2023-03-16 10:13:42,028 44k INFO Losses: [2.531876564025879, 2.359044313430786, 11.063161849975586, 21.274824142456055, 1.5374351739883423], step: 94200, lr: 9.881944960586671e-05 2023-03-16 10:14:53,436 44k INFO Train Epoch: 94 [47%] 2023-03-16 10:14:53,436 44k INFO Losses: [2.389726161956787, 2.366335868835449, 10.261191368103027, 21.914411544799805, 1.690136432647705], step: 94400, lr: 9.881944960586671e-05 2023-03-16 10:16:05,491 44k INFO Train Epoch: 94 [66%] 2023-03-16 10:16:05,491 44k INFO Losses: [2.7154252529144287, 2.4505748748779297, 10.284183502197266, 20.64603042602539, 1.0555118322372437], step: 94600, lr: 9.881944960586671e-05 2023-03-16 10:17:17,196 44k INFO Train Epoch: 94 [86%] 2023-03-16 10:17:17,197 44k INFO Losses: [2.574066162109375, 2.2493834495544434, 5.171478271484375, 19.64855194091797, 0.9616140127182007], step: 94800, lr: 9.881944960586671e-05 2023-03-16 10:18:07,314 44k INFO ====> Epoch: 94, cost 375.27 s 2023-03-16 10:18:38,106 44k INFO Train Epoch: 95 [6%] 2023-03-16 10:18:38,106 44k INFO Losses: [2.407264471054077, 2.2122113704681396, 11.964512825012207, 20.47117805480957, 1.4750803709030151], step: 95000, lr: 9.880709717466598e-05 2023-03-16 10:18:41,224 44k INFO Saving model and optimizer state at iteration 95 to ./logs\44k\G_95000.pth 2023-03-16 10:18:41,895 44k INFO Saving model and optimizer state at iteration 95 to ./logs\44k\D_95000.pth 2023-03-16 10:18:42,524 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_92000.pth 2023-03-16 10:18:42,553 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_92000.pth 2023-03-16 10:19:53,704 44k INFO Train Epoch: 95 [26%] 2023-03-16 10:19:53,704 44k INFO Losses: [2.3830575942993164, 2.13096284866333, 9.147899627685547, 18.47677993774414, 1.4473885297775269], step: 95200, lr: 9.880709717466598e-05 2023-03-16 10:21:05,081 44k INFO Train Epoch: 95 [46%] 2023-03-16 10:21:05,082 44k INFO Losses: [2.548275947570801, 2.365858554840088, 11.29920768737793, 20.096200942993164, 1.1846317052841187], step: 95400, lr: 9.880709717466598e-05 2023-03-16 10:22:17,085 44k INFO Train Epoch: 95 [65%] 2023-03-16 10:22:17,085 44k INFO Losses: [2.439570188522339, 2.1662001609802246, 10.441203117370605, 20.189376831054688, 1.2520674467086792], step: 95600, lr: 9.880709717466598e-05 2023-03-16 10:23:28,717 44k INFO Train Epoch: 95 [85%] 2023-03-16 10:23:28,717 44k INFO Losses: [2.6683802604675293, 2.1408257484436035, 6.228217601776123, 15.492775917053223, 1.3028850555419922], step: 95800, lr: 9.880709717466598e-05 2023-03-16 10:24:22,362 44k INFO ====> Epoch: 95, cost 375.05 s 2023-03-16 10:24:49,568 44k INFO Train Epoch: 96 [5%] 2023-03-16 10:24:49,568 44k INFO Losses: [2.423225164413452, 2.180241584777832, 13.85638427734375, 20.591611862182617, 1.3317956924438477], step: 96000, lr: 9.879474628751914e-05 2023-03-16 10:24:52,685 44k INFO Saving model and optimizer state at iteration 96 to ./logs\44k\G_96000.pth 2023-03-16 10:24:53,362 44k INFO Saving model and optimizer state at iteration 96 to ./logs\44k\D_96000.pth 2023-03-16 10:24:54,017 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_93000.pth 2023-03-16 10:24:54,052 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_93000.pth 2023-03-16 10:26:05,315 44k INFO Train Epoch: 96 [25%] 2023-03-16 10:26:05,316 44k INFO Losses: [2.3847250938415527, 2.13529634475708, 13.05992317199707, 18.601070404052734, 1.0959008932113647], step: 96200, lr: 9.879474628751914e-05 2023-03-16 10:27:16,539 44k INFO Train Epoch: 96 [45%] 2023-03-16 10:27:16,539 44k INFO Losses: [2.4747719764709473, 2.536181688308716, 12.518535614013672, 22.06035804748535, 1.3983831405639648], step: 96400, lr: 9.879474628751914e-05 2023-03-16 10:28:28,612 44k INFO Train Epoch: 96 [64%] 2023-03-16 10:28:28,613 44k INFO Losses: [2.390172004699707, 2.452089786529541, 11.995162010192871, 20.491117477416992, 1.4234834909439087], step: 96600, lr: 9.879474628751914e-05 2023-03-16 10:29:40,518 44k INFO Train Epoch: 96 [84%] 2023-03-16 10:29:40,519 44k INFO Losses: [2.4926156997680664, 2.1358277797698975, 9.909256935119629, 19.07024574279785, 1.2519376277923584], step: 96800, lr: 9.879474628751914e-05 2023-03-16 10:30:37,988 44k INFO ====> Epoch: 96, cost 375.63 s 2023-03-16 10:31:01,603 44k INFO Train Epoch: 97 [4%] 2023-03-16 10:31:01,603 44k INFO Losses: [2.5998661518096924, 2.2860989570617676, 8.30806827545166, 22.127315521240234, 1.5067157745361328], step: 97000, lr: 9.87823969442332e-05 2023-03-16 10:31:04,692 44k INFO Saving model and optimizer state at iteration 97 to ./logs\44k\G_97000.pth 2023-03-16 10:31:05,406 44k INFO Saving model and optimizer state at iteration 97 to ./logs\44k\D_97000.pth 2023-03-16 10:31:06,027 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_94000.pth 2023-03-16 10:31:06,066 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_94000.pth 2023-03-16 10:32:17,427 44k INFO Train Epoch: 97 [24%] 2023-03-16 10:32:17,428 44k INFO Losses: [2.762390375137329, 2.2116048336029053, 8.69176959991455, 18.046415328979492, 1.4916889667510986], step: 97200, lr: 9.87823969442332e-05 2023-03-16 10:33:28,868 44k INFO Train Epoch: 97 [44%] 2023-03-16 10:33:28,868 44k INFO Losses: [2.2893989086151123, 2.3764495849609375, 11.434520721435547, 20.83608627319336, 0.9428768754005432], step: 97400, lr: 9.87823969442332e-05 2023-03-16 10:34:41,040 44k INFO Train Epoch: 97 [63%] 2023-03-16 10:34:41,040 44k INFO Losses: [2.390425205230713, 2.188778877258301, 14.541621208190918, 22.166662216186523, 1.524403691291809], step: 97600, lr: 9.87823969442332e-05 2023-03-16 10:35:52,791 44k INFO Train Epoch: 97 [83%] 2023-03-16 10:35:52,792 44k INFO Losses: [2.2891814708709717, 2.340794563293457, 6.047842979431152, 26.284358978271484, 1.6414474248886108], step: 97800, lr: 9.87823969442332e-05 2023-03-16 10:36:54,193 44k INFO ====> Epoch: 97, cost 376.20 s 2023-03-16 10:37:14,368 44k INFO Train Epoch: 98 [3%] 2023-03-16 10:37:14,368 44k INFO Losses: [2.530836820602417, 2.6767477989196777, 9.527822494506836, 23.346120834350586, 1.214306116104126], step: 98000, lr: 9.877004914461517e-05 2023-03-16 10:37:17,450 44k INFO Saving model and optimizer state at iteration 98 to ./logs\44k\G_98000.pth 2023-03-16 10:37:18,126 44k INFO Saving model and optimizer state at iteration 98 to ./logs\44k\D_98000.pth 2023-03-16 10:37:18,763 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_95000.pth 2023-03-16 10:37:18,789 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_95000.pth 2023-03-16 10:38:30,736 44k INFO Train Epoch: 98 [23%] 2023-03-16 10:38:30,736 44k INFO Losses: [2.240939140319824, 2.6303930282592773, 14.154045104980469, 22.669918060302734, 1.1736403703689575], step: 98200, lr: 9.877004914461517e-05 2023-03-16 10:39:42,488 44k INFO Train Epoch: 98 [43%] 2023-03-16 10:39:42,489 44k INFO Losses: [2.3085873126983643, 2.398265838623047, 13.140448570251465, 23.267702102661133, 1.2491856813430786], step: 98400, lr: 9.877004914461517e-05 2023-03-16 10:40:55,052 44k INFO Train Epoch: 98 [62%] 2023-03-16 10:40:55,052 44k INFO Losses: [2.625894784927368, 2.537395715713501, 8.5150728225708, 21.123537063598633, 1.5649067163467407], step: 98600, lr: 9.877004914461517e-05 2023-03-16 10:42:07,170 44k INFO Train Epoch: 98 [82%] 2023-03-16 10:42:07,171 44k INFO Losses: [2.4745941162109375, 2.305778980255127, 8.477484703063965, 23.12016487121582, 1.3841317892074585], step: 98800, lr: 9.877004914461517e-05 2023-03-16 10:43:12,283 44k INFO ====> Epoch: 98, cost 378.09 s 2023-03-16 10:43:28,756 44k INFO Train Epoch: 99 [2%] 2023-03-16 10:43:28,756 44k INFO Losses: [2.6226534843444824, 2.1139397621154785, 10.64820671081543, 20.274431228637695, 1.1143274307250977], step: 99000, lr: 9.875770288847208e-05 2023-03-16 10:43:31,808 44k INFO Saving model and optimizer state at iteration 99 to ./logs\44k\G_99000.pth 2023-03-16 10:43:32,539 44k INFO Saving model and optimizer state at iteration 99 to ./logs\44k\D_99000.pth 2023-03-16 10:43:33,173 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_96000.pth 2023-03-16 10:43:33,201 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_96000.pth 2023-03-16 10:44:45,437 44k INFO Train Epoch: 99 [22%] 2023-03-16 10:44:45,438 44k INFO Losses: [2.109076499938965, 3.0519027709960938, 7.144651412963867, 19.88193130493164, 1.1527076959609985], step: 99200, lr: 9.875770288847208e-05 2023-03-16 10:45:57,095 44k INFO Train Epoch: 99 [42%] 2023-03-16 10:45:57,096 44k INFO Losses: [2.46441912651062, 2.1537137031555176, 10.373666763305664, 20.86663055419922, 1.3243262767791748], step: 99400, lr: 9.875770288847208e-05 2023-03-16 10:47:09,628 44k INFO Train Epoch: 99 [61%] 2023-03-16 10:47:09,628 44k INFO Losses: [2.4625566005706787, 2.1726722717285156, 12.713409423828125, 20.93848991394043, 1.6843042373657227], step: 99600, lr: 9.875770288847208e-05 2023-03-16 10:48:21,831 44k INFO Train Epoch: 99 [81%] 2023-03-16 10:48:21,831 44k INFO Losses: [2.508784770965576, 2.1664464473724365, 8.720691680908203, 22.675800323486328, 1.481604814529419], step: 99800, lr: 9.875770288847208e-05 2023-03-16 10:49:30,501 44k INFO ====> Epoch: 99, cost 378.22 s 2023-03-16 10:49:43,439 44k INFO Train Epoch: 100 [1%] 2023-03-16 10:49:43,439 44k INFO Losses: [2.8435609340667725, 1.7970507144927979, 6.463192462921143, 14.67757511138916, 1.2721436023712158], step: 100000, lr: 9.874535817561101e-05 2023-03-16 10:49:46,538 44k INFO Saving model and optimizer state at iteration 100 to ./logs\44k\G_100000.pth 2023-03-16 10:49:47,210 44k INFO Saving model and optimizer state at iteration 100 to ./logs\44k\D_100000.pth 2023-03-16 10:49:47,842 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_97000.pth 2023-03-16 10:49:47,868 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_97000.pth 2023-03-16 10:51:00,165 44k INFO Train Epoch: 100 [21%] 2023-03-16 10:51:00,165 44k INFO Losses: [2.623159885406494, 2.2041592597961426, 6.459061622619629, 21.700838088989258, 1.3217129707336426], step: 100200, lr: 9.874535817561101e-05 2023-03-16 10:52:11,679 44k INFO Train Epoch: 100 [41%] 2023-03-16 10:52:11,680 44k INFO Losses: [2.348304510116577, 2.5427184104919434, 13.132057189941406, 21.43621253967285, 1.4100782871246338], step: 100400, lr: 9.874535817561101e-05 2023-03-16 10:53:24,424 44k INFO Train Epoch: 100 [60%] 2023-03-16 10:53:24,425 44k INFO Losses: [2.5569941997528076, 2.1295979022979736, 10.157417297363281, 17.588577270507812, 1.1932084560394287], step: 100600, lr: 9.874535817561101e-05 2023-03-16 10:54:36,607 44k INFO Train Epoch: 100 [80%] 2023-03-16 10:54:36,608 44k INFO Losses: [2.7414817810058594, 1.9982762336730957, 8.255739212036133, 14.999199867248535, 1.4018306732177734], step: 100800, lr: 9.874535817561101e-05 2023-03-16 10:55:48,897 44k INFO ====> Epoch: 100, cost 378.40 s 2023-03-16 18:35:02,771 44k INFO {'train': {'log_interval': 200, 'eval_interval': 1000, 'seed': 1234, 'epochs': 130, 'learning_rate': 0.0001, 'betas': [0.8, 0.99], 'eps': 1e-09, 'batch_size': 6, 'fp16_run': False, 'lr_decay': 0.999875, 'segment_size': 10240, 'init_lr_ratio': 1, 'warmup_epochs': 0, 'c_mel': 45, 'c_kl': 1.0, 'use_sr': True, 'max_speclen': 512, 'port': '8001', 'keep_ckpts': 3}, 'data': {'training_files': 'filelists/train.txt', 'validation_files': 'filelists/val.txt', 'max_wav_value': 32768.0, 'sampling_rate': 44100, 'filter_length': 2048, 'hop_length': 512, 'win_length': 2048, 'n_mel_channels': 80, 'mel_fmin': 0.0, 'mel_fmax': 22050}, 'model': {'inter_channels': 192, 'hidden_channels': 192, 'filter_channels': 768, 'n_heads': 2, 'n_layers': 6, 'kernel_size': 3, 'p_dropout': 0.1, 'resblock': '1', 'resblock_kernel_sizes': 
[3, 7, 11], 'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'upsample_rates': [8, 8, 2, 2, 2], 'upsample_initial_channel': 512, 'upsample_kernel_sizes': [16, 16, 4, 4, 4], 'n_layers_q': 3, 'use_spectral_norm': False, 'gin_channels': 256, 'ssl_dim': 256, 'n_speakers': 200}, 'spk': {'tubaki': 0}, 'model_dir': './logs\\44k'} 2023-03-16 18:35:02,803 44k WARNING git hash values are different. cea6df30(saved) != fd4d47fd(current) 2023-03-16 18:35:05,274 44k INFO Loaded checkpoint './logs\44k\G_100000.pth' (iteration 100) 2023-03-16 18:35:05,646 44k INFO Loaded checkpoint './logs\44k\D_100000.pth' (iteration 100) 2023-03-16 18:35:24,757 44k INFO Train Epoch: 100 [1%] 2023-03-16 18:35:24,758 44k INFO Losses: [2.352499008178711, 2.5264525413513184, 10.694681167602539, 19.746871948242188, 1.1882693767547607], step: 100000, lr: 9.873301500583906e-05 2023-03-16 18:35:28,736 44k INFO Saving model and optimizer state at iteration 100 to ./logs\44k\G_100000.pth 2023-03-16 18:35:29,437 44k INFO Saving model and optimizer state at iteration 100 to ./logs\44k\D_100000.pth 2023-03-16 18:36:53,323 44k INFO Train Epoch: 100 [21%] 2023-03-16 18:36:53,324 44k INFO Losses: [2.4134793281555176, 2.3157546520233154, 9.52792739868164, 19.854612350463867, 1.4460333585739136], step: 100200, lr: 9.873301500583906e-05 2023-03-16 18:38:15,244 44k INFO Train Epoch: 100 [41%] 2023-03-16 18:38:15,245 44k INFO Losses: [2.331712007522583, 2.342722177505493, 11.321316719055176, 20.277372360229492, 1.3242733478546143], step: 100400, lr: 9.873301500583906e-05 2023-03-16 18:39:34,882 44k INFO Train Epoch: 100 [60%] 2023-03-16 18:39:34,883 44k INFO Losses: [2.583242654800415, 2.1765220165252686, 9.554434776306152, 15.710580825805664, 1.2635109424591064], step: 100600, lr: 9.873301500583906e-05 2023-03-16 18:40:54,559 44k INFO Train Epoch: 100 [80%] 2023-03-16 18:40:54,560 44k INFO Losses: [2.692044258117676, 1.8482187986373901, 7.695149898529053, 16.247652053833008, 1.1291632652282715], step: 100800, lr: 9.873301500583906e-05 2023-03-16 18:42:19,541 44k INFO ====> Epoch: 100, cost 436.77 s 2023-03-16 18:42:29,489 44k INFO Train Epoch: 101 [0%] 2023-03-16 18:42:29,490 44k INFO Losses: [2.3634707927703857, 2.293125629425049, 8.925023078918457, 19.80962562561035, 1.3410481214523315], step: 101000, lr: 9.872067337896332e-05 2023-03-16 18:42:32,783 44k INFO Saving model and optimizer state at iteration 101 to ./logs\44k\G_101000.pth 2023-03-16 18:42:33,609 44k INFO Saving model and optimizer state at iteration 101 to ./logs\44k\D_101000.pth 2023-03-16 18:42:34,256 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_98000.pth 2023-03-16 18:42:34,257 44k INFO .. 
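At 18:35 the run is restarted with epochs raised to 130: the WARNING flags that the code has changed since the checkpoint was written (saved vs. current git hash), G_100000/D_100000 are reloaded at iteration 100, and epoch 100 is replayed from step 100000. The post-resume learning rate, 9.873301500583906e-05, equals the pre-resume 9.874535817561101e-05 × 0.999875, so the decay schedule is advanced one extra step by the restart. A minimal resume sketch, assuming the checkpoint dict stores "model", "optimizer" and "iteration" entries (that layout is an assumption made for illustration, not a statement about the project's save format):

    # Minimal sketch: resume from the newest G_*.pth found in model_dir.
    # The checkpoint keys used below ("model", "optimizer", "iteration")
    # are assumptions for this illustration.
    import glob
    import re
    import torch

    def latest(model_dir: str, prefix: str = "G_") -> str:
        paths = glob.glob(f"{model_dir}/{prefix}*.pth")
        return max(paths, key=lambda p: int(re.search(r"(\d+)\.pth$", p).group(1)))

    def resume(model, optimizer, model_dir: str) -> int:
        path = latest(model_dir)
        ckpt = torch.load(path, map_location="cpu")
        model.load_state_dict(ckpt["model"])
        optimizer.load_state_dict(ckpt["optimizer"])
        print(f"Loaded checkpoint '{path}' (iteration {ckpt['iteration']})")
        return ckpt["iteration"]
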
Free up space by deleting ckpt ./logs\44k\D_98000.pth 2023-03-16 18:43:52,558 44k INFO Train Epoch: 101 [20%] 2023-03-16 18:43:52,559 44k INFO Losses: [2.2733848094940186, 2.7985353469848633, 12.472558975219727, 22.796142578125, 1.4166874885559082], step: 101200, lr: 9.872067337896332e-05 2023-03-16 18:45:09,890 44k INFO Train Epoch: 101 [40%] 2023-03-16 18:45:09,890 44k INFO Losses: [2.564690113067627, 2.086977481842041, 7.5771164894104, 23.15154266357422, 1.2792149782180786], step: 101400, lr: 9.872067337896332e-05 2023-03-16 18:46:27,817 44k INFO Train Epoch: 101 [59%] 2023-03-16 18:46:27,818 44k INFO Losses: [2.4626896381378174, 2.5506534576416016, 12.846664428710938, 22.02436065673828, 1.190474033355713], step: 101600, lr: 9.872067337896332e-05 2023-03-16 18:47:46,753 44k INFO Train Epoch: 101 [79%] 2023-03-16 18:47:46,754 44k INFO Losses: [2.2131175994873047, 2.7812626361846924, 7.259298801422119, 17.3315372467041, 1.4720531702041626], step: 101800, lr: 9.872067337896332e-05 2023-03-16 18:49:06,839 44k INFO Train Epoch: 101 [99%] 2023-03-16 18:49:06,839 44k INFO Losses: [2.539477825164795, 2.2692408561706543, 8.470839500427246, 18.879804611206055, 1.3676260709762573], step: 102000, lr: 9.872067337896332e-05 2023-03-16 18:49:10,188 44k INFO Saving model and optimizer state at iteration 101 to ./logs\44k\G_102000.pth 2023-03-16 18:49:10,869 44k INFO Saving model and optimizer state at iteration 101 to ./logs\44k\D_102000.pth 2023-03-16 18:49:11,523 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_99000.pth 2023-03-16 18:49:11,524 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_99000.pth 2023-03-16 18:49:15,170 44k INFO ====> Epoch: 101, cost 415.63 s 2023-03-16 18:50:38,657 44k INFO Train Epoch: 102 [19%] 2023-03-16 18:50:38,657 44k INFO Losses: [2.350372314453125, 2.8133704662323, 8.603009223937988, 22.730714797973633, 1.1533516645431519], step: 102200, lr: 9.870833329479095e-05 2023-03-16 18:51:56,255 44k INFO Train Epoch: 102 [39%] 2023-03-16 18:51:56,255 44k INFO Losses: [2.2813713550567627, 2.779161214828491, 11.277300834655762, 25.908245086669922, 1.659639596939087], step: 102400, lr: 9.870833329479095e-05 2023-03-16 18:53:14,740 44k INFO Train Epoch: 102 [58%] 2023-03-16 18:53:14,740 44k INFO Losses: [2.516172409057617, 2.2031350135803223, 11.097248077392578, 21.38197898864746, 1.4504997730255127], step: 102600, lr: 9.870833329479095e-05 2023-03-16 18:54:31,855 44k INFO Train Epoch: 102 [78%] 2023-03-16 18:54:31,855 44k INFO Losses: [2.387209892272949, 2.4780192375183105, 11.967145919799805, 18.898578643798828, 1.0681045055389404], step: 102800, lr: 9.870833329479095e-05 2023-03-16 18:55:49,811 44k INFO Train Epoch: 102 [98%] 2023-03-16 18:55:49,812 44k INFO Losses: [2.735853672027588, 2.0577919483184814, 8.04570198059082, 21.468994140625, 1.3339506387710571], step: 103000, lr: 9.870833329479095e-05 2023-03-16 18:55:53,183 44k INFO Saving model and optimizer state at iteration 102 to ./logs\44k\G_103000.pth 2023-03-16 18:55:53,871 44k INFO Saving model and optimizer state at iteration 102 to ./logs\44k\D_103000.pth 2023-03-16 18:55:54,542 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_100000.pth 2023-03-16 18:55:54,571 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_100000.pth 2023-03-16 18:56:02,150 44k INFO ====> Epoch: 102, cost 406.98 s 2023-03-16 18:57:22,701 44k INFO Train Epoch: 103 [18%] 2023-03-16 18:57:22,701 44k INFO Losses: [2.6619930267333984, 1.9500380754470825, 8.452136993408203, 14.282692909240723, 0.9549041986465454], step: 103200, lr: 9.86959947531291e-05 2023-03-16 18:58:39,550 44k INFO Train Epoch: 103 [38%] 2023-03-16 18:58:39,551 44k INFO Losses: [2.546396255493164, 2.3510568141937256, 13.095368385314941, 20.178911209106445, 1.4519743919372559], step: 103400, lr: 9.86959947531291e-05 2023-03-16 18:59:58,113 44k INFO Train Epoch: 103 [57%] 2023-03-16 18:59:58,113 44k INFO Losses: [2.382441520690918, 2.3944942951202393, 9.300783157348633, 20.201784133911133, 1.2732118368148804], step: 103600, lr: 9.86959947531291e-05 2023-03-16 19:01:17,006 44k INFO Train Epoch: 103 [77%] 2023-03-16 19:01:17,007 44k INFO Losses: [2.4325287342071533, 2.1766197681427, 7.583266735076904, 15.61154842376709, 1.4083456993103027], step: 103800, lr: 9.86959947531291e-05 2023-03-16 19:02:35,408 44k INFO Train Epoch: 103 [97%] 2023-03-16 19:02:35,409 44k INFO Losses: [2.8179590702056885, 2.2161052227020264, 10.458117485046387, 21.00156593322754, 1.2065852880477905], step: 104000, lr: 9.86959947531291e-05 2023-03-16 19:02:38,751 44k INFO Saving model and optimizer state at iteration 103 to ./logs\44k\G_104000.pth 2023-03-16 19:02:39,506 44k INFO Saving model and optimizer state at iteration 103 to ./logs\44k\D_104000.pth 2023-03-16 19:02:40,198 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_101000.pth 2023-03-16 19:02:40,230 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_101000.pth 2023-03-16 19:02:51,703 44k INFO ====> Epoch: 103, cost 409.55 s 2023-03-16 19:04:07,607 44k INFO Train Epoch: 104 [17%] 2023-03-16 19:04:07,608 44k INFO Losses: [2.4581305980682373, 2.2573561668395996, 10.0740327835083, 21.230379104614258, 1.1396268606185913], step: 104200, lr: 9.868365775378495e-05 2023-03-16 19:05:24,381 44k INFO Train Epoch: 104 [37%] 2023-03-16 19:05:24,382 44k INFO Losses: [2.4898855686187744, 2.3605713844299316, 11.894591331481934, 19.498815536499023, 0.9768962264060974], step: 104400, lr: 9.868365775378495e-05 2023-03-16 19:06:41,357 44k INFO Train Epoch: 104 [56%] 2023-03-16 19:06:41,358 44k INFO Losses: [2.4074501991271973, 2.219926118850708, 9.74096965789795, 23.276264190673828, 1.4428446292877197], step: 104600, lr: 9.868365775378495e-05 2023-03-16 19:07:58,895 44k INFO Train Epoch: 104 [76%] 2023-03-16 19:07:58,895 44k INFO Losses: [2.325242519378662, 2.1525354385375977, 14.217634201049805, 21.201541900634766, 1.3065811395645142], step: 104800, lr: 9.868365775378495e-05 2023-03-16 19:09:15,264 44k INFO Train Epoch: 104 [96%] 2023-03-16 19:09:15,264 44k INFO Losses: [2.439889430999756, 2.212041139602661, 10.302045822143555, 19.9798583984375, 1.5028016567230225], step: 105000, lr: 9.868365775378495e-05 2023-03-16 19:09:18,628 44k INFO Saving model and optimizer state at iteration 104 to ./logs\44k\G_105000.pth 2023-03-16 19:09:19,358 44k INFO Saving model and optimizer state at iteration 104 to ./logs\44k\D_105000.pth 2023-03-16 19:09:20,067 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_102000.pth 2023-03-16 19:09:20,104 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_102000.pth 2023-03-16 19:09:36,359 44k INFO ====> Epoch: 104, cost 404.66 s 2023-03-16 19:10:50,235 44k INFO Train Epoch: 105 [16%] 2023-03-16 19:10:50,235 44k INFO Losses: [2.7125699520111084, 2.1547799110412598, 6.154138565063477, 18.309581756591797, 1.5228025913238525], step: 105200, lr: 9.867132229656573e-05 2023-03-16 19:12:08,119 44k INFO Train Epoch: 105 [36%] 2023-03-16 19:12:08,119 44k INFO Losses: [2.5522348880767822, 2.3118233680725098, 7.248117923736572, 16.587879180908203, 1.3650609254837036], step: 105400, lr: 9.867132229656573e-05 2023-03-16 19:13:27,066 44k INFO Train Epoch: 105 [55%] 2023-03-16 19:13:27,066 44k INFO Losses: [2.53957200050354, 2.198219060897827, 11.880880355834961, 21.223337173461914, 1.4009060859680176], step: 105600, lr: 9.867132229656573e-05 2023-03-16 19:14:45,760 44k INFO Train Epoch: 105 [75%] 2023-03-16 19:14:45,761 44k INFO Losses: [2.468348264694214, 2.6612741947174072, 15.471314430236816, 25.796527862548828, 1.3809095621109009], step: 105800, lr: 9.867132229656573e-05 2023-03-16 19:16:04,566 44k INFO Train Epoch: 105 [95%] 2023-03-16 19:16:04,567 44k INFO Losses: [2.386331081390381, 2.3981215953826904, 11.016193389892578, 20.765804290771484, 1.2626591920852661], step: 106000, lr: 9.867132229656573e-05 2023-03-16 19:16:08,111 44k INFO Saving model and optimizer state at iteration 105 to ./logs\44k\G_106000.pth 2023-03-16 19:16:08,842 44k INFO Saving model and optimizer state at iteration 105 to ./logs\44k\D_106000.pth 2023-03-16 19:16:09,551 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_103000.pth 2023-03-16 19:16:09,612 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_103000.pth 2023-03-16 19:16:29,356 44k INFO ====> Epoch: 105, cost 413.00 s 2023-03-16 19:17:37,833 44k INFO Train Epoch: 106 [15%] 2023-03-16 19:17:37,833 44k INFO Losses: [2.550137996673584, 2.4480531215667725, 5.76303768157959, 19.042335510253906, 0.9707169532775879], step: 106200, lr: 9.865898838127865e-05 2023-03-16 19:18:56,493 44k INFO Train Epoch: 106 [35%] 2023-03-16 19:18:56,494 44k INFO Losses: [2.5131077766418457, 2.209357261657715, 9.368471145629883, 19.698036193847656, 1.2758872509002686], step: 106400, lr: 9.865898838127865e-05 2023-03-16 19:20:16,604 44k INFO Train Epoch: 106 [54%] 2023-03-16 19:20:16,605 44k INFO Losses: [2.4008445739746094, 2.510056495666504, 9.183367729187012, 17.825531005859375, 1.4488168954849243], step: 106600, lr: 9.865898838127865e-05 2023-03-16 19:21:31,717 44k INFO Train Epoch: 106 [74%] 2023-03-16 19:21:31,717 44k INFO Losses: [2.559652090072632, 2.4842846393585205, 8.452862739562988, 17.190231323242188, 1.0290930271148682], step: 106800, lr: 9.865898838127865e-05 2023-03-16 19:22:50,079 44k INFO Train Epoch: 106 [94%] 2023-03-16 19:22:50,080 44k INFO Losses: [2.583590030670166, 2.3238272666931152, 12.34815502166748, 18.650197982788086, 1.4180781841278076], step: 107000, lr: 9.865898838127865e-05 2023-03-16 19:22:53,522 44k INFO Saving model and optimizer state at iteration 106 to ./logs\44k\G_107000.pth 2023-03-16 19:22:54,255 44k INFO Saving model and optimizer state at iteration 106 to ./logs\44k\D_107000.pth 2023-03-16 19:22:54,913 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_104000.pth 2023-03-16 19:22:54,944 44k INFO .. 
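Each checkpoint save above is paired with the deletion of the pair from three saves earlier (for example G_106000.pth/D_106000.pth are written and G_103000.pth/D_103000.pth are removed), which matches keep_ckpts: 3 in the config dump. A purely illustrative sketch of that kind of rotation, not the repository's actual cleanup routine:

# Keep only the newest `keep_ckpts` G_*.pth / D_*.pth files; illustrative only.
import os
import re


def clean_checkpoints(model_dir: str, keep_ckpts: int = 3) -> None:
    for prefix in ("G_", "D_"):
        names = [f for f in os.listdir(model_dir)
                 if re.fullmatch(prefix + r"\d+\.pth", f)]
        names.sort(key=lambda f: int(re.search(r"\d+", f).group()))  # oldest first
        for old in names[:-keep_ckpts]:
            path = os.path.join(model_dir, old)
            print(f".. Free up space by deleting ckpt {path}")
            os.remove(path)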
Free up space by deleting ckpt ./logs\44k\D_104000.pth 2023-03-16 19:23:18,230 44k INFO ====> Epoch: 106, cost 408.87 s 2023-03-16 19:24:23,276 44k INFO Train Epoch: 107 [14%] 2023-03-16 19:24:23,276 44k INFO Losses: [2.7247161865234375, 2.186880588531494, 10.011500358581543, 19.555702209472656, 1.5136762857437134], step: 107200, lr: 9.864665600773098e-05 2023-03-16 19:25:39,731 44k INFO Train Epoch: 107 [34%] 2023-03-16 19:25:39,732 44k INFO Losses: [2.4139769077301025, 2.3111350536346436, 12.02389144897461, 20.903871536254883, 1.2709146738052368], step: 107400, lr: 9.864665600773098e-05 2023-03-16 19:26:58,061 44k INFO Train Epoch: 107 [53%] 2023-03-16 19:26:58,061 44k INFO Losses: [2.424685001373291, 2.3184123039245605, 14.012737274169922, 21.10992431640625, 1.3072139024734497], step: 107600, lr: 9.864665600773098e-05 2023-03-16 19:28:14,307 44k INFO Train Epoch: 107 [73%] 2023-03-16 19:28:14,308 44k INFO Losses: [2.2630457878112793, 2.340075731277466, 13.049065589904785, 19.529470443725586, 0.7466109991073608], step: 107800, lr: 9.864665600773098e-05 2023-03-16 19:29:27,087 44k INFO Train Epoch: 107 [93%] 2023-03-16 19:29:27,087 44k INFO Losses: [2.518249988555908, 2.2035880088806152, 8.35668659210205, 20.29602813720703, 1.298140048980713], step: 108000, lr: 9.864665600773098e-05 2023-03-16 19:29:30,367 44k INFO Saving model and optimizer state at iteration 107 to ./logs\44k\G_108000.pth 2023-03-16 19:29:31,173 44k INFO Saving model and optimizer state at iteration 107 to ./logs\44k\D_108000.pth 2023-03-16 19:29:31,849 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_105000.pth 2023-03-16 19:29:31,890 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_105000.pth 2023-03-16 19:29:56,903 44k INFO ====> Epoch: 107, cost 398.67 s 2023-03-16 19:30:53,631 44k INFO Train Epoch: 108 [13%] 2023-03-16 19:30:53,632 44k INFO Losses: [2.3993587493896484, 2.408130168914795, 10.325724601745605, 18.83313751220703, 1.3399596214294434], step: 108200, lr: 9.863432517573002e-05 2023-03-16 19:32:08,545 44k INFO Train Epoch: 108 [33%] 2023-03-16 19:32:08,546 44k INFO Losses: [2.5411553382873535, 2.325108289718628, 10.674457550048828, 22.355192184448242, 1.5891567468643188], step: 108400, lr: 9.863432517573002e-05 2023-03-16 19:33:21,360 44k INFO Train Epoch: 108 [52%] 2023-03-16 19:33:21,361 44k INFO Losses: [2.6109657287597656, 2.163167953491211, 7.466449737548828, 14.839601516723633, 1.4064149856567383], step: 108600, lr: 9.863432517573002e-05 2023-03-16 19:34:36,440 44k INFO Train Epoch: 108 [72%] 2023-03-16 19:34:36,440 44k INFO Losses: [2.596961736679077, 2.233189582824707, 8.943161010742188, 21.99428367614746, 1.5113868713378906], step: 108800, lr: 9.863432517573002e-05 2023-03-16 19:35:49,610 44k INFO Train Epoch: 108 [92%] 2023-03-16 19:35:49,611 44k INFO Losses: [2.216644525527954, 2.4535937309265137, 13.4431734085083, 24.872787475585938, 1.4282587766647339], step: 109000, lr: 9.863432517573002e-05 2023-03-16 19:35:52,991 44k INFO Saving model and optimizer state at iteration 108 to ./logs\44k\G_109000.pth 2023-03-16 19:35:53,668 44k INFO Saving model and optimizer state at iteration 108 to ./logs\44k\D_109000.pth 2023-03-16 19:35:54,369 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_106000.pth 2023-03-16 19:35:54,408 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_106000.pth 2023-03-16 19:36:22,935 44k INFO ====> Epoch: 108, cost 386.03 s 2023-03-16 19:37:15,902 44k INFO Train Epoch: 109 [12%] 2023-03-16 19:37:15,902 44k INFO Losses: [2.5338892936706543, 2.4083492755889893, 11.514901161193848, 18.990095138549805, 1.4886080026626587], step: 109200, lr: 9.862199588508305e-05 2023-03-16 19:38:27,674 44k INFO Train Epoch: 109 [32%] 2023-03-16 19:38:27,674 44k INFO Losses: [2.5191590785980225, 2.2084879875183105, 11.158270835876465, 21.830965042114258, 1.0842177867889404], step: 109400, lr: 9.862199588508305e-05 2023-03-16 19:39:39,723 44k INFO Train Epoch: 109 [51%] 2023-03-16 19:39:39,723 44k INFO Losses: [2.1219263076782227, 2.674105167388916, 12.388086318969727, 23.70669937133789, 1.595335602760315], step: 109600, lr: 9.862199588508305e-05 2023-03-16 19:40:51,936 44k INFO Train Epoch: 109 [71%] 2023-03-16 19:40:51,937 44k INFO Losses: [2.4336464405059814, 2.217231273651123, 12.205126762390137, 20.255136489868164, 1.0188686847686768], step: 109800, lr: 9.862199588508305e-05 2023-03-16 19:42:04,093 44k INFO Train Epoch: 109 [91%] 2023-03-16 19:42:04,093 44k INFO Losses: [2.0445220470428467, 2.3562591075897217, 12.183077812194824, 23.34262466430664, 1.2170790433883667], step: 110000, lr: 9.862199588508305e-05 2023-03-16 19:42:07,244 44k INFO Saving model and optimizer state at iteration 109 to ./logs\44k\G_110000.pth 2023-03-16 19:42:08,003 44k INFO Saving model and optimizer state at iteration 109 to ./logs\44k\D_110000.pth 2023-03-16 19:42:08,651 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_107000.pth 2023-03-16 19:42:08,688 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_107000.pth 2023-03-16 19:42:41,005 44k INFO ====> Epoch: 109, cost 378.07 s 2023-03-16 19:43:31,140 44k INFO Train Epoch: 110 [11%] 2023-03-16 19:43:31,140 44k INFO Losses: [2.5360608100891113, 2.3384571075439453, 12.82214641571045, 20.05782699584961, 1.3113558292388916], step: 110200, lr: 9.86096681355974e-05 2023-03-16 19:44:42,630 44k INFO Train Epoch: 110 [31%] 2023-03-16 19:44:42,630 44k INFO Losses: [2.157736301422119, 2.2172720432281494, 12.901641845703125, 24.17711639404297, 1.5628111362457275], step: 110400, lr: 9.86096681355974e-05 2023-03-16 19:45:54,332 44k INFO Train Epoch: 110 [50%] 2023-03-16 19:45:54,333 44k INFO Losses: [2.4863734245300293, 2.208966016769409, 11.493553161621094, 18.304126739501953, 1.0315524339675903], step: 110600, lr: 9.86096681355974e-05 2023-03-16 19:47:06,407 44k INFO Train Epoch: 110 [70%] 2023-03-16 19:47:06,407 44k INFO Losses: [2.4811272621154785, 2.7006020545959473, 10.21004581451416, 22.426158905029297, 1.6115814447402954], step: 110800, lr: 9.86096681355974e-05 2023-03-16 19:48:18,442 44k INFO Train Epoch: 110 [90%] 2023-03-16 19:48:18,442 44k INFO Losses: [2.512158155441284, 2.155628204345703, 13.90842056274414, 21.215272903442383, 1.2806403636932373], step: 111000, lr: 9.86096681355974e-05 2023-03-16 19:48:21,535 44k INFO Saving model and optimizer state at iteration 110 to ./logs\44k\G_111000.pth 2023-03-16 19:48:22,254 44k INFO Saving model and optimizer state at iteration 110 to ./logs\44k\D_111000.pth 2023-03-16 19:48:22,940 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_108000.pth 2023-03-16 19:48:22,971 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_108000.pth 2023-03-16 19:48:58,799 44k INFO ====> Epoch: 110, cost 377.79 s 2023-03-16 19:49:44,573 44k INFO Train Epoch: 111 [10%] 2023-03-16 19:49:44,573 44k INFO Losses: [2.7829527854919434, 2.5503792762756348, 8.1664457321167, 15.054678916931152, 1.2869277000427246], step: 111200, lr: 9.859734192708044e-05 2023-03-16 19:50:55,742 44k INFO Train Epoch: 111 [30%] 2023-03-16 19:50:55,742 44k INFO Losses: [2.435966968536377, 2.4484810829162598, 11.424511909484863, 20.274335861206055, 1.5633515119552612], step: 111400, lr: 9.859734192708044e-05 2023-03-16 19:52:07,346 44k INFO Train Epoch: 111 [50%] 2023-03-16 19:52:07,347 44k INFO Losses: [2.54677152633667, 2.5275063514709473, 7.337162494659424, 19.620101928710938, 1.3163241147994995], step: 111600, lr: 9.859734192708044e-05 2023-03-16 19:53:19,380 44k INFO Train Epoch: 111 [69%] 2023-03-16 19:53:19,380 44k INFO Losses: [2.6778454780578613, 1.9166826009750366, 9.106194496154785, 16.475370407104492, 0.9432523846626282], step: 111800, lr: 9.859734192708044e-05 2023-03-16 19:54:31,371 44k INFO Train Epoch: 111 [89%] 2023-03-16 19:54:31,371 44k INFO Losses: [2.637789011001587, 1.9292513132095337, 11.115605354309082, 17.49993324279785, 1.317169427871704], step: 112000, lr: 9.859734192708044e-05 2023-03-16 19:54:34,552 44k INFO Saving model and optimizer state at iteration 111 to ./logs\44k\G_112000.pth 2023-03-16 19:54:35,268 44k INFO Saving model and optimizer state at iteration 111 to ./logs\44k\D_112000.pth 2023-03-16 19:54:35,980 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_109000.pth 2023-03-16 19:54:36,019 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_109000.pth 2023-03-16 19:55:15,280 44k INFO ====> Epoch: 111, cost 376.48 s 2023-03-16 19:55:57,325 44k INFO Train Epoch: 112 [9%] 2023-03-16 19:55:57,325 44k INFO Losses: [2.474533796310425, 2.3867053985595703, 9.262903213500977, 20.982810974121094, 1.5172783136367798], step: 112200, lr: 9.858501725933955e-05 2023-03-16 19:57:08,594 44k INFO Train Epoch: 112 [29%] 2023-03-16 19:57:08,594 44k INFO Losses: [2.648468494415283, 2.066965341567993, 6.631067276000977, 15.730916023254395, 1.264640212059021], step: 112400, lr: 9.858501725933955e-05 2023-03-16 19:58:20,122 44k INFO Train Epoch: 112 [49%] 2023-03-16 19:58:20,122 44k INFO Losses: [2.691431760787964, 2.1259725093841553, 7.341141700744629, 17.1492862701416, 1.3904781341552734], step: 112600, lr: 9.858501725933955e-05 2023-03-16 19:59:32,211 44k INFO Train Epoch: 112 [68%] 2023-03-16 19:59:32,211 44k INFO Losses: [2.422222852706909, 2.4778048992156982, 9.634007453918457, 18.466899871826172, 1.1678003072738647], step: 112800, lr: 9.858501725933955e-05 2023-03-16 20:00:43,905 44k INFO Train Epoch: 112 [88%] 2023-03-16 20:00:43,905 44k INFO Losses: [2.258207321166992, 2.7016382217407227, 7.4235076904296875, 17.13028335571289, 1.1242066621780396], step: 113000, lr: 9.858501725933955e-05 2023-03-16 20:00:47,053 44k INFO Saving model and optimizer state at iteration 112 to ./logs\44k\G_113000.pth 2023-03-16 20:00:47,797 44k INFO Saving model and optimizer state at iteration 112 to ./logs\44k\D_113000.pth 2023-03-16 20:00:48,489 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_110000.pth 2023-03-16 20:00:48,530 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_110000.pth 2023-03-16 20:01:31,393 44k INFO ====> Epoch: 112, cost 376.11 s 2023-03-16 20:02:09,761 44k INFO Train Epoch: 113 [8%] 2023-03-16 20:02:09,762 44k INFO Losses: [2.510511636734009, 1.9069246053695679, 12.157500267028809, 18.486888885498047, 1.2840062379837036], step: 113200, lr: 9.857269413218213e-05 2023-03-16 20:03:21,132 44k INFO Train Epoch: 113 [28%] 2023-03-16 20:03:21,132 44k INFO Losses: [2.8348844051361084, 2.0400195121765137, 6.59060525894165, 19.827482223510742, 1.474209189414978], step: 113400, lr: 9.857269413218213e-05 2023-03-16 20:04:32,708 44k INFO Train Epoch: 113 [48%] 2023-03-16 20:04:32,709 44k INFO Losses: [2.350370168685913, 2.4720094203948975, 14.132868766784668, 22.241130828857422, 1.6533058881759644], step: 113600, lr: 9.857269413218213e-05 2023-03-16 20:05:44,864 44k INFO Train Epoch: 113 [67%] 2023-03-16 20:05:44,865 44k INFO Losses: [2.4097559452056885, 2.4737629890441895, 9.355610847473145, 17.727405548095703, 0.8935867547988892], step: 113800, lr: 9.857269413218213e-05 2023-03-16 20:06:56,786 44k INFO Train Epoch: 113 [87%] 2023-03-16 20:06:56,787 44k INFO Losses: [2.5446743965148926, 2.2762508392333984, 7.853325843811035, 15.548717498779297, 1.6602590084075928], step: 114000, lr: 9.857269413218213e-05 2023-03-16 20:06:59,970 44k INFO Saving model and optimizer state at iteration 113 to ./logs\44k\G_114000.pth 2023-03-16 20:07:00,680 44k INFO Saving model and optimizer state at iteration 113 to ./logs\44k\D_114000.pth 2023-03-16 20:07:01,361 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_111000.pth 2023-03-16 20:07:01,403 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_111000.pth 2023-03-16 20:07:47,832 44k INFO ====> Epoch: 113, cost 376.44 s 2023-03-16 20:08:22,703 44k INFO Train Epoch: 114 [7%] 2023-03-16 20:08:22,703 44k INFO Losses: [2.8394041061401367, 2.094789981842041, 13.028121948242188, 20.424211502075195, 1.16084623336792], step: 114200, lr: 9.85603725454156e-05 2023-03-16 20:09:34,136 44k INFO Train Epoch: 114 [27%] 2023-03-16 20:09:34,136 44k INFO Losses: [2.4932336807250977, 2.2435872554779053, 12.374580383300781, 20.05066680908203, 1.4756797552108765], step: 114400, lr: 9.85603725454156e-05 2023-03-16 20:10:45,717 44k INFO Train Epoch: 114 [47%] 2023-03-16 20:10:45,717 44k INFO Losses: [2.6318984031677246, 2.156958818435669, 12.351582527160645, 24.100854873657227, 1.1226983070373535], step: 114600, lr: 9.85603725454156e-05 2023-03-16 20:11:57,837 44k INFO Train Epoch: 114 [66%] 2023-03-16 20:11:57,838 44k INFO Losses: [2.746143102645874, 2.2030768394470215, 11.346953392028809, 20.823190689086914, 1.028752326965332], step: 114800, lr: 9.85603725454156e-05 2023-03-16 20:13:09,790 44k INFO Train Epoch: 114 [86%] 2023-03-16 20:13:09,790 44k INFO Losses: [2.6820855140686035, 1.935328722000122, 5.855556488037109, 16.021072387695312, 1.2925729751586914], step: 115000, lr: 9.85603725454156e-05 2023-03-16 20:13:12,970 44k INFO Saving model and optimizer state at iteration 114 to ./logs\44k\G_115000.pth 2023-03-16 20:13:13,722 44k INFO Saving model and optimizer state at iteration 114 to ./logs\44k\D_115000.pth 2023-03-16 20:13:14,412 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_112000.pth 2023-03-16 20:13:14,450 44k INFO .. 
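Every "Losses: [...]" line above carries five values. Their exact composition is not stated in the log; in VITS-style recipes such as this one they typically correspond to the discriminator loss, the generator adversarial loss, the feature-matching loss, the mel reconstruction loss (weighted by c_mel = 45 from the config) and the KL term (weighted by c_kl = 1.0), and the relative magnitudes here (~2.5, ~2.3, ~10, ~20, ~1.3) are consistent with that reading. A hedged, illustrative sketch of how such a weighted generator objective is usually assembled:

# Illustrative only; mapping the five logged numbers to these terms is an
# assumption, not something the log itself confirms.
import torch.nn.functional as F

c_mel, c_kl = 45.0, 1.0   # weights taken from the config dump above


def generator_total(loss_gen_adv, fmap_real, fmap_fake, mel_real, mel_fake, loss_kl_raw):
    # Feature matching: L1 distance between real/fake discriminator feature maps.
    loss_fm = sum(F.l1_loss(r.detach(), f) for r, f in zip(fmap_real, fmap_fake))
    # Mel-spectrogram reconstruction, weighted as in the config.
    loss_mel = F.l1_loss(mel_real, mel_fake) * c_mel
    return loss_gen_adv + loss_fm + loss_mel + loss_kl_raw * c_kl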
Free up space by deleting ckpt ./logs\44k\D_112000.pth 2023-03-16 20:14:04,352 44k INFO ====> Epoch: 114, cost 376.52 s 2023-03-16 20:14:35,601 44k INFO Train Epoch: 115 [6%] 2023-03-16 20:14:35,601 44k INFO Losses: [2.418961524963379, 2.1781325340270996, 8.092780113220215, 20.71607208251953, 1.1528825759887695], step: 115200, lr: 9.854805249884741e-05 2023-03-16 20:15:46,958 44k INFO Train Epoch: 115 [26%] 2023-03-16 20:15:46,958 44k INFO Losses: [2.565761089324951, 2.0851645469665527, 10.104085922241211, 20.31898307800293, 1.557237982749939], step: 115400, lr: 9.854805249884741e-05 2023-03-16 20:16:58,700 44k INFO Train Epoch: 115 [46%] 2023-03-16 20:16:58,700 44k INFO Losses: [2.797769069671631, 2.0892281532287598, 7.427924156188965, 16.45076560974121, 1.0911482572555542], step: 115600, lr: 9.854805249884741e-05 2023-03-16 20:18:10,868 44k INFO Train Epoch: 115 [65%] 2023-03-16 20:18:10,868 44k INFO Losses: [2.116917610168457, 2.469419002532959, 12.572044372558594, 23.192358016967773, 1.3001784086227417], step: 115800, lr: 9.854805249884741e-05 2023-03-16 20:19:22,874 44k INFO Train Epoch: 115 [85%] 2023-03-16 20:19:22,874 44k INFO Losses: [2.329336643218994, 2.5184173583984375, 9.287086486816406, 21.569961547851562, 1.6202257871627808], step: 116000, lr: 9.854805249884741e-05 2023-03-16 20:19:26,009 44k INFO Saving model and optimizer state at iteration 115 to ./logs\44k\G_116000.pth 2023-03-16 20:19:26,784 44k INFO Saving model and optimizer state at iteration 115 to ./logs\44k\D_116000.pth 2023-03-16 20:19:27,477 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_113000.pth 2023-03-16 20:19:27,515 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_113000.pth 2023-03-16 20:20:21,171 44k INFO ====> Epoch: 115, cost 376.82 s 2023-03-16 20:20:48,817 44k INFO Train Epoch: 116 [5%] 2023-03-16 20:20:48,818 44k INFO Losses: [2.4128026962280273, 2.409444808959961, 12.59819507598877, 17.297595977783203, 1.446598768234253], step: 116200, lr: 9.853573399228505e-05 2023-03-16 20:22:00,369 44k INFO Train Epoch: 116 [25%] 2023-03-16 20:22:00,369 44k INFO Losses: [2.5565378665924072, 2.1316163539886475, 8.16817855834961, 19.556529998779297, 1.242074966430664], step: 116400, lr: 9.853573399228505e-05 2023-03-16 20:23:11,816 44k INFO Train Epoch: 116 [45%] 2023-03-16 20:23:11,816 44k INFO Losses: [2.1058003902435303, 2.543138027191162, 13.345788955688477, 21.831945419311523, 1.2679104804992676], step: 116600, lr: 9.853573399228505e-05 2023-03-16 20:24:23,949 44k INFO Train Epoch: 116 [64%] 2023-03-16 20:24:23,950 44k INFO Losses: [2.423766851425171, 2.241889238357544, 15.511407852172852, 21.42144012451172, 1.189314842224121], step: 116800, lr: 9.853573399228505e-05 2023-03-16 20:25:36,038 44k INFO Train Epoch: 116 [84%] 2023-03-16 20:25:36,038 44k INFO Losses: [2.4974868297576904, 2.276735305786133, 11.958528518676758, 21.426111221313477, 1.3227143287658691], step: 117000, lr: 9.853573399228505e-05 2023-03-16 20:25:39,151 44k INFO Saving model and optimizer state at iteration 116 to ./logs\44k\G_117000.pth 2023-03-16 20:25:39,849 44k INFO Saving model and optimizer state at iteration 116 to ./logs\44k\D_117000.pth 2023-03-16 20:25:40,496 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_114000.pth 2023-03-16 20:25:40,535 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_114000.pth 2023-03-16 20:26:37,801 44k INFO ====> Epoch: 116, cost 376.63 s 2023-03-16 20:27:01,790 44k INFO Train Epoch: 117 [4%] 2023-03-16 20:27:01,791 44k INFO Losses: [2.5670840740203857, 2.4008948802948, 8.85390853881836, 21.93359375, 1.3762580156326294], step: 117200, lr: 9.8523417025536e-05 2023-03-16 20:28:13,391 44k INFO Train Epoch: 117 [24%] 2023-03-16 20:28:13,391 44k INFO Losses: [2.4471287727355957, 2.4639875888824463, 10.48588752746582, 20.83785057067871, 1.331135869026184], step: 117400, lr: 9.8523417025536e-05 2023-03-16 20:29:24,873 44k INFO Train Epoch: 117 [44%] 2023-03-16 20:29:24,873 44k INFO Losses: [2.365216016769409, 2.149038791656494, 10.309051513671875, 17.905086517333984, 0.9931918382644653], step: 117600, lr: 9.8523417025536e-05 2023-03-16 20:30:37,056 44k INFO Train Epoch: 117 [63%] 2023-03-16 20:30:37,057 44k INFO Losses: [2.490966796875, 2.246497631072998, 9.168817520141602, 17.52777099609375, 1.3448182344436646], step: 117800, lr: 9.8523417025536e-05 2023-03-16 20:31:50,078 44k INFO Train Epoch: 117 [83%] 2023-03-16 20:31:50,079 44k INFO Losses: [2.456911563873291, 2.427382469177246, 9.538981437683105, 20.529132843017578, 1.5779484510421753], step: 118000, lr: 9.8523417025536e-05 2023-03-16 20:31:53,274 44k INFO Saving model and optimizer state at iteration 117 to ./logs\44k\G_118000.pth 2023-03-16 20:31:53,964 44k INFO Saving model and optimizer state at iteration 117 to ./logs\44k\D_118000.pth 2023-03-16 20:31:54,614 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_115000.pth 2023-03-16 20:31:54,652 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_115000.pth 2023-03-16 20:32:56,537 44k INFO ====> Epoch: 117, cost 378.74 s 2023-03-16 20:33:17,176 44k INFO Train Epoch: 118 [3%] 2023-03-16 20:33:17,177 44k INFO Losses: [2.3732285499572754, 2.238072395324707, 9.031987190246582, 17.511493682861328, 1.3117611408233643], step: 118200, lr: 9.851110159840781e-05 2023-03-16 20:34:29,529 44k INFO Train Epoch: 118 [23%] 2023-03-16 20:34:29,529 44k INFO Losses: [2.4841156005859375, 2.364858627319336, 10.738537788391113, 20.136043548583984, 1.3878077268600464], step: 118400, lr: 9.851110159840781e-05 2023-03-16 20:35:42,669 44k INFO Train Epoch: 118 [43%] 2023-03-16 20:35:42,669 44k INFO Losses: [2.2336785793304443, 2.342721462249756, 11.801963806152344, 22.64765739440918, 1.296323537826538], step: 118600, lr: 9.851110159840781e-05 2023-03-16 20:36:55,327 44k INFO Train Epoch: 118 [62%] 2023-03-16 20:36:55,327 44k INFO Losses: [2.4463324546813965, 2.191504955291748, 8.74538803100586, 19.542762756347656, 1.2858890295028687], step: 118800, lr: 9.851110159840781e-05 2023-03-16 20:38:07,602 44k INFO Train Epoch: 118 [82%] 2023-03-16 20:38:07,602 44k INFO Losses: [2.3348886966705322, 2.262185573577881, 12.302762985229492, 22.71776580810547, 1.4875975847244263], step: 119000, lr: 9.851110159840781e-05 2023-03-16 20:38:10,852 44k INFO Saving model and optimizer state at iteration 118 to ./logs\44k\G_119000.pth 2023-03-16 20:38:11,547 44k INFO Saving model and optimizer state at iteration 118 to ./logs\44k\D_119000.pth 2023-03-16 20:38:12,175 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_116000.pth 2023-03-16 20:38:12,215 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_116000.pth 2023-03-16 20:39:17,010 44k INFO ====> Epoch: 118, cost 380.47 s 2023-03-16 20:39:33,726 44k INFO Train Epoch: 119 [2%] 2023-03-16 20:39:33,726 44k INFO Losses: [2.5319886207580566, 2.3508386611938477, 9.93680477142334, 19.086078643798828, 1.6137272119522095], step: 119200, lr: 9.8498787710708e-05 2023-03-16 20:40:46,220 44k INFO Train Epoch: 119 [22%] 2023-03-16 20:40:46,221 44k INFO Losses: [2.400822401046753, 2.63740873336792, 5.971881866455078, 17.041297912597656, 1.2544069290161133], step: 119400, lr: 9.8498787710708e-05 2023-03-16 20:42:03,617 44k INFO Train Epoch: 119 [42%] 2023-03-16 20:42:03,617 44k INFO Losses: [2.824288845062256, 2.2292511463165283, 6.055537700653076, 15.162551879882812, 1.2261730432510376], step: 119600, lr: 9.8498787710708e-05 2023-03-16 20:43:20,668 44k INFO Train Epoch: 119 [61%] 2023-03-16 20:43:20,668 44k INFO Losses: [2.320554733276367, 2.5026793479919434, 13.502059936523438, 20.95378875732422, 1.1065009832382202], step: 119800, lr: 9.8498787710708e-05 2023-03-16 20:44:37,038 44k INFO Train Epoch: 119 [81%] 2023-03-16 20:44:37,038 44k INFO Losses: [2.4294002056121826, 2.5578501224517822, 11.935531616210938, 19.584735870361328, 0.9947730302810669], step: 120000, lr: 9.8498787710708e-05 2023-03-16 20:44:40,593 44k INFO Saving model and optimizer state at iteration 119 to ./logs\44k\G_120000.pth 2023-03-16 20:44:41,409 44k INFO Saving model and optimizer state at iteration 119 to ./logs\44k\D_120000.pth 2023-03-16 20:44:42,208 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_117000.pth 2023-03-16 20:44:42,249 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_117000.pth 2023-03-16 20:45:51,953 44k INFO ====> Epoch: 119, cost 394.94 s 2023-03-16 20:46:05,447 44k INFO Train Epoch: 120 [1%] 2023-03-16 20:46:05,448 44k INFO Losses: [2.3426527976989746, 2.5756657123565674, 12.134187698364258, 22.73140525817871, 1.0304087400436401], step: 120200, lr: 9.848647536224416e-05 2023-03-16 20:47:21,031 44k INFO Train Epoch: 120 [21%] 2023-03-16 20:47:21,031 44k INFO Losses: [2.549818992614746, 2.2958855628967285, 9.458595275878906, 20.94374656677246, 1.0694291591644287], step: 120400, lr: 9.848647536224416e-05 2023-03-16 20:48:34,190 44k INFO Train Epoch: 120 [41%] 2023-03-16 20:48:34,191 44k INFO Losses: [2.5939319133758545, 2.316530466079712, 12.986124992370605, 20.895397186279297, 1.5729703903198242], step: 120600, lr: 9.848647536224416e-05 2023-03-16 20:49:48,371 44k INFO Train Epoch: 120 [60%] 2023-03-16 20:49:48,371 44k INFO Losses: [2.4803035259246826, 2.2571616172790527, 12.496935844421387, 20.984500885009766, 1.5023380517959595], step: 120800, lr: 9.848647536224416e-05 2023-03-16 20:51:01,497 44k INFO Train Epoch: 120 [80%] 2023-03-16 20:51:01,497 44k INFO Losses: [2.4221818447113037, 2.344900608062744, 12.195839881896973, 20.148597717285156, 1.2884448766708374], step: 121000, lr: 9.848647536224416e-05 2023-03-16 20:51:04,689 44k INFO Saving model and optimizer state at iteration 120 to ./logs\44k\G_121000.pth 2023-03-16 20:51:05,407 44k INFO Saving model and optimizer state at iteration 120 to ./logs\44k\D_121000.pth 2023-03-16 20:51:06,162 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_118000.pth 2023-03-16 20:51:06,199 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_118000.pth 2023-03-16 20:52:18,807 44k INFO ====> Epoch: 120, cost 386.85 s 2023-03-16 20:52:28,748 44k INFO Train Epoch: 121 [0%] 2023-03-16 20:52:28,749 44k INFO Losses: [2.5168094635009766, 2.265253782272339, 6.303280830383301, 17.867095947265625, 1.5749740600585938], step: 121200, lr: 9.847416455282387e-05 2023-03-16 20:53:42,778 44k INFO Train Epoch: 121 [20%] 2023-03-16 20:53:42,779 44k INFO Losses: [2.3313047885894775, 2.622833251953125, 11.143790245056152, 22.727998733520508, 1.271028757095337], step: 121400, lr: 9.847416455282387e-05 2023-03-16 20:54:57,521 44k INFO Train Epoch: 121 [40%] 2023-03-16 20:54:57,521 44k INFO Losses: [2.6032228469848633, 2.267793655395508, 8.595023155212402, 24.498836517333984, 1.7842572927474976], step: 121600, lr: 9.847416455282387e-05 2023-03-16 20:56:15,287 44k INFO Train Epoch: 121 [59%] 2023-03-16 20:56:15,288 44k INFO Losses: [2.48294734954834, 2.3361880779266357, 10.041101455688477, 19.738990783691406, 1.078739881515503], step: 121800, lr: 9.847416455282387e-05 2023-03-16 20:57:33,356 44k INFO Train Epoch: 121 [79%] 2023-03-16 20:57:33,356 44k INFO Losses: [2.2145307064056396, 2.700969696044922, 9.63270092010498, 20.534894943237305, 1.278476357460022], step: 122000, lr: 9.847416455282387e-05 2023-03-16 20:57:36,770 44k INFO Saving model and optimizer state at iteration 121 to ./logs\44k\G_122000.pth 2023-03-16 20:57:37,651 44k INFO Saving model and optimizer state at iteration 121 to ./logs\44k\D_122000.pth 2023-03-16 20:57:38,480 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_119000.pth 2023-03-16 20:57:38,519 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_119000.pth 2023-03-16 20:58:56,029 44k INFO Train Epoch: 121 [99%] 2023-03-16 20:58:56,030 44k INFO Losses: [2.687877893447876, 2.0021510124206543, 6.629727840423584, 18.29709815979004, 1.2445378303527832], step: 122200, lr: 9.847416455282387e-05 2023-03-16 20:58:59,921 44k INFO ====> Epoch: 121, cost 401.11 s 2023-03-16 21:00:23,654 44k INFO Train Epoch: 122 [19%] 2023-03-16 21:00:23,654 44k INFO Losses: [2.5563669204711914, 2.611219882965088, 8.79637336730957, 22.935073852539062, 1.182663083076477], step: 122400, lr: 9.846185528225477e-05 2023-03-16 21:01:41,627 44k INFO Train Epoch: 122 [39%] 2023-03-16 21:01:41,628 44k INFO Losses: [2.030308246612549, 2.636428117752075, 14.505556106567383, 26.99493408203125, 1.1989253759384155], step: 122600, lr: 9.846185528225477e-05 2023-03-16 21:03:01,572 44k INFO Train Epoch: 122 [58%] 2023-03-16 21:03:01,573 44k INFO Losses: [2.396817684173584, 2.445812225341797, 15.277170181274414, 24.843608856201172, 1.4175527095794678], step: 122800, lr: 9.846185528225477e-05 2023-03-16 21:04:20,157 44k INFO Train Epoch: 122 [78%] 2023-03-16 21:04:20,158 44k INFO Losses: [2.3463704586029053, 2.369340419769287, 8.37028694152832, 15.44964599609375, 1.4577456712722778], step: 123000, lr: 9.846185528225477e-05 2023-03-16 21:04:23,617 44k INFO Saving model and optimizer state at iteration 122 to ./logs\44k\G_123000.pth 2023-03-16 21:04:24,394 44k INFO Saving model and optimizer state at iteration 122 to ./logs\44k\D_123000.pth 2023-03-16 21:04:25,234 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_120000.pth 2023-03-16 21:04:25,278 44k INFO .. 
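The learning rate logged above falls by a constant factor from one epoch to the next; the values match per-epoch exponential decay with lr_decay = 0.999875 from the config dump, for example across epochs 119-121:

# Quick check against the lr values logged above for epochs 119-121.
lr_decay = 0.999875

lr_119 = 9.8498787710708e-05      # logged for epoch 119
print(lr_119 * lr_decay)          # ~9.848647536224e-05, logged for epoch 120
print(lr_119 * lr_decay ** 2)     # ~9.847416455282e-05, logged for epoch 121

This is consistent with an ExponentialLR-style scheduler (gamma = lr_decay) stepped once per epoch.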
Free up space by deleting ckpt ./logs\44k\D_120000.pth 2023-03-16 21:05:44,679 44k INFO Train Epoch: 122 [98%] 2023-03-16 21:05:44,679 44k INFO Losses: [2.388167142868042, 2.5680830478668213, 5.100479602813721, 12.093184471130371, 1.144964337348938], step: 123200, lr: 9.846185528225477e-05 2023-03-16 21:05:52,854 44k INFO ====> Epoch: 122, cost 412.93 s 2023-03-16 21:07:11,756 44k INFO Train Epoch: 123 [18%] 2023-03-16 21:07:11,756 44k INFO Losses: [2.427475690841675, 2.3610458374023438, 9.498632431030273, 16.818723678588867, 1.280234694480896], step: 123400, lr: 9.84495475503445e-05 2023-03-16 21:08:27,382 44k INFO Train Epoch: 123 [38%] 2023-03-16 21:08:27,383 44k INFO Losses: [2.344620943069458, 2.5941522121429443, 14.342659950256348, 23.07984733581543, 1.0939462184906006], step: 123600, lr: 9.84495475503445e-05 2023-03-16 21:09:46,453 44k INFO Train Epoch: 123 [57%] 2023-03-16 21:09:46,454 44k INFO Losses: [2.547884941101074, 2.054893732070923, 9.148332595825195, 20.018417358398438, 1.649023175239563], step: 123800, lr: 9.84495475503445e-05 2023-03-16 21:11:00,840 44k INFO Train Epoch: 123 [77%] 2023-03-16 21:11:00,840 44k INFO Losses: [2.518825054168701, 2.1414053440093994, 10.317829132080078, 19.119966506958008, 0.7595401406288147], step: 124000, lr: 9.84495475503445e-05 2023-03-16 21:11:04,082 44k INFO Saving model and optimizer state at iteration 123 to ./logs\44k\G_124000.pth 2023-03-16 21:11:04,805 44k INFO Saving model and optimizer state at iteration 123 to ./logs\44k\D_124000.pth 2023-03-16 21:11:05,501 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_121000.pth 2023-03-16 21:11:05,535 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_121000.pth 2023-03-16 21:12:22,286 44k INFO Train Epoch: 123 [97%] 2023-03-16 21:12:22,286 44k INFO Losses: [2.5878682136535645, 2.1829071044921875, 7.049212455749512, 17.75411605834961, 1.3398253917694092], step: 124200, lr: 9.84495475503445e-05 2023-03-16 21:12:33,421 44k INFO ====> Epoch: 123, cost 400.57 s 2023-03-16 21:13:49,313 44k INFO Train Epoch: 124 [17%] 2023-03-16 21:13:49,314 44k INFO Losses: [2.582327127456665, 2.024710178375244, 8.379120826721191, 21.977935791015625, 1.327603816986084], step: 124400, lr: 9.84372413569007e-05 2023-03-16 21:15:06,288 44k INFO Train Epoch: 124 [37%] 2023-03-16 21:15:06,289 44k INFO Losses: [2.5604705810546875, 2.083108425140381, 9.70132827758789, 18.714637756347656, 1.3466579914093018], step: 124600, lr: 9.84372413569007e-05 2023-03-16 21:16:24,982 44k INFO Train Epoch: 124 [56%] 2023-03-16 21:16:24,982 44k INFO Losses: [2.313772439956665, 2.436140537261963, 10.304165840148926, 19.76287078857422, 1.3897817134857178], step: 124800, lr: 9.84372413569007e-05 2023-03-16 21:17:43,969 44k INFO Train Epoch: 124 [76%] 2023-03-16 21:17:43,969 44k INFO Losses: [2.189805507659912, 2.604588508605957, 15.192005157470703, 20.383209228515625, 1.4267423152923584], step: 125000, lr: 9.84372413569007e-05 2023-03-16 21:17:47,416 44k INFO Saving model and optimizer state at iteration 124 to ./logs\44k\G_125000.pth 2023-03-16 21:17:48,166 44k INFO Saving model and optimizer state at iteration 124 to ./logs\44k\D_125000.pth 2023-03-16 21:17:48,950 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_122000.pth 2023-03-16 21:17:48,983 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_122000.pth 2023-03-16 21:19:04,042 44k INFO Train Epoch: 124 [96%] 2023-03-16 21:19:04,043 44k INFO Losses: [2.582874298095703, 2.188211441040039, 9.371465682983398, 19.447050094604492, 1.0882121324539185], step: 125200, lr: 9.84372413569007e-05 2023-03-16 21:19:20,032 44k INFO ====> Epoch: 124, cost 406.61 s 2023-03-16 21:20:32,107 44k INFO Train Epoch: 125 [16%] 2023-03-16 21:20:32,108 44k INFO Losses: [2.3764219284057617, 2.6233363151550293, 10.088127136230469, 21.445926666259766, 1.579843521118164], step: 125400, lr: 9.842493670173108e-05 2023-03-16 21:21:48,129 44k INFO Train Epoch: 125 [36%] 2023-03-16 21:21:48,130 44k INFO Losses: [2.717830181121826, 2.0937118530273438, 5.251359939575195, 14.760302543640137, 1.555282711982727], step: 125600, lr: 9.842493670173108e-05 2023-03-16 21:23:07,316 44k INFO Train Epoch: 125 [55%] 2023-03-16 21:23:07,316 44k INFO Losses: [2.560476303100586, 2.3678083419799805, 8.846772193908691, 20.237735748291016, 1.3674736022949219], step: 125800, lr: 9.842493670173108e-05 2023-03-16 21:24:25,368 44k INFO Train Epoch: 125 [75%] 2023-03-16 21:24:25,369 44k INFO Losses: [2.3682737350463867, 2.2751691341400146, 11.449261665344238, 23.939096450805664, 1.291507601737976], step: 126000, lr: 9.842493670173108e-05 2023-03-16 21:24:28,866 44k INFO Saving model and optimizer state at iteration 125 to ./logs\44k\G_126000.pth 2023-03-16 21:24:29,617 44k INFO Saving model and optimizer state at iteration 125 to ./logs\44k\D_126000.pth 2023-03-16 21:24:30,390 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_123000.pth 2023-03-16 21:24:30,425 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_123000.pth 2023-03-16 21:25:51,265 44k INFO Train Epoch: 125 [95%] 2023-03-16 21:25:51,266 44k INFO Losses: [2.2202296257019043, 2.755955219268799, 8.957170486450195, 17.368812561035156, 1.2911958694458008], step: 126200, lr: 9.842493670173108e-05 2023-03-16 21:26:11,090 44k INFO ====> Epoch: 125, cost 411.06 s 2023-03-16 21:27:20,498 44k INFO Train Epoch: 126 [15%] 2023-03-16 21:27:20,499 44k INFO Losses: [2.8419511318206787, 2.1519012451171875, 4.74504280090332, 14.384322166442871, 1.1298590898513794], step: 126400, lr: 9.841263358464336e-05 2023-03-16 21:28:37,977 44k INFO Train Epoch: 126 [35%] 2023-03-16 21:28:37,977 44k INFO Losses: [2.333691358566284, 2.256894588470459, 10.803601264953613, 19.03571891784668, 1.4740780591964722], step: 126600, lr: 9.841263358464336e-05 2023-03-16 21:29:58,404 44k INFO Train Epoch: 126 [54%] 2023-03-16 21:29:58,404 44k INFO Losses: [2.8947510719299316, 2.169710874557495, 9.449152946472168, 21.0382022857666, 1.6140398979187012], step: 126800, lr: 9.841263358464336e-05 2023-03-16 21:31:18,779 44k INFO Train Epoch: 126 [74%] 2023-03-16 21:31:18,780 44k INFO Losses: [2.5135457515716553, 2.175605058670044, 9.934426307678223, 19.865989685058594, 0.8287474513053894], step: 127000, lr: 9.841263358464336e-05 2023-03-16 21:31:22,121 44k INFO Saving model and optimizer state at iteration 126 to ./logs\44k\G_127000.pth 2023-03-16 21:31:22,934 44k INFO Saving model and optimizer state at iteration 126 to ./logs\44k\D_127000.pth 2023-03-16 21:31:23,594 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_124000.pth 2023-03-16 21:31:23,634 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_124000.pth 2023-03-16 21:32:43,527 44k INFO Train Epoch: 126 [94%] 2023-03-16 21:32:43,528 44k INFO Losses: [2.7297589778900146, 2.3555426597595215, 10.663714408874512, 17.71245765686035, 1.3499878644943237], step: 127200, lr: 9.841263358464336e-05 2023-03-16 21:33:06,565 44k INFO ====> Epoch: 126, cost 415.47 s 2023-03-16 21:34:12,464 44k INFO Train Epoch: 127 [14%] 2023-03-16 21:34:12,465 44k INFO Losses: [2.3476643562316895, 2.347994565963745, 11.09720230102539, 23.507179260253906, 1.547228455543518], step: 127400, lr: 9.840033200544528e-05 2023-03-16 21:35:29,401 44k INFO Train Epoch: 127 [34%] 2023-03-16 21:35:29,401 44k INFO Losses: [2.554994821548462, 2.389326333999634, 7.781848907470703, 19.748640060424805, 0.8924790620803833], step: 127600, lr: 9.840033200544528e-05 2023-03-16 21:36:44,075 44k INFO Train Epoch: 127 [53%] 2023-03-16 21:36:44,075 44k INFO Losses: [2.5738961696624756, 2.051382303237915, 9.24011516571045, 13.894948959350586, 1.6917957067489624], step: 127800, lr: 9.840033200544528e-05 2023-03-16 21:37:58,903 44k INFO Train Epoch: 127 [73%] 2023-03-16 21:37:58,903 44k INFO Losses: [2.564816951751709, 2.011338710784912, 8.829463005065918, 15.653246879577637, 1.3370722532272339], step: 128000, lr: 9.840033200544528e-05 2023-03-16 21:38:02,010 44k INFO Saving model and optimizer state at iteration 127 to ./logs\44k\G_128000.pth 2023-03-16 21:38:02,732 44k INFO Saving model and optimizer state at iteration 127 to ./logs\44k\D_128000.pth 2023-03-16 21:38:03,417 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_125000.pth 2023-03-16 21:38:03,451 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_125000.pth 2023-03-16 21:39:18,541 44k INFO Train Epoch: 127 [93%] 2023-03-16 21:39:18,542 44k INFO Losses: [2.3865556716918945, 2.548262596130371, 9.01450252532959, 21.457292556762695, 1.1322754621505737], step: 128200, lr: 9.840033200544528e-05 2023-03-16 21:39:45,012 44k INFO ====> Epoch: 127, cost 398.45 s 2023-03-16 21:40:44,149 44k INFO Train Epoch: 128 [13%] 2023-03-16 21:40:44,149 44k INFO Losses: [2.663928747177124, 2.0870141983032227, 8.000514030456543, 15.461501121520996, 1.238242268562317], step: 128400, lr: 9.838803196394459e-05 2023-03-16 21:42:00,886 44k INFO Train Epoch: 128 [33%] 2023-03-16 21:42:00,886 44k INFO Losses: [2.1933984756469727, 2.603637933731079, 10.112042427062988, 21.473114013671875, 1.1478219032287598], step: 128600, lr: 9.838803196394459e-05 2023-03-16 21:43:20,793 44k INFO Train Epoch: 128 [52%] 2023-03-16 21:43:20,793 44k INFO Losses: [2.386568069458008, 2.399456739425659, 9.913968086242676, 18.65563201904297, 1.3566242456436157], step: 128800, lr: 9.838803196394459e-05 2023-03-16 21:44:38,269 44k INFO Train Epoch: 128 [72%] 2023-03-16 21:44:38,270 44k INFO Losses: [2.1013412475585938, 2.509451150894165, 11.9275484085083, 24.23639678955078, 1.298487901687622], step: 129000, lr: 9.838803196394459e-05 2023-03-16 21:44:41,526 44k INFO Saving model and optimizer state at iteration 128 to ./logs\44k\G_129000.pth 2023-03-16 21:44:42,265 44k INFO Saving model and optimizer state at iteration 128 to ./logs\44k\D_129000.pth 2023-03-16 21:44:42,944 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_126000.pth 2023-03-16 21:44:42,944 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_126000.pth 2023-03-16 21:46:01,986 44k INFO Train Epoch: 128 [92%] 2023-03-16 21:46:01,986 44k INFO Losses: [2.714503288269043, 2.1214232444763184, 10.781322479248047, 20.27105140686035, 1.3933510780334473], step: 129200, lr: 9.838803196394459e-05 2023-03-16 21:46:32,103 44k INFO ====> Epoch: 128, cost 407.09 s 2023-03-16 21:47:26,333 44k INFO Train Epoch: 129 [12%] 2023-03-16 21:47:26,333 44k INFO Losses: [2.36114239692688, 2.54467511177063, 12.97600269317627, 21.302833557128906, 1.3872027397155762], step: 129400, lr: 9.837573345994909e-05 2023-03-16 21:48:41,063 44k INFO Train Epoch: 129 [32%] 2023-03-16 21:48:41,064 44k INFO Losses: [2.194422960281372, 2.5571022033691406, 13.86385726928711, 21.944990158081055, 1.1686285734176636], step: 129600, lr: 9.837573345994909e-05 2023-03-16 21:49:59,217 44k INFO Train Epoch: 129 [51%] 2023-03-16 21:49:59,217 44k INFO Losses: [2.400430202484131, 2.4840714931488037, 12.558272361755371, 22.832727432250977, 1.3673943281173706], step: 129800, lr: 9.837573345994909e-05 2023-03-16 21:51:18,073 44k INFO Train Epoch: 129 [71%] 2023-03-16 21:51:18,074 44k INFO Losses: [2.309587001800537, 2.1837422847747803, 13.710969924926758, 20.554431915283203, 1.1682411432266235], step: 130000, lr: 9.837573345994909e-05 2023-03-16 21:51:21,295 44k INFO Saving model and optimizer state at iteration 129 to ./logs\44k\G_130000.pth 2023-03-16 21:51:22,082 44k INFO Saving model and optimizer state at iteration 129 to ./logs\44k\D_130000.pth 2023-03-16 21:51:22,802 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_127000.pth 2023-03-16 21:51:22,850 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_127000.pth 2023-03-16 21:52:42,275 44k INFO Train Epoch: 129 [91%] 2023-03-16 21:52:42,276 44k INFO Losses: [2.225048065185547, 2.4784798622131348, 16.56960678100586, 23.200525283813477, 1.5371919870376587], step: 130200, lr: 9.837573345994909e-05 2023-03-16 21:53:17,823 44k INFO ====> Epoch: 129, cost 405.72 s 2023-03-16 21:54:11,525 44k INFO Train Epoch: 130 [11%] 2023-03-16 21:54:11,526 44k INFO Losses: [2.2791314125061035, 2.3530805110931396, 15.32770824432373, 21.12164878845215, 1.5226047039031982], step: 130400, lr: 9.836343649326659e-05 2023-03-16 21:55:29,286 44k INFO Train Epoch: 130 [31%] 2023-03-16 21:55:29,286 44k INFO Losses: [2.461757183074951, 2.226109504699707, 11.762144088745117, 21.150938034057617, 1.1832889318466187], step: 130600, lr: 9.836343649326659e-05 2023-03-16 21:56:49,455 44k INFO Train Epoch: 130 [50%] 2023-03-16 21:56:49,456 44k INFO Losses: [2.405156135559082, 2.4055044651031494, 9.579560279846191, 17.96463394165039, 1.4282358884811401], step: 130800, lr: 9.836343649326659e-05 2023-03-16 21:58:10,466 44k INFO Train Epoch: 130 [70%] 2023-03-16 21:58:10,466 44k INFO Losses: [2.4095988273620605, 2.475285291671753, 8.887436866760254, 20.731224060058594, 1.461783528327942], step: 131000, lr: 9.836343649326659e-05 2023-03-16 21:58:13,710 44k INFO Saving model and optimizer state at iteration 130 to ./logs\44k\G_131000.pth 2023-03-16 21:58:14,476 44k INFO Saving model and optimizer state at iteration 130 to ./logs\44k\D_131000.pth 2023-03-16 21:58:15,116 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_128000.pth 2023-03-16 21:58:15,161 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_128000.pth 2023-03-16 21:59:33,688 44k INFO Train Epoch: 130 [90%] 2023-03-16 21:59:33,688 44k INFO Losses: [2.396411657333374, 2.1896917819976807, 10.844066619873047, 20.400211334228516, 1.1212650537490845], step: 131200, lr: 9.836343649326659e-05 2023-03-16 22:00:12,672 44k INFO ====> Epoch: 130, cost 414.85 s 2023-03-17 00:45:55,461 44k INFO {'train': {'log_interval': 200, 'eval_interval': 1000, 'seed': 1234, 'epochs': 200, 'learning_rate': 0.0001, 'betas': [0.8, 0.99], 'eps': 1e-09, 'batch_size': 6, 'fp16_run': False, 'lr_decay': 0.999875, 'segment_size': 10240, 'init_lr_ratio': 1, 'warmup_epochs': 0, 'c_mel': 45, 'c_kl': 1.0, 'use_sr': True, 'max_speclen': 512, 'port': '8001', 'keep_ckpts': 3}, 'data': {'training_files': 'filelists/train.txt', 'validation_files': 'filelists/val.txt', 'max_wav_value': 32768.0, 'sampling_rate': 44100, 'filter_length': 2048, 'hop_length': 512, 'win_length': 2048, 'n_mel_channels': 80, 'mel_fmin': 0.0, 'mel_fmax': 22050}, 'model': {'inter_channels': 192, 'hidden_channels': 192, 'filter_channels': 768, 'n_heads': 2, 'n_layers': 6, 'kernel_size': 3, 'p_dropout': 0.1, 'resblock': '1', 'resblock_kernel_sizes': [3, 7, 11], 'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'upsample_rates': [8, 8, 2, 2, 2], 'upsample_initial_channel': 512, 'upsample_kernel_sizes': [16, 16, 4, 4, 4], 'n_layers_q': 3, 'use_spectral_norm': False, 'gin_channels': 256, 'ssl_dim': 256, 'n_speakers': 200}, 'spk': {'tubaki': 0}, 'model_dir': './logs\\44k'} 2023-03-17 00:45:55,488 44k WARNING git hash values are different. cea6df30(saved) != fd4d47fd(current) 2023-03-17 00:45:58,549 44k INFO Loaded checkpoint './logs\44k\G_131000.pth' (iteration 130) 2023-03-17 00:45:59,035 44k INFO Loaded checkpoint './logs\44k\D_131000.pth' (iteration 130) 2023-03-17 00:46:57,849 44k INFO Train Epoch: 130 [11%] 2023-03-17 00:46:57,849 44k INFO Losses: [2.515319347381592, 2.2044897079467773, 10.619453430175781, 19.655818939208984, 1.2873295545578003], step: 130400, lr: 9.835114106370493e-05 2023-03-17 00:48:12,810 44k INFO Train Epoch: 130 [31%] 2023-03-17 00:48:12,810 44k INFO Losses: [2.41685152053833, 2.213315963745117, 10.664912223815918, 19.46683692932129, 1.2917394638061523], step: 130600, lr: 9.835114106370493e-05 2023-03-17 00:49:26,951 44k INFO Train Epoch: 130 [50%] 2023-03-17 00:49:26,952 44k INFO Losses: [2.3559603691101074, 2.306091070175171, 12.108537673950195, 19.82294273376465, 1.3004748821258545], step: 130800, lr: 9.835114106370493e-05 2023-03-17 00:50:40,424 44k INFO Train Epoch: 130 [70%] 2023-03-17 00:50:40,425 44k INFO Losses: [2.4487738609313965, 2.440524101257324, 8.93284797668457, 19.889787673950195, 1.3925398588180542], step: 131000, lr: 9.835114106370493e-05 2023-03-17 00:50:44,199 44k INFO Saving model and optimizer state at iteration 130 to ./logs\44k\G_131000.pth 2023-03-17 00:50:44,907 44k INFO Saving model and optimizer state at iteration 130 to ./logs\44k\D_131000.pth 2023-03-17 00:51:58,670 44k INFO Train Epoch: 130 [90%] 2023-03-17 00:51:58,671 44k INFO Losses: [2.2225944995880127, 2.436965227127075, 15.346778869628906, 19.289621353149414, 0.9636982083320618], step: 131200, lr: 9.835114106370493e-05 2023-03-17 00:52:37,754 44k INFO ====> Epoch: 130, cost 402.29 s 2023-03-17 00:53:22,854 44k INFO Train Epoch: 131 [10%] 2023-03-17 00:53:22,855 44k INFO Losses: [2.2966837882995605, 2.3501596450805664, 11.471131324768066, 22.913076400756836, 1.27975594997406], step: 131400, lr: 9.833884717107196e-05 2023-03-17 
00:54:33,330 44k INFO Train Epoch: 131 [30%] 2023-03-17 00:54:33,330 44k INFO Losses: [2.4693539142608643, 2.2898452281951904, 11.40464973449707, 20.242176055908203, 1.7467775344848633], step: 131600, lr: 9.833884717107196e-05 2023-03-17 00:55:44,245 44k INFO Train Epoch: 131 [50%] 2023-03-17 00:55:44,246 44k INFO Losses: [2.4316959381103516, 2.3097615242004395, 8.49577808380127, 22.404626846313477, 0.9041913747787476], step: 131800, lr: 9.833884717107196e-05 2023-03-17 00:56:55,533 44k INFO Train Epoch: 131 [69%] 2023-03-17 00:56:55,534 44k INFO Losses: [2.474576950073242, 2.1168057918548584, 10.92618179321289, 18.783601760864258, 1.0390485525131226], step: 132000, lr: 9.833884717107196e-05 2023-03-17 00:56:58,719 44k INFO Saving model and optimizer state at iteration 131 to ./logs\44k\G_132000.pth 2023-03-17 00:56:59,379 44k INFO Saving model and optimizer state at iteration 131 to ./logs\44k\D_132000.pth 2023-03-17 00:56:59,951 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_129000.pth 2023-03-17 00:56:59,951 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_129000.pth 2023-03-17 00:58:11,188 44k INFO Train Epoch: 131 [89%] 2023-03-17 00:58:11,189 44k INFO Losses: [2.564199447631836, 2.180450439453125, 14.912457466125488, 20.347394943237305, 1.4326921701431274], step: 132200, lr: 9.833884717107196e-05 2023-03-17 00:58:50,229 44k INFO ====> Epoch: 131, cost 372.47 s 2023-03-17 00:59:31,917 44k INFO Train Epoch: 132 [9%] 2023-03-17 00:59:31,918 44k INFO Losses: [2.3032517433166504, 2.211087703704834, 10.420531272888184, 19.246906280517578, 1.2039368152618408], step: 132400, lr: 9.832655481517557e-05 2023-03-17 01:00:42,404 44k INFO Train Epoch: 132 [29%] 2023-03-17 01:00:42,405 44k INFO Losses: [2.3265318870544434, 2.133335828781128, 11.170360565185547, 17.800195693969727, 1.4345520734786987], step: 132600, lr: 9.832655481517557e-05 2023-03-17 01:01:53,772 44k INFO Train Epoch: 132 [49%] 2023-03-17 01:01:53,772 44k INFO Losses: [2.679762125015259, 2.1087942123413086, 8.560114860534668, 17.913320541381836, 1.3686516284942627], step: 132800, lr: 9.832655481517557e-05 2023-03-17 01:03:05,312 44k INFO Train Epoch: 132 [68%] 2023-03-17 01:03:05,312 44k INFO Losses: [2.461533784866333, 2.6619174480438232, 8.439775466918945, 18.15907859802246, 1.2864606380462646], step: 133000, lr: 9.832655481517557e-05 2023-03-17 01:03:08,524 44k INFO Saving model and optimizer state at iteration 132 to ./logs\44k\G_133000.pth 2023-03-17 01:03:09,246 44k INFO Saving model and optimizer state at iteration 132 to ./logs\44k\D_133000.pth 2023-03-17 01:03:09,826 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_130000.pth 2023-03-17 01:03:09,826 44k INFO .. 
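After this second relaunch (epochs raised from 130 to 200, G_131000.pth/D_131000.pth loaded at iteration 130), epoch 130 is trained again and its first progress line reports step 130400. That number is consistent with roughly 1010 optimizer steps per epoch, a figure inferred from the epoch/step pairs in this log (e.g. epoch 121 [0%] logs step 121200 = 120 * 1010) rather than stated anywhere in it, combined with the 200-step log_interval:

# Back-of-the-envelope check; steps_per_epoch is inferred from this log.
steps_per_epoch = 1010
log_interval = 200

resume_step = (130 - 1) * steps_per_epoch                   # start of epoch 130
first_logged = ((resume_step // log_interval) + 1) * log_interval
print(resume_step, first_logged)                            # 130290 130400

Being about 110 steps into a ~1010-step epoch also matches the "[11%]" progress shown for that entry.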
Free up space by deleting ckpt ./logs\44k\D_130000.pth 2023-03-17 01:04:20,962 44k INFO Train Epoch: 132 [88%] 2023-03-17 01:04:20,963 44k INFO Losses: [2.27639102935791, 2.6087613105773926, 10.117354393005371, 23.067853927612305, 1.0199527740478516], step: 133200, lr: 9.832655481517557e-05 2023-03-17 01:05:03,734 44k INFO ====> Epoch: 132, cost 373.51 s 2023-03-17 01:05:41,889 44k INFO Train Epoch: 133 [8%] 2023-03-17 01:05:41,889 44k INFO Losses: [2.2630434036254883, 2.105909585952759, 14.448860168457031, 19.87299156188965, 1.072484016418457], step: 133400, lr: 9.831426399582366e-05 2023-03-17 01:06:52,689 44k INFO Train Epoch: 133 [28%] 2023-03-17 01:06:52,690 44k INFO Losses: [2.57877254486084, 2.179549217224121, 6.909611225128174, 17.378055572509766, 1.2851967811584473], step: 133600, lr: 9.831426399582366e-05 2023-03-17 01:08:03,876 44k INFO Train Epoch: 133 [48%] 2023-03-17 01:08:03,876 44k INFO Losses: [2.0359578132629395, 2.751680374145508, 16.25448989868164, 23.52233123779297, 1.4066616296768188], step: 133800, lr: 9.831426399582366e-05 2023-03-17 01:09:15,600 44k INFO Train Epoch: 133 [67%] 2023-03-17 01:09:15,601 44k INFO Losses: [2.3857693672180176, 2.312159538269043, 11.445144653320312, 18.620725631713867, 1.381258249282837], step: 134000, lr: 9.831426399582366e-05 2023-03-17 01:09:18,807 44k INFO Saving model and optimizer state at iteration 133 to ./logs\44k\G_134000.pth 2023-03-17 01:09:19,472 44k INFO Saving model and optimizer state at iteration 133 to ./logs\44k\D_134000.pth 2023-03-17 01:09:20,109 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_131000.pth 2023-03-17 01:09:20,138 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_131000.pth 2023-03-17 01:10:31,473 44k INFO Train Epoch: 133 [87%] 2023-03-17 01:10:31,474 44k INFO Losses: [2.523191213607788, 1.9888473749160767, 11.672584533691406, 17.758657455444336, 1.7752776145935059], step: 134200, lr: 9.831426399582366e-05 2023-03-17 01:11:17,618 44k INFO ====> Epoch: 133, cost 373.88 s 2023-03-17 01:11:52,181 44k INFO Train Epoch: 134 [7%] 2023-03-17 01:11:52,181 44k INFO Losses: [2.700049877166748, 2.1109743118286133, 8.713677406311035, 18.656505584716797, 1.3333792686462402], step: 134400, lr: 9.830197471282419e-05 2023-03-17 01:13:03,066 44k INFO Train Epoch: 134 [27%] 2023-03-17 01:13:03,066 44k INFO Losses: [2.5792527198791504, 2.3817694187164307, 11.263164520263672, 18.88690948486328, 1.452743411064148], step: 134600, lr: 9.830197471282419e-05 2023-03-17 01:14:14,143 44k INFO Train Epoch: 134 [47%] 2023-03-17 01:14:14,143 44k INFO Losses: [2.158463954925537, 2.3287062644958496, 13.803495407104492, 23.02851104736328, 1.1427438259124756], step: 134800, lr: 9.830197471282419e-05 2023-03-17 01:15:25,667 44k INFO Train Epoch: 134 [66%] 2023-03-17 01:15:25,668 44k INFO Losses: [2.7090885639190674, 2.0478761196136475, 8.351889610290527, 16.95465087890625, 1.1763147115707397], step: 135000, lr: 9.830197471282419e-05 2023-03-17 01:15:28,860 44k INFO Saving model and optimizer state at iteration 134 to ./logs\44k\G_135000.pth 2023-03-17 01:15:29,548 44k INFO Saving model and optimizer state at iteration 134 to ./logs\44k\D_135000.pth 2023-03-17 01:15:30,226 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_132000.pth 2023-03-17 01:15:30,254 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_132000.pth 2023-03-17 01:16:41,500 44k INFO Train Epoch: 134 [86%] 2023-03-17 01:16:41,500 44k INFO Losses: [2.5091633796691895, 2.6037425994873047, 11.83573055267334, 20.08641815185547, 1.1493847370147705], step: 135200, lr: 9.830197471282419e-05 2023-03-17 01:17:31,178 44k INFO ====> Epoch: 134, cost 373.56 s 2023-03-17 01:18:02,140 44k INFO Train Epoch: 135 [6%] 2023-03-17 01:18:02,140 44k INFO Losses: [2.5179355144500732, 2.230459451675415, 7.949930191040039, 18.210058212280273, 1.4153021574020386], step: 135400, lr: 9.828968696598508e-05 2023-03-17 01:19:12,836 44k INFO Train Epoch: 135 [26%] 2023-03-17 01:19:12,837 44k INFO Losses: [2.494192600250244, 2.1553540229797363, 11.54837703704834, 19.415260314941406, 1.5798628330230713], step: 135600, lr: 9.828968696598508e-05 2023-03-17 01:20:23,183 44k INFO Train Epoch: 135 [46%] 2023-03-17 01:20:23,184 44k INFO Losses: [2.4279069900512695, 2.023193836212158, 8.366968154907227, 18.869081497192383, 1.3344875574111938], step: 135800, lr: 9.828968696598508e-05 2023-03-17 01:21:33,901 44k INFO Train Epoch: 135 [65%] 2023-03-17 01:21:33,902 44k INFO Losses: [2.1663734912872314, 2.3040404319763184, 14.047518730163574, 23.113216400146484, 1.6738510131835938], step: 136000, lr: 9.828968696598508e-05 2023-03-17 01:21:37,073 44k INFO Saving model and optimizer state at iteration 135 to ./logs\44k\G_136000.pth 2023-03-17 01:21:37,713 44k INFO Saving model and optimizer state at iteration 135 to ./logs\44k\D_136000.pth 2023-03-17 01:21:38,320 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_133000.pth 2023-03-17 01:21:38,349 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_133000.pth 2023-03-17 01:22:48,562 44k INFO Train Epoch: 135 [85%] 2023-03-17 01:22:48,562 44k INFO Losses: [2.526240587234497, 2.3862416744232178, 9.692863464355469, 17.845613479614258, 1.242034673690796], step: 136200, lr: 9.828968696598508e-05 2023-03-17 01:23:41,606 44k INFO ====> Epoch: 135, cost 370.43 s 2023-03-17 01:24:08,624 44k INFO Train Epoch: 136 [5%] 2023-03-17 01:24:08,624 44k INFO Losses: [2.422752857208252, 2.191754102706909, 10.67060661315918, 20.035690307617188, 1.5129849910736084], step: 136400, lr: 9.827740075511432e-05 2023-03-17 01:25:18,823 44k INFO Train Epoch: 136 [25%] 2023-03-17 01:25:18,823 44k INFO Losses: [2.4388811588287354, 2.3099894523620605, 12.97171401977539, 20.86192512512207, 1.2476913928985596], step: 136600, lr: 9.827740075511432e-05 2023-03-17 01:26:29,027 44k INFO Train Epoch: 136 [45%] 2023-03-17 01:26:29,027 44k INFO Losses: [2.340277910232544, 2.2466633319854736, 12.719745635986328, 22.420021057128906, 1.5185600519180298], step: 136800, lr: 9.827740075511432e-05 2023-03-17 01:27:39,770 44k INFO Train Epoch: 136 [64%] 2023-03-17 01:27:39,770 44k INFO Losses: [2.2075088024139404, 2.3650319576263428, 12.203450202941895, 19.791677474975586, 1.2010153532028198], step: 137000, lr: 9.827740075511432e-05 2023-03-17 01:27:42,811 44k INFO Saving model and optimizer state at iteration 136 to ./logs\44k\G_137000.pth 2023-03-17 01:27:43,450 44k INFO Saving model and optimizer state at iteration 136 to ./logs\44k\D_137000.pth 2023-03-17 01:27:44,084 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_134000.pth 2023-03-17 01:27:44,116 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_134000.pth 2023-03-17 01:28:54,709 44k INFO Train Epoch: 136 [84%] 2023-03-17 01:28:54,709 44k INFO Losses: [2.4411323070526123, 2.2212955951690674, 8.676998138427734, 21.300195693969727, 1.2615365982055664], step: 137200, lr: 9.827740075511432e-05 2023-03-17 01:29:50,971 44k INFO ====> Epoch: 136, cost 369.36 s 2023-03-17 01:30:14,402 44k INFO Train Epoch: 137 [4%] 2023-03-17 01:30:14,403 44k INFO Losses: [2.2587409019470215, 2.436528205871582, 11.907655715942383, 24.48907470703125, 1.1111832857131958], step: 137400, lr: 9.826511608001993e-05 2023-03-17 01:31:24,773 44k INFO Train Epoch: 137 [24%] 2023-03-17 01:31:24,774 44k INFO Losses: [2.2772068977355957, 2.2588560581207275, 15.186775207519531, 23.630550384521484, 1.1863977909088135], step: 137600, lr: 9.826511608001993e-05 2023-03-17 01:32:34,942 44k INFO Train Epoch: 137 [44%] 2023-03-17 01:32:34,943 44k INFO Losses: [2.387472629547119, 2.1845736503601074, 8.30869197845459, 16.70293426513672, 1.0892188549041748], step: 137800, lr: 9.826511608001993e-05 2023-03-17 01:33:45,847 44k INFO Train Epoch: 137 [63%] 2023-03-17 01:33:45,847 44k INFO Losses: [2.689639091491699, 2.3808789253234863, 8.777978897094727, 19.990604400634766, 1.1632226705551147], step: 138000, lr: 9.826511608001993e-05 2023-03-17 01:33:48,895 44k INFO Saving model and optimizer state at iteration 137 to ./logs\44k\G_138000.pth 2023-03-17 01:33:49,533 44k INFO Saving model and optimizer state at iteration 137 to ./logs\44k\D_138000.pth 2023-03-17 01:33:50,138 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_135000.pth 2023-03-17 01:33:50,173 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_135000.pth 2023-03-17 01:35:00,421 44k INFO Train Epoch: 137 [83%] 2023-03-17 01:35:00,421 44k INFO Losses: [2.429964303970337, 2.277604818344116, 9.633702278137207, 22.414121627807617, 1.7027508020401], step: 138200, lr: 9.826511608001993e-05 2023-03-17 01:36:00,470 44k INFO ====> Epoch: 137, cost 369.50 s 2023-03-17 01:36:20,263 44k INFO Train Epoch: 138 [3%] 2023-03-17 01:36:20,264 44k INFO Losses: [2.4664711952209473, 2.18500018119812, 6.574722766876221, 14.945267677307129, 1.3491848707199097], step: 138400, lr: 9.825283294050992e-05 2023-03-17 01:37:30,978 44k INFO Train Epoch: 138 [23%] 2023-03-17 01:37:30,979 44k INFO Losses: [2.7440102100372314, 2.2425615787506104, 10.326831817626953, 15.27411937713623, 1.089178442955017], step: 138600, lr: 9.825283294050992e-05 2023-03-17 01:38:41,160 44k INFO Train Epoch: 138 [43%] 2023-03-17 01:38:41,161 44k INFO Losses: [2.1425397396087646, 2.2856156826019287, 10.571455955505371, 23.286603927612305, 1.2303693294525146], step: 138800, lr: 9.825283294050992e-05 2023-03-17 01:39:51,973 44k INFO Train Epoch: 138 [62%] 2023-03-17 01:39:51,974 44k INFO Losses: [2.5275769233703613, 2.198232412338257, 6.540899276733398, 19.75349235534668, 1.4199858903884888], step: 139000, lr: 9.825283294050992e-05 2023-03-17 01:39:55,062 44k INFO Saving model and optimizer state at iteration 138 to ./logs\44k\G_139000.pth 2023-03-17 01:39:55,712 44k INFO Saving model and optimizer state at iteration 138 to ./logs\44k\D_139000.pth 2023-03-17 01:39:56,305 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_136000.pth 2023-03-17 01:39:56,335 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_136000.pth 2023-03-17 01:41:06,719 44k INFO Train Epoch: 138 [82%] 2023-03-17 01:41:06,719 44k INFO Losses: [2.5045626163482666, 2.414212942123413, 8.698945045471191, 19.799230575561523, 1.5727343559265137], step: 139200, lr: 9.825283294050992e-05 2023-03-17 01:42:10,147 44k INFO ====> Epoch: 138, cost 369.68 s 2023-03-17 01:42:26,377 44k INFO Train Epoch: 139 [2%] 2023-03-17 01:42:26,377 44k INFO Losses: [2.2408905029296875, 2.3493242263793945, 13.189056396484375, 20.819915771484375, 1.6241501569747925], step: 139400, lr: 9.824055133639235e-05 2023-03-17 01:43:36,949 44k INFO Train Epoch: 139 [22%] 2023-03-17 01:43:36,949 44k INFO Losses: [2.4738245010375977, 2.2892019748687744, 7.234914779663086, 19.189268112182617, 1.4091464281082153], step: 139600, lr: 9.824055133639235e-05 2023-03-17 01:44:46,882 44k INFO Train Epoch: 139 [42%] 2023-03-17 01:44:46,883 44k INFO Losses: [2.4787230491638184, 2.102396249771118, 7.48057222366333, 18.352941513061523, 1.3798733949661255], step: 139800, lr: 9.824055133639235e-05 2023-03-17 01:45:57,805 44k INFO Train Epoch: 139 [61%] 2023-03-17 01:45:57,806 44k INFO Losses: [2.295813798904419, 2.34798526763916, 12.865368843078613, 20.838212966918945, 1.3867943286895752], step: 140000, lr: 9.824055133639235e-05 2023-03-17 01:46:00,801 44k INFO Saving model and optimizer state at iteration 139 to ./logs\44k\G_140000.pth 2023-03-17 01:46:01,489 44k INFO Saving model and optimizer state at iteration 139 to ./logs\44k\D_140000.pth 2023-03-17 01:46:02,096 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_137000.pth 2023-03-17 01:46:02,129 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_137000.pth 2023-03-17 01:47:12,478 44k INFO Train Epoch: 139 [81%] 2023-03-17 01:47:12,479 44k INFO Losses: [2.184166193008423, 2.4530320167541504, 11.028959274291992, 22.47673797607422, 1.7389177083969116], step: 140200, lr: 9.824055133639235e-05 2023-03-17 01:48:19,491 44k INFO ====> Epoch: 139, cost 369.34 s 2023-03-17 01:48:32,132 44k INFO Train Epoch: 140 [1%] 2023-03-17 01:48:32,133 44k INFO Losses: [2.4320712089538574, 2.7425172328948975, 13.605969429016113, 23.84764862060547, 1.4222296476364136], step: 140400, lr: 9.822827126747529e-05 2023-03-17 01:49:42,681 44k INFO Train Epoch: 140 [21%] 2023-03-17 01:49:42,682 44k INFO Losses: [2.40950345993042, 2.2759902477264404, 6.620016098022461, 18.145917892456055, 1.377079963684082], step: 140600, lr: 9.822827126747529e-05 2023-03-17 01:50:52,592 44k INFO Train Epoch: 140 [41%] 2023-03-17 01:50:52,592 44k INFO Losses: [2.266798496246338, 2.441028118133545, 12.391716957092285, 22.288434982299805, 1.4350391626358032], step: 140800, lr: 9.822827126747529e-05 2023-03-17 01:52:03,433 44k INFO Train Epoch: 140 [60%] 2023-03-17 01:52:03,433 44k INFO Losses: [2.484628438949585, 2.2237496376037598, 9.234139442443848, 17.915681838989258, 1.2863881587982178], step: 141000, lr: 9.822827126747529e-05 2023-03-17 01:52:06,455 44k INFO Saving model and optimizer state at iteration 140 to ./logs\44k\G_141000.pth 2023-03-17 01:52:07,148 44k INFO Saving model and optimizer state at iteration 140 to ./logs\44k\D_141000.pth 2023-03-17 01:52:07,755 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_138000.pth 2023-03-17 01:52:07,790 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_138000.pth 2023-03-17 01:53:18,031 44k INFO Train Epoch: 140 [80%] 2023-03-17 01:53:18,031 44k INFO Losses: [2.491326332092285, 2.2515761852264404, 8.954583168029785, 16.0284423828125, 1.23373544216156], step: 141200, lr: 9.822827126747529e-05 2023-03-17 01:54:28,411 44k INFO ====> Epoch: 140, cost 368.92 s 2023-03-17 01:54:37,957 44k INFO Train Epoch: 141 [0%] 2023-03-17 01:54:37,957 44k INFO Losses: [2.520181894302368, 2.1476526260375977, 9.775007247924805, 21.404401779174805, 1.3946411609649658], step: 141400, lr: 9.821599273356685e-05 2023-03-17 01:55:48,456 44k INFO Train Epoch: 141 [20%] 2023-03-17 01:55:48,456 44k INFO Losses: [2.5833911895751953, 2.2513344287872314, 11.674667358398438, 22.499195098876953, 1.2419625520706177], step: 141600, lr: 9.821599273356685e-05 2023-03-17 01:56:58,488 44k INFO Train Epoch: 141 [40%] 2023-03-17 01:56:58,488 44k INFO Losses: [2.6335062980651855, 2.3465447425842285, 7.272137641906738, 20.479202270507812, 1.2535107135772705], step: 141800, lr: 9.821599273356685e-05 2023-03-17 01:58:09,331 44k INFO Train Epoch: 141 [59%] 2023-03-17 01:58:09,331 44k INFO Losses: [2.5386199951171875, 2.2476534843444824, 12.052621841430664, 20.556564331054688, 1.1914763450622559], step: 142000, lr: 9.821599273356685e-05 2023-03-17 01:58:12,417 44k INFO Saving model and optimizer state at iteration 141 to ./logs\44k\G_142000.pth 2023-03-17 01:58:13,104 44k INFO Saving model and optimizer state at iteration 141 to ./logs\44k\D_142000.pth 2023-03-17 01:58:13,708 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_139000.pth 2023-03-17 01:58:13,747 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_139000.pth 2023-03-17 01:59:24,815 44k INFO Train Epoch: 141 [79%] 2023-03-17 01:59:24,816 44k INFO Losses: [2.5560145378112793, 2.03122615814209, 8.542130470275879, 21.85157012939453, 1.62907075881958], step: 142200, lr: 9.821599273356685e-05 2023-03-17 02:00:35,892 44k INFO Train Epoch: 141 [99%] 2023-03-17 02:00:35,893 44k INFO Losses: [2.5318126678466797, 2.252755641937256, 11.249448776245117, 22.86631965637207, 1.262173056602478], step: 142400, lr: 9.821599273356685e-05 2023-03-17 02:00:39,385 44k INFO ====> Epoch: 141, cost 370.97 s 2023-03-17 02:01:55,548 44k INFO Train Epoch: 142 [19%] 2023-03-17 02:01:55,548 44k INFO Losses: [2.56221079826355, 2.392754554748535, 7.5037312507629395, 17.940845489501953, 1.3125108480453491], step: 142600, lr: 9.820371573447515e-05 2023-03-17 02:03:05,400 44k INFO Train Epoch: 142 [39%] 2023-03-17 02:03:05,400 44k INFO Losses: [2.7892448902130127, 2.064818859100342, 10.001720428466797, 19.24372100830078, 1.3814494609832764], step: 142800, lr: 9.820371573447515e-05 2023-03-17 02:04:16,295 44k INFO Train Epoch: 142 [58%] 2023-03-17 02:04:16,295 44k INFO Losses: [2.372030258178711, 2.3001599311828613, 10.737469673156738, 20.451004028320312, 1.4634617567062378], step: 143000, lr: 9.820371573447515e-05 2023-03-17 02:04:19,397 44k INFO Saving model and optimizer state at iteration 142 to ./logs\44k\G_143000.pth 2023-03-17 02:04:20,067 44k INFO Saving model and optimizer state at iteration 142 to ./logs\44k\D_143000.pth 2023-03-17 02:04:20,667 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_140000.pth 2023-03-17 02:04:20,702 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_140000.pth 2023-03-17 02:05:30,718 44k INFO Train Epoch: 142 [78%] 2023-03-17 02:05:30,719 44k INFO Losses: [2.6565194129943848, 2.016814708709717, 7.816755294799805, 16.713903427124023, 0.9570029973983765], step: 143200, lr: 9.820371573447515e-05 2023-03-17 02:06:41,442 44k INFO Train Epoch: 142 [98%] 2023-03-17 02:06:41,442 44k INFO Losses: [2.516446352005005, 2.1491611003875732, 8.10746955871582, 20.912885665893555, 1.0939801931381226], step: 143400, lr: 9.820371573447515e-05 2023-03-17 02:06:48,526 44k INFO ====> Epoch: 142, cost 369.14 s 2023-03-17 02:08:00,932 44k INFO Train Epoch: 143 [18%] 2023-03-17 02:08:00,933 44k INFO Losses: [2.3922297954559326, 2.365651845932007, 9.851824760437012, 20.40537452697754, 0.9612472057342529], step: 143600, lr: 9.819144027000834e-05 2023-03-17 02:09:10,572 44k INFO Train Epoch: 143 [38%] 2023-03-17 02:09:10,573 44k INFO Losses: [2.3723983764648438, 2.323636054992676, 12.374360084533691, 21.383270263671875, 0.8947606086730957], step: 143800, lr: 9.819144027000834e-05 2023-03-17 02:10:21,153 44k INFO Train Epoch: 143 [57%] 2023-03-17 02:10:21,154 44k INFO Losses: [2.5713727474212646, 2.309406042098999, 7.146517753601074, 23.31093978881836, 1.8196829557418823], step: 144000, lr: 9.819144027000834e-05 2023-03-17 02:10:24,330 44k INFO Saving model and optimizer state at iteration 143 to ./logs\44k\G_144000.pth 2023-03-17 02:10:24,979 44k INFO Saving model and optimizer state at iteration 143 to ./logs\44k\D_144000.pth 2023-03-17 02:10:25,565 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_141000.pth 2023-03-17 02:10:25,599 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_141000.pth 2023-03-17 02:11:35,616 44k INFO Train Epoch: 143 [77%] 2023-03-17 02:11:35,617 44k INFO Losses: [2.6384267807006836, 2.124640464782715, 8.701159477233887, 18.232786178588867, 1.1600104570388794], step: 144200, lr: 9.819144027000834e-05 2023-03-17 02:12:45,896 44k INFO Train Epoch: 143 [97%] 2023-03-17 02:12:45,896 44k INFO Losses: [2.4628560543060303, 2.2618296146392822, 9.829648971557617, 18.562997817993164, 1.2613706588745117], step: 144400, lr: 9.819144027000834e-05 2023-03-17 02:12:56,472 44k INFO ====> Epoch: 143, cost 367.95 s 2023-03-17 02:14:05,498 44k INFO Train Epoch: 144 [17%] 2023-03-17 02:14:05,498 44k INFO Losses: [2.472198724746704, 2.0728559494018555, 12.136018753051758, 23.403667449951172, 0.9892506003379822], step: 144600, lr: 9.817916633997459e-05 2023-03-17 02:15:15,257 44k INFO Train Epoch: 144 [37%] 2023-03-17 02:15:15,257 44k INFO Losses: [2.4751126766204834, 2.4535820484161377, 11.15581226348877, 17.328481674194336, 0.8765872716903687], step: 144800, lr: 9.817916633997459e-05 2023-03-17 02:16:26,056 44k INFO Train Epoch: 144 [56%] 2023-03-17 02:16:26,056 44k INFO Losses: [2.3363850116729736, 2.5289101600646973, 9.743159294128418, 21.447858810424805, 1.6044301986694336], step: 145000, lr: 9.817916633997459e-05 2023-03-17 02:16:29,155 44k INFO Saving model and optimizer state at iteration 144 to ./logs\44k\G_145000.pth 2023-03-17 02:16:29,860 44k INFO Saving model and optimizer state at iteration 144 to ./logs\44k\D_145000.pth 2023-03-17 02:16:30,461 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_142000.pth 2023-03-17 02:16:30,491 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_142000.pth 2023-03-17 02:17:40,658 44k INFO Train Epoch: 144 [76%] 2023-03-17 02:17:40,658 44k INFO Losses: [2.337587356567383, 2.418259859085083, 13.480792999267578, 21.0185489654541, 0.9377906918525696], step: 145200, lr: 9.817916633997459e-05 2023-03-17 02:18:51,368 44k INFO Train Epoch: 144 [96%] 2023-03-17 02:18:51,369 44k INFO Losses: [2.301130771636963, 2.307330846786499, 13.998675346374512, 21.231788635253906, 1.3912228345870972], step: 145400, lr: 9.817916633997459e-05 2023-03-17 02:19:05,527 44k INFO ====> Epoch: 144, cost 369.06 s 2023-03-17 02:20:11,239 44k INFO Train Epoch: 145 [16%] 2023-03-17 02:20:11,239 44k INFO Losses: [2.3901329040527344, 2.578500270843506, 11.800321578979492, 19.551559448242188, 1.4697291851043701], step: 145600, lr: 9.816689394418209e-05 2023-03-17 02:21:21,052 44k INFO Train Epoch: 145 [36%] 2023-03-17 02:21:21,052 44k INFO Losses: [2.4902114868164062, 2.051246404647827, 7.082446098327637, 13.617019653320312, 1.0942580699920654], step: 145800, lr: 9.816689394418209e-05 2023-03-17 02:22:31,778 44k INFO Train Epoch: 145 [55%] 2023-03-17 02:22:31,779 44k INFO Losses: [2.4638302326202393, 2.2407705783843994, 8.87726879119873, 19.708955764770508, 1.3834363222122192], step: 146000, lr: 9.816689394418209e-05 2023-03-17 02:22:34,959 44k INFO Saving model and optimizer state at iteration 145 to ./logs\44k\G_146000.pth 2023-03-17 02:22:35,645 44k INFO Saving model and optimizer state at iteration 145 to ./logs\44k\D_146000.pth 2023-03-17 02:22:36,300 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_143000.pth 2023-03-17 02:22:36,332 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_143000.pth 2023-03-17 02:23:46,501 44k INFO Train Epoch: 145 [75%] 2023-03-17 02:23:46,501 44k INFO Losses: [2.6060454845428467, 2.4841930866241455, 11.812039375305176, 23.66813087463379, 1.299347162246704], step: 146200, lr: 9.816689394418209e-05 2023-03-17 02:24:57,065 44k INFO Train Epoch: 145 [95%] 2023-03-17 02:24:57,065 44k INFO Losses: [2.255368709564209, 2.4912664890289307, 10.848477363586426, 22.426483154296875, 1.312187910079956], step: 146400, lr: 9.816689394418209e-05 2023-03-17 02:25:14,742 44k INFO ====> Epoch: 145, cost 369.22 s 2023-03-17 02:26:16,758 44k INFO Train Epoch: 146 [15%] 2023-03-17 02:26:16,759 44k INFO Losses: [2.8345890045166016, 2.072849750518799, 5.509282112121582, 15.329811096191406, 1.2686318159103394], step: 146600, lr: 9.815462308243906e-05 2023-03-17 02:27:26,657 44k INFO Train Epoch: 146 [35%] 2023-03-17 02:27:26,657 44k INFO Losses: [2.5242462158203125, 2.463927745819092, 10.107146263122559, 17.082233428955078, 1.0591872930526733], step: 146800, lr: 9.815462308243906e-05 2023-03-17 02:28:37,296 44k INFO Train Epoch: 146 [54%] 2023-03-17 02:28:37,296 44k INFO Losses: [2.4842188358306885, 2.1964023113250732, 11.136263847351074, 21.42406463623047, 1.3546452522277832], step: 147000, lr: 9.815462308243906e-05 2023-03-17 02:28:40,468 44k INFO Saving model and optimizer state at iteration 146 to ./logs\44k\G_147000.pth 2023-03-17 02:28:41,129 44k INFO Saving model and optimizer state at iteration 146 to ./logs\44k\D_147000.pth 2023-03-17 02:28:41,783 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_144000.pth 2023-03-17 02:28:41,816 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_144000.pth 2023-03-17 02:29:52,035 44k INFO Train Epoch: 146 [74%] 2023-03-17 02:29:52,036 44k INFO Losses: [2.6272642612457275, 2.0447070598602295, 5.085176944732666, 12.158442497253418, 1.2828593254089355], step: 147200, lr: 9.815462308243906e-05 2023-03-17 02:31:02,538 44k INFO Train Epoch: 146 [94%] 2023-03-17 02:31:02,539 44k INFO Losses: [2.5182833671569824, 2.3965141773223877, 8.652215957641602, 14.659794807434082, 0.9146302938461304], step: 147400, lr: 9.815462308243906e-05 2023-03-17 02:31:23,716 44k INFO ====> Epoch: 146, cost 368.97 s 2023-03-17 02:32:22,329 44k INFO Train Epoch: 147 [14%] 2023-03-17 02:32:22,330 44k INFO Losses: [2.4641647338867188, 2.2374963760375977, 11.4824857711792, 23.132169723510742, 1.20614492893219], step: 147600, lr: 9.814235375455375e-05 2023-03-17 02:33:34,627 44k INFO Train Epoch: 147 [34%] 2023-03-17 02:33:34,627 44k INFO Losses: [2.3208370208740234, 2.442451238632202, 9.399152755737305, 18.20328140258789, 1.3080898523330688], step: 147800, lr: 9.814235375455375e-05 2023-03-17 02:34:47,043 44k INFO Train Epoch: 147 [53%] 2023-03-17 02:34:47,044 44k INFO Losses: [2.4444210529327393, 2.481877326965332, 7.134228229522705, 16.215221405029297, 1.2470393180847168], step: 148000, lr: 9.814235375455375e-05 2023-03-17 02:34:50,124 44k INFO Saving model and optimizer state at iteration 147 to ./logs\44k\G_148000.pth 2023-03-17 02:34:50,810 44k INFO Saving model and optimizer state at iteration 147 to ./logs\44k\D_148000.pth 2023-03-17 02:34:51,409 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_145000.pth 2023-03-17 02:34:51,445 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_145000.pth 2023-03-17 02:36:02,247 44k INFO Train Epoch: 147 [73%] 2023-03-17 02:36:02,247 44k INFO Losses: [2.774273157119751, 1.8308204412460327, 10.809883117675781, 18.927326202392578, 1.1131304502487183], step: 148200, lr: 9.814235375455375e-05 2023-03-17 02:37:13,455 44k INFO Train Epoch: 147 [93%] 2023-03-17 02:37:13,455 44k INFO Losses: [2.3471131324768066, 2.6782352924346924, 12.661517143249512, 21.266592025756836, 1.2054786682128906], step: 148400, lr: 9.814235375455375e-05 2023-03-17 02:37:38,254 44k INFO ====> Epoch: 147, cost 374.54 s 2023-03-17 02:38:33,837 44k INFO Train Epoch: 148 [13%] 2023-03-17 02:38:33,837 44k INFO Losses: [2.460023880004883, 2.188377857208252, 9.611383438110352, 16.891197204589844, 1.0549757480621338], step: 148600, lr: 9.813008596033443e-05 2023-03-17 02:39:44,036 44k INFO Train Epoch: 148 [33%] 2023-03-17 02:39:44,037 44k INFO Losses: [2.5768251419067383, 2.1704776287078857, 7.759487628936768, 16.657909393310547, 1.0242143869400024], step: 148800, lr: 9.813008596033443e-05 2023-03-17 02:40:54,707 44k INFO Train Epoch: 148 [52%] 2023-03-17 02:40:54,707 44k INFO Losses: [2.427022695541382, 2.2065563201904297, 9.859994888305664, 19.965591430664062, 1.0910308361053467], step: 149000, lr: 9.813008596033443e-05 2023-03-17 02:40:57,693 44k INFO Saving model and optimizer state at iteration 148 to ./logs\44k\G_149000.pth 2023-03-17 02:40:58,364 44k INFO Saving model and optimizer state at iteration 148 to ./logs\44k\D_149000.pth 2023-03-17 02:40:58,971 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_146000.pth 2023-03-17 02:40:59,003 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_146000.pth 2023-03-17 02:42:09,848 44k INFO Train Epoch: 148 [72%] 2023-03-17 02:42:09,848 44k INFO Losses: [2.3886189460754395, 2.518155097961426, 9.367219924926758, 20.939302444458008, 1.3177533149719238], step: 149200, lr: 9.813008596033443e-05 2023-03-17 02:43:20,846 44k INFO Train Epoch: 148 [92%] 2023-03-17 02:43:20,847 44k INFO Losses: [2.5139198303222656, 2.3592448234558105, 11.756673812866211, 22.758563995361328, 1.5792471170425415], step: 149400, lr: 9.813008596033443e-05 2023-03-17 02:43:49,259 44k INFO ====> Epoch: 148, cost 371.00 s 2023-03-17 02:44:40,973 44k INFO Train Epoch: 149 [12%] 2023-03-17 02:44:40,973 44k INFO Losses: [2.703355073928833, 2.2882187366485596, 10.118617057800293, 20.18439483642578, 1.1898094415664673], step: 149600, lr: 9.811781969958938e-05 2023-03-17 02:45:51,245 44k INFO Train Epoch: 149 [32%] 2023-03-17 02:45:51,245 44k INFO Losses: [2.3939573764801025, 2.3266475200653076, 12.866251945495605, 20.62938117980957, 1.2919466495513916], step: 149800, lr: 9.811781969958938e-05 2023-03-17 02:47:01,910 44k INFO Train Epoch: 149 [51%] 2023-03-17 02:47:01,910 44k INFO Losses: [2.416184902191162, 2.519071102142334, 11.054637908935547, 21.003456115722656, 1.39250910282135], step: 150000, lr: 9.811781969958938e-05 2023-03-17 02:47:04,913 44k INFO Saving model and optimizer state at iteration 149 to ./logs\44k\G_150000.pth 2023-03-17 02:47:05,567 44k INFO Saving model and optimizer state at iteration 149 to ./logs\44k\D_150000.pth 2023-03-17 02:47:06,190 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_147000.pth 2023-03-17 02:47:06,225 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_147000.pth 2023-03-17 02:48:17,394 44k INFO Train Epoch: 149 [71%] 2023-03-17 02:48:17,394 44k INFO Losses: [2.485835552215576, 2.3074214458465576, 7.019090175628662, 16.389488220214844, 1.2632291316986084], step: 150200, lr: 9.811781969958938e-05 2023-03-17 02:49:28,326 44k INFO Train Epoch: 149 [91%] 2023-03-17 02:49:28,326 44k INFO Losses: [2.573530435562134, 2.116640090942383, 10.353866577148438, 18.548192977905273, 1.284400463104248], step: 150400, lr: 9.811781969958938e-05 2023-03-17 02:50:00,126 44k INFO ====> Epoch: 149, cost 370.87 s 2023-03-17 02:50:48,343 44k INFO Train Epoch: 150 [11%] 2023-03-17 02:50:48,344 44k INFO Losses: [2.2846150398254395, 2.429863929748535, 12.692280769348145, 21.730628967285156, 1.607886791229248], step: 150600, lr: 9.810555497212693e-05 2023-03-17 02:51:58,436 44k INFO Train Epoch: 150 [31%] 2023-03-17 02:51:58,437 44k INFO Losses: [2.5696561336517334, 2.4303181171417236, 14.175826072692871, 22.565196990966797, 1.4116381406784058], step: 150800, lr: 9.810555497212693e-05 2023-03-17 02:53:09,269 44k INFO Train Epoch: 150 [50%] 2023-03-17 02:53:09,269 44k INFO Losses: [2.206298351287842, 2.6214494705200195, 14.689628601074219, 23.046512603759766, 1.4627023935317993], step: 151000, lr: 9.810555497212693e-05 2023-03-17 02:53:12,261 44k INFO Saving model and optimizer state at iteration 150 to ./logs\44k\G_151000.pth 2023-03-17 02:53:12,918 44k INFO Saving model and optimizer state at iteration 150 to ./logs\44k\D_151000.pth 2023-03-17 02:53:13,538 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_148000.pth 2023-03-17 02:53:13,574 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_148000.pth 2023-03-17 02:54:24,266 44k INFO Train Epoch: 150 [70%] 2023-03-17 02:54:24,267 44k INFO Losses: [2.3353240489959717, 2.360241413116455, 9.465228080749512, 18.809062957763672, 1.6291866302490234], step: 151200, lr: 9.810555497212693e-05 2023-03-17 02:55:35,125 44k INFO Train Epoch: 150 [90%] 2023-03-17 02:55:35,125 44k INFO Losses: [2.4989447593688965, 2.271209716796875, 13.308920860290527, 20.406789779663086, 1.171337366104126], step: 151400, lr: 9.810555497212693e-05 2023-03-17 02:56:10,564 44k INFO ====> Epoch: 150, cost 370.44 s 2023-03-17 02:56:55,251 44k INFO Train Epoch: 151 [10%] 2023-03-17 02:56:55,252 44k INFO Losses: [2.2104814052581787, 2.5505270957946777, 13.22044849395752, 22.76844596862793, 1.4571675062179565], step: 151600, lr: 9.809329177775541e-05 2023-03-17 02:58:05,368 44k INFO Train Epoch: 151 [30%] 2023-03-17 02:58:05,369 44k INFO Losses: [2.5568158626556396, 2.1889026165008545, 10.244500160217285, 18.93541717529297, 1.574784755706787], step: 151800, lr: 9.809329177775541e-05 2023-03-17 02:59:16,085 44k INFO Train Epoch: 151 [50%] 2023-03-17 02:59:16,086 44k INFO Losses: [2.497706413269043, 2.369779109954834, 10.073479652404785, 22.2327938079834, 1.15057373046875], step: 152000, lr: 9.809329177775541e-05 2023-03-17 02:59:18,971 44k INFO Saving model and optimizer state at iteration 151 to ./logs\44k\G_152000.pth 2023-03-17 02:59:19,714 44k INFO Saving model and optimizer state at iteration 151 to ./logs\44k\D_152000.pth 2023-03-17 02:59:20,314 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_149000.pth 2023-03-17 02:59:20,350 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_149000.pth 2023-03-17 03:00:31,000 44k INFO Train Epoch: 151 [69%] 2023-03-17 03:00:31,001 44k INFO Losses: [2.6747615337371826, 2.314469337463379, 10.588971138000488, 16.291154861450195, 1.1112359762191772], step: 152200, lr: 9.809329177775541e-05 2023-03-17 03:01:41,886 44k INFO Train Epoch: 151 [89%] 2023-03-17 03:01:41,887 44k INFO Losses: [2.299777030944824, 2.311330556869507, 9.698742866516113, 18.1346492767334, 1.0566151142120361], step: 152400, lr: 9.809329177775541e-05 2023-03-17 03:02:20,832 44k INFO ====> Epoch: 151, cost 370.27 s 2023-03-17 03:03:02,092 44k INFO Train Epoch: 152 [9%] 2023-03-17 03:03:02,093 44k INFO Losses: [2.327758312225342, 2.091064453125, 11.724469184875488, 18.56585693359375, 1.1287027597427368], step: 152600, lr: 9.808103011628319e-05 2023-03-17 03:04:12,679 44k INFO Train Epoch: 152 [29%] 2023-03-17 03:04:12,679 44k INFO Losses: [2.5169479846954346, 2.1583595275878906, 8.051013946533203, 17.721345901489258, 1.1389167308807373], step: 152800, lr: 9.808103011628319e-05 2023-03-17 03:05:23,334 44k INFO Train Epoch: 152 [49%] 2023-03-17 03:05:23,335 44k INFO Losses: [2.5254907608032227, 2.331249237060547, 10.00870132446289, 20.5595645904541, 1.3657342195510864], step: 153000, lr: 9.808103011628319e-05 2023-03-17 03:05:26,266 44k INFO Saving model and optimizer state at iteration 152 to ./logs\44k\G_153000.pth 2023-03-17 03:05:27,009 44k INFO Saving model and optimizer state at iteration 152 to ./logs\44k\D_153000.pth 2023-03-17 03:05:27,610 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_150000.pth 2023-03-17 03:05:27,644 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_150000.pth 2023-03-17 03:06:38,672 44k INFO Train Epoch: 152 [68%] 2023-03-17 03:06:38,673 44k INFO Losses: [2.5728394985198975, 2.3807435035705566, 10.748559951782227, 19.518705368041992, 1.0946861505508423], step: 153200, lr: 9.808103011628319e-05 2023-03-17 03:07:49,725 44k INFO Train Epoch: 152 [88%] 2023-03-17 03:07:49,726 44k INFO Losses: [2.403559446334839, 2.45613694190979, 10.849448204040527, 14.419554710388184, 0.9949098825454712], step: 153400, lr: 9.808103011628319e-05 2023-03-17 03:08:32,246 44k INFO ====> Epoch: 152, cost 371.41 s 2023-03-17 03:09:09,893 44k INFO Train Epoch: 153 [8%] 2023-03-17 03:09:09,894 44k INFO Losses: [2.4459877014160156, 2.252241611480713, 9.409289360046387, 13.0472412109375, 1.1094169616699219], step: 153600, lr: 9.806876998751865e-05 2023-03-17 03:10:20,542 44k INFO Train Epoch: 153 [28%] 2023-03-17 03:10:20,542 44k INFO Losses: [2.4028172492980957, 2.3238260746002197, 6.204263687133789, 18.967802047729492, 1.3916704654693604], step: 153800, lr: 9.806876998751865e-05 2023-03-17 03:11:31,305 44k INFO Train Epoch: 153 [48%] 2023-03-17 03:11:31,305 44k INFO Losses: [2.4505317211151123, 2.1691322326660156, 12.112874031066895, 17.919849395751953, 1.0298407077789307], step: 154000, lr: 9.806876998751865e-05 2023-03-17 03:11:34,248 44k INFO Saving model and optimizer state at iteration 153 to ./logs\44k\G_154000.pth 2023-03-17 03:11:34,934 44k INFO Saving model and optimizer state at iteration 153 to ./logs\44k\D_154000.pth 2023-03-17 03:11:35,550 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_151000.pth 2023-03-17 03:11:35,586 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_151000.pth 2023-03-17 03:12:46,950 44k INFO Train Epoch: 153 [67%] 2023-03-17 03:12:46,950 44k INFO Losses: [2.137655258178711, 2.46486234664917, 13.419692039489746, 21.622934341430664, 1.0811878442764282], step: 154200, lr: 9.806876998751865e-05 2023-03-17 03:13:59,893 44k INFO Train Epoch: 153 [87%] 2023-03-17 03:13:59,894 44k INFO Losses: [2.2882468700408936, 2.4351203441619873, 11.837101936340332, 22.319473266601562, 1.1949949264526367], step: 154400, lr: 9.806876998751865e-05 2023-03-17 03:14:46,370 44k INFO ====> Epoch: 153, cost 374.12 s 2023-03-17 03:15:20,402 44k INFO Train Epoch: 154 [7%] 2023-03-17 03:15:20,403 44k INFO Losses: [2.453604221343994, 2.2107958793640137, 12.972556114196777, 19.72896957397461, 1.5030136108398438], step: 154600, lr: 9.80565113912702e-05 2023-03-17 03:16:30,749 44k INFO Train Epoch: 154 [27%] 2023-03-17 03:16:30,750 44k INFO Losses: [2.5383644104003906, 2.3127903938293457, 9.835686683654785, 18.817386627197266, 1.4010090827941895], step: 154800, lr: 9.80565113912702e-05 2023-03-17 03:17:41,453 44k INFO Train Epoch: 154 [47%] 2023-03-17 03:17:41,453 44k INFO Losses: [2.3527235984802246, 2.2611005306243896, 13.445037841796875, 22.48185920715332, 1.582826018333435], step: 155000, lr: 9.80565113912702e-05 2023-03-17 03:17:44,502 44k INFO Saving model and optimizer state at iteration 154 to ./logs\44k\G_155000.pth 2023-03-17 03:17:45,157 44k INFO Saving model and optimizer state at iteration 154 to ./logs\44k\D_155000.pth 2023-03-17 03:17:45,761 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_152000.pth 2023-03-17 03:17:45,795 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_152000.pth 2023-03-17 03:18:56,452 44k INFO Train Epoch: 154 [66%] 2023-03-17 03:18:56,453 44k INFO Losses: [2.877025604248047, 2.0072073936462402, 8.932306289672852, 18.183000564575195, 1.082656979560852], step: 155200, lr: 9.80565113912702e-05 2023-03-17 03:20:07,160 44k INFO Train Epoch: 154 [86%] 2023-03-17 03:20:07,160 44k INFO Losses: [2.264820098876953, 2.4126968383789062, 8.17087459564209, 20.098752975463867, 1.2217382192611694], step: 155400, lr: 9.80565113912702e-05 2023-03-17 03:20:56,515 44k INFO ====> Epoch: 154, cost 370.14 s 2023-03-17 03:21:27,016 44k INFO Train Epoch: 155 [6%] 2023-03-17 03:21:27,017 44k INFO Losses: [2.184081792831421, 2.3568177223205566, 11.463321685791016, 21.984704971313477, 1.2828794717788696], step: 155600, lr: 9.804425432734629e-05 2023-03-17 03:22:37,489 44k INFO Train Epoch: 155 [26%] 2023-03-17 03:22:37,489 44k INFO Losses: [2.4620485305786133, 2.3575634956359863, 13.929765701293945, 20.664743423461914, 1.1776976585388184], step: 155800, lr: 9.804425432734629e-05 2023-03-17 03:23:48,079 44k INFO Train Epoch: 155 [46%] 2023-03-17 03:23:48,079 44k INFO Losses: [2.558061122894287, 2.1859400272369385, 8.454398155212402, 17.873615264892578, 1.2040057182312012], step: 156000, lr: 9.804425432734629e-05 2023-03-17 03:23:51,076 44k INFO Saving model and optimizer state at iteration 155 to ./logs\44k\G_156000.pth 2023-03-17 03:23:51,747 44k INFO Saving model and optimizer state at iteration 155 to ./logs\44k\D_156000.pth 2023-03-17 03:23:52,341 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_153000.pth 2023-03-17 03:23:52,375 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_153000.pth 2023-03-17 03:25:03,049 44k INFO Train Epoch: 155 [65%] 2023-03-17 03:25:03,050 44k INFO Losses: [2.2180354595184326, 2.2497916221618652, 14.834362983703613, 22.76137924194336, 1.6323806047439575], step: 156200, lr: 9.804425432734629e-05 2023-03-17 03:26:14,117 44k INFO Train Epoch: 155 [85%] 2023-03-17 03:26:14,117 44k INFO Losses: [2.4419310092926025, 2.4645957946777344, 6.570553302764893, 17.671045303344727, 1.269668459892273], step: 156400, lr: 9.804425432734629e-05 2023-03-17 03:27:08,185 44k INFO ====> Epoch: 155, cost 371.67 s 2023-03-17 03:27:35,592 44k INFO Train Epoch: 156 [5%] 2023-03-17 03:27:35,593 44k INFO Losses: [2.4301724433898926, 2.2006101608276367, 12.37120532989502, 20.548810958862305, 1.3266189098358154], step: 156600, lr: 9.803199879555537e-05 2023-03-17 03:28:46,126 44k INFO Train Epoch: 156 [25%] 2023-03-17 03:28:46,127 44k INFO Losses: [2.7203073501586914, 2.1223244667053223, 7.122328758239746, 17.335546493530273, 1.101303219795227], step: 156800, lr: 9.803199879555537e-05 2023-03-17 03:29:56,589 44k INFO Train Epoch: 156 [45%] 2023-03-17 03:29:56,590 44k INFO Losses: [2.5051965713500977, 2.2930829524993896, 9.417571067810059, 20.740907669067383, 1.3742021322250366], step: 157000, lr: 9.803199879555537e-05 2023-03-17 03:29:59,592 44k INFO Saving model and optimizer state at iteration 156 to ./logs\44k\G_157000.pth 2023-03-17 03:30:00,257 44k INFO Saving model and optimizer state at iteration 156 to ./logs\44k\D_157000.pth 2023-03-17 03:30:00,865 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_154000.pth 2023-03-17 03:30:00,911 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_154000.pth 2023-03-17 03:31:11,597 44k INFO Train Epoch: 156 [64%] 2023-03-17 03:31:11,597 44k INFO Losses: [2.608232259750366, 2.1466875076293945, 10.086484909057617, 17.03070831298828, 0.880395770072937], step: 157200, lr: 9.803199879555537e-05 2023-03-17 03:32:22,360 44k INFO Train Epoch: 156 [84%] 2023-03-17 03:32:22,360 44k INFO Losses: [2.5604209899902344, 2.174489736557007, 10.313692092895508, 21.056346893310547, 1.3102695941925049], step: 157400, lr: 9.803199879555537e-05 2023-03-17 03:33:18,937 44k INFO ====> Epoch: 156, cost 370.75 s 2023-03-17 03:33:42,213 44k INFO Train Epoch: 157 [4%] 2023-03-17 03:33:42,213 44k INFO Losses: [2.5864505767822266, 2.419405937194824, 12.394896507263184, 24.01335334777832, 1.4563125371932983], step: 157600, lr: 9.801974479570593e-05 2023-03-17 03:34:52,794 44k INFO Train Epoch: 157 [24%] 2023-03-17 03:34:52,794 44k INFO Losses: [2.3079378604888916, 2.711482286453247, 11.82590103149414, 22.335294723510742, 1.5150401592254639], step: 157800, lr: 9.801974479570593e-05 2023-03-17 03:36:03,302 44k INFO Train Epoch: 157 [44%] 2023-03-17 03:36:03,303 44k INFO Losses: [2.2850494384765625, 2.2497165203094482, 10.524386405944824, 18.199600219726562, 1.4430677890777588], step: 158000, lr: 9.801974479570593e-05 2023-03-17 03:36:06,299 44k INFO Saving model and optimizer state at iteration 157 to ./logs\44k\G_158000.pth 2023-03-17 03:36:06,959 44k INFO Saving model and optimizer state at iteration 157 to ./logs\44k\D_158000.pth 2023-03-17 03:36:07,603 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_155000.pth 2023-03-17 03:36:07,637 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_155000.pth 2023-03-17 03:37:18,385 44k INFO Train Epoch: 157 [63%] 2023-03-17 03:37:18,386 44k INFO Losses: [2.521307945251465, 2.298090934753418, 11.54759693145752, 22.1992244720459, 1.2205640077590942], step: 158200, lr: 9.801974479570593e-05 2023-03-17 03:38:29,051 44k INFO Train Epoch: 157 [83%] 2023-03-17 03:38:29,052 44k INFO Losses: [2.4279329776763916, 2.4230666160583496, 7.6124725341796875, 21.82315444946289, 1.5975395441055298], step: 158400, lr: 9.801974479570593e-05 2023-03-17 03:39:29,145 44k INFO ====> Epoch: 157, cost 370.21 s 2023-03-17 03:39:48,779 44k INFO Train Epoch: 158 [3%] 2023-03-17 03:39:48,779 44k INFO Losses: [2.603762626647949, 2.0978477001190186, 9.749545097351074, 20.098342895507812, 1.4050651788711548], step: 158600, lr: 9.800749232760646e-05 2023-03-17 03:40:59,339 44k INFO Train Epoch: 158 [23%] 2023-03-17 03:40:59,339 44k INFO Losses: [2.299445629119873, 2.5088696479797363, 7.275696277618408, 16.942094802856445, 1.2084721326828003], step: 158800, lr: 9.800749232760646e-05 2023-03-17 03:42:09,737 44k INFO Train Epoch: 158 [43%] 2023-03-17 03:42:09,737 44k INFO Losses: [2.444998264312744, 2.193211317062378, 9.808513641357422, 21.087167739868164, 1.3585577011108398], step: 159000, lr: 9.800749232760646e-05 2023-03-17 03:42:12,619 44k INFO Saving model and optimizer state at iteration 158 to ./logs\44k\G_159000.pth 2023-03-17 03:42:13,367 44k INFO Saving model and optimizer state at iteration 158 to ./logs\44k\D_159000.pth 2023-03-17 03:42:13,978 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_156000.pth 2023-03-17 03:42:14,016 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_156000.pth 2023-03-17 03:43:24,565 44k INFO Train Epoch: 158 [62%] 2023-03-17 03:43:24,565 44k INFO Losses: [2.559319019317627, 2.231544256210327, 10.777417182922363, 18.406160354614258, 1.2406738996505737], step: 159200, lr: 9.800749232760646e-05 2023-03-17 03:44:35,284 44k INFO Train Epoch: 158 [82%] 2023-03-17 03:44:35,285 44k INFO Losses: [2.29837965965271, 2.6355679035186768, 13.215998649597168, 24.7915096282959, 1.0889911651611328], step: 159400, lr: 9.800749232760646e-05 2023-03-17 03:45:38,911 44k INFO ====> Epoch: 158, cost 369.77 s 2023-03-17 03:45:54,987 44k INFO Train Epoch: 159 [2%] 2023-03-17 03:45:54,987 44k INFO Losses: [2.1696994304656982, 2.288760185241699, 14.756003379821777, 24.101898193359375, 1.6770610809326172], step: 159600, lr: 9.79952413910655e-05 2023-03-17 03:47:05,754 44k INFO Train Epoch: 159 [22%] 2023-03-17 03:47:05,755 44k INFO Losses: [2.3998844623565674, 2.445526361465454, 9.13878345489502, 19.070289611816406, 1.018646001815796], step: 159800, lr: 9.79952413910655e-05 2023-03-17 03:48:16,113 44k INFO Train Epoch: 159 [42%] 2023-03-17 03:48:16,114 44k INFO Losses: [2.6659576892852783, 2.127209424972534, 8.128349304199219, 17.561676025390625, 1.3006987571716309], step: 160000, lr: 9.79952413910655e-05 2023-03-17 03:48:19,072 44k INFO Saving model and optimizer state at iteration 159 to ./logs\44k\G_160000.pth 2023-03-17 03:48:19,791 44k INFO Saving model and optimizer state at iteration 159 to ./logs\44k\D_160000.pth 2023-03-17 03:48:20,394 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_157000.pth 2023-03-17 03:48:20,434 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_157000.pth 2023-03-17 03:49:31,119 44k INFO Train Epoch: 159 [61%] 2023-03-17 03:49:31,119 44k INFO Losses: [2.6048693656921387, 2.468010425567627, 13.791705131530762, 21.464387893676758, 1.3984739780426025], step: 160200, lr: 9.79952413910655e-05 2023-03-17 03:50:41,778 44k INFO Train Epoch: 159 [81%] 2023-03-17 03:50:41,779 44k INFO Losses: [2.608167886734009, 2.2798733711242676, 15.77651596069336, 21.902111053466797, 1.3965251445770264], step: 160400, lr: 9.79952413910655e-05 2023-03-17 03:51:48,994 44k INFO ====> Epoch: 159, cost 370.08 s 2023-03-17 03:52:01,615 44k INFO Train Epoch: 160 [1%] 2023-03-17 03:52:01,616 44k INFO Losses: [2.7711474895477295, 2.1932950019836426, 10.245020866394043, 21.873340606689453, 1.1223422288894653], step: 160600, lr: 9.798299198589162e-05 2023-03-17 03:53:12,391 44k INFO Train Epoch: 160 [21%] 2023-03-17 03:53:12,392 44k INFO Losses: [2.4442756175994873, 2.236901044845581, 6.293633937835693, 19.606666564941406, 1.4831857681274414], step: 160800, lr: 9.798299198589162e-05 2023-03-17 03:54:22,593 44k INFO Train Epoch: 160 [41%] 2023-03-17 03:54:22,593 44k INFO Losses: [2.688000202178955, 2.4710288047790527, 9.657411575317383, 19.78565788269043, 1.3106321096420288], step: 161000, lr: 9.798299198589162e-05 2023-03-17 03:54:25,528 44k INFO Saving model and optimizer state at iteration 160 to ./logs\44k\G_161000.pth 2023-03-17 03:54:26,195 44k INFO Saving model and optimizer state at iteration 160 to ./logs\44k\D_161000.pth 2023-03-17 03:54:26,809 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_158000.pth 2023-03-17 03:54:26,845 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_158000.pth 2023-03-17 03:55:37,610 44k INFO Train Epoch: 160 [60%] 2023-03-17 03:55:37,610 44k INFO Losses: [2.4663591384887695, 2.4280076026916504, 7.977385520935059, 14.1154203414917, 0.9202083349227905], step: 161200, lr: 9.798299198589162e-05 2023-03-17 03:56:48,283 44k INFO Train Epoch: 160 [80%] 2023-03-17 03:56:48,283 44k INFO Losses: [2.2753286361694336, 2.413290500640869, 10.32357120513916, 19.149131774902344, 1.0347225666046143], step: 161400, lr: 9.798299198589162e-05 2023-03-17 03:57:58,967 44k INFO ====> Epoch: 160, cost 369.97 s 2023-03-17 03:58:08,101 44k INFO Train Epoch: 161 [0%] 2023-03-17 03:58:08,101 44k INFO Losses: [2.80898118019104, 2.002685785293579, 8.439790725708008, 18.249692916870117, 1.1688997745513916], step: 161600, lr: 9.797074411189339e-05 2023-03-17 03:59:18,826 44k INFO Train Epoch: 161 [20%] 2023-03-17 03:59:18,827 44k INFO Losses: [2.447150230407715, 2.3052265644073486, 10.589366912841797, 23.05910301208496, 1.4268662929534912], step: 161800, lr: 9.797074411189339e-05 2023-03-17 04:00:29,030 44k INFO Train Epoch: 161 [40%] 2023-03-17 04:00:29,030 44k INFO Losses: [2.496339797973633, 2.6373887062072754, 10.833412170410156, 22.541166305541992, 1.5368038415908813], step: 162000, lr: 9.797074411189339e-05 2023-03-17 04:00:31,966 44k INFO Saving model and optimizer state at iteration 161 to ./logs\44k\G_162000.pth 2023-03-17 04:00:32,623 44k INFO Saving model and optimizer state at iteration 161 to ./logs\44k\D_162000.pth 2023-03-17 04:00:33,246 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_159000.pth 2023-03-17 04:00:33,283 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_159000.pth 2023-03-17 04:01:44,243 44k INFO Train Epoch: 161 [59%] 2023-03-17 04:01:44,244 44k INFO Losses: [2.328559398651123, 2.4431774616241455, 14.038437843322754, 22.991012573242188, 1.429046630859375], step: 162200, lr: 9.797074411189339e-05 2023-03-17 04:02:54,882 44k INFO Train Epoch: 161 [79%] 2023-03-17 04:02:54,883 44k INFO Losses: [2.4306647777557373, 2.1232967376708984, 7.930111408233643, 18.893526077270508, 1.3822845220565796], step: 162400, lr: 9.797074411189339e-05 2023-03-17 04:04:05,824 44k INFO Train Epoch: 161 [99%] 2023-03-17 04:04:05,825 44k INFO Losses: [2.4105262756347656, 2.241891622543335, 11.0470609664917, 20.676149368286133, 1.3853471279144287], step: 162600, lr: 9.797074411189339e-05 2023-03-17 04:04:09,364 44k INFO ====> Epoch: 161, cost 370.40 s 2023-03-17 04:05:25,484 44k INFO Train Epoch: 162 [19%] 2023-03-17 04:05:25,484 44k INFO Losses: [2.5325729846954346, 2.203444480895996, 8.331308364868164, 19.08558464050293, 1.2152752876281738], step: 162800, lr: 9.795849776887939e-05 2023-03-17 04:06:35,633 44k INFO Train Epoch: 162 [39%] 2023-03-17 04:06:35,633 44k INFO Losses: [2.393533229827881, 2.4701995849609375, 12.85317611694336, 25.953603744506836, 1.0487861633300781], step: 163000, lr: 9.795849776887939e-05 2023-03-17 04:06:38,639 44k INFO Saving model and optimizer state at iteration 162 to ./logs\44k\G_163000.pth 2023-03-17 04:06:39,301 44k INFO Saving model and optimizer state at iteration 162 to ./logs\44k\D_163000.pth 2023-03-17 04:06:39,924 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_160000.pth 2023-03-17 04:06:39,957 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_160000.pth 2023-03-17 04:07:50,925 44k INFO Train Epoch: 162 [58%] 2023-03-17 04:07:50,926 44k INFO Losses: [2.350189685821533, 2.321927547454834, 12.643945693969727, 21.504772186279297, 1.3528380393981934], step: 163200, lr: 9.795849776887939e-05 2023-03-17 04:09:01,724 44k INFO Train Epoch: 162 [78%] 2023-03-17 04:09:01,724 44k INFO Losses: [2.476156234741211, 2.0020029544830322, 10.5330228805542, 18.66104507446289, 1.088453769683838], step: 163400, lr: 9.795849776887939e-05 2023-03-17 04:10:12,675 44k INFO Train Epoch: 162 [98%] 2023-03-17 04:10:12,676 44k INFO Losses: [2.4122273921966553, 2.2923738956451416, 8.436908721923828, 20.782255172729492, 1.0200611352920532], step: 163600, lr: 9.795849776887939e-05 2023-03-17 04:10:19,792 44k INFO ====> Epoch: 162, cost 370.43 s 2023-03-17 04:11:32,426 44k INFO Train Epoch: 163 [18%] 2023-03-17 04:11:32,427 44k INFO Losses: [2.485478162765503, 1.9593195915222168, 4.676070213317871, 12.976960182189941, 0.9519991278648376], step: 163800, lr: 9.794625295665828e-05 2023-03-17 04:12:43,499 44k INFO Train Epoch: 163 [38%] 2023-03-17 04:12:43,499 44k INFO Losses: [2.5439436435699463, 2.3110549449920654, 13.00318717956543, 21.175125122070312, 1.3661500215530396], step: 164000, lr: 9.794625295665828e-05 2023-03-17 04:12:46,399 44k INFO Saving model and optimizer state at iteration 163 to ./logs\44k\G_164000.pth 2023-03-17 04:12:47,142 44k INFO Saving model and optimizer state at iteration 163 to ./logs\44k\D_164000.pth 2023-03-17 04:12:47,760 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_161000.pth 2023-03-17 04:12:47,798 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_161000.pth 2023-03-17 04:13:58,588 44k INFO Train Epoch: 163 [57%] 2023-03-17 04:13:58,589 44k INFO Losses: [2.6850357055664062, 2.2428104877471924, 6.86476469039917, 18.435823440551758, 1.7257484197616577], step: 164200, lr: 9.794625295665828e-05 2023-03-17 04:15:09,382 44k INFO Train Epoch: 163 [77%] 2023-03-17 04:15:09,382 44k INFO Losses: [2.583627462387085, 2.139423370361328, 11.233219146728516, 21.76848602294922, 1.5414283275604248], step: 164400, lr: 9.794625295665828e-05 2023-03-17 04:16:20,219 44k INFO Train Epoch: 163 [97%] 2023-03-17 04:16:20,219 44k INFO Losses: [2.559586524963379, 2.2459592819213867, 10.108153343200684, 21.960891723632812, 1.260255217552185], step: 164600, lr: 9.794625295665828e-05 2023-03-17 04:16:30,946 44k INFO ====> Epoch: 163, cost 371.15 s 2023-03-17 04:17:39,988 44k INFO Train Epoch: 164 [17%] 2023-03-17 04:17:39,988 44k INFO Losses: [2.2347159385681152, 2.480541467666626, 12.011685371398926, 22.061786651611328, 1.3101166486740112], step: 164800, lr: 9.79340096750387e-05 2023-03-17 04:18:50,379 44k INFO Train Epoch: 164 [37%] 2023-03-17 04:18:50,379 44k INFO Losses: [2.4738364219665527, 2.2977471351623535, 9.457829475402832, 20.054378509521484, 1.2972062826156616], step: 165000, lr: 9.79340096750387e-05 2023-03-17 04:18:53,321 44k INFO Saving model and optimizer state at iteration 164 to ./logs\44k\G_165000.pth 2023-03-17 04:18:54,019 44k INFO Saving model and optimizer state at iteration 164 to ./logs\44k\D_165000.pth 2023-03-17 04:18:54,642 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_162000.pth 2023-03-17 04:18:54,682 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_162000.pth 2023-03-17 04:20:05,291 44k INFO Train Epoch: 164 [56%] 2023-03-17 04:20:05,291 44k INFO Losses: [2.3758769035339355, 2.3554208278656006, 8.80248737335205, 21.847248077392578, 1.2709317207336426], step: 165200, lr: 9.79340096750387e-05 2023-03-17 04:21:15,873 44k INFO Train Epoch: 164 [76%] 2023-03-17 04:21:15,874 44k INFO Losses: [2.3753721714019775, 2.5158605575561523, 11.298696517944336, 19.972333908081055, 1.4888763427734375], step: 165400, lr: 9.79340096750387e-05 2023-03-17 04:22:26,423 44k INFO Train Epoch: 164 [96%] 2023-03-17 04:22:26,424 44k INFO Losses: [2.666335105895996, 2.2118635177612305, 11.878429412841797, 19.291040420532227, 1.2491436004638672], step: 165600, lr: 9.79340096750387e-05 2023-03-17 04:22:40,627 44k INFO ====> Epoch: 164, cost 369.68 s 2023-03-17 04:23:46,243 44k INFO Train Epoch: 165 [16%] 2023-03-17 04:23:46,244 44k INFO Losses: [2.7026419639587402, 2.207399845123291, 8.784773826599121, 19.268991470336914, 1.435257077217102], step: 165800, lr: 9.792176792382932e-05 2023-03-17 04:24:56,203 44k INFO Train Epoch: 165 [36%] 2023-03-17 04:24:56,203 44k INFO Losses: [2.4560372829437256, 2.244762420654297, 10.722067832946777, 21.692895889282227, 1.4903018474578857], step: 166000, lr: 9.792176792382932e-05 2023-03-17 04:24:59,229 44k INFO Saving model and optimizer state at iteration 165 to ./logs\44k\G_166000.pth 2023-03-17 04:24:59,944 44k INFO Saving model and optimizer state at iteration 165 to ./logs\44k\D_166000.pth 2023-03-17 04:25:00,567 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_163000.pth 2023-03-17 04:25:00,602 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_163000.pth 2023-03-17 04:26:11,085 44k INFO Train Epoch: 165 [55%] 2023-03-17 04:26:11,085 44k INFO Losses: [2.586042642593384, 2.333362102508545, 13.670852661132812, 19.63782501220703, 1.4769389629364014], step: 166200, lr: 9.792176792382932e-05 2023-03-17 04:27:21,566 44k INFO Train Epoch: 165 [75%] 2023-03-17 04:27:21,567 44k INFO Losses: [2.5004115104675293, 2.3205721378326416, 11.097075462341309, 21.215538024902344, 1.4349952936172485], step: 166400, lr: 9.792176792382932e-05 2023-03-17 04:28:32,225 44k INFO Train Epoch: 165 [95%] 2023-03-17 04:28:32,226 44k INFO Losses: [2.539518356323242, 2.2841198444366455, 11.68876838684082, 22.7287540435791, 1.4358386993408203], step: 166600, lr: 9.792176792382932e-05 2023-03-17 04:28:49,950 44k INFO ====> Epoch: 165, cost 369.32 s 2023-03-17 04:29:51,878 44k INFO Train Epoch: 166 [15%] 2023-03-17 04:29:51,879 44k INFO Losses: [2.493783950805664, 2.333738088607788, 7.487810134887695, 17.47283935546875, 1.061367392539978], step: 166800, lr: 9.790952770283884e-05 2023-03-17 04:31:01,724 44k INFO Train Epoch: 166 [35%] 2023-03-17 04:31:01,725 44k INFO Losses: [2.499182939529419, 2.2580721378326416, 8.508832931518555, 17.498069763183594, 1.2362858057022095], step: 167000, lr: 9.790952770283884e-05 2023-03-17 04:31:04,701 44k INFO Saving model and optimizer state at iteration 166 to ./logs\44k\G_167000.pth 2023-03-17 04:31:05,393 44k INFO Saving model and optimizer state at iteration 166 to ./logs\44k\D_167000.pth 2023-03-17 04:31:06,009 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_164000.pth 2023-03-17 04:31:06,048 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_164000.pth 2023-03-17 04:32:16,377 44k INFO Train Epoch: 166 [54%] 2023-03-17 04:32:16,377 44k INFO Losses: [2.2695534229278564, 2.411278247833252, 11.2454833984375, 22.002077102661133, 1.437696933746338], step: 167200, lr: 9.790952770283884e-05 2023-03-17 04:33:26,834 44k INFO Train Epoch: 166 [74%] 2023-03-17 04:33:26,835 44k INFO Losses: [2.45676589012146, 2.4450271129608154, 6.790806770324707, 12.859803199768066, 1.3618690967559814], step: 167400, lr: 9.790952770283884e-05 2023-03-17 04:34:37,384 44k INFO Train Epoch: 166 [94%] 2023-03-17 04:34:37,385 44k INFO Losses: [2.6475718021392822, 2.302717924118042, 8.049555778503418, 15.662849426269531, 1.1095852851867676], step: 167600, lr: 9.790952770283884e-05 2023-03-17 04:34:58,546 44k INFO ====> Epoch: 166, cost 368.60 s 2023-03-17 04:35:56,949 44k INFO Train Epoch: 167 [14%] 2023-03-17 04:35:56,950 44k INFO Losses: [2.607792615890503, 2.3368096351623535, 8.037344932556152, 22.185211181640625, 1.2712455987930298], step: 167800, lr: 9.789728901187598e-05 2023-03-17 04:37:06,931 44k INFO Train Epoch: 167 [34%] 2023-03-17 04:37:06,931 44k INFO Losses: [2.5568292140960693, 2.4341344833374023, 13.729990005493164, 21.024545669555664, 1.4374464750289917], step: 168000, lr: 9.789728901187598e-05 2023-03-17 04:37:09,834 44k INFO Saving model and optimizer state at iteration 167 to ./logs\44k\G_168000.pth 2023-03-17 04:37:10,533 44k INFO Saving model and optimizer state at iteration 167 to ./logs\44k\D_168000.pth 2023-03-17 04:37:11,150 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_165000.pth 2023-03-17 04:37:11,186 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_165000.pth 2023-03-17 04:38:21,587 44k INFO Train Epoch: 167 [53%] 2023-03-17 04:38:21,587 44k INFO Losses: [2.241170883178711, 2.3188490867614746, 15.77836799621582, 22.702119827270508, 1.3600119352340698], step: 168200, lr: 9.789728901187598e-05 2023-03-17 04:39:32,192 44k INFO Train Epoch: 167 [73%] 2023-03-17 04:39:32,193 44k INFO Losses: [2.5564093589782715, 2.127133846282959, 9.180643081665039, 14.103662490844727, 0.9098934531211853], step: 168400, lr: 9.789728901187598e-05 2023-03-17 04:40:42,789 44k INFO Train Epoch: 167 [93%] 2023-03-17 04:40:42,790 44k INFO Losses: [2.5574111938476562, 2.293527126312256, 9.465546607971191, 19.10737419128418, 1.4003574848175049], step: 168600, lr: 9.789728901187598e-05 2023-03-17 04:41:07,459 44k INFO ====> Epoch: 167, cost 368.91 s 2023-03-17 04:42:02,346 44k INFO Train Epoch: 168 [13%] 2023-03-17 04:42:02,346 44k INFO Losses: [2.5253539085388184, 2.3659274578094482, 6.984872341156006, 17.593246459960938, 0.9354928731918335], step: 168800, lr: 9.78850518507495e-05 2023-03-17 04:43:12,351 44k INFO Train Epoch: 168 [33%] 2023-03-17 04:43:12,351 44k INFO Losses: [2.434610366821289, 2.4354333877563477, 10.947222709655762, 18.272216796875, 1.0717718601226807], step: 169000, lr: 9.78850518507495e-05 2023-03-17 04:43:15,230 44k INFO Saving model and optimizer state at iteration 168 to ./logs\44k\G_169000.pth 2023-03-17 04:43:15,932 44k INFO Saving model and optimizer state at iteration 168 to ./logs\44k\D_169000.pth 2023-03-17 04:43:16,537 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_166000.pth 2023-03-17 04:43:16,564 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_166000.pth 2023-03-17 04:44:26,816 44k INFO Train Epoch: 168 [52%] 2023-03-17 04:44:26,817 44k INFO Losses: [2.5283043384552, 2.334235429763794, 11.283750534057617, 18.647144317626953, 1.2101844549179077], step: 169200, lr: 9.78850518507495e-05 2023-03-17 04:45:37,504 44k INFO Train Epoch: 168 [72%] 2023-03-17 04:45:37,504 44k INFO Losses: [2.324428081512451, 2.4366393089294434, 12.556160926818848, 24.02130126953125, 1.3256025314331055], step: 169400, lr: 9.78850518507495e-05 2023-03-17 04:46:48,084 44k INFO Train Epoch: 168 [92%] 2023-03-17 04:46:48,084 44k INFO Losses: [2.460514545440674, 2.178175449371338, 12.044654846191406, 20.812217712402344, 1.2808411121368408], step: 169600, lr: 9.78850518507495e-05 2023-03-17 04:47:16,239 44k INFO ====> Epoch: 168, cost 368.78 s 2023-03-17 04:48:07,539 44k INFO Train Epoch: 169 [12%] 2023-03-17 04:48:07,539 44k INFO Losses: [2.316577434539795, 2.5733094215393066, 9.141018867492676, 20.155942916870117, 1.173896312713623], step: 169800, lr: 9.787281621926815e-05 2023-03-17 04:49:17,509 44k INFO Train Epoch: 169 [32%] 2023-03-17 04:49:17,509 44k INFO Losses: [2.2698142528533936, 2.396270751953125, 13.069070816040039, 18.94829559326172, 1.6185959577560425], step: 170000, lr: 9.787281621926815e-05 2023-03-17 04:49:20,426 44k INFO Saving model and optimizer state at iteration 169 to ./logs\44k\G_170000.pth 2023-03-17 04:49:21,131 44k INFO Saving model and optimizer state at iteration 169 to ./logs\44k\D_170000.pth 2023-03-17 04:49:21,743 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_167000.pth 2023-03-17 04:49:21,772 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_167000.pth 2023-03-17 04:50:31,986 44k INFO Train Epoch: 169 [51%] 2023-03-17 04:50:31,986 44k INFO Losses: [2.3310632705688477, 2.2245841026306152, 10.296073913574219, 23.079872131347656, 1.2163323163986206], step: 170200, lr: 9.787281621926815e-05 2023-03-17 04:51:42,666 44k INFO Train Epoch: 169 [71%] 2023-03-17 04:51:42,667 44k INFO Losses: [2.5247764587402344, 2.2039618492126465, 11.390571594238281, 19.5611629486084, 1.3607815504074097], step: 170400, lr: 9.787281621926815e-05 2023-03-17 04:52:53,169 44k INFO Train Epoch: 169 [91%] 2023-03-17 04:52:53,169 44k INFO Losses: [2.490769863128662, 2.3391306400299072, 13.27554988861084, 24.095306396484375, 1.3454861640930176], step: 170600, lr: 9.787281621926815e-05 2023-03-17 04:53:24,789 44k INFO ====> Epoch: 169, cost 368.55 s 2023-03-17 04:54:12,740 44k INFO Train Epoch: 170 [11%] 2023-03-17 04:54:12,741 44k INFO Losses: [2.1122329235076904, 2.491966962814331, 14.107803344726562, 21.80328369140625, 1.3047798871994019], step: 170800, lr: 9.786058211724074e-05 2023-03-17 04:55:22,506 44k INFO Train Epoch: 170 [31%] 2023-03-17 04:55:22,506 44k INFO Losses: [2.254040479660034, 2.288689374923706, 12.518832206726074, 23.34246253967285, 1.4431203603744507], step: 171000, lr: 9.786058211724074e-05 2023-03-17 04:55:25,480 44k INFO Saving model and optimizer state at iteration 170 to ./logs\44k\G_171000.pth 2023-03-17 04:55:26,130 44k INFO Saving model and optimizer state at iteration 170 to ./logs\44k\D_171000.pth 2023-03-17 04:55:26,753 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_168000.pth 2023-03-17 04:55:26,784 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_168000.pth 2023-03-17 04:56:36,961 44k INFO Train Epoch: 170 [50%] 2023-03-17 04:56:36,961 44k INFO Losses: [2.3078529834747314, 2.6335272789001465, 11.47938060760498, 23.248891830444336, 1.3534314632415771], step: 171200, lr: 9.786058211724074e-05 2023-03-17 04:57:47,557 44k INFO Train Epoch: 170 [70%] 2023-03-17 04:57:47,557 44k INFO Losses: [2.5918376445770264, 2.532334089279175, 9.925875663757324, 20.56053352355957, 1.3788505792617798], step: 171400, lr: 9.786058211724074e-05 2023-03-17 04:58:57,940 44k INFO Train Epoch: 170 [90%] 2023-03-17 04:58:57,940 44k INFO Losses: [2.444242000579834, 2.183255910873413, 12.629212379455566, 17.18226432800293, 1.0421754121780396], step: 171600, lr: 9.786058211724074e-05 2023-03-17 04:59:33,139 44k INFO ====> Epoch: 170, cost 368.35 s 2023-03-17 05:00:17,496 44k INFO Train Epoch: 171 [10%] 2023-03-17 05:00:17,496 44k INFO Losses: [2.4186947345733643, 2.4659268856048584, 12.484707832336426, 22.238672256469727, 1.4591763019561768], step: 171800, lr: 9.784834954447608e-05 2023-03-17 05:01:27,459 44k INFO Train Epoch: 171 [30%] 2023-03-17 05:01:27,460 44k INFO Losses: [2.2162604331970215, 2.5948939323425293, 10.783220291137695, 21.34513282775879, 1.239669680595398], step: 172000, lr: 9.784834954447608e-05 2023-03-17 05:01:30,338 44k INFO Saving model and optimizer state at iteration 171 to ./logs\44k\G_172000.pth 2023-03-17 05:01:31,074 44k INFO Saving model and optimizer state at iteration 171 to ./logs\44k\D_172000.pth 2023-03-17 05:01:31,687 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_169000.pth 2023-03-17 05:01:31,726 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_169000.pth 2023-03-17 05:02:41,757 44k INFO Train Epoch: 171 [50%] 2023-03-17 05:02:41,757 44k INFO Losses: [2.4960107803344727, 2.310649871826172, 9.919122695922852, 20.603105545043945, 1.5402024984359741], step: 172200, lr: 9.784834954447608e-05 2023-03-17 05:03:52,314 44k INFO Train Epoch: 171 [69%] 2023-03-17 05:03:52,314 44k INFO Losses: [2.4924089908599854, 2.2489776611328125, 5.782907962799072, 12.713704109191895, 1.1137930154800415], step: 172400, lr: 9.784834954447608e-05 2023-03-17 05:05:02,898 44k INFO Train Epoch: 171 [89%] 2023-03-17 05:05:02,898 44k INFO Losses: [2.3304030895233154, 2.347945213317871, 9.178772926330566, 18.282926559448242, 1.1401474475860596], step: 172600, lr: 9.784834954447608e-05 2023-03-17 05:05:41,522 44k INFO ====> Epoch: 171, cost 368.38 s 2023-03-17 05:06:22,362 44k INFO Train Epoch: 172 [9%] 2023-03-17 05:06:22,362 44k INFO Losses: [2.376771926879883, 2.412903308868408, 11.116055488586426, 22.822593688964844, 1.589823842048645], step: 172800, lr: 9.783611850078301e-05 2023-03-17 05:07:32,362 44k INFO Train Epoch: 172 [29%] 2023-03-17 05:07:32,362 44k INFO Losses: [2.4479446411132812, 2.4429636001586914, 8.877976417541504, 20.534015655517578, 1.3065537214279175], step: 173000, lr: 9.783611850078301e-05 2023-03-17 05:07:35,265 44k INFO Saving model and optimizer state at iteration 172 to ./logs\44k\G_173000.pth 2023-03-17 05:07:35,994 44k INFO Saving model and optimizer state at iteration 172 to ./logs\44k\D_173000.pth 2023-03-17 05:07:36,600 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_170000.pth 2023-03-17 05:07:36,643 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_170000.pth 2023-03-17 05:08:46,669 44k INFO Train Epoch: 172 [49%] 2023-03-17 05:08:46,670 44k INFO Losses: [2.2583131790161133, 2.5424909591674805, 9.912247657775879, 17.451047897338867, 1.2973597049713135], step: 173200, lr: 9.783611850078301e-05 2023-03-17 05:09:57,466 44k INFO Train Epoch: 172 [68%] 2023-03-17 05:09:57,467 44k INFO Losses: [2.510129928588867, 2.182860851287842, 8.8870210647583, 19.688438415527344, 1.4307098388671875], step: 173400, lr: 9.783611850078301e-05 2023-03-17 05:11:08,064 44k INFO Train Epoch: 172 [88%] 2023-03-17 05:11:08,064 44k INFO Losses: [2.5453741550445557, 1.9824531078338623, 10.729374885559082, 16.494857788085938, 1.2652252912521362], step: 173600, lr: 9.783611850078301e-05 2023-03-17 05:11:50,184 44k INFO ====> Epoch: 172, cost 368.66 s 2023-03-17 05:12:27,506 44k INFO Train Epoch: 173 [8%] 2023-03-17 05:12:27,506 44k INFO Losses: [2.4886467456817627, 2.1289424896240234, 9.379639625549316, 14.90047836303711, 1.0736968517303467], step: 173800, lr: 9.782388898597041e-05 2023-03-17 05:13:37,633 44k INFO Train Epoch: 173 [28%] 2023-03-17 05:13:37,633 44k INFO Losses: [2.122258186340332, 2.800487518310547, 12.231902122497559, 22.1917724609375, 1.3255913257598877], step: 174000, lr: 9.782388898597041e-05 2023-03-17 05:13:40,566 44k INFO Saving model and optimizer state at iteration 173 to ./logs\44k\G_174000.pth 2023-03-17 05:13:41,263 44k INFO Saving model and optimizer state at iteration 173 to ./logs\44k\D_174000.pth 2023-03-17 05:13:41,875 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_171000.pth 2023-03-17 05:13:41,913 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_171000.pth 2023-03-17 05:14:51,834 44k INFO Train Epoch: 173 [48%] 2023-03-17 05:14:51,834 44k INFO Losses: [2.698932647705078, 2.1686785221099854, 14.241243362426758, 21.647300720214844, 1.0984690189361572], step: 174200, lr: 9.782388898597041e-05 2023-03-17 05:16:02,556 44k INFO Train Epoch: 173 [67%] 2023-03-17 05:16:02,556 44k INFO Losses: [2.413693428039551, 2.2814106941223145, 11.994617462158203, 20.61872673034668, 0.7581030130386353], step: 174400, lr: 9.782388898597041e-05 2023-03-17 05:17:13,145 44k INFO Train Epoch: 173 [87%] 2023-03-17 05:17:13,145 44k INFO Losses: [2.415893077850342, 2.253887176513672, 12.423856735229492, 20.939970016479492, 1.2925928831100464], step: 174600, lr: 9.782388898597041e-05 2023-03-17 05:17:58,808 44k INFO ====> Epoch: 173, cost 368.62 s 2023-03-17 05:18:32,613 44k INFO Train Epoch: 174 [7%] 2023-03-17 05:18:32,613 44k INFO Losses: [2.3620798587799072, 2.6207382678985596, 13.472980499267578, 21.143457412719727, 1.0658859014511108], step: 174800, lr: 9.781166099984716e-05 2023-03-17 05:19:42,850 44k INFO Train Epoch: 174 [27%] 2023-03-17 05:19:42,851 44k INFO Losses: [2.5293350219726562, 2.3154826164245605, 6.825946807861328, 17.882078170776367, 1.237874984741211], step: 175000, lr: 9.781166099984716e-05 2023-03-17 05:19:45,731 44k INFO Saving model and optimizer state at iteration 174 to ./logs\44k\G_175000.pth 2023-03-17 05:19:46,474 44k INFO Saving model and optimizer state at iteration 174 to ./logs\44k\D_175000.pth 2023-03-17 05:19:47,085 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_172000.pth 2023-03-17 05:19:47,125 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_172000.pth 2023-03-17 05:20:57,123 44k INFO Train Epoch: 174 [47%] 2023-03-17 05:20:57,124 44k INFO Losses: [2.7060530185699463, 1.8608990907669067, 10.477849960327148, 22.855266571044922, 1.39141845703125], step: 175200, lr: 9.781166099984716e-05 2023-03-17 05:22:07,833 44k INFO Train Epoch: 174 [66%] 2023-03-17 05:22:07,833 44k INFO Losses: [2.7237067222595215, 1.9276094436645508, 9.540040016174316, 18.162906646728516, 1.0316402912139893], step: 175400, lr: 9.781166099984716e-05 2023-03-17 05:23:18,405 44k INFO Train Epoch: 174 [86%] 2023-03-17 05:23:18,405 44k INFO Losses: [2.5054330825805664, 2.2809016704559326, 8.22220516204834, 18.98043441772461, 1.407735824584961], step: 175600, lr: 9.781166099984716e-05 2023-03-17 05:24:07,612 44k INFO ====> Epoch: 174, cost 368.80 s 2023-03-17 05:24:37,785 44k INFO Train Epoch: 175 [6%] 2023-03-17 05:24:37,785 44k INFO Losses: [2.6711857318878174, 2.1359665393829346, 7.568743705749512, 16.514251708984375, 1.7605260610580444], step: 175800, lr: 9.779943454222217e-05 2023-03-17 05:25:47,941 44k INFO Train Epoch: 175 [26%] 2023-03-17 05:25:47,941 44k INFO Losses: [2.3159239292144775, 2.465128183364868, 8.46249008178711, 18.701953887939453, 1.388334035873413], step: 176000, lr: 9.779943454222217e-05 2023-03-17 05:25:50,878 44k INFO Saving model and optimizer state at iteration 175 to ./logs\44k\G_176000.pth 2023-03-17 05:25:51,565 44k INFO Saving model and optimizer state at iteration 175 to ./logs\44k\D_176000.pth 2023-03-17 05:25:52,220 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_173000.pth 2023-03-17 05:25:52,251 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_173000.pth 2023-03-17 05:27:02,327 44k INFO Train Epoch: 175 [46%] 2023-03-17 05:27:02,327 44k INFO Losses: [2.569620370864868, 2.3526501655578613, 9.748224258422852, 16.802045822143555, 1.3386958837509155], step: 176200, lr: 9.779943454222217e-05 2023-03-17 05:28:13,017 44k INFO Train Epoch: 175 [65%] 2023-03-17 05:28:13,018 44k INFO Losses: [2.29791522026062, 2.4493672847747803, 16.94289779663086, 27.608373641967773, 1.6449607610702515], step: 176400, lr: 9.779943454222217e-05 2023-03-17 05:29:23,474 44k INFO Train Epoch: 175 [85%] 2023-03-17 05:29:23,475 44k INFO Losses: [2.511824607849121, 2.234801769256592, 8.19736385345459, 22.419775009155273, 1.3745704889297485], step: 176600, lr: 9.779943454222217e-05 2023-03-17 05:30:16,217 44k INFO ====> Epoch: 175, cost 368.60 s 2023-03-17 05:30:42,827 44k INFO Train Epoch: 176 [5%] 2023-03-17 05:30:42,828 44k INFO Losses: [2.5009608268737793, 2.310910701751709, 10.522568702697754, 15.66387939453125, 1.735251545906067], step: 176800, lr: 9.778720961290439e-05 2023-03-17 05:31:53,080 44k INFO Train Epoch: 176 [25%] 2023-03-17 05:31:53,080 44k INFO Losses: [2.855753183364868, 2.010063409805298, 10.206157684326172, 17.56356430053711, 1.1767489910125732], step: 177000, lr: 9.778720961290439e-05 2023-03-17 05:31:55,942 44k INFO Saving model and optimizer state at iteration 176 to ./logs\44k\G_177000.pth 2023-03-17 05:31:56,624 44k INFO Saving model and optimizer state at iteration 176 to ./logs\44k\D_177000.pth 2023-03-17 05:31:57,236 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_174000.pth 2023-03-17 05:31:57,276 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_174000.pth 2023-03-17 05:33:07,197 44k INFO Train Epoch: 176 [45%] 2023-03-17 05:33:07,197 44k INFO Losses: [2.6599936485290527, 2.28721284866333, 9.403382301330566, 17.964378356933594, 1.6560107469558716], step: 177200, lr: 9.778720961290439e-05 2023-03-17 05:34:17,969 44k INFO Train Epoch: 176 [64%] 2023-03-17 05:34:17,970 44k INFO Losses: [2.461580276489258, 2.412217140197754, 10.919356346130371, 18.782228469848633, 0.7330804467201233], step: 177400, lr: 9.778720961290439e-05 2023-03-17 05:35:28,351 44k INFO Train Epoch: 176 [84%] 2023-03-17 05:35:28,351 44k INFO Losses: [2.5488791465759277, 2.0408122539520264, 10.444375991821289, 22.43549346923828, 1.1740151643753052], step: 177600, lr: 9.778720961290439e-05 2023-03-17 05:36:24,664 44k INFO ====> Epoch: 176, cost 368.45 s 2023-03-17 05:36:47,918 44k INFO Train Epoch: 177 [4%] 2023-03-17 05:36:47,918 44k INFO Losses: [2.2310574054718018, 2.7053918838500977, 8.863359451293945, 22.96120262145996, 1.1031264066696167], step: 177800, lr: 9.777498621170277e-05 2023-03-17 05:37:58,215 44k INFO Train Epoch: 177 [24%] 2023-03-17 05:37:58,215 44k INFO Losses: [2.527754068374634, 2.57830548286438, 9.882058143615723, 21.527891159057617, 1.4932079315185547], step: 178000, lr: 9.777498621170277e-05 2023-03-17 05:38:01,146 44k INFO Saving model and optimizer state at iteration 177 to ./logs\44k\G_178000.pth 2023-03-17 05:38:01,829 44k INFO Saving model and optimizer state at iteration 177 to ./logs\44k\D_178000.pth 2023-03-17 05:38:02,448 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_175000.pth 2023-03-17 05:38:02,483 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_175000.pth 2023-03-17 05:39:12,413 44k INFO Train Epoch: 177 [44%] 2023-03-17 05:39:12,414 44k INFO Losses: [2.219799280166626, 2.6409637928009033, 9.670913696289062, 16.71328353881836, 1.080832600593567], step: 178200, lr: 9.777498621170277e-05 2023-03-17 05:40:23,262 44k INFO Train Epoch: 177 [63%] 2023-03-17 05:40:23,263 44k INFO Losses: [2.0093135833740234, 2.6290009021759033, 15.150419235229492, 23.197689056396484, 1.2122794389724731], step: 178400, lr: 9.777498621170277e-05 2023-03-17 05:41:33,636 44k INFO Train Epoch: 177 [83%] 2023-03-17 05:41:33,636 44k INFO Losses: [2.517268180847168, 2.4594316482543945, 8.986112594604492, 23.686208724975586, 1.5818520784378052], step: 178600, lr: 9.777498621170277e-05 2023-03-17 05:42:33,530 44k INFO ====> Epoch: 177, cost 368.87 s 2023-03-17 05:42:53,372 44k INFO Train Epoch: 178 [3%] 2023-03-17 05:42:53,373 44k INFO Losses: [2.2891743183135986, 2.6025373935699463, 10.345315933227539, 19.179229736328125, 1.2005339860916138], step: 178800, lr: 9.776276433842631e-05 2023-03-17 05:44:03,883 44k INFO Train Epoch: 178 [23%] 2023-03-17 05:44:03,883 44k INFO Losses: [2.581068992614746, 2.2021408081054688, 9.189441680908203, 15.014711380004883, 1.1243538856506348], step: 179000, lr: 9.776276433842631e-05 2023-03-17 05:44:06,833 44k INFO Saving model and optimizer state at iteration 178 to ./logs\44k\G_179000.pth 2023-03-17 05:44:07,518 44k INFO Saving model and optimizer state at iteration 178 to ./logs\44k\D_179000.pth 2023-03-17 05:44:08,137 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_176000.pth 2023-03-17 05:44:08,177 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_176000.pth 2023-03-17 05:45:18,140 44k INFO Train Epoch: 178 [43%] 2023-03-17 05:45:18,141 44k INFO Losses: [2.429567575454712, 2.2678709030151367, 11.065641403198242, 18.181249618530273, 0.9680553078651428], step: 179200, lr: 9.776276433842631e-05 2023-03-17 05:46:28,818 44k INFO Train Epoch: 178 [62%] 2023-03-17 05:46:28,818 44k INFO Losses: [2.4730875492095947, 2.444882392883301, 8.91132926940918, 18.450008392333984, 1.4475833177566528], step: 179400, lr: 9.776276433842631e-05 2023-03-17 05:47:39,307 44k INFO Train Epoch: 178 [82%] 2023-03-17 05:47:39,308 44k INFO Losses: [2.4616403579711914, 2.105130434036255, 10.058845520019531, 22.370018005371094, 1.615362524986267], step: 179600, lr: 9.776276433842631e-05 2023-03-17 05:48:42,751 44k INFO ====> Epoch: 178, cost 369.22 s 2023-03-17 05:48:58,886 44k INFO Train Epoch: 179 [2%] 2023-03-17 05:48:58,886 44k INFO Losses: [2.6371912956237793, 2.325679302215576, 12.631734848022461, 20.72102165222168, 1.2923638820648193], step: 179800, lr: 9.7750543992884e-05 2023-03-17 05:50:09,482 44k INFO Train Epoch: 179 [22%] 2023-03-17 05:50:09,482 44k INFO Losses: [2.519580602645874, 2.3285396099090576, 8.97692584991455, 21.98527717590332, 1.459467887878418], step: 180000, lr: 9.7750543992884e-05 2023-03-17 05:50:12,369 44k INFO Saving model and optimizer state at iteration 179 to ./logs\44k\G_180000.pth 2023-03-17 05:50:13,053 44k INFO Saving model and optimizer state at iteration 179 to ./logs\44k\D_180000.pth 2023-03-17 05:50:13,667 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_177000.pth 2023-03-17 05:50:13,706 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_177000.pth 2023-03-17 05:51:23,474 44k INFO Train Epoch: 179 [42%] 2023-03-17 05:51:23,475 44k INFO Losses: [2.651991844177246, 2.252354621887207, 6.764916896820068, 14.603854179382324, 1.116950511932373], step: 180200, lr: 9.7750543992884e-05 2023-03-17 05:52:34,212 44k INFO Train Epoch: 179 [61%] 2023-03-17 05:52:34,213 44k INFO Losses: [2.496138095855713, 2.531445264816284, 11.48887825012207, 19.643203735351562, 1.165557622909546], step: 180400, lr: 9.7750543992884e-05 2023-03-17 05:53:44,681 44k INFO Train Epoch: 179 [81%] 2023-03-17 05:53:44,681 44k INFO Losses: [2.2985832691192627, 2.5649168491363525, 14.274898529052734, 23.637985229492188, 1.5329899787902832], step: 180600, lr: 9.7750543992884e-05 2023-03-17 05:54:51,616 44k INFO ====> Epoch: 179, cost 368.87 s 2023-03-17 05:55:04,337 44k INFO Train Epoch: 180 [1%] 2023-03-17 05:55:04,338 44k INFO Losses: [2.3403494358062744, 2.127133846282959, 10.579434394836426, 24.091386795043945, 1.0490642786026], step: 180800, lr: 9.773832517488488e-05 2023-03-17 05:56:14,849 44k INFO Train Epoch: 180 [21%] 2023-03-17 05:56:14,850 44k INFO Losses: [2.528452157974243, 2.2363688945770264, 6.383594512939453, 16.168249130249023, 1.0572669506072998], step: 181000, lr: 9.773832517488488e-05 2023-03-17 05:56:17,857 44k INFO Saving model and optimizer state at iteration 180 to ./logs\44k\G_181000.pth 2023-03-17 05:56:18,557 44k INFO Saving model and optimizer state at iteration 180 to ./logs\44k\D_181000.pth 2023-03-17 05:56:19,165 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_178000.pth 2023-03-17 05:56:19,194 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_178000.pth 2023-03-17 05:57:28,884 44k INFO Train Epoch: 180 [41%] 2023-03-17 05:57:28,884 44k INFO Losses: [2.5057499408721924, 2.2856736183166504, 13.371176719665527, 21.76732063293457, 1.850466251373291], step: 181200, lr: 9.773832517488488e-05 2023-03-17 05:58:39,700 44k INFO Train Epoch: 180 [60%] 2023-03-17 05:58:39,701 44k INFO Losses: [2.4507884979248047, 2.2485318183898926, 7.435276508331299, 17.338016510009766, 1.461917757987976], step: 181400, lr: 9.773832517488488e-05 2023-03-17 05:59:50,121 44k INFO Train Epoch: 180 [80%] 2023-03-17 05:59:50,122 44k INFO Losses: [2.4379799365997314, 2.2669472694396973, 12.555207252502441, 21.080371856689453, 1.5757323503494263], step: 181600, lr: 9.773832517488488e-05 2023-03-17 06:01:00,576 44k INFO ====> Epoch: 180, cost 368.96 s 2023-03-17 06:01:09,813 44k INFO Train Epoch: 181 [0%] 2023-03-17 06:01:09,814 44k INFO Losses: [2.398139476776123, 2.456231117248535, 10.54420280456543, 19.42152214050293, 1.216538429260254], step: 181800, lr: 9.772610788423802e-05 2023-03-17 06:02:20,386 44k INFO Train Epoch: 181 [20%] 2023-03-17 06:02:20,387 44k INFO Losses: [2.5390632152557373, 2.1548643112182617, 9.74463176727295, 21.793285369873047, 1.3132312297821045], step: 182000, lr: 9.772610788423802e-05 2023-03-17 06:02:23,366 44k INFO Saving model and optimizer state at iteration 181 to ./logs\44k\G_182000.pth 2023-03-17 06:02:24,060 44k INFO Saving model and optimizer state at iteration 181 to ./logs\44k\D_182000.pth 2023-03-17 06:02:24,676 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_179000.pth 2023-03-17 06:02:24,711 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_179000.pth 2023-03-17 06:03:34,353 44k INFO Train Epoch: 181 [40%] 2023-03-17 06:03:34,353 44k INFO Losses: [2.252544641494751, 2.994112491607666, 6.07293176651001, 18.391122817993164, 1.067116379737854], step: 182200, lr: 9.772610788423802e-05 2023-03-17 06:04:45,312 44k INFO Train Epoch: 181 [59%] 2023-03-17 06:04:45,313 44k INFO Losses: [2.411041736602783, 2.2546186447143555, 14.681248664855957, 20.431100845336914, 1.4179993867874146], step: 182400, lr: 9.772610788423802e-05 2023-03-17 06:05:55,763 44k INFO Train Epoch: 181 [79%] 2023-03-17 06:05:55,764 44k INFO Losses: [2.6164684295654297, 2.0448684692382812, 7.5200629234313965, 15.442971229553223, 1.0318686962127686], step: 182600, lr: 9.772610788423802e-05 2023-03-17 06:07:06,335 44k INFO Train Epoch: 181 [99%] 2023-03-17 06:07:06,335 44k INFO Losses: [2.3308310508728027, 2.406909942626953, 10.362029075622559, 19.656505584716797, 1.3155230283737183], step: 182800, lr: 9.772610788423802e-05 2023-03-17 06:07:09,904 44k INFO ====> Epoch: 181, cost 369.33 s 2023-03-17 06:08:25,922 44k INFO Train Epoch: 182 [19%] 2023-03-17 06:08:25,922 44k INFO Losses: [2.419107675552368, 2.6591954231262207, 9.776471138000488, 21.103031158447266, 1.4087097644805908], step: 183000, lr: 9.771389212075249e-05 2023-03-17 06:08:28,811 44k INFO Saving model and optimizer state at iteration 182 to ./logs\44k\G_183000.pth 2023-03-17 06:08:29,626 44k INFO Saving model and optimizer state at iteration 182 to ./logs\44k\D_183000.pth 2023-03-17 06:08:30,250 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_180000.pth 2023-03-17 06:08:30,279 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_180000.pth 2023-03-17 06:09:39,928 44k INFO Train Epoch: 182 [39%] 2023-03-17 06:09:39,928 44k INFO Losses: [2.403265953063965, 2.7359962463378906, 13.291855812072754, 25.802942276000977, 1.2778403759002686], step: 183200, lr: 9.771389212075249e-05 2023-03-17 06:10:50,861 44k INFO Train Epoch: 182 [58%] 2023-03-17 06:10:50,862 44k INFO Losses: [2.47495436668396, 2.5124194622039795, 10.542431831359863, 17.643651962280273, 1.2766872644424438], step: 183400, lr: 9.771389212075249e-05 2023-03-17 06:12:01,280 44k INFO Train Epoch: 182 [78%] 2023-03-17 06:12:01,281 44k INFO Losses: [2.280062437057495, 2.378777027130127, 10.766070365905762, 19.857120513916016, 1.0334125757217407], step: 183600, lr: 9.771389212075249e-05 2023-03-17 06:13:11,967 44k INFO Train Epoch: 182 [98%] 2023-03-17 06:13:11,967 44k INFO Losses: [2.8693606853485107, 2.0703909397125244, 6.561828136444092, 13.497916221618652, 0.9151167273521423], step: 183800, lr: 9.771389212075249e-05 2023-03-17 06:13:19,083 44k INFO ====> Epoch: 182, cost 369.18 s 2023-03-17 06:14:31,622 44k INFO Train Epoch: 183 [18%] 2023-03-17 06:14:31,623 44k INFO Losses: [2.7433643341064453, 1.9162547588348389, 8.16872501373291, 16.387836456298828, 0.9973092079162598], step: 184000, lr: 9.77016778842374e-05 2023-03-17 06:14:34,530 44k INFO Saving model and optimizer state at iteration 183 to ./logs\44k\G_184000.pth 2023-03-17 06:14:35,227 44k INFO Saving model and optimizer state at iteration 183 to ./logs\44k\D_184000.pth 2023-03-17 06:14:35,842 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_181000.pth 2023-03-17 06:14:35,871 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_181000.pth 2023-03-17 06:15:45,609 44k INFO Train Epoch: 183 [38%] 2023-03-17 06:15:45,610 44k INFO Losses: [2.3465216159820557, 2.316443920135498, 15.903656005859375, 21.452289581298828, 1.0794183015823364], step: 184200, lr: 9.77016778842374e-05 2023-03-17 06:16:56,414 44k INFO Train Epoch: 183 [57%] 2023-03-17 06:16:56,414 44k INFO Losses: [2.3482322692871094, 2.2509942054748535, 8.933483123779297, 18.85289764404297, 1.4662628173828125], step: 184400, lr: 9.77016778842374e-05 2023-03-17 06:18:06,932 44k INFO Train Epoch: 183 [77%] 2023-03-17 06:18:06,932 44k INFO Losses: [2.3802692890167236, 2.2943570613861084, 10.669062614440918, 17.352094650268555, 1.2414284944534302], step: 184600, lr: 9.77016778842374e-05 2023-03-17 06:19:17,508 44k INFO Train Epoch: 183 [97%] 2023-03-17 06:19:17,508 44k INFO Losses: [2.546994686126709, 2.159555196762085, 7.79276180267334, 18.040454864501953, 1.6440141201019287], step: 184800, lr: 9.77016778842374e-05 2023-03-17 06:19:28,151 44k INFO ====> Epoch: 183, cost 369.07 s 2023-03-17 06:20:36,982 44k INFO Train Epoch: 184 [17%] 2023-03-17 06:20:36,983 44k INFO Losses: [2.6632089614868164, 2.2868897914886475, 8.608024597167969, 23.708494186401367, 1.422364592552185], step: 185000, lr: 9.768946517450186e-05 2023-03-17 06:20:39,932 44k INFO Saving model and optimizer state at iteration 184 to ./logs\44k\G_185000.pth 2023-03-17 06:20:40,622 44k INFO Saving model and optimizer state at iteration 184 to ./logs\44k\D_185000.pth 2023-03-17 06:20:41,231 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_182000.pth 2023-03-17 06:20:41,258 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_182000.pth 2023-03-17 06:21:50,920 44k INFO Train Epoch: 184 [37%] 2023-03-17 06:21:50,920 44k INFO Losses: [2.5632054805755615, 2.2750046253204346, 11.963404655456543, 17.320446014404297, 1.5011937618255615], step: 185200, lr: 9.768946517450186e-05 2023-03-17 06:23:01,607 44k INFO Train Epoch: 184 [56%] 2023-03-17 06:23:01,607 44k INFO Losses: [2.644923210144043, 2.0998289585113525, 9.209622383117676, 19.026899337768555, 1.5708847045898438], step: 185400, lr: 9.768946517450186e-05 2023-03-17 06:24:11,975 44k INFO Train Epoch: 184 [76%] 2023-03-17 06:24:11,976 44k INFO Losses: [2.3974382877349854, 2.526686429977417, 11.724515914916992, 21.881731033325195, 1.2320761680603027], step: 185600, lr: 9.768946517450186e-05 2023-03-17 06:25:22,464 44k INFO Train Epoch: 184 [96%] 2023-03-17 06:25:22,465 44k INFO Losses: [2.629859447479248, 2.332376480102539, 13.257457733154297, 20.278871536254883, 1.150071144104004], step: 185800, lr: 9.768946517450186e-05 2023-03-17 06:25:36,639 44k INFO ====> Epoch: 184, cost 368.49 s 2023-03-17 06:26:41,963 44k INFO Train Epoch: 185 [16%] 2023-03-17 06:26:41,964 44k INFO Losses: [2.298135280609131, 2.493217945098877, 8.975370407104492, 17.591102600097656, 1.541397213935852], step: 186000, lr: 9.767725399135504e-05 2023-03-17 06:26:44,971 44k INFO Saving model and optimizer state at iteration 185 to ./logs\44k\G_186000.pth 2023-03-17 06:26:45,668 44k INFO Saving model and optimizer state at iteration 185 to ./logs\44k\D_186000.pth 2023-03-17 06:26:46,286 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_183000.pth 2023-03-17 06:26:46,315 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_183000.pth 2023-03-17 06:27:55,950 44k INFO Train Epoch: 185 [36%] 2023-03-17 06:27:55,950 44k INFO Losses: [2.340165138244629, 2.764162540435791, 11.030633926391602, 20.987056732177734, 1.5052953958511353], step: 186200, lr: 9.767725399135504e-05 2023-03-17 06:29:06,706 44k INFO Train Epoch: 185 [55%] 2023-03-17 06:29:06,706 44k INFO Losses: [2.6082980632781982, 2.3038980960845947, 6.9183573722839355, 15.762847900390625, 1.2604817152023315], step: 186400, lr: 9.767725399135504e-05 2023-03-17 06:30:17,112 44k INFO Train Epoch: 185 [75%] 2023-03-17 06:30:17,113 44k INFO Losses: [2.6558516025543213, 2.211677312850952, 11.19477367401123, 20.11333465576172, 1.0350037813186646], step: 186600, lr: 9.767725399135504e-05 2023-03-17 06:31:27,580 44k INFO Train Epoch: 185 [95%] 2023-03-17 06:31:27,581 44k INFO Losses: [2.2793142795562744, 2.5689051151275635, 9.03012466430664, 19.252979278564453, 1.4940913915634155], step: 186800, lr: 9.767725399135504e-05 2023-03-17 06:31:45,247 44k INFO ====> Epoch: 185, cost 368.61 s 2023-03-17 06:32:47,087 44k INFO Train Epoch: 186 [15%] 2023-03-17 06:32:47,087 44k INFO Losses: [2.802612781524658, 2.1898860931396484, 5.006014347076416, 16.965930938720703, 1.2188531160354614], step: 187000, lr: 9.766504433460612e-05 2023-03-17 06:32:50,055 44k INFO Saving model and optimizer state at iteration 186 to ./logs\44k\G_187000.pth 2023-03-17 06:32:50,755 44k INFO Saving model and optimizer state at iteration 186 to ./logs\44k\D_187000.pth 2023-03-17 06:32:51,377 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_184000.pth 2023-03-17 06:32:51,417 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_184000.pth 2023-03-17 06:34:01,050 44k INFO Train Epoch: 186 [35%] 2023-03-17 06:34:01,050 44k INFO Losses: [2.478965997695923, 2.233477830886841, 7.016595363616943, 16.691194534301758, 1.1462695598602295], step: 187200, lr: 9.766504433460612e-05 2023-03-17 06:35:11,698 44k INFO Train Epoch: 186 [54%] 2023-03-17 06:35:11,698 44k INFO Losses: [2.490816593170166, 2.43088960647583, 7.8164448738098145, 19.198184967041016, 1.4272069931030273], step: 187400, lr: 9.766504433460612e-05 2023-03-17 06:36:22,166 44k INFO Train Epoch: 186 [74%] 2023-03-17 06:36:22,166 44k INFO Losses: [2.676515579223633, 2.072221517562866, 12.414589881896973, 19.50327491760254, 0.9525483250617981], step: 187600, lr: 9.766504433460612e-05 2023-03-17 06:37:32,804 44k INFO Train Epoch: 186 [94%] 2023-03-17 06:37:32,805 44k INFO Losses: [2.3441643714904785, 2.432870864868164, 10.163408279418945, 20.300567626953125, 1.581961989402771], step: 187800, lr: 9.766504433460612e-05 2023-03-17 06:37:54,009 44k INFO ====> Epoch: 186, cost 368.76 s 2023-03-17 06:38:52,336 44k INFO Train Epoch: 187 [14%] 2023-03-17 06:38:52,337 44k INFO Losses: [2.7316460609436035, 2.2977445125579834, 10.076299667358398, 19.15890121459961, 1.4552936553955078], step: 188000, lr: 9.765283620406429e-05 2023-03-17 06:38:55,210 44k INFO Saving model and optimizer state at iteration 187 to ./logs\44k\G_188000.pth 2023-03-17 06:38:55,957 44k INFO Saving model and optimizer state at iteration 187 to ./logs\44k\D_188000.pth 2023-03-17 06:38:56,574 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_185000.pth 2023-03-17 06:38:56,603 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_185000.pth 2023-03-17 06:40:06,292 44k INFO Train Epoch: 187 [34%] 2023-03-17 06:40:06,292 44k INFO Losses: [2.2686688899993896, 2.7592356204986572, 12.649209976196289, 22.62937355041504, 1.4127696752548218], step: 188200, lr: 9.765283620406429e-05 2023-03-17 06:41:16,745 44k INFO Train Epoch: 187 [53%] 2023-03-17 06:41:16,745 44k INFO Losses: [2.3500289916992188, 2.45349383354187, 10.84105110168457, 20.309953689575195, 1.2709828615188599], step: 188400, lr: 9.765283620406429e-05 2023-03-17 06:42:27,312 44k INFO Train Epoch: 187 [73%] 2023-03-17 06:42:27,312 44k INFO Losses: [2.45880389213562, 2.228271007537842, 11.657224655151367, 18.25630760192871, 1.2011080980300903], step: 188600, lr: 9.765283620406429e-05 2023-03-17 06:43:37,916 44k INFO Train Epoch: 187 [93%] 2023-03-17 06:43:37,916 44k INFO Losses: [2.5868961811065674, 2.373466968536377, 9.688799858093262, 18.016136169433594, 1.4392507076263428], step: 188800, lr: 9.765283620406429e-05 2023-03-17 06:44:02,608 44k INFO ====> Epoch: 187, cost 368.60 s 2023-03-17 06:44:57,448 44k INFO Train Epoch: 188 [13%] 2023-03-17 06:44:57,448 44k INFO Losses: [2.559636116027832, 2.0897998809814453, 6.491578102111816, 16.60544204711914, 0.8049174547195435], step: 189000, lr: 9.764062959953878e-05 2023-03-17 06:45:00,295 44k INFO Saving model and optimizer state at iteration 188 to ./logs\44k\G_189000.pth 2023-03-17 06:45:01,037 44k INFO Saving model and optimizer state at iteration 188 to ./logs\44k\D_189000.pth 2023-03-17 06:45:01,644 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_186000.pth 2023-03-17 06:45:01,672 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_186000.pth 2023-03-17 06:46:11,409 44k INFO Train Epoch: 188 [33%] 2023-03-17 06:46:11,409 44k INFO Losses: [2.3913652896881104, 2.050966501235962, 5.90201997756958, 16.043058395385742, 1.1227123737335205], step: 189200, lr: 9.764062959953878e-05 2023-03-17 06:47:21,882 44k INFO Train Epoch: 188 [52%] 2023-03-17 06:47:21,882 44k INFO Losses: [2.3911638259887695, 2.4726133346557617, 8.065592765808105, 16.81393814086914, 1.2933887243270874], step: 189400, lr: 9.764062959953878e-05 2023-03-17 06:48:32,567 44k INFO Train Epoch: 188 [72%] 2023-03-17 06:48:32,568 44k INFO Losses: [2.621286153793335, 2.233551502227783, 12.70329761505127, 20.848466873168945, 1.3755011558532715], step: 189600, lr: 9.764062959953878e-05 2023-03-17 06:49:43,171 44k INFO Train Epoch: 188 [92%] 2023-03-17 06:49:43,171 44k INFO Losses: [2.3661816120147705, 2.2118165493011475, 14.377755165100098, 21.042137145996094, 1.4929710626602173], step: 189800, lr: 9.764062959953878e-05 2023-03-17 06:50:11,267 44k INFO ====> Epoch: 188, cost 368.66 s 2023-03-17 06:51:02,582 44k INFO Train Epoch: 189 [12%] 2023-03-17 06:51:02,582 44k INFO Losses: [2.362041711807251, 2.6351425647735596, 10.052505493164062, 20.552793502807617, 1.4402847290039062], step: 190000, lr: 9.762842452083883e-05 2023-03-17 06:51:05,449 44k INFO Saving model and optimizer state at iteration 189 to ./logs\44k\G_190000.pth 2023-03-17 06:51:06,152 44k INFO Saving model and optimizer state at iteration 189 to ./logs\44k\D_190000.pth 2023-03-17 06:51:06,768 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_187000.pth 2023-03-17 06:51:06,797 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_187000.pth 2023-03-17 06:52:16,428 44k INFO Train Epoch: 189 [32%] 2023-03-17 06:52:16,429 44k INFO Losses: [2.5325231552124023, 2.3161864280700684, 14.092612266540527, 22.625221252441406, 1.2393840551376343], step: 190200, lr: 9.762842452083883e-05 2023-03-17 06:53:26,859 44k INFO Train Epoch: 189 [51%] 2023-03-17 06:53:26,860 44k INFO Losses: [2.3373560905456543, 2.4058637619018555, 10.622371673583984, 22.430278778076172, 1.4406014680862427], step: 190400, lr: 9.762842452083883e-05 2023-03-17 06:54:37,610 44k INFO Train Epoch: 189 [71%] 2023-03-17 06:54:37,610 44k INFO Losses: [2.444945812225342, 2.0623867511749268, 13.815959930419922, 20.740846633911133, 1.1552411317825317], step: 190600, lr: 9.762842452083883e-05 2023-03-17 06:55:48,189 44k INFO Train Epoch: 189 [91%] 2023-03-17 06:55:48,189 44k INFO Losses: [2.19140625, 2.4188499450683594, 16.104698181152344, 24.440303802490234, 1.4200772047042847], step: 190800, lr: 9.762842452083883e-05 2023-03-17 06:56:19,782 44k INFO ====> Epoch: 189, cost 368.52 s 2023-03-17 06:57:07,692 44k INFO Train Epoch: 190 [11%] 2023-03-17 06:57:07,693 44k INFO Losses: [2.173816680908203, 2.771852493286133, 14.092453002929688, 23.1804141998291, 1.5688167810440063], step: 191000, lr: 9.761622096777372e-05 2023-03-17 06:57:10,575 44k INFO Saving model and optimizer state at iteration 190 to ./logs\44k\G_191000.pth 2023-03-17 06:57:11,265 44k INFO Saving model and optimizer state at iteration 190 to ./logs\44k\D_191000.pth 2023-03-17 06:57:11,890 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_188000.pth 2023-03-17 06:57:11,927 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_188000.pth 2023-03-17 06:58:21,568 44k INFO Train Epoch: 190 [31%] 2023-03-17 06:58:21,568 44k INFO Losses: [2.3383467197418213, 2.466707468032837, 12.093481063842773, 20.667787551879883, 1.2807127237319946], step: 191200, lr: 9.761622096777372e-05 2023-03-17 06:59:32,038 44k INFO Train Epoch: 190 [50%] 2023-03-17 06:59:32,038 44k INFO Losses: [2.332186698913574, 2.4081389904022217, 13.142730712890625, 21.372465133666992, 1.8509140014648438], step: 191400, lr: 9.761622096777372e-05 2023-03-17 07:00:42,723 44k INFO Train Epoch: 190 [70%] 2023-03-17 07:00:42,723 44k INFO Losses: [2.4161148071289062, 2.4276132583618164, 6.5814313888549805, 19.169797897338867, 1.3144010305404663], step: 191600, lr: 9.761622096777372e-05 2023-03-17 07:01:53,262 44k INFO Train Epoch: 190 [90%] 2023-03-17 07:01:53,263 44k INFO Losses: [2.4197394847869873, 2.47949481010437, 13.499716758728027, 20.02686309814453, 1.309718370437622], step: 191800, lr: 9.761622096777372e-05 2023-03-17 07:02:28,515 44k INFO ====> Epoch: 190, cost 368.73 s 2023-03-17 07:03:12,840 44k INFO Train Epoch: 191 [10%] 2023-03-17 07:03:12,841 44k INFO Losses: [2.419055938720703, 2.1491386890411377, 8.643927574157715, 22.066654205322266, 1.2546817064285278], step: 192000, lr: 9.760401894015275e-05 2023-03-17 07:03:15,767 44k INFO Saving model and optimizer state at iteration 191 to ./logs\44k\G_192000.pth 2023-03-17 07:03:16,465 44k INFO Saving model and optimizer state at iteration 191 to ./logs\44k\D_192000.pth 2023-03-17 07:03:17,082 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_189000.pth 2023-03-17 07:03:17,121 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_189000.pth 2023-03-17 07:04:26,815 44k INFO Train Epoch: 191 [30%] 2023-03-17 07:04:26,815 44k INFO Losses: [2.522284507751465, 2.420914649963379, 8.35836124420166, 17.994762420654297, 1.4808582067489624], step: 192200, lr: 9.760401894015275e-05 2023-03-17 07:05:37,195 44k INFO Train Epoch: 191 [50%] 2023-03-17 07:05:37,196 44k INFO Losses: [2.501030445098877, 2.3869645595550537, 4.897111892700195, 15.134775161743164, 1.3114947080612183], step: 192400, lr: 9.760401894015275e-05 2023-03-17 07:06:47,853 44k INFO Train Epoch: 191 [69%] 2023-03-17 07:06:47,854 44k INFO Losses: [2.4689719676971436, 2.2179458141326904, 11.068626403808594, 16.769256591796875, 0.885710597038269], step: 192600, lr: 9.760401894015275e-05 2023-03-17 07:07:58,450 44k INFO Train Epoch: 191 [89%] 2023-03-17 07:07:58,450 44k INFO Losses: [2.482023239135742, 2.2791199684143066, 12.277992248535156, 19.401098251342773, 1.2908927202224731], step: 192800, lr: 9.760401894015275e-05 2023-03-17 07:08:37,184 44k INFO ====> Epoch: 191, cost 368.67 s 2023-03-17 07:09:17,902 44k INFO Train Epoch: 192 [9%] 2023-03-17 07:09:17,903 44k INFO Losses: [2.509918451309204, 2.1493301391601562, 8.500226974487305, 21.132404327392578, 1.3271260261535645], step: 193000, lr: 9.759181843778522e-05 2023-03-17 07:09:20,837 44k INFO Saving model and optimizer state at iteration 192 to ./logs\44k\G_193000.pth 2023-03-17 07:09:21,507 44k INFO Saving model and optimizer state at iteration 192 to ./logs\44k\D_193000.pth 2023-03-17 07:09:22,120 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_190000.pth 2023-03-17 07:09:22,152 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_190000.pth 2023-03-17 07:10:31,968 44k INFO Train Epoch: 192 [29%] 2023-03-17 07:10:31,968 44k INFO Losses: [2.6725409030914307, 2.0015883445739746, 5.720909118652344, 12.595409393310547, 1.1172879934310913], step: 193200, lr: 9.759181843778522e-05 2023-03-17 07:11:42,207 44k INFO Train Epoch: 192 [49%] 2023-03-17 07:11:42,207 44k INFO Losses: [2.4538490772247314, 2.1961817741394043, 9.587454795837402, 19.899126052856445, 1.0069066286087036], step: 193400, lr: 9.759181843778522e-05 2023-03-17 07:12:52,993 44k INFO Train Epoch: 192 [68%] 2023-03-17 07:12:52,993 44k INFO Losses: [2.5121243000030518, 2.339268684387207, 8.999490737915039, 18.42527961730957, 1.138754963874817], step: 193600, lr: 9.759181843778522e-05 2023-03-17 07:14:03,538 44k INFO Train Epoch: 192 [88%] 2023-03-17 07:14:03,539 44k INFO Losses: [2.4080684185028076, 2.468012571334839, 10.129767417907715, 20.848876953125, 1.0220189094543457], step: 193800, lr: 9.759181843778522e-05 2023-03-17 07:14:45,764 44k INFO ====> Epoch: 192, cost 368.58 s 2023-03-17 07:15:22,985 44k INFO Train Epoch: 193 [8%] 2023-03-17 07:15:22,985 44k INFO Losses: [2.6884305477142334, 1.987067699432373, 13.821944236755371, 21.121673583984375, 1.5130189657211304], step: 194000, lr: 9.757961946048049e-05 2023-03-17 07:15:25,859 44k INFO Saving model and optimizer state at iteration 193 to ./logs\44k\G_194000.pth 2023-03-17 07:15:26,611 44k INFO Saving model and optimizer state at iteration 193 to ./logs\44k\D_194000.pth 2023-03-17 07:15:27,224 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_191000.pth 2023-03-17 07:15:27,251 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_191000.pth 2023-03-17 07:16:37,078 44k INFO Train Epoch: 193 [28%] 2023-03-17 07:16:37,078 44k INFO Losses: [2.4968924522399902, 2.2302958965301514, 8.11856460571289, 19.72678565979004, 0.9525139331817627], step: 194200, lr: 9.757961946048049e-05 2023-03-17 07:17:47,334 44k INFO Train Epoch: 193 [48%] 2023-03-17 07:17:47,334 44k INFO Losses: [2.4049925804138184, 2.2455251216888428, 14.541271209716797, 21.425344467163086, 1.2447319030761719], step: 194400, lr: 9.757961946048049e-05 2023-03-17 07:18:58,080 44k INFO Train Epoch: 193 [67%] 2023-03-17 07:18:58,080 44k INFO Losses: [2.2139761447906494, 2.564929723739624, 10.426074028015137, 18.470409393310547, 0.7297786474227905], step: 194600, lr: 9.757961946048049e-05 2023-03-17 07:20:08,714 44k INFO Train Epoch: 193 [87%] 2023-03-17 07:20:08,714 44k INFO Losses: [2.375831127166748, 2.434518575668335, 11.032670974731445, 20.0329532623291, 1.2262428998947144], step: 194800, lr: 9.757961946048049e-05 2023-03-17 07:20:54,440 44k INFO ====> Epoch: 193, cost 368.68 s 2023-03-17 07:21:28,167 44k INFO Train Epoch: 194 [7%] 2023-03-17 07:21:28,168 44k INFO Losses: [2.245487689971924, 2.733289957046509, 9.896684646606445, 18.713275909423828, 1.2765774726867676], step: 195000, lr: 9.756742200804793e-05 2023-03-17 07:21:31,087 44k INFO Saving model and optimizer state at iteration 194 to ./logs\44k\G_195000.pth 2023-03-17 07:21:31,735 44k INFO Saving model and optimizer state at iteration 194 to ./logs\44k\D_195000.pth 2023-03-17 07:21:32,347 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_192000.pth 2023-03-17 07:21:32,390 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_192000.pth 2023-03-17 07:22:42,402 44k INFO Train Epoch: 194 [27%] 2023-03-17 07:22:42,402 44k INFO Losses: [2.2106027603149414, 2.4568867683410645, 14.9183349609375, 20.781253814697266, 1.0375970602035522], step: 195200, lr: 9.756742200804793e-05 2023-03-17 07:23:52,687 44k INFO Train Epoch: 194 [47%] 2023-03-17 07:23:52,688 44k INFO Losses: [2.6136116981506348, 1.9807815551757812, 7.826221942901611, 16.94351577758789, 1.3534010648727417], step: 195400, lr: 9.756742200804793e-05 2023-03-17 07:25:03,361 44k INFO Train Epoch: 194 [66%] 2023-03-17 07:25:03,362 44k INFO Losses: [2.5425190925598145, 2.2178404331207275, 8.594837188720703, 19.733835220336914, 1.2090575695037842], step: 195600, lr: 9.756742200804793e-05 2023-03-17 07:26:13,966 44k INFO Train Epoch: 194 [86%] 2023-03-17 07:26:13,967 44k INFO Losses: [2.6793856620788574, 2.42828369140625, 6.436739444732666, 20.230318069458008, 1.157105565071106], step: 195800, lr: 9.756742200804793e-05 2023-03-17 07:27:03,141 44k INFO ====> Epoch: 194, cost 368.70 s 2023-03-17 07:27:33,378 44k INFO Train Epoch: 195 [6%] 2023-03-17 07:27:33,378 44k INFO Losses: [2.269909143447876, 2.2671263217926025, 11.53935432434082, 23.84532356262207, 1.2267518043518066], step: 196000, lr: 9.755522608029692e-05 2023-03-17 07:27:36,269 44k INFO Saving model and optimizer state at iteration 195 to ./logs\44k\G_196000.pth 2023-03-17 07:27:36,948 44k INFO Saving model and optimizer state at iteration 195 to ./logs\44k\D_196000.pth 2023-03-17 07:27:37,567 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_193000.pth 2023-03-17 07:27:37,607 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_193000.pth 2023-03-17 07:28:47,571 44k INFO Train Epoch: 195 [26%] 2023-03-17 07:28:47,571 44k INFO Losses: [2.1528842449188232, 2.4339652061462402, 14.198721885681152, 16.672754287719727, 1.1825978755950928], step: 196200, lr: 9.755522608029692e-05 2023-03-17 07:29:57,961 44k INFO Train Epoch: 195 [46%] 2023-03-17 07:29:57,962 44k INFO Losses: [2.5001473426818848, 2.3303732872009277, 8.270695686340332, 15.580238342285156, 1.0260194540023804], step: 196400, lr: 9.755522608029692e-05 2023-03-17 07:31:08,616 44k INFO Train Epoch: 195 [65%] 2023-03-17 07:31:08,616 44k INFO Losses: [2.5104470252990723, 2.3815371990203857, 10.734888076782227, 19.742795944213867, 1.4290317296981812], step: 196600, lr: 9.755522608029692e-05 2023-03-17 07:32:19,114 44k INFO Train Epoch: 195 [85%] 2023-03-17 07:32:19,114 44k INFO Losses: [2.400418281555176, 2.1132068634033203, 8.818339347839355, 21.684640884399414, 1.4527168273925781], step: 196800, lr: 9.755522608029692e-05 2023-03-17 07:33:11,873 44k INFO ====> Epoch: 195, cost 368.73 s 2023-03-17 07:33:38,506 44k INFO Train Epoch: 196 [5%] 2023-03-17 07:33:38,507 44k INFO Losses: [2.521491289138794, 1.995821237564087, 11.56248664855957, 18.267457962036133, 1.3335685729980469], step: 197000, lr: 9.754303167703689e-05 2023-03-17 07:33:41,411 44k INFO Saving model and optimizer state at iteration 196 to ./logs\44k\G_197000.pth 2023-03-17 07:33:42,083 44k INFO Saving model and optimizer state at iteration 196 to ./logs\44k\D_197000.pth 2023-03-17 07:33:42,692 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_194000.pth 2023-03-17 07:33:42,734 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_194000.pth 2023-03-17 07:34:52,763 44k INFO Train Epoch: 196 [25%] 2023-03-17 07:34:52,763 44k INFO Losses: [2.4066267013549805, 2.2568750381469727, 14.117249488830566, 19.48472023010254, 1.180380940437317], step: 197200, lr: 9.754303167703689e-05 2023-03-17 07:36:02,979 44k INFO Train Epoch: 196 [45%] 2023-03-17 07:36:02,979 44k INFO Losses: [2.3620829582214355, 2.546447277069092, 13.924671173095703, 21.940412521362305, 1.4166197776794434], step: 197400, lr: 9.754303167703689e-05 2023-03-17 07:37:13,755 44k INFO Train Epoch: 196 [64%] 2023-03-17 07:37:13,756 44k INFO Losses: [2.2715413570404053, 2.463909387588501, 12.243135452270508, 21.259061813354492, 0.879051923751831], step: 197600, lr: 9.754303167703689e-05 2023-03-17 07:38:24,251 44k INFO Train Epoch: 196 [84%] 2023-03-17 07:38:24,252 44k INFO Losses: [2.615898609161377, 1.9559142589569092, 10.186453819274902, 20.98653793334961, 1.2413794994354248], step: 197800, lr: 9.754303167703689e-05 2023-03-17 07:39:20,505 44k INFO ====> Epoch: 196, cost 368.63 s 2023-03-17 07:39:43,870 44k INFO Train Epoch: 197 [4%] 2023-03-17 07:39:43,870 44k INFO Losses: [2.6866211891174316, 2.6025044918060303, 11.010984420776367, 21.837356567382812, 0.745345413684845], step: 198000, lr: 9.753083879807726e-05 2023-03-17 07:39:46,790 44k INFO Saving model and optimizer state at iteration 197 to ./logs\44k\G_198000.pth 2023-03-17 07:39:47,452 44k INFO Saving model and optimizer state at iteration 197 to ./logs\44k\D_198000.pth 2023-03-17 07:39:48,071 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_195000.pth 2023-03-17 07:39:48,110 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_195000.pth 2023-03-17 07:40:58,182 44k INFO Train Epoch: 197 [24%] 2023-03-17 07:40:58,182 44k INFO Losses: [2.391636848449707, 2.62190318107605, 9.916550636291504, 20.411697387695312, 1.203183889389038], step: 198200, lr: 9.753083879807726e-05 2023-03-17 07:42:08,341 44k INFO Train Epoch: 197 [44%] 2023-03-17 07:42:08,342 44k INFO Losses: [2.3564553260803223, 2.2706246376037598, 10.35753059387207, 18.36391258239746, 0.8582611083984375], step: 198400, lr: 9.753083879807726e-05 2023-03-17 07:43:19,092 44k INFO Train Epoch: 197 [63%] 2023-03-17 07:43:19,093 44k INFO Losses: [2.606696128845215, 2.5457661151885986, 10.943402290344238, 21.549806594848633, 1.2153738737106323], step: 198600, lr: 9.753083879807726e-05 2023-03-17 07:44:29,523 44k INFO Train Epoch: 197 [83%] 2023-03-17 07:44:29,523 44k INFO Losses: [2.468635320663452, 2.530238151550293, 10.264094352722168, 22.62189483642578, 1.6973541975021362], step: 198800, lr: 9.753083879807726e-05 2023-03-17 07:45:29,304 44k INFO ====> Epoch: 197, cost 368.80 s 2023-03-17 07:45:48,960 44k INFO Train Epoch: 198 [3%] 2023-03-17 07:45:48,961 44k INFO Losses: [2.490971326828003, 2.3695285320281982, 10.087408065795898, 21.46518325805664, 1.3228527307510376], step: 199000, lr: 9.75186474432275e-05 2023-03-17 07:45:51,918 44k INFO Saving model and optimizer state at iteration 198 to ./logs\44k\G_199000.pth 2023-03-17 07:45:52,578 44k INFO Saving model and optimizer state at iteration 198 to ./logs\44k\D_199000.pth 2023-03-17 07:45:53,204 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_196000.pth 2023-03-17 07:45:53,245 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_196000.pth 2023-03-17 07:47:03,386 44k INFO Train Epoch: 198 [23%] 2023-03-17 07:47:03,387 44k INFO Losses: [2.543612480163574, 2.1091065406799316, 10.433424949645996, 18.554439544677734, 1.2829318046569824], step: 199200, lr: 9.75186474432275e-05 2023-03-17 07:48:13,460 44k INFO Train Epoch: 198 [43%] 2023-03-17 07:48:13,460 44k INFO Losses: [2.5808582305908203, 2.269426107406616, 12.214500427246094, 16.3416690826416, 0.9476476907730103], step: 199400, lr: 9.75186474432275e-05 2023-03-17 07:49:24,114 44k INFO Train Epoch: 198 [62%] 2023-03-17 07:49:24,114 44k INFO Losses: [2.497689962387085, 2.3896048069000244, 9.18937873840332, 18.91930389404297, 1.4161746501922607], step: 199600, lr: 9.75186474432275e-05 2023-03-17 07:50:34,580 44k INFO Train Epoch: 198 [82%] 2023-03-17 07:50:34,581 44k INFO Losses: [2.652256488800049, 2.25356125831604, 9.679055213928223, 20.268939971923828, 1.0278950929641724], step: 199800, lr: 9.75186474432275e-05 2023-03-17 07:51:37,941 44k INFO ====> Epoch: 198, cost 368.64 s 2023-03-17 07:51:54,090 44k INFO Train Epoch: 199 [2%] 2023-03-17 07:51:54,091 44k INFO Losses: [2.5567898750305176, 2.2425920963287354, 12.388092041015625, 20.925437927246094, 1.3486777544021606], step: 200000, lr: 9.750645761229709e-05 2023-03-17 07:51:56,989 44k INFO Saving model and optimizer state at iteration 199 to ./logs\44k\G_200000.pth 2023-03-17 07:51:57,685 44k INFO Saving model and optimizer state at iteration 199 to ./logs\44k\D_200000.pth 2023-03-17 07:51:58,308 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_197000.pth 2023-03-17 07:51:58,344 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_197000.pth 2023-03-17 07:53:08,579 44k INFO Train Epoch: 199 [22%] 2023-03-17 07:53:08,579 44k INFO Losses: [2.3256468772888184, 2.4527928829193115, 9.323698043823242, 20.54470443725586, 1.3112566471099854], step: 200200, lr: 9.750645761229709e-05 2023-03-17 07:54:18,609 44k INFO Train Epoch: 199 [42%] 2023-03-17 07:54:18,609 44k INFO Losses: [2.387432336807251, 2.323570728302002, 10.303509712219238, 19.777013778686523, 1.1829798221588135], step: 200400, lr: 9.750645761229709e-05 2023-03-17 07:55:29,319 44k INFO Train Epoch: 199 [61%] 2023-03-17 07:55:29,319 44k INFO Losses: [2.663313150405884, 2.0788774490356445, 11.775979042053223, 19.769941329956055, 1.6042873859405518], step: 200600, lr: 9.750645761229709e-05 2023-03-17 07:56:39,640 44k INFO Train Epoch: 199 [81%] 2023-03-17 07:56:39,641 44k INFO Losses: [2.3345391750335693, 2.4832265377044678, 14.731494903564453, 22.506362915039062, 1.381921410560608], step: 200800, lr: 9.750645761229709e-05 2023-03-17 07:57:46,626 44k INFO ====> Epoch: 199, cost 368.68 s 2023-03-17 07:57:59,227 44k INFO Train Epoch: 200 [1%] 2023-03-17 07:57:59,228 44k INFO Losses: [2.3614094257354736, 2.3057971000671387, 10.41849136352539, 22.107257843017578, 1.1478921175003052], step: 201000, lr: 9.749426930509556e-05 2023-03-17 07:58:02,158 44k INFO Saving model and optimizer state at iteration 200 to ./logs\44k\G_201000.pth 2023-03-17 07:58:02,854 44k INFO Saving model and optimizer state at iteration 200 to ./logs\44k\D_201000.pth 2023-03-17 07:58:03,468 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_198000.pth 2023-03-17 07:58:03,498 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_198000.pth 2023-03-17 07:59:13,749 44k INFO Train Epoch: 200 [21%] 2023-03-17 07:59:13,749 44k INFO Losses: [2.542792320251465, 2.2958056926727295, 9.301848411560059, 21.45038414001465, 1.5517481565475464], step: 201200, lr: 9.749426930509556e-05 2023-03-17 08:00:23,628 44k INFO Train Epoch: 200 [41%] 2023-03-17 08:00:23,629 44k INFO Losses: [2.2559118270874023, 2.194035053253174, 10.89915943145752, 21.17070198059082, 1.4436256885528564], step: 201400, lr: 9.749426930509556e-05 2023-03-17 08:01:34,427 44k INFO Train Epoch: 200 [60%] 2023-03-17 08:01:34,427 44k INFO Losses: [2.6961865425109863, 2.086853504180908, 6.615417957305908, 15.648521423339844, 1.0582547187805176], step: 201600, lr: 9.749426930509556e-05 2023-03-17 08:02:44,792 44k INFO Train Epoch: 200 [80%] 2023-03-17 08:02:44,793 44k INFO Losses: [2.6256954669952393, 2.106956720352173, 8.926183700561523, 19.340145111083984, 1.0301949977874756], step: 201800, lr: 9.749426930509556e-05 2023-03-17 08:03:55,158 44k INFO ====> Epoch: 200, cost 368.53 s 2023-03-18 02:08:28,829 44k INFO {'train': {'log_interval': 200, 'eval_interval': 1000, 'seed': 1234, 'epochs': 260, 'learning_rate': 0.0001, 'betas': [0.8, 0.99], 'eps': 1e-09, 'batch_size': 6, 'fp16_run': False, 'lr_decay': 0.999875, 'segment_size': 10240, 'init_lr_ratio': 1, 'warmup_epochs': 0, 'c_mel': 45, 'c_kl': 1.0, 'use_sr': True, 'max_speclen': 512, 'port': '8001', 'keep_ckpts': 3}, 'data': {'training_files': 'filelists/train.txt', 'validation_files': 'filelists/val.txt', 'max_wav_value': 32768.0, 'sampling_rate': 44100, 'filter_length': 2048, 'hop_length': 512, 'win_length': 2048, 'n_mel_channels': 80, 'mel_fmin': 0.0, 'mel_fmax': 22050}, 'model': {'inter_channels': 192, 'hidden_channels': 192, 'filter_channels': 768, 'n_heads': 2, 'n_layers': 6, 'kernel_size': 3, 'p_dropout': 0.1, 'resblock': '1', 'resblock_kernel_sizes': [3, 7, 11], 'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'upsample_rates': [8, 8, 2, 2, 2], 'upsample_initial_channel': 512, 'upsample_kernel_sizes': [16, 16, 4, 4, 4], 'n_layers_q': 3, 'use_spectral_norm': False, 'gin_channels': 256, 'ssl_dim': 256, 'n_speakers': 200}, 'spk': {'tubaki': 0}, 'model_dir': './logs\\44k'} 2023-03-18 02:08:28,877 44k WARNING git hash values are different. 
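The config dump above marks a restart of the same run on 2023-03-18, and the G_201000.pth / D_201000.pth pair is reloaded just below before epoch 200 is repeated. A minimal resume sketch, assuming the common VITS-style checkpoint dict with 'model', 'optimizer', 'learning_rate' and 'iteration' keys (suggested by the "(iteration N)" suffix in the log); the helper and the net_g / optim_g names are illustrative placeholders, not the project's own loader:

    # Minimal resume sketch (illustrative, not the project's loader). Assumes the
    # checkpoint file is a dict with 'model', 'optimizer', 'learning_rate' and
    # 'iteration' keys, as the "(iteration N)" suffix in the log suggests.
    import torch

    def load_checkpoint(path, model, optimizer=None):
        ckpt = torch.load(path, map_location="cpu")
        model.load_state_dict(ckpt["model"], strict=False)
        if optimizer is not None and "optimizer" in ckpt:
            optimizer.load_state_dict(ckpt["optimizer"])
        return ckpt.get("learning_rate"), ckpt.get("iteration", 1)

    # Hypothetical usage with placeholder net_g / optim_g objects:
    # lr, start_epoch = load_checkpoint("./logs/44k/G_201000.pth", net_g, optim_g)
    # Training resumes at that epoch, which is why "Train Epoch: 200" appears again below.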
cea6df30(saved) != fd4d47fd(current) 2023-03-18 02:08:32,207 44k INFO Loaded checkpoint './logs\44k\G_201000.pth' (iteration 200) 2023-03-18 02:08:32,729 44k INFO Loaded checkpoint './logs\44k\D_201000.pth' (iteration 200) 2023-03-18 02:08:55,583 44k INFO Train Epoch: 200 [1%] 2023-03-18 02:08:55,583 44k INFO Losses: [2.35174822807312, 2.4575273990631104, 10.795761108398438, 19.91721534729004, 1.0882163047790527], step: 201000, lr: 9.748208252143241e-05 2023-03-18 02:08:59,778 44k INFO Saving model and optimizer state at iteration 200 to ./logs\44k\G_201000.pth 2023-03-18 02:09:00,533 44k INFO Saving model and optimizer state at iteration 200 to ./logs\44k\D_201000.pth 2023-03-18 02:10:22,103 44k INFO Train Epoch: 200 [21%] 2023-03-18 02:10:22,103 44k INFO Losses: [2.4596312046051025, 2.3539931774139404, 8.716135025024414, 19.043046951293945, 1.3567341566085815], step: 201200, lr: 9.748208252143241e-05 2023-03-18 02:11:40,461 44k INFO Train Epoch: 200 [41%] 2023-03-18 02:11:40,462 44k INFO Losses: [2.324265241622925, 2.665736675262451, 12.620558738708496, 19.888641357421875, 1.2449069023132324], step: 201400, lr: 9.748208252143241e-05 2023-03-18 02:12:58,544 44k INFO Train Epoch: 200 [60%] 2023-03-18 02:12:58,544 44k INFO Losses: [2.6077566146850586, 2.1092827320098877, 9.321895599365234, 15.344806671142578, 1.1959127187728882], step: 201600, lr: 9.748208252143241e-05 2023-03-18 02:14:18,417 44k INFO Train Epoch: 200 [80%] 2023-03-18 02:14:18,418 44k INFO Losses: [2.2806196212768555, 2.3022544384002686, 9.38998794555664, 15.876194953918457, 1.0729796886444092], step: 201800, lr: 9.748208252143241e-05 2023-03-18 02:15:40,796 44k INFO ====> Epoch: 200, cost 431.97 s 2023-03-18 02:15:52,108 44k INFO Train Epoch: 201 [0%] 2023-03-18 02:15:52,109 44k INFO Losses: [2.3936142921447754, 2.1073062419891357, 8.370532035827637, 19.265199661254883, 1.2496737241744995], step: 202000, lr: 9.746989726111722e-05 2023-03-18 02:15:55,664 44k INFO Saving model and optimizer state at iteration 201 to ./logs\44k\G_202000.pth 2023-03-18 02:15:56,408 44k INFO Saving model and optimizer state at iteration 201 to ./logs\44k\D_202000.pth 2023-03-18 02:15:57,077 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_199000.pth 2023-03-18 02:15:57,079 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_199000.pth 2023-03-18 02:17:15,485 44k INFO Train Epoch: 201 [20%] 2023-03-18 02:17:15,485 44k INFO Losses: [2.2829904556274414, 2.6659085750579834, 12.898998260498047, 21.984134674072266, 1.342050552368164], step: 202200, lr: 9.746989726111722e-05 2023-03-18 02:18:29,225 44k INFO Train Epoch: 201 [40%] 2023-03-18 02:18:29,225 44k INFO Losses: [2.4566853046417236, 2.335975408554077, 8.42502498626709, 22.443729400634766, 1.2382349967956543], step: 202400, lr: 9.746989726111722e-05 2023-03-18 02:19:44,913 44k INFO Train Epoch: 201 [59%] 2023-03-18 02:19:44,913 44k INFO Losses: [2.4932570457458496, 2.33976149559021, 14.196723937988281, 21.48053741455078, 1.0807658433914185], step: 202600, lr: 9.746989726111722e-05 2023-03-18 02:21:00,669 44k INFO Train Epoch: 201 [79%] 2023-03-18 02:21:00,670 44k INFO Losses: [2.530733108520508, 2.2739899158477783, 6.498410224914551, 17.12137222290039, 1.3033149242401123], step: 202800, lr: 9.746989726111722e-05 2023-03-18 02:22:15,466 44k INFO Train Epoch: 201 [99%] 2023-03-18 02:22:15,466 44k INFO Losses: [2.4283056259155273, 2.28011417388916, 8.826956748962402, 18.514310836791992, 1.3406033515930176], step: 203000, lr: 9.746989726111722e-05 2023-03-18 02:22:18,721 44k INFO Saving model and optimizer state at iteration 201 to ./logs\44k\G_203000.pth 2023-03-18 02:22:19,508 44k INFO Saving model and optimizer state at iteration 201 to ./logs\44k\D_203000.pth 2023-03-18 02:22:20,191 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_200000.pth 2023-03-18 02:22:20,192 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_200000.pth 2023-03-18 02:22:23,655 44k INFO ====> Epoch: 201, cost 402.86 s 2023-03-18 03:42:11,372 44k INFO {'train': {'log_interval': 200, 'eval_interval': 1000, 'seed': 1234, 'epochs': 260, 'learning_rate': 0.0001, 'betas': [0.8, 0.99], 'eps': 1e-09, 'batch_size': 6, 'fp16_run': False, 'lr_decay': 0.999875, 'segment_size': 10240, 'init_lr_ratio': 1, 'warmup_epochs': 0, 'c_mel': 45, 'c_kl': 1.0, 'use_sr': True, 'max_speclen': 512, 'port': '8001', 'keep_ckpts': 3}, 'data': {'training_files': 'filelists/train.txt', 'validation_files': 'filelists/val.txt', 'max_wav_value': 32768.0, 'sampling_rate': 44100, 'filter_length': 2048, 'hop_length': 512, 'win_length': 2048, 'n_mel_channels': 80, 'mel_fmin': 0.0, 'mel_fmax': 22050}, 'model': {'inter_channels': 192, 'hidden_channels': 192, 'filter_channels': 768, 'n_heads': 2, 'n_layers': 6, 'kernel_size': 3, 'p_dropout': 0.1, 'resblock': '1', 'resblock_kernel_sizes': [3, 7, 11], 'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'upsample_rates': [8, 8, 2, 2, 2], 'upsample_initial_channel': 512, 'upsample_kernel_sizes': [16, 16, 4, 4, 4], 'n_layers_q': 3, 'use_spectral_norm': False, 'gin_channels': 256, 'ssl_dim': 256, 'n_speakers': 200}, 'spk': {'tubaki': 0}, 'model_dir': './logs\\44k'} 2023-03-18 03:42:11,396 44k WARNING git hash values are different. 
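Each time a new G_*/D_* pair is written, the pair from 3000 steps earlier is deleted (for example, saving G_169000/D_169000 removes G_166000/D_166000), which matches 'keep_ckpts': 3 and 'eval_interval': 1000 in the config dumps: only the three most recent numbered pairs are retained. A minimal rotation sketch under that reading; the clean_checkpoints name, glob pattern and printed message are illustrative, not the project's code:

    # Minimal "keep the last N checkpoint pairs" sketch (illustrative). Sorts the
    # numbered G_*.pth / D_*.pth files by step and deletes everything but the
    # newest keep_ckpts files of each prefix.
    import glob
    import os
    import re

    def clean_checkpoints(ckpt_dir="./logs/44k", keep_ckpts=3):
        for prefix in ("G_", "D_"):
            numbered = []
            for path in glob.glob(os.path.join(ckpt_dir, prefix + "*.pth")):
                m = re.search(r"_(\d+)\.pth$", path)
                if m:
                    numbered.append((int(m.group(1)), path))
            # delete all but the keep_ckpts highest-numbered checkpoints
            for _, path in sorted(numbered)[:-keep_ckpts]:
                print(".. Free up space by deleting ckpt " + path)  # mirrors the log message
                os.remove(path)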
cea6df30(saved) != fd4d47fd(current) 2023-03-18 03:42:13,320 44k INFO Loaded checkpoint './logs\44k\G_203000.pth' (iteration 201) 2023-03-18 03:42:13,660 44k INFO Loaded checkpoint './logs\44k\D_203000.pth' (iteration 201) 2023-03-18 03:42:27,753 44k INFO Train Epoch: 201 [0%] 2023-03-18 03:42:27,754 44k INFO Losses: [2.500056028366089, 2.357483386993408, 9.080839157104492, 17.467153549194336, 1.3941106796264648], step: 202000, lr: 9.745771352395957e-05 2023-03-18 03:42:31,579 44k INFO Saving model and optimizer state at iteration 201 to ./logs\44k\G_202000.pth 2023-03-18 03:42:32,275 44k INFO Saving model and optimizer state at iteration 201 to ./logs\44k\D_202000.pth 2023-03-18 03:43:49,867 44k INFO Train Epoch: 201 [20%] 2023-03-18 03:43:49,868 44k INFO Losses: [2.3223695755004883, 2.279139995574951, 10.99791431427002, 22.654396057128906, 1.3422850370407104], step: 202200, lr: 9.745771352395957e-05 2023-03-18 03:45:05,318 44k INFO Train Epoch: 201 [40%] 2023-03-18 03:45:05,319 44k INFO Losses: [2.344046115875244, 2.4708199501037598, 12.527447700500488, 24.802968978881836, 1.5284405946731567], step: 202400, lr: 9.745771352395957e-05 2023-03-18 03:46:20,049 44k INFO Train Epoch: 201 [59%] 2023-03-18 03:46:20,050 44k INFO Losses: [2.4475018978118896, 2.557140588760376, 14.923267364501953, 21.09678840637207, 1.0659692287445068], step: 202600, lr: 9.745771352395957e-05 2023-03-18 03:47:34,582 44k INFO Train Epoch: 201 [79%] 2023-03-18 03:47:34,582 44k INFO Losses: [2.4773941040039062, 2.2113289833068848, 10.065826416015625, 18.896623611450195, 1.1151163578033447], step: 202800, lr: 9.745771352395957e-05 2023-03-18 03:48:49,305 44k INFO Train Epoch: 201 [99%] 2023-03-18 03:48:49,306 44k INFO Losses: [2.8069186210632324, 2.064441680908203, 10.483488082885742, 18.412944793701172, 1.3975129127502441], step: 203000, lr: 9.745771352395957e-05 2023-03-18 03:48:52,581 44k INFO Saving model and optimizer state at iteration 201 to ./logs\44k\G_203000.pth 2023-03-18 03:48:53,341 44k INFO Saving model and optimizer state at iteration 201 to ./logs\44k\D_203000.pth 2023-03-18 03:49:00,857 44k INFO ====> Epoch: 201, cost 409.48 s 2023-03-18 03:50:19,392 44k INFO Train Epoch: 202 [19%] 2023-03-18 03:50:19,393 44k INFO Losses: [2.192999839782715, 2.823641300201416, 11.243049621582031, 24.099546432495117, 0.9467582702636719], step: 203200, lr: 9.744553130976908e-05 2023-03-18 03:51:31,229 44k INFO Train Epoch: 202 [39%] 2023-03-18 03:51:31,230 44k INFO Losses: [2.643831253051758, 2.1545746326446533, 9.000089645385742, 19.94818878173828, 0.9069694876670837], step: 203400, lr: 9.744553130976908e-05 2023-03-18 03:52:44,356 44k INFO Train Epoch: 202 [58%] 2023-03-18 03:52:44,356 44k INFO Losses: [2.3210833072662354, 2.5831658840179443, 12.062694549560547, 22.378620147705078, 1.4858194589614868], step: 203600, lr: 9.744553130976908e-05 2023-03-18 03:53:56,844 44k INFO Train Epoch: 202 [78%] 2023-03-18 03:53:56,844 44k INFO Losses: [2.3375160694122314, 2.4011173248291016, 12.502228736877441, 20.588909149169922, 1.01901376247406], step: 203800, lr: 9.744553130976908e-05 2023-03-18 03:55:09,492 44k INFO Train Epoch: 202 [98%] 2023-03-18 03:55:09,492 44k INFO Losses: [2.4101240634918213, 2.6301002502441406, 12.123987197875977, 20.02549934387207, 1.2679301500320435], step: 204000, lr: 9.744553130976908e-05 2023-03-18 03:55:12,838 44k INFO Saving model and optimizer state at iteration 202 to ./logs\44k\G_204000.pth 2023-03-18 03:55:13,598 44k INFO Saving model and optimizer state at iteration 202 to 
./logs\44k\D_204000.pth 2023-03-18 03:55:14,398 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_201000.pth 2023-03-18 03:55:14,428 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_201000.pth 2023-03-18 03:55:21,579 44k INFO ====> Epoch: 202, cost 380.72 s 2023-03-18 03:56:36,797 44k INFO Train Epoch: 203 [18%] 2023-03-18 03:56:36,797 44k INFO Losses: [2.4066038131713867, 2.3037185668945312, 10.928319931030273, 20.519004821777344, 1.1219862699508667], step: 204200, lr: 9.743335061835535e-05 2023-03-18 03:57:48,806 44k INFO Train Epoch: 203 [38%] 2023-03-18 03:57:48,806 44k INFO Losses: [2.4065802097320557, 2.4168381690979004, 11.987675666809082, 19.631547927856445, 1.1523900032043457], step: 204400, lr: 9.743335061835535e-05 2023-03-18 03:59:01,875 44k INFO Train Epoch: 203 [57%] 2023-03-18 03:59:01,876 44k INFO Losses: [2.254254102706909, 2.431393623352051, 6.666529178619385, 16.503076553344727, 1.3995609283447266], step: 204600, lr: 9.743335061835535e-05 2023-03-18 04:00:14,463 44k INFO Train Epoch: 203 [77%] 2023-03-18 04:00:14,463 44k INFO Losses: [2.2534079551696777, 2.3378100395202637, 11.984272003173828, 19.66811752319336, 1.3988476991653442], step: 204800, lr: 9.743335061835535e-05 2023-03-18 04:01:27,108 44k INFO Train Epoch: 203 [97%] 2023-03-18 04:01:27,108 44k INFO Losses: [2.3037526607513428, 2.607193946838379, 9.345645904541016, 17.25993537902832, 1.1880347728729248], step: 205000, lr: 9.743335061835535e-05 2023-03-18 04:01:30,359 44k INFO Saving model and optimizer state at iteration 203 to ./logs\44k\G_205000.pth 2023-03-18 04:01:31,023 44k INFO Saving model and optimizer state at iteration 203 to ./logs\44k\D_205000.pth 2023-03-18 04:01:31,637 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_202000.pth 2023-03-18 04:01:31,679 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_202000.pth 2023-03-18 04:01:42,446 44k INFO ====> Epoch: 203, cost 380.87 s 2023-03-18 04:02:54,162 44k INFO Train Epoch: 204 [17%] 2023-03-18 04:02:54,163 44k INFO Losses: [2.1742677688598633, 2.491110324859619, 16.924602508544922, 23.259178161621094, 1.5864157676696777], step: 205200, lr: 9.742117144952805e-05 2023-03-18 04:04:06,157 44k INFO Train Epoch: 204 [37%] 2023-03-18 04:04:06,158 44k INFO Losses: [2.395693063735962, 2.5534133911132812, 9.567673683166504, 18.844717025756836, 1.07719886302948], step: 205400, lr: 9.742117144952805e-05 2023-03-18 04:05:19,165 44k INFO Train Epoch: 204 [56%] 2023-03-18 04:05:19,166 44k INFO Losses: [2.454774856567383, 2.4365999698638916, 10.100836753845215, 17.58771324157715, 1.026187777519226], step: 205600, lr: 9.742117144952805e-05 2023-03-18 04:06:31,690 44k INFO Train Epoch: 204 [76%] 2023-03-18 04:06:31,691 44k INFO Losses: [1.9061849117279053, 2.7376716136932373, 16.641199111938477, 22.299861907958984, 0.9743372201919556], step: 205800, lr: 9.742117144952805e-05 2023-03-18 04:07:44,376 44k INFO Train Epoch: 204 [96%] 2023-03-18 04:07:44,377 44k INFO Losses: [2.4892451763153076, 2.1245110034942627, 10.117720603942871, 17.60140037536621, 1.1997077465057373], step: 206000, lr: 9.742117144952805e-05 2023-03-18 04:07:47,632 44k INFO Saving model and optimizer state at iteration 204 to ./logs\44k\G_206000.pth 2023-03-18 04:07:48,342 44k INFO Saving model and optimizer state at iteration 204 to ./logs\44k\D_206000.pth 2023-03-18 04:07:48,994 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_203000.pth 2023-03-18 04:07:49,023 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_203000.pth 2023-03-18 04:08:03,542 44k INFO ====> Epoch: 204, cost 381.10 s 2023-03-18 04:09:11,464 44k INFO Train Epoch: 205 [16%] 2023-03-18 04:09:11,465 44k INFO Losses: [2.5086612701416016, 2.4617528915405273, 8.720775604248047, 17.597858428955078, 1.0623533725738525], step: 206200, lr: 9.740899380309685e-05 2023-03-18 04:10:23,505 44k INFO Train Epoch: 205 [36%] 2023-03-18 04:10:23,506 44k INFO Losses: [2.330876111984253, 2.354099988937378, 12.038134574890137, 22.553768157958984, 0.812515377998352], step: 206400, lr: 9.740899380309685e-05 2023-03-18 04:11:36,533 44k INFO Train Epoch: 205 [55%] 2023-03-18 04:11:36,534 44k INFO Losses: [2.5232763290405273, 2.3757224082946777, 8.051939010620117, 20.44830894470215, 1.3462501764297485], step: 206600, lr: 9.740899380309685e-05 2023-03-18 04:12:49,245 44k INFO Train Epoch: 205 [75%] 2023-03-18 04:12:49,246 44k INFO Losses: [2.4915404319763184, 2.383742332458496, 12.113738059997559, 23.692371368408203, 1.1959831714630127], step: 206800, lr: 9.740899380309685e-05 2023-03-18 04:14:01,960 44k INFO Train Epoch: 205 [95%] 2023-03-18 04:14:01,961 44k INFO Losses: [2.6379175186157227, 2.4986324310302734, 6.700481414794922, 17.428272247314453, 1.0300406217575073], step: 207000, lr: 9.740899380309685e-05 2023-03-18 04:14:05,092 44k INFO Saving model and optimizer state at iteration 205 to ./logs\44k\G_207000.pth 2023-03-18 04:14:05,841 44k INFO Saving model and optimizer state at iteration 205 to ./logs\44k\D_207000.pth 2023-03-18 04:14:06,504 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_204000.pth 2023-03-18 04:14:06,542 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_204000.pth 2023-03-18 04:14:24,620 44k INFO ====> Epoch: 205, cost 381.08 s 2023-03-18 04:15:29,033 44k INFO Train Epoch: 206 [15%] 2023-03-18 04:15:29,033 44k INFO Losses: [2.6133882999420166, 2.286388874053955, 7.517882823944092, 18.506744384765625, 1.3963871002197266], step: 207200, lr: 9.739681767887146e-05 2023-03-18 04:16:41,077 44k INFO Train Epoch: 206 [35%] 2023-03-18 04:16:41,077 44k INFO Losses: [2.270415782928467, 2.5976624488830566, 12.650636672973633, 22.8465518951416, 1.3679596185684204], step: 207400, lr: 9.739681767887146e-05 2023-03-18 04:17:53,885 44k INFO Train Epoch: 206 [54%] 2023-03-18 04:17:53,885 44k INFO Losses: [2.2992308139801025, 2.6822454929351807, 11.062071800231934, 23.020925521850586, 1.3925575017929077], step: 207600, lr: 9.739681767887146e-05 2023-03-18 04:19:06,549 44k INFO Train Epoch: 206 [74%] 2023-03-18 04:19:06,549 44k INFO Losses: [2.5305185317993164, 2.1054418087005615, 6.719656467437744, 16.785015106201172, 1.4449454545974731], step: 207800, lr: 9.739681767887146e-05 2023-03-18 04:20:19,241 44k INFO Train Epoch: 206 [94%] 2023-03-18 04:20:19,242 44k INFO Losses: [2.346329689025879, 2.5411758422851562, 10.774502754211426, 21.15964126586914, 0.9065169095993042], step: 208000, lr: 9.739681767887146e-05 2023-03-18 04:20:22,551 44k INFO Saving model and optimizer state at iteration 206 to ./logs\44k\G_208000.pth 2023-03-18 04:20:23,222 44k INFO Saving model and optimizer state at iteration 206 to ./logs\44k\D_208000.pth 2023-03-18 04:20:23,855 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_205000.pth 2023-03-18 04:20:23,888 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_205000.pth 2023-03-18 04:20:45,485 44k INFO ====> Epoch: 206, cost 380.87 s 2023-03-18 04:21:46,288 44k INFO Train Epoch: 207 [14%] 2023-03-18 04:21:46,289 44k INFO Losses: [2.4565656185150146, 2.3636999130249023, 11.674576759338379, 21.122556686401367, 1.1596930027008057], step: 208200, lr: 9.73846430766616e-05 2023-03-18 04:22:58,420 44k INFO Train Epoch: 207 [34%] 2023-03-18 04:22:58,421 44k INFO Losses: [2.5215349197387695, 2.2624473571777344, 9.478468894958496, 19.22565269470215, 1.2081434726715088], step: 208400, lr: 9.73846430766616e-05 2023-03-18 04:24:11,182 44k INFO Train Epoch: 207 [53%] 2023-03-18 04:24:11,182 44k INFO Losses: [2.6013290882110596, 2.0532431602478027, 5.465509414672852, 17.21483039855957, 0.8709335327148438], step: 208600, lr: 9.73846430766616e-05 2023-03-18 04:25:23,884 44k INFO Train Epoch: 207 [73%] 2023-03-18 04:25:23,884 44k INFO Losses: [2.508497714996338, 2.343064546585083, 11.80150032043457, 18.09079933166504, 1.3186862468719482], step: 208800, lr: 9.73846430766616e-05 2023-03-18 04:26:36,791 44k INFO Train Epoch: 207 [93%] 2023-03-18 04:26:36,792 44k INFO Losses: [2.502056121826172, 2.340752124786377, 10.039271354675293, 17.84286880493164, 1.1040756702423096], step: 209000, lr: 9.73846430766616e-05 2023-03-18 04:26:40,059 44k INFO Saving model and optimizer state at iteration 207 to ./logs\44k\G_209000.pth 2023-03-18 04:26:40,721 44k INFO Saving model and optimizer state at iteration 207 to ./logs\44k\D_209000.pth 2023-03-18 04:26:41,326 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_206000.pth 2023-03-18 04:26:41,357 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_206000.pth 2023-03-18 04:27:06,527 44k INFO ====> Epoch: 207, cost 381.04 s 2023-03-18 04:28:03,787 44k INFO Train Epoch: 208 [13%] 2023-03-18 04:28:03,787 44k INFO Losses: [2.408684015274048, 2.4910149574279785, 10.778939247131348, 17.81688117980957, 1.2155324220657349], step: 209200, lr: 9.7372469996277e-05 2023-03-18 04:29:15,890 44k INFO Train Epoch: 208 [33%] 2023-03-18 04:29:15,890 44k INFO Losses: [2.476435661315918, 2.5824460983276367, 8.418243408203125, 19.46013832092285, 1.443676471710205], step: 209400, lr: 9.7372469996277e-05 2023-03-18 04:30:28,463 44k INFO Train Epoch: 208 [52%] 2023-03-18 04:30:28,463 44k INFO Losses: [2.475449562072754, 2.2651803493499756, 8.007904052734375, 16.845149993896484, 1.0906004905700684], step: 209600, lr: 9.7372469996277e-05 2023-03-18 04:31:41,375 44k INFO Train Epoch: 208 [72%] 2023-03-18 04:31:41,376 44k INFO Losses: [2.5828819274902344, 2.046959161758423, 7.3825178146362305, 20.044408798217773, 1.2765185832977295], step: 209800, lr: 9.7372469996277e-05 2023-03-18 04:32:54,189 44k INFO Train Epoch: 208 [92%] 2023-03-18 04:32:54,190 44k INFO Losses: [2.4445502758026123, 2.074138641357422, 11.518509864807129, 20.77442169189453, 1.4972076416015625], step: 210000, lr: 9.7372469996277e-05 2023-03-18 04:32:57,422 44k INFO Saving model and optimizer state at iteration 208 to ./logs\44k\G_210000.pth 2023-03-18 04:32:58,193 44k INFO Saving model and optimizer state at iteration 208 to ./logs\44k\D_210000.pth 2023-03-18 04:32:58,826 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_207000.pth 2023-03-18 04:32:58,854 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_207000.pth 2023-03-18 04:33:27,642 44k INFO ====> Epoch: 208, cost 381.12 s 2023-03-18 04:34:21,201 44k INFO Train Epoch: 209 [12%] 2023-03-18 04:34:21,202 44k INFO Losses: [2.636244058609009, 2.650416135787964, 10.180407524108887, 22.000852584838867, 1.2808634042739868], step: 210200, lr: 9.736029843752747e-05 2023-03-18 04:35:33,191 44k INFO Train Epoch: 209 [32%] 2023-03-18 04:35:33,191 44k INFO Losses: [2.5330915451049805, 2.4409401416778564, 11.102428436279297, 21.991979598999023, 1.1252168416976929], step: 210400, lr: 9.736029843752747e-05 2023-03-18 04:36:45,842 44k INFO Train Epoch: 209 [51%] 2023-03-18 04:36:45,842 44k INFO Losses: [2.424656391143799, 2.328678846359253, 10.673663139343262, 22.717185974121094, 1.6736221313476562], step: 210600, lr: 9.736029843752747e-05 2023-03-18 04:37:58,817 44k INFO Train Epoch: 209 [71%] 2023-03-18 04:37:58,818 44k INFO Losses: [2.4248194694519043, 2.439309597015381, 12.745267868041992, 19.107084274291992, 1.2828978300094604], step: 210800, lr: 9.736029843752747e-05 2023-03-18 04:39:11,568 44k INFO Train Epoch: 209 [91%] 2023-03-18 04:39:11,568 44k INFO Losses: [2.264394760131836, 2.3608453273773193, 11.196949005126953, 22.525163650512695, 1.454255223274231], step: 211000, lr: 9.736029843752747e-05 2023-03-18 04:39:14,779 44k INFO Saving model and optimizer state at iteration 209 to ./logs\44k\G_211000.pth 2023-03-18 04:39:15,496 44k INFO Saving model and optimizer state at iteration 209 to ./logs\44k\D_211000.pth 2023-03-18 04:39:16,155 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_208000.pth 2023-03-18 04:39:16,193 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_208000.pth 2023-03-18 04:39:48,669 44k INFO ====> Epoch: 209, cost 381.03 s 2023-03-18 04:40:38,744 44k INFO Train Epoch: 210 [11%] 2023-03-18 04:40:38,745 44k INFO Losses: [2.3172075748443604, 2.141535997390747, 11.554900169372559, 20.619487762451172, 1.1187455654144287], step: 211200, lr: 9.734812840022278e-05 2023-03-18 04:41:50,621 44k INFO Train Epoch: 210 [31%] 2023-03-18 04:41:50,622 44k INFO Losses: [2.4044885635375977, 2.299437999725342, 9.418902397155762, 20.92523956298828, 1.3952805995941162], step: 211400, lr: 9.734812840022278e-05 2023-03-18 04:43:03,157 44k INFO Train Epoch: 210 [50%] 2023-03-18 04:43:03,157 44k INFO Losses: [2.449093818664551, 2.164060592651367, 14.668787002563477, 22.528772354125977, 1.143173336982727], step: 211600, lr: 9.734812840022278e-05 2023-03-18 04:44:16,068 44k INFO Train Epoch: 210 [70%] 2023-03-18 04:44:16,069 44k INFO Losses: [2.2295117378234863, 2.648979902267456, 11.794519424438477, 21.998340606689453, 1.3253049850463867], step: 211800, lr: 9.734812840022278e-05 2023-03-18 04:45:28,761 44k INFO Train Epoch: 210 [90%] 2023-03-18 04:45:28,761 44k INFO Losses: [2.549865245819092, 2.159679412841797, 13.532463073730469, 18.892852783203125, 1.4023420810699463], step: 212000, lr: 9.734812840022278e-05 2023-03-18 04:45:31,993 44k INFO Saving model and optimizer state at iteration 210 to ./logs\44k\G_212000.pth 2023-03-18 04:45:32,677 44k INFO Saving model and optimizer state at iteration 210 to ./logs\44k\D_212000.pth 2023-03-18 04:45:33,309 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_209000.pth 2023-03-18 04:45:33,341 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_209000.pth 2023-03-18 04:46:09,422 44k INFO ====> Epoch: 210, cost 380.75 s 2023-03-18 04:46:56,020 44k INFO Train Epoch: 211 [10%] 2023-03-18 04:46:56,021 44k INFO Losses: [2.3948841094970703, 2.60760498046875, 11.197531700134277, 16.447399139404297, 1.1700938940048218], step: 212200, lr: 9.733595988417275e-05 2023-03-18 04:48:08,073 44k INFO Train Epoch: 211 [30%] 2023-03-18 04:48:08,073 44k INFO Losses: [2.5156075954437256, 2.1601710319519043, 11.572199821472168, 20.281068801879883, 1.470662236213684], step: 212400, lr: 9.733595988417275e-05 2023-03-18 04:49:20,709 44k INFO Train Epoch: 211 [50%] 2023-03-18 04:49:20,709 44k INFO Losses: [2.576066732406616, 2.3867249488830566, 12.562385559082031, 22.084884643554688, 1.1158519983291626], step: 212600, lr: 9.733595988417275e-05 2023-03-18 04:50:33,560 44k INFO Train Epoch: 211 [69%] 2023-03-18 04:50:33,560 44k INFO Losses: [2.5379698276519775, 2.539827823638916, 8.31782341003418, 16.4893741607666, 0.9179621338844299], step: 212800, lr: 9.733595988417275e-05 2023-03-18 04:51:46,300 44k INFO Train Epoch: 211 [89%] 2023-03-18 04:51:46,301 44k INFO Losses: [2.5727360248565674, 2.0680618286132812, 8.142523765563965, 15.704126358032227, 1.201551079750061], step: 213000, lr: 9.733595988417275e-05 2023-03-18 04:51:49,535 44k INFO Saving model and optimizer state at iteration 211 to ./logs\44k\G_213000.pth 2023-03-18 04:51:50,202 44k INFO Saving model and optimizer state at iteration 211 to ./logs\44k\D_213000.pth 2023-03-18 04:51:50,823 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_210000.pth 2023-03-18 04:51:50,869 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_210000.pth 2023-03-18 04:52:30,668 44k INFO ====> Epoch: 211, cost 381.25 s 2023-03-18 04:53:13,443 44k INFO Train Epoch: 212 [9%] 2023-03-18 04:53:13,443 44k INFO Losses: [2.1886892318725586, 2.6773650646209717, 13.79354190826416, 22.5670223236084, 1.1626116037368774], step: 213200, lr: 9.732379288918723e-05 2023-03-18 04:54:25,706 44k INFO Train Epoch: 212 [29%] 2023-03-18 04:54:25,706 44k INFO Losses: [2.458017349243164, 2.49751877784729, 9.820363998413086, 17.487607955932617, 1.208638072013855], step: 213400, lr: 9.732379288918723e-05 2023-03-18 04:55:38,129 44k INFO Train Epoch: 212 [49%] 2023-03-18 04:55:38,130 44k INFO Losses: [2.358203411102295, 2.440837860107422, 11.930474281311035, 20.663816452026367, 1.4411741495132446], step: 213600, lr: 9.732379288918723e-05 2023-03-18 04:56:51,166 44k INFO Train Epoch: 212 [68%] 2023-03-18 04:56:51,167 44k INFO Losses: [2.5951459407806396, 2.260058879852295, 8.277860641479492, 16.88381576538086, 0.9772056937217712], step: 213800, lr: 9.732379288918723e-05 2023-03-18 04:58:03,946 44k INFO Train Epoch: 212 [88%] 2023-03-18 04:58:03,947 44k INFO Losses: [2.7684216499328613, 1.9909836053848267, 8.293387413024902, 14.175174713134766, 0.6331143379211426], step: 214000, lr: 9.732379288918723e-05 2023-03-18 04:58:07,164 44k INFO Saving model and optimizer state at iteration 212 to ./logs\44k\G_214000.pth 2023-03-18 04:58:07,886 44k INFO Saving model and optimizer state at iteration 212 to ./logs\44k\D_214000.pth 2023-03-18 04:58:08,539 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_211000.pth 2023-03-18 04:58:08,567 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_211000.pth 2023-03-18 04:58:51,947 44k INFO ====> Epoch: 212, cost 381.28 s 2023-03-18 04:59:31,216 44k INFO Train Epoch: 213 [8%] 2023-03-18 04:59:31,217 44k INFO Losses: [2.3342080116271973, 2.286310911178589, 11.164224624633789, 16.579256057739258, 1.1848328113555908], step: 214200, lr: 9.731162741507607e-05 2023-03-18 05:00:43,597 44k INFO Train Epoch: 213 [28%] 2023-03-18 05:00:43,597 44k INFO Losses: [2.5821752548217773, 2.20208477973938, 9.516801834106445, 19.25157356262207, 1.2589298486709595], step: 214400, lr: 9.731162741507607e-05 2023-03-18 05:01:55,960 44k INFO Train Epoch: 213 [48%] 2023-03-18 05:01:55,961 44k INFO Losses: [2.3431315422058105, 2.239010810852051, 13.188133239746094, 19.26343536376953, 1.4238587617874146], step: 214600, lr: 9.731162741507607e-05 2023-03-18 05:03:08,959 44k INFO Train Epoch: 213 [67%] 2023-03-18 05:03:08,960 44k INFO Losses: [2.5447258949279785, 2.1126294136047363, 9.794934272766113, 16.65797996520996, 1.1757471561431885], step: 214800, lr: 9.731162741507607e-05 2023-03-18 05:04:21,846 44k INFO Train Epoch: 213 [87%] 2023-03-18 05:04:21,846 44k INFO Losses: [2.2998266220092773, 2.4195752143859863, 10.544708251953125, 17.63193130493164, 1.3054320812225342], step: 215000, lr: 9.731162741507607e-05 2023-03-18 05:04:25,078 44k INFO Saving model and optimizer state at iteration 213 to ./logs\44k\G_215000.pth 2023-03-18 05:04:25,806 44k INFO Saving model and optimizer state at iteration 213 to ./logs\44k\D_215000.pth 2023-03-18 05:04:26,442 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_212000.pth 2023-03-18 05:04:26,473 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_212000.pth 2023-03-18 05:05:13,424 44k INFO ====> Epoch: 213, cost 381.48 s 2023-03-18 05:05:49,025 44k INFO Train Epoch: 214 [7%] 2023-03-18 05:05:49,026 44k INFO Losses: [2.5586633682250977, 1.9420918226242065, 11.018527030944824, 19.588359832763672, 0.8160967230796814], step: 215200, lr: 9.729946346164919e-05 2023-03-18 05:07:01,455 44k INFO Train Epoch: 214 [27%] 2023-03-18 05:07:01,455 44k INFO Losses: [2.654465675354004, 2.3712239265441895, 9.213323593139648, 16.987133026123047, 1.4045078754425049], step: 215400, lr: 9.729946346164919e-05 2023-03-18 05:08:13,874 44k INFO Train Epoch: 214 [47%] 2023-03-18 05:08:13,875 44k INFO Losses: [2.609724521636963, 2.2535369396209717, 9.657922744750977, 19.610313415527344, 1.0664334297180176], step: 215600, lr: 9.729946346164919e-05 2023-03-18 05:09:26,889 44k INFO Train Epoch: 214 [66%] 2023-03-18 05:09:26,890 44k INFO Losses: [2.4549920558929443, 2.1758179664611816, 9.546875953674316, 14.18474006652832, 1.4781588315963745], step: 215800, lr: 9.729946346164919e-05 2023-03-18 05:10:39,691 44k INFO Train Epoch: 214 [86%] 2023-03-18 05:10:39,692 44k INFO Losses: [2.6810693740844727, 2.0281684398651123, 7.835864067077637, 18.193531036376953, 0.947202205657959], step: 216000, lr: 9.729946346164919e-05 2023-03-18 05:10:42,954 44k INFO Saving model and optimizer state at iteration 214 to ./logs\44k\G_216000.pth 2023-03-18 05:10:43,680 44k INFO Saving model and optimizer state at iteration 214 to ./logs\44k\D_216000.pth 2023-03-18 05:10:44,318 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_213000.pth 2023-03-18 05:10:44,349 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_213000.pth 2023-03-18 05:11:35,028 44k INFO ====> Epoch: 214, cost 381.60 s 2023-03-18 05:12:06,876 44k INFO Train Epoch: 215 [6%] 2023-03-18 05:12:06,877 44k INFO Losses: [2.327275514602661, 2.2914764881134033, 12.687413215637207, 22.67828369140625, 1.3212991952896118], step: 216200, lr: 9.728730102871649e-05 2023-03-18 05:13:19,277 44k INFO Train Epoch: 215 [26%] 2023-03-18 05:13:19,277 44k INFO Losses: [2.4929964542388916, 2.2721335887908936, 14.425219535827637, 22.1375732421875, 1.3343838453292847], step: 216400, lr: 9.728730102871649e-05 2023-03-18 05:14:31,733 44k INFO Train Epoch: 215 [46%] 2023-03-18 05:14:31,734 44k INFO Losses: [2.674147605895996, 1.985717535018921, 9.921048164367676, 17.161544799804688, 1.1089959144592285], step: 216600, lr: 9.728730102871649e-05 2023-03-18 05:15:44,754 44k INFO Train Epoch: 215 [65%] 2023-03-18 05:15:44,755 44k INFO Losses: [2.3677427768707275, 2.227358341217041, 14.646700859069824, 23.259654998779297, 1.6954147815704346], step: 216800, lr: 9.728730102871649e-05 2023-03-18 05:16:57,421 44k INFO Train Epoch: 215 [85%] 2023-03-18 05:16:57,421 44k INFO Losses: [2.568272829055786, 2.227808713912964, 11.758462905883789, 21.828994750976562, 1.5984773635864258], step: 217000, lr: 9.728730102871649e-05 2023-03-18 05:17:00,592 44k INFO Saving model and optimizer state at iteration 215 to ./logs\44k\G_217000.pth 2023-03-18 05:17:01,285 44k INFO Saving model and optimizer state at iteration 215 to ./logs\44k\D_217000.pth 2023-03-18 05:17:01,885 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_214000.pth 2023-03-18 05:17:01,913 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_214000.pth 2023-03-18 05:17:56,178 44k INFO ====> Epoch: 215, cost 381.15 s 2023-03-18 05:18:24,477 44k INFO Train Epoch: 216 [5%] 2023-03-18 05:18:24,477 44k INFO Losses: [2.4856698513031006, 2.3112778663635254, 12.227679252624512, 20.097515106201172, 1.2972851991653442], step: 217200, lr: 9.727514011608789e-05 2023-03-18 05:19:36,873 44k INFO Train Epoch: 216 [25%] 2023-03-18 05:19:36,874 44k INFO Losses: [2.634704828262329, 2.0742194652557373, 11.289141654968262, 19.35813331604004, 1.3262357711791992], step: 217400, lr: 9.727514011608789e-05 2023-03-18 05:20:49,178 44k INFO Train Epoch: 216 [45%] 2023-03-18 05:20:49,178 44k INFO Losses: [2.4562885761260986, 2.3182320594787598, 7.525690078735352, 20.46562957763672, 1.3982411623001099], step: 217600, lr: 9.727514011608789e-05 2023-03-18 05:22:02,195 44k INFO Train Epoch: 216 [64%] 2023-03-18 05:22:02,195 44k INFO Losses: [2.220682382583618, 2.65370774269104, 15.899622917175293, 20.33005714416504, 0.7966172695159912], step: 217800, lr: 9.727514011608789e-05 2023-03-18 05:23:14,931 44k INFO Train Epoch: 216 [84%] 2023-03-18 05:23:14,931 44k INFO Losses: [2.494612216949463, 2.1929359436035156, 9.820818901062012, 20.183414459228516, 1.2915910482406616], step: 218000, lr: 9.727514011608789e-05 2023-03-18 05:23:18,165 44k INFO Saving model and optimizer state at iteration 216 to ./logs\44k\G_218000.pth 2023-03-18 05:23:18,876 44k INFO Saving model and optimizer state at iteration 216 to ./logs\44k\D_218000.pth 2023-03-18 05:23:19,516 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_215000.pth 2023-03-18 05:23:19,546 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_215000.pth 2023-03-18 05:24:17,457 44k INFO ====> Epoch: 216, cost 381.28 s 2023-03-18 05:24:42,047 44k INFO Train Epoch: 217 [4%] 2023-03-18 05:24:42,048 44k INFO Losses: [2.6268444061279297, 2.063795328140259, 12.639397621154785, 21.179443359375, 1.4780073165893555], step: 218200, lr: 9.726298072357337e-05 2023-03-18 05:25:54,557 44k INFO Train Epoch: 217 [24%] 2023-03-18 05:25:54,557 44k INFO Losses: [2.534679651260376, 2.2865090370178223, 9.492945671081543, 22.130077362060547, 1.574913501739502], step: 218400, lr: 9.726298072357337e-05 2023-03-18 05:27:06,887 44k INFO Train Epoch: 217 [44%] 2023-03-18 05:27:06,887 44k INFO Losses: [2.560112714767456, 2.2680742740631104, 8.896263122558594, 18.814821243286133, 1.1838154792785645], step: 218600, lr: 9.726298072357337e-05 2023-03-18 05:28:19,899 44k INFO Train Epoch: 217 [63%] 2023-03-18 05:28:19,900 44k INFO Losses: [2.8910012245178223, 2.2731335163116455, 11.053946495056152, 21.715473175048828, 1.460179090499878], step: 218800, lr: 9.726298072357337e-05 2023-03-18 05:29:32,491 44k INFO Train Epoch: 217 [83%] 2023-03-18 05:29:32,492 44k INFO Losses: [2.60426926612854, 2.278202772140503, 10.180852890014648, 22.983610153198242, 1.4442329406738281], step: 219000, lr: 9.726298072357337e-05 2023-03-18 05:29:35,758 44k INFO Saving model and optimizer state at iteration 217 to ./logs\44k\G_219000.pth 2023-03-18 05:29:36,440 44k INFO Saving model and optimizer state at iteration 217 to ./logs\44k\D_219000.pth 2023-03-18 05:29:37,076 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_216000.pth 2023-03-18 05:29:37,109 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_216000.pth 2023-03-18 05:30:38,753 44k INFO ====> Epoch: 217, cost 381.30 s 2023-03-18 05:30:59,701 44k INFO Train Epoch: 218 [3%] 2023-03-18 05:30:59,702 44k INFO Losses: [2.3926026821136475, 2.419358253479004, 12.079309463500977, 24.38916015625, 1.308463454246521], step: 219200, lr: 9.725082285098293e-05 2023-03-18 05:32:12,368 44k INFO Train Epoch: 218 [23%] 2023-03-18 05:32:12,368 44k INFO Losses: [2.5518064498901367, 2.2122879028320312, 13.672916412353516, 21.079816818237305, 1.0859448909759521], step: 219400, lr: 9.725082285098293e-05 2023-03-18 05:33:24,638 44k INFO Train Epoch: 218 [43%] 2023-03-18 05:33:24,638 44k INFO Losses: [2.2628049850463867, 2.49741792678833, 10.620892524719238, 18.723352432250977, 1.2868678569793701], step: 219600, lr: 9.725082285098293e-05 2023-03-18 05:34:37,540 44k INFO Train Epoch: 218 [62%] 2023-03-18 05:34:37,541 44k INFO Losses: [2.612077236175537, 2.148192882537842, 7.711968898773193, 18.851593017578125, 1.2841447591781616], step: 219800, lr: 9.725082285098293e-05 2023-03-18 05:35:50,162 44k INFO Train Epoch: 218 [82%] 2023-03-18 05:35:50,162 44k INFO Losses: [2.391085386276245, 2.3776352405548096, 11.399181365966797, 23.93021583557129, 1.0840990543365479], step: 220000, lr: 9.725082285098293e-05 2023-03-18 05:35:53,382 44k INFO Saving model and optimizer state at iteration 218 to ./logs\44k\G_220000.pth 2023-03-18 05:35:54,096 44k INFO Saving model and optimizer state at iteration 218 to ./logs\44k\D_220000.pth 2023-03-18 05:35:54,716 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_217000.pth 2023-03-18 05:35:54,749 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_217000.pth 2023-03-18 05:37:00,022 44k INFO ====> Epoch: 218, cost 381.27 s 2023-03-18 05:37:17,311 44k INFO Train Epoch: 219 [2%] 2023-03-18 05:37:17,312 44k INFO Losses: [2.403543472290039, 2.371424674987793, 12.194214820861816, 21.80869483947754, 0.8829078078269958], step: 220200, lr: 9.723866649812655e-05 2023-03-18 05:38:30,207 44k INFO Train Epoch: 219 [22%] 2023-03-18 05:38:30,207 44k INFO Losses: [2.355329751968384, 2.0819854736328125, 6.5865607261657715, 18.38526153564453, 1.2593601942062378], step: 220400, lr: 9.723866649812655e-05 2023-03-18 05:39:42,400 44k INFO Train Epoch: 219 [42%] 2023-03-18 05:39:42,400 44k INFO Losses: [2.520430564880371, 2.378749132156372, 11.162091255187988, 19.74947166442871, 1.2289481163024902], step: 220600, lr: 9.723866649812655e-05 2023-03-18 05:40:55,413 44k INFO Train Epoch: 219 [61%] 2023-03-18 05:40:55,414 44k INFO Losses: [2.340360641479492, 2.5172085762023926, 16.890743255615234, 22.503585815429688, 1.1097873449325562], step: 220800, lr: 9.723866649812655e-05 2023-03-18 05:42:08,007 44k INFO Train Epoch: 219 [81%] 2023-03-18 05:42:08,007 44k INFO Losses: [2.493055820465088, 2.3282458782196045, 13.617796897888184, 23.34046173095703, 1.2379008531570435], step: 221000, lr: 9.723866649812655e-05 2023-03-18 05:42:11,215 44k INFO Saving model and optimizer state at iteration 219 to ./logs\44k\G_221000.pth 2023-03-18 05:42:11,995 44k INFO Saving model and optimizer state at iteration 219 to ./logs\44k\D_221000.pth 2023-03-18 05:42:12,613 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_218000.pth 2023-03-18 05:42:12,654 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_218000.pth 2023-03-18 05:43:21,631 44k INFO ====> Epoch: 219, cost 381.61 s 2023-03-18 05:43:35,316 44k INFO Train Epoch: 220 [1%] 2023-03-18 05:43:35,317 44k INFO Losses: [2.387524366378784, 2.437701463699341, 12.207493782043457, 22.566143035888672, 1.1749467849731445], step: 221200, lr: 9.722651166481428e-05 2023-03-18 05:44:48,220 44k INFO Train Epoch: 220 [21%] 2023-03-18 05:44:48,221 44k INFO Losses: [2.2932724952697754, 2.431903839111328, 10.016468048095703, 19.569625854492188, 1.0502119064331055], step: 221400, lr: 9.722651166481428e-05 2023-03-18 05:46:00,295 44k INFO Train Epoch: 220 [41%] 2023-03-18 05:46:00,296 44k INFO Losses: [2.4389796257019043, 2.204777240753174, 14.506704330444336, 21.90412712097168, 1.2666748762130737], step: 221600, lr: 9.722651166481428e-05 2023-03-18 05:47:13,340 44k INFO Train Epoch: 220 [60%] 2023-03-18 05:47:13,340 44k INFO Losses: [2.5212695598602295, 2.2725446224212646, 8.619426727294922, 16.134347915649414, 0.9705204367637634], step: 221800, lr: 9.722651166481428e-05 2023-03-18 05:48:26,003 44k INFO Train Epoch: 220 [80%] 2023-03-18 05:48:26,004 44k INFO Losses: [2.568837881088257, 2.1717371940612793, 8.948863983154297, 18.880861282348633, 1.2637176513671875], step: 222000, lr: 9.722651166481428e-05 2023-03-18 05:48:29,224 44k INFO Saving model and optimizer state at iteration 220 to ./logs\44k\G_222000.pth 2023-03-18 05:48:29,933 44k INFO Saving model and optimizer state at iteration 220 to ./logs\44k\D_222000.pth 2023-03-18 05:48:30,580 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_219000.pth 2023-03-18 05:48:30,608 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_219000.pth 2023-03-18 05:49:43,079 44k INFO ====> Epoch: 220, cost 381.45 s 2023-03-18 05:49:53,030 44k INFO Train Epoch: 221 [0%] 2023-03-18 05:49:53,030 44k INFO Losses: [2.4202260971069336, 2.398791790008545, 11.515371322631836, 20.603174209594727, 1.3581533432006836], step: 222200, lr: 9.721435835085619e-05 2023-03-18 05:51:05,794 44k INFO Train Epoch: 221 [20%] 2023-03-18 05:51:05,794 44k INFO Losses: [2.2747716903686523, 2.3000996112823486, 11.438432693481445, 23.659587860107422, 1.1151525974273682], step: 222400, lr: 9.721435835085619e-05 2023-03-18 05:52:17,880 44k INFO Train Epoch: 221 [40%] 2023-03-18 05:52:17,880 44k INFO Losses: [2.496519088745117, 2.341933488845825, 7.352211952209473, 21.583324432373047, 1.4678338766098022], step: 222600, lr: 9.721435835085619e-05 2023-03-18 05:53:31,127 44k INFO Train Epoch: 221 [59%] 2023-03-18 05:53:31,127 44k INFO Losses: [2.461646318435669, 2.2936148643493652, 13.20803451538086, 20.986675262451172, 1.0845364332199097], step: 222800, lr: 9.721435835085619e-05 2023-03-18 05:54:43,772 44k INFO Train Epoch: 221 [79%] 2023-03-18 05:54:43,772 44k INFO Losses: [2.580874443054199, 2.0497238636016846, 9.0148344039917, 19.99745750427246, 1.4250463247299194], step: 223000, lr: 9.721435835085619e-05 2023-03-18 05:54:47,001 44k INFO Saving model and optimizer state at iteration 221 to ./logs\44k\G_223000.pth 2023-03-18 05:54:47,733 44k INFO Saving model and optimizer state at iteration 221 to ./logs\44k\D_223000.pth 2023-03-18 05:54:48,366 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_220000.pth 2023-03-18 05:54:48,393 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_220000.pth 2023-03-18 05:56:01,367 44k INFO Train Epoch: 221 [99%] 2023-03-18 05:56:01,368 44k INFO Losses: [2.3243489265441895, 2.536977767944336, 9.261144638061523, 17.441036224365234, 1.3319064378738403], step: 223200, lr: 9.721435835085619e-05 2023-03-18 05:56:05,000 44k INFO ====> Epoch: 221, cost 381.92 s 2023-03-18 05:57:24,297 44k INFO Train Epoch: 222 [19%] 2023-03-18 05:57:24,298 44k INFO Losses: [2.6007330417633057, 2.40103816986084, 7.840771198272705, 20.59065818786621, 0.9085755944252014], step: 223400, lr: 9.720220655606233e-05 2023-03-18 05:58:36,573 44k INFO Train Epoch: 222 [39%] 2023-03-18 05:58:36,573 44k INFO Losses: [2.4472241401672363, 2.4469683170318604, 12.100788116455078, 22.529420852661133, 1.5928704738616943], step: 223600, lr: 9.720220655606233e-05 2023-03-18 05:59:50,000 44k INFO Train Epoch: 222 [58%] 2023-03-18 05:59:50,001 44k INFO Losses: [2.356863021850586, 2.3006067276000977, 12.22581958770752, 19.945112228393555, 1.2894288301467896], step: 223800, lr: 9.720220655606233e-05 2023-03-18 06:01:02,830 44k INFO Train Epoch: 222 [78%] 2023-03-18 06:01:02,831 44k INFO Losses: [2.52915096282959, 2.282731294631958, 11.11389446258545, 19.627582550048828, 1.0925637483596802], step: 224000, lr: 9.720220655606233e-05 2023-03-18 06:01:06,061 44k INFO Saving model and optimizer state at iteration 222 to ./logs\44k\G_224000.pth 2023-03-18 06:01:06,742 44k INFO Saving model and optimizer state at iteration 222 to ./logs\44k\D_224000.pth 2023-03-18 06:01:07,423 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_221000.pth 2023-03-18 06:01:07,456 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_221000.pth 2023-03-18 06:02:20,338 44k INFO Train Epoch: 222 [98%] 2023-03-18 06:02:20,338 44k INFO Losses: [2.411238193511963, 2.3841309547424316, 11.704781532287598, 21.42701530456543, 0.9884217977523804], step: 224200, lr: 9.720220655606233e-05 2023-03-18 06:02:27,620 44k INFO ====> Epoch: 222, cost 382.62 s 2023-03-18 06:03:43,261 44k INFO Train Epoch: 223 [18%] 2023-03-18 06:03:43,262 44k INFO Losses: [2.5856757164001465, 2.0573618412017822, 6.632025718688965, 17.750986099243164, 1.1298551559448242], step: 224400, lr: 9.719005628024282e-05 2023-03-18 06:04:55,429 44k INFO Train Epoch: 223 [38%] 2023-03-18 06:04:55,430 44k INFO Losses: [2.367194414138794, 2.473968505859375, 13.571794509887695, 20.124879837036133, 1.0460540056228638], step: 224600, lr: 9.719005628024282e-05 2023-03-18 06:06:08,705 44k INFO Train Epoch: 223 [57%] 2023-03-18 06:06:08,705 44k INFO Losses: [2.358733892440796, 2.395885467529297, 12.675276756286621, 22.553518295288086, 1.2363201379776], step: 224800, lr: 9.719005628024282e-05 2023-03-18 06:07:21,583 44k INFO Train Epoch: 223 [77%] 2023-03-18 06:07:21,584 44k INFO Losses: [2.4190967082977295, 2.5775771141052246, 11.332749366760254, 18.395462036132812, 0.9208194017410278], step: 225000, lr: 9.719005628024282e-05 2023-03-18 06:07:24,757 44k INFO Saving model and optimizer state at iteration 223 to ./logs\44k\G_225000.pth 2023-03-18 06:07:25,440 44k INFO Saving model and optimizer state at iteration 223 to ./logs\44k\D_225000.pth 2023-03-18 06:07:26,107 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_222000.pth 2023-03-18 06:07:26,144 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_222000.pth 2023-03-18 06:08:39,007 44k INFO Train Epoch: 223 [97%] 2023-03-18 06:08:39,007 44k INFO Losses: [2.547549247741699, 2.3223726749420166, 11.120349884033203, 21.522775650024414, 1.6341392993927002], step: 225200, lr: 9.719005628024282e-05 2023-03-18 06:08:49,992 44k INFO ====> Epoch: 223, cost 382.37 s 2023-03-18 06:10:02,056 44k INFO Train Epoch: 224 [17%] 2023-03-18 06:10:02,056 44k INFO Losses: [2.3883562088012695, 2.261385440826416, 8.929614067077637, 18.981046676635742, 1.2017027139663696], step: 225400, lr: 9.717790752320778e-05 2023-03-18 06:11:14,403 44k INFO Train Epoch: 224 [37%] 2023-03-18 06:11:14,403 44k INFO Losses: [2.289330005645752, 2.427328109741211, 12.972779273986816, 20.903234481811523, 1.0745795965194702], step: 225600, lr: 9.717790752320778e-05 2023-03-18 06:12:27,676 44k INFO Train Epoch: 224 [56%] 2023-03-18 06:12:27,676 44k INFO Losses: [2.602627754211426, 2.1036081314086914, 10.881694793701172, 20.733182907104492, 1.421685814857483], step: 225800, lr: 9.717790752320778e-05 2023-03-18 06:13:40,480 44k INFO Train Epoch: 224 [76%] 2023-03-18 06:13:40,481 44k INFO Losses: [2.326810359954834, 2.290469169616699, 11.782238960266113, 19.755136489868164, 1.151900291442871], step: 226000, lr: 9.717790752320778e-05 2023-03-18 06:13:43,736 44k INFO Saving model and optimizer state at iteration 224 to ./logs\44k\G_226000.pth 2023-03-18 06:13:44,455 44k INFO Saving model and optimizer state at iteration 224 to ./logs\44k\D_226000.pth 2023-03-18 06:13:45,099 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_223000.pth 2023-03-18 06:13:45,136 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_223000.pth 2023-03-18 06:14:57,892 44k INFO Train Epoch: 224 [96%] 2023-03-18 06:14:57,892 44k INFO Losses: [2.490096092224121, 2.3928942680358887, 14.092375755310059, 20.369577407836914, 1.273528814315796], step: 226200, lr: 9.717790752320778e-05 2023-03-18 06:15:12,605 44k INFO ====> Epoch: 224, cost 382.61 s 2023-03-18 06:16:20,919 44k INFO Train Epoch: 225 [16%] 2023-03-18 06:16:20,920 44k INFO Losses: [2.508359432220459, 2.373239040374756, 6.943535327911377, 20.027219772338867, 1.3973078727722168], step: 226400, lr: 9.716576028476738e-05 2023-03-18 06:17:33,236 44k INFO Train Epoch: 225 [36%] 2023-03-18 06:17:33,237 44k INFO Losses: [2.3215293884277344, 2.5094165802001953, 15.179780960083008, 24.433067321777344, 1.7787549495697021], step: 226600, lr: 9.716576028476738e-05 2023-03-18 06:18:46,550 44k INFO Train Epoch: 225 [55%] 2023-03-18 06:18:46,551 44k INFO Losses: [2.481260299682617, 2.4659712314605713, 11.181989669799805, 20.416969299316406, 1.1892521381378174], step: 226800, lr: 9.716576028476738e-05 2023-03-18 06:19:59,435 44k INFO Train Epoch: 225 [75%] 2023-03-18 06:19:59,436 44k INFO Losses: [2.2757019996643066, 2.6013522148132324, 12.467174530029297, 23.140783309936523, 1.1526826620101929], step: 227000, lr: 9.716576028476738e-05 2023-03-18 06:20:02,680 44k INFO Saving model and optimizer state at iteration 225 to ./logs\44k\G_227000.pth 2023-03-18 06:20:03,412 44k INFO Saving model and optimizer state at iteration 225 to ./logs\44k\D_227000.pth 2023-03-18 06:20:04,049 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_224000.pth 2023-03-18 06:20:04,079 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_224000.pth 2023-03-18 06:21:16,882 44k INFO Train Epoch: 225 [95%] 2023-03-18 06:21:16,882 44k INFO Losses: [2.5019426345825195, 2.023519515991211, 8.073501586914062, 19.750450134277344, 1.1338454484939575], step: 227200, lr: 9.716576028476738e-05 2023-03-18 06:21:35,165 44k INFO ====> Epoch: 225, cost 382.56 s 2023-03-18 06:22:39,793 44k INFO Train Epoch: 226 [15%] 2023-03-18 06:22:39,793 44k INFO Losses: [2.572770595550537, 2.3273065090179443, 8.27176284790039, 19.71979522705078, 1.177968144416809], step: 227400, lr: 9.715361456473177e-05 2023-03-18 06:23:52,107 44k INFO Train Epoch: 226 [35%] 2023-03-18 06:23:52,108 44k INFO Losses: [2.3632359504699707, 2.449406385421753, 11.400979995727539, 20.30449676513672, 1.111584186553955], step: 227600, lr: 9.715361456473177e-05 2023-03-18 06:25:05,230 44k INFO Train Epoch: 226 [54%] 2023-03-18 06:25:05,231 44k INFO Losses: [2.4844958782196045, 2.303659677505493, 11.716140747070312, 22.410160064697266, 1.5062370300292969], step: 227800, lr: 9.715361456473177e-05 2023-03-18 06:26:18,101 44k INFO Train Epoch: 226 [74%] 2023-03-18 06:26:18,101 44k INFO Losses: [2.3980441093444824, 2.236851453781128, 9.289047241210938, 19.795156478881836, 1.0837925672531128], step: 228000, lr: 9.715361456473177e-05 2023-03-18 06:26:21,260 44k INFO Saving model and optimizer state at iteration 226 to ./logs\44k\G_228000.pth 2023-03-18 06:26:22,028 44k INFO Saving model and optimizer state at iteration 226 to ./logs\44k\D_228000.pth 2023-03-18 06:26:22,674 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_225000.pth 2023-03-18 06:26:22,709 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_225000.pth 2023-03-18 06:27:35,641 44k INFO Train Epoch: 226 [94%] 2023-03-18 06:27:35,642 44k INFO Losses: [2.6528611183166504, 1.9291887283325195, 8.185139656066895, 16.0272274017334, 0.9111958742141724], step: 228200, lr: 9.715361456473177e-05 2023-03-18 06:27:57,464 44k INFO ====> Epoch: 226, cost 382.30 s 2023-03-18 06:28:58,466 44k INFO Train Epoch: 227 [14%] 2023-03-18 06:28:58,466 44k INFO Losses: [2.678619623184204, 2.2420620918273926, 11.466313362121582, 20.216400146484375, 1.6194603443145752], step: 228400, lr: 9.714147036291117e-05 2023-03-18 06:30:10,736 44k INFO Train Epoch: 227 [34%] 2023-03-18 06:30:10,737 44k INFO Losses: [2.386756420135498, 2.7659997940063477, 8.994463920593262, 17.465314865112305, 1.327602505683899], step: 228600, lr: 9.714147036291117e-05 2023-03-18 06:31:23,695 44k INFO Train Epoch: 227 [53%] 2023-03-18 06:31:23,695 44k INFO Losses: [2.3845784664154053, 2.321280002593994, 9.259244918823242, 16.398988723754883, 1.4070732593536377], step: 228800, lr: 9.714147036291117e-05 2023-03-18 06:32:36,674 44k INFO Train Epoch: 227 [73%] 2023-03-18 06:32:36,675 44k INFO Losses: [2.505293130874634, 2.3921995162963867, 13.169219970703125, 19.302194595336914, 0.6968217492103577], step: 229000, lr: 9.714147036291117e-05 2023-03-18 06:32:39,906 44k INFO Saving model and optimizer state at iteration 227 to ./logs\44k\G_229000.pth 2023-03-18 06:32:40,637 44k INFO Saving model and optimizer state at iteration 227 to ./logs\44k\D_229000.pth 2023-03-18 06:32:41,251 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_226000.pth 2023-03-18 06:32:41,280 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_226000.pth 2023-03-18 06:33:54,112 44k INFO Train Epoch: 227 [93%] 2023-03-18 06:33:54,113 44k INFO Losses: [2.7466623783111572, 2.371680498123169, 8.890592575073242, 16.969791412353516, 1.6131129264831543], step: 229200, lr: 9.714147036291117e-05 2023-03-18 06:34:19,571 44k INFO ====> Epoch: 227, cost 382.11 s 2023-03-18 06:35:17,121 44k INFO Train Epoch: 228 [13%] 2023-03-18 06:35:17,121 44k INFO Losses: [2.379477024078369, 2.1686952114105225, 8.53384780883789, 16.403030395507812, 1.0853091478347778], step: 229400, lr: 9.71293276791158e-05 2023-03-18 06:36:29,439 44k INFO Train Epoch: 228 [33%] 2023-03-18 06:36:29,440 44k INFO Losses: [2.4116225242614746, 2.1662309169769287, 7.432581901550293, 16.39483070373535, 1.3406577110290527], step: 229600, lr: 9.71293276791158e-05 2023-03-18 06:37:42,405 44k INFO Train Epoch: 228 [52%] 2023-03-18 06:37:42,405 44k INFO Losses: [2.6359119415283203, 2.2272591590881348, 9.990165710449219, 19.03911018371582, 1.2040313482284546], step: 229800, lr: 9.71293276791158e-05 2023-03-18 06:38:55,605 44k INFO Train Epoch: 228 [72%] 2023-03-18 06:38:55,606 44k INFO Losses: [2.515958547592163, 2.361065149307251, 10.861897468566895, 20.561870574951172, 1.1799081563949585], step: 230000, lr: 9.71293276791158e-05 2023-03-18 06:38:58,887 44k INFO Saving model and optimizer state at iteration 228 to ./logs\44k\G_230000.pth 2023-03-18 06:38:59,567 44k INFO Saving model and optimizer state at iteration 228 to ./logs\44k\D_230000.pth 2023-03-18 06:39:00,209 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_227000.pth 2023-03-18 06:39:00,241 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_227000.pth 2023-03-18 06:40:13,101 44k INFO Train Epoch: 228 [92%] 2023-03-18 06:40:13,101 44k INFO Losses: [2.6847972869873047, 2.1547563076019287, 10.519286155700684, 20.499759674072266, 1.1973062753677368], step: 230200, lr: 9.71293276791158e-05 2023-03-18 06:40:42,142 44k INFO ====> Epoch: 228, cost 382.57 s 2023-03-18 06:41:36,194 44k INFO Train Epoch: 229 [12%] 2023-03-18 06:41:36,195 44k INFO Losses: [2.535609245300293, 2.2908101081848145, 8.001352310180664, 18.478466033935547, 1.3643752336502075], step: 230400, lr: 9.711718651315591e-05 2023-03-18 06:42:48,540 44k INFO Train Epoch: 229 [32%] 2023-03-18 06:42:48,540 44k INFO Losses: [2.4280974864959717, 2.094510316848755, 10.39441967010498, 20.152982711791992, 1.3168889284133911], step: 230600, lr: 9.711718651315591e-05 2023-03-18 06:44:01,569 44k INFO Train Epoch: 229 [51%] 2023-03-18 06:44:01,569 44k INFO Losses: [2.5315630435943604, 2.143937349319458, 9.546772956848145, 19.769371032714844, 1.1863094568252563], step: 230800, lr: 9.711718651315591e-05 2023-03-18 06:45:14,896 44k INFO Train Epoch: 229 [71%] 2023-03-18 06:45:14,896 44k INFO Losses: [2.5455915927886963, 2.2347841262817383, 11.53406047821045, 17.636510848999023, 1.186985731124878], step: 231000, lr: 9.711718651315591e-05 2023-03-18 06:45:18,132 44k INFO Saving model and optimizer state at iteration 229 to ./logs\44k\G_231000.pth 2023-03-18 06:45:18,825 44k INFO Saving model and optimizer state at iteration 229 to ./logs\44k\D_231000.pth 2023-03-18 06:45:19,467 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_228000.pth 2023-03-18 06:45:19,497 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_228000.pth 2023-03-18 06:46:32,419 44k INFO Train Epoch: 229 [91%] 2023-03-18 06:46:32,420 44k INFO Losses: [2.4898500442504883, 2.152693748474121, 10.182845115661621, 17.557512283325195, 1.8249956369400024], step: 231200, lr: 9.711718651315591e-05 2023-03-18 06:47:05,113 44k INFO ====> Epoch: 229, cost 382.97 s 2023-03-18 06:47:55,447 44k INFO Train Epoch: 230 [11%] 2023-03-18 06:47:55,448 44k INFO Losses: [2.3170201778411865, 2.1786413192749023, 15.374296188354492, 20.74489402770996, 1.431362271308899], step: 231400, lr: 9.710504686484176e-05 2023-03-18 06:49:07,741 44k INFO Train Epoch: 230 [31%] 2023-03-18 06:49:07,741 44k INFO Losses: [2.3959100246429443, 2.214215040206909, 10.991925239562988, 20.594619750976562, 1.4281377792358398], step: 231600, lr: 9.710504686484176e-05 2023-03-18 06:50:20,722 44k INFO Train Epoch: 230 [50%] 2023-03-18 06:50:20,723 44k INFO Losses: [2.6298580169677734, 2.006446599960327, 5.306731700897217, 14.095681190490723, 1.8168450593948364], step: 231800, lr: 9.710504686484176e-05 2023-03-18 06:51:33,919 44k INFO Train Epoch: 230 [70%] 2023-03-18 06:51:33,920 44k INFO Losses: [2.64465594291687, 2.7100658416748047, 10.379008293151855, 20.773008346557617, 1.7002183198928833], step: 232000, lr: 9.710504686484176e-05 2023-03-18 06:51:37,220 44k INFO Saving model and optimizer state at iteration 230 to ./logs\44k\G_232000.pth 2023-03-18 06:51:37,907 44k INFO Saving model and optimizer state at iteration 230 to ./logs\44k\D_232000.pth 2023-03-18 06:51:38,554 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_229000.pth 2023-03-18 06:51:38,588 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_229000.pth 2023-03-18 06:52:51,399 44k INFO Train Epoch: 230 [90%] 2023-03-18 06:52:51,400 44k INFO Losses: [2.3325281143188477, 2.608612060546875, 13.154632568359375, 17.999980926513672, 1.2154406309127808], step: 232200, lr: 9.710504686484176e-05 2023-03-18 06:53:27,857 44k INFO ====> Epoch: 230, cost 382.74 s 2023-03-18 06:54:14,469 44k INFO Train Epoch: 231 [10%] 2023-03-18 06:54:14,469 44k INFO Losses: [2.372175455093384, 2.3491342067718506, 9.007512092590332, 20.76350212097168, 1.139586091041565], step: 232400, lr: 9.709290873398365e-05 2023-03-18 06:55:26,856 44k INFO Train Epoch: 231 [30%] 2023-03-18 06:55:26,857 44k INFO Losses: [2.3710718154907227, 2.3225483894348145, 14.7891845703125, 21.548280715942383, 1.3565634489059448], step: 232600, lr: 9.709290873398365e-05 2023-03-18 06:56:39,731 44k INFO Train Epoch: 231 [50%] 2023-03-18 06:56:39,732 44k INFO Losses: [2.557474136352539, 2.103861093521118, 8.159078598022461, 18.85367202758789, 1.0357774496078491], step: 232800, lr: 9.709290873398365e-05 2023-03-18 06:57:52,836 44k INFO Train Epoch: 231 [69%] 2023-03-18 06:57:52,836 44k INFO Losses: [2.593559980392456, 1.9387595653533936, 7.101523399353027, 14.086896896362305, 0.7999070882797241], step: 233000, lr: 9.709290873398365e-05 2023-03-18 06:57:56,080 44k INFO Saving model and optimizer state at iteration 231 to ./logs\44k\G_233000.pth 2023-03-18 06:57:56,820 44k INFO Saving model and optimizer state at iteration 231 to ./logs\44k\D_233000.pth 2023-03-18 06:57:57,460 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_230000.pth 2023-03-18 06:57:57,501 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_230000.pth 2023-03-18 06:59:10,428 44k INFO Train Epoch: 231 [89%] 2023-03-18 06:59:10,428 44k INFO Losses: [2.728731632232666, 1.9518218040466309, 10.025774955749512, 17.325653076171875, 1.141607403755188], step: 233200, lr: 9.709290873398365e-05 2023-03-18 06:59:50,483 44k INFO ====> Epoch: 231, cost 382.63 s 2023-03-18 07:00:33,487 44k INFO Train Epoch: 232 [9%] 2023-03-18 07:00:33,487 44k INFO Losses: [2.690807342529297, 1.9761282205581665, 10.619765281677246, 17.785802841186523, 1.4960769414901733], step: 233400, lr: 9.70807721203919e-05 2023-03-18 07:01:45,942 44k INFO Train Epoch: 232 [29%] 2023-03-18 07:01:45,943 44k INFO Losses: [2.6558470726013184, 2.0714786052703857, 13.73005199432373, 19.142810821533203, 0.9805610775947571], step: 233600, lr: 9.70807721203919e-05 2023-03-18 07:02:58,626 44k INFO Train Epoch: 232 [49%] 2023-03-18 07:02:58,626 44k INFO Losses: [2.277385711669922, 2.9051365852355957, 12.456783294677734, 25.258520126342773, 1.2630703449249268], step: 233800, lr: 9.70807721203919e-05 2023-03-18 07:04:11,871 44k INFO Train Epoch: 232 [68%] 2023-03-18 07:04:11,871 44k INFO Losses: [2.716339111328125, 2.048454761505127, 6.400038242340088, 15.673182487487793, 1.007399082183838], step: 234000, lr: 9.70807721203919e-05 2023-03-18 07:04:15,095 44k INFO Saving model and optimizer state at iteration 232 to ./logs\44k\G_234000.pth 2023-03-18 07:04:15,869 44k INFO Saving model and optimizer state at iteration 232 to ./logs\44k\D_234000.pth 2023-03-18 07:04:16,512 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_231000.pth 2023-03-18 07:04:16,555 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_231000.pth 2023-03-18 07:05:29,373 44k INFO Train Epoch: 232 [88%] 2023-03-18 07:05:29,374 44k INFO Losses: [2.4715347290039062, 2.252667188644409, 11.086379051208496, 18.568519592285156, 1.2061829566955566], step: 234200, lr: 9.70807721203919e-05 2023-03-18 07:06:12,991 44k INFO ====> Epoch: 232, cost 382.51 s 2023-03-18 07:06:52,232 44k INFO Train Epoch: 233 [8%] 2023-03-18 07:06:52,233 44k INFO Losses: [2.2963359355926514, 2.612423896789551, 10.374870300292969, 14.03938102722168, 1.2694636583328247], step: 234400, lr: 9.706863702387684e-05 2023-03-18 07:08:04,978 44k INFO Train Epoch: 233 [28%] 2023-03-18 07:08:04,979 44k INFO Losses: [2.6204047203063965, 2.1662774085998535, 8.554608345031738, 19.040714263916016, 1.0915981531143188], step: 234600, lr: 9.706863702387684e-05 2023-03-18 07:09:17,583 44k INFO Train Epoch: 233 [48%] 2023-03-18 07:09:17,583 44k INFO Losses: [2.6231400966644287, 1.9077996015548706, 10.695106506347656, 18.96556282043457, 1.3218119144439697], step: 234800, lr: 9.706863702387684e-05 2023-03-18 07:10:30,952 44k INFO Train Epoch: 233 [67%] 2023-03-18 07:10:30,952 44k INFO Losses: [2.446401357650757, 2.034991979598999, 10.921850204467773, 18.751733779907227, 1.076803207397461], step: 235000, lr: 9.706863702387684e-05 2023-03-18 07:10:34,193 44k INFO Saving model and optimizer state at iteration 233 to ./logs\44k\G_235000.pth 2023-03-18 07:10:34,949 44k INFO Saving model and optimizer state at iteration 233 to ./logs\44k\D_235000.pth 2023-03-18 07:10:35,597 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_232000.pth 2023-03-18 07:10:35,631 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_232000.pth 2023-03-18 07:11:48,534 44k INFO Train Epoch: 233 [87%] 2023-03-18 07:11:48,534 44k INFO Losses: [2.50559139251709, 2.062480926513672, 10.407526016235352, 18.894065856933594, 1.1635998487472534], step: 235200, lr: 9.706863702387684e-05 2023-03-18 07:12:35,815 44k INFO ====> Epoch: 233, cost 382.82 s 2023-03-18 07:13:11,519 44k INFO Train Epoch: 234 [7%] 2023-03-18 07:13:11,519 44k INFO Losses: [2.3547000885009766, 2.4225447177886963, 9.923118591308594, 20.807050704956055, 1.2576576471328735], step: 235400, lr: 9.705650344424885e-05 2023-03-18 07:14:24,065 44k INFO Train Epoch: 234 [27%] 2023-03-18 07:14:24,066 44k INFO Losses: [2.6232123374938965, 2.3817434310913086, 13.108174324035645, 20.842159271240234, 1.026521921157837], step: 235600, lr: 9.705650344424885e-05 2023-03-18 07:15:36,699 44k INFO Train Epoch: 234 [47%] 2023-03-18 07:15:36,700 44k INFO Losses: [2.2798104286193848, 2.520209312438965, 13.450557708740234, 23.605941772460938, 1.283144235610962], step: 235800, lr: 9.705650344424885e-05 2023-03-18 07:16:49,783 44k INFO Train Epoch: 234 [66%] 2023-03-18 07:16:49,783 44k INFO Losses: [2.5747551918029785, 2.046416759490967, 10.179340362548828, 19.530487060546875, 1.1103103160858154], step: 236000, lr: 9.705650344424885e-05 2023-03-18 07:16:53,026 44k INFO Saving model and optimizer state at iteration 234 to ./logs\44k\G_236000.pth 2023-03-18 07:16:53,750 44k INFO Saving model and optimizer state at iteration 234 to ./logs\44k\D_236000.pth 2023-03-18 07:16:54,398 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_233000.pth 2023-03-18 07:16:54,432 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_233000.pth 2023-03-18 07:18:07,387 44k INFO Train Epoch: 234 [86%] 2023-03-18 07:18:07,387 44k INFO Losses: [2.696573257446289, 2.0560271739959717, 6.546759605407715, 16.045320510864258, 1.347862720489502], step: 236200, lr: 9.705650344424885e-05 2023-03-18 07:18:58,285 44k INFO ====> Epoch: 234, cost 382.47 s 2023-03-18 07:19:30,314 44k INFO Train Epoch: 235 [6%] 2023-03-18 07:19:30,314 44k INFO Losses: [2.567823648452759, 2.169123888015747, 7.793859481811523, 20.210180282592773, 1.0899502038955688], step: 236400, lr: 9.704437138131832e-05 2023-03-18 07:20:43,040 44k INFO Train Epoch: 235 [26%] 2023-03-18 07:20:43,041 44k INFO Losses: [2.3973398208618164, 2.2036244869232178, 11.781317710876465, 21.122587203979492, 1.479805827140808], step: 236600, lr: 9.704437138131832e-05 2023-03-18 07:21:55,719 44k INFO Train Epoch: 235 [46%] 2023-03-18 07:21:55,719 44k INFO Losses: [2.4113056659698486, 2.3190677165985107, 13.129249572753906, 20.943227767944336, 1.2263728380203247], step: 236800, lr: 9.704437138131832e-05 2023-03-18 07:23:08,923 44k INFO Train Epoch: 235 [65%] 2023-03-18 07:23:08,924 44k INFO Losses: [2.382814884185791, 2.2505221366882324, 12.692795753479004, 23.935148239135742, 1.3604700565338135], step: 237000, lr: 9.704437138131832e-05 2023-03-18 07:23:12,135 44k INFO Saving model and optimizer state at iteration 235 to ./logs\44k\G_237000.pth 2023-03-18 07:23:12,859 44k INFO Saving model and optimizer state at iteration 235 to ./logs\44k\D_237000.pth 2023-03-18 07:23:13,490 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_234000.pth 2023-03-18 07:23:13,521 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_234000.pth 2023-03-18 07:24:26,309 44k INFO Train Epoch: 235 [85%] 2023-03-18 07:24:26,309 44k INFO Losses: [2.5619149208068848, 2.2262496948242188, 9.9839506149292, 20.3087215423584, 1.239182472229004], step: 237200, lr: 9.704437138131832e-05 2023-03-18 07:25:20,931 44k INFO ====> Epoch: 235, cost 382.65 s 2023-03-18 07:25:49,050 44k INFO Train Epoch: 236 [5%] 2023-03-18 07:25:49,050 44k INFO Losses: [2.411547899246216, 2.1751465797424316, 11.44782829284668, 19.132665634155273, 1.226562261581421], step: 237400, lr: 9.703224083489565e-05 2023-03-18 07:27:01,591 44k INFO Train Epoch: 236 [25%] 2023-03-18 07:27:01,592 44k INFO Losses: [2.3872132301330566, 2.247119665145874, 10.913171768188477, 20.259368896484375, 1.1319897174835205], step: 237600, lr: 9.703224083489565e-05 2023-03-18 07:28:13,991 44k INFO Train Epoch: 236 [45%] 2023-03-18 07:28:13,992 44k INFO Losses: [2.7011184692382812, 2.2545113563537598, 9.924154281616211, 20.390283584594727, 1.12367582321167], step: 237800, lr: 9.703224083489565e-05 2023-03-18 07:29:27,052 44k INFO Train Epoch: 236 [64%] 2023-03-18 07:29:27,052 44k INFO Losses: [2.4182791709899902, 2.1242105960845947, 14.471899032592773, 19.094573974609375, 1.1191819906234741], step: 238000, lr: 9.703224083489565e-05 2023-03-18 07:29:30,231 44k INFO Saving model and optimizer state at iteration 236 to ./logs\44k\G_238000.pth 2023-03-18 07:29:30,925 44k INFO Saving model and optimizer state at iteration 236 to ./logs\44k\D_238000.pth 2023-03-18 07:29:31,558 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_235000.pth 2023-03-18 07:29:31,596 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_235000.pth 2023-03-18 07:30:44,101 44k INFO Train Epoch: 236 [84%] 2023-03-18 07:30:44,101 44k INFO Losses: [2.882418155670166, 1.8694285154342651, 5.992274284362793, 17.146480560302734, 1.3184032440185547], step: 238200, lr: 9.703224083489565e-05 2023-03-18 07:31:42,129 44k INFO ====> Epoch: 236, cost 381.20 s 2023-03-18 07:32:06,625 44k INFO Train Epoch: 237 [4%] 2023-03-18 07:32:06,625 44k INFO Losses: [2.463507890701294, 2.2898871898651123, 8.41210651397705, 19.74239158630371, 1.3747304677963257], step: 238400, lr: 9.702011180479129e-05 2023-03-18 07:33:19,376 44k INFO Train Epoch: 237 [24%] 2023-03-18 07:33:19,376 44k INFO Losses: [2.307028293609619, 2.4558067321777344, 10.29770278930664, 19.069419860839844, 1.761042594909668], step: 238600, lr: 9.702011180479129e-05 2023-03-18 07:34:31,837 44k INFO Train Epoch: 237 [44%] 2023-03-18 07:34:31,838 44k INFO Losses: [2.460188865661621, 2.2114973068237305, 9.385506629943848, 17.32344627380371, 0.8269605040550232], step: 238800, lr: 9.702011180479129e-05 2023-03-18 07:35:44,957 44k INFO Train Epoch: 237 [63%] 2023-03-18 07:35:44,958 44k INFO Losses: [2.448391914367676, 2.3898227214813232, 9.878168106079102, 22.996469497680664, 1.1052063703536987], step: 239000, lr: 9.702011180479129e-05 2023-03-18 07:35:48,209 44k INFO Saving model and optimizer state at iteration 237 to ./logs\44k\G_239000.pth 2023-03-18 07:35:48,943 44k INFO Saving model and optimizer state at iteration 237 to ./logs\44k\D_239000.pth 2023-03-18 07:35:49,578 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_236000.pth 2023-03-18 07:35:49,610 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_236000.pth 2023-03-18 07:37:02,151 44k INFO Train Epoch: 237 [83%] 2023-03-18 07:37:02,151 44k INFO Losses: [2.389051914215088, 2.433485746383667, 9.388039588928223, 23.172054290771484, 1.8897839784622192], step: 239200, lr: 9.702011180479129e-05 2023-03-18 07:38:03,965 44k INFO ====> Epoch: 237, cost 381.84 s 2023-03-18 07:38:24,846 44k INFO Train Epoch: 238 [3%] 2023-03-18 07:38:24,847 44k INFO Losses: [2.2685070037841797, 2.6818954944610596, 11.383846282958984, 23.157161712646484, 1.8642890453338623], step: 239400, lr: 9.700798429081568e-05 2023-03-18 07:39:37,581 44k INFO Train Epoch: 238 [23%] 2023-03-18 07:39:37,582 44k INFO Losses: [2.2982337474823, 2.6788218021392822, 12.901337623596191, 22.306947708129883, 1.0730412006378174], step: 239600, lr: 9.700798429081568e-05 2023-03-18 07:40:49,895 44k INFO Train Epoch: 238 [43%] 2023-03-18 07:40:49,896 44k INFO Losses: [2.6703805923461914, 2.0581934452056885, 8.933109283447266, 18.359708786010742, 1.346760869026184], step: 239800, lr: 9.700798429081568e-05 2023-03-18 07:42:02,833 44k INFO Train Epoch: 238 [62%] 2023-03-18 07:42:02,833 44k INFO Losses: [2.6530027389526367, 2.314347743988037, 9.653334617614746, 21.913503646850586, 1.36128830909729], step: 240000, lr: 9.700798429081568e-05 2023-03-18 07:42:05,946 44k INFO Saving model and optimizer state at iteration 238 to ./logs\44k\G_240000.pth 2023-03-18 07:42:06,656 44k INFO Saving model and optimizer state at iteration 238 to ./logs\44k\D_240000.pth 2023-03-18 07:42:07,292 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_237000.pth 2023-03-18 07:42:07,333 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_237000.pth 2023-03-18 07:43:19,926 44k INFO Train Epoch: 238 [82%] 2023-03-18 07:43:19,927 44k INFO Losses: [2.29264497756958, 2.7293334007263184, 12.907657623291016, 22.820213317871094, 1.000765085220337], step: 240200, lr: 9.700798429081568e-05 2023-03-18 07:44:25,339 44k INFO ====> Epoch: 238, cost 381.37 s 2023-03-18 07:44:42,609 44k INFO Train Epoch: 239 [2%] 2023-03-18 07:44:42,609 44k INFO Losses: [2.2914204597473145, 2.349306583404541, 13.382835388183594, 22.8872013092041, 1.2575081586837769], step: 240400, lr: 9.699585829277933e-05 2023-03-18 07:45:55,574 44k INFO Train Epoch: 239 [22%] 2023-03-18 07:45:55,575 44k INFO Losses: [2.527111768722534, 2.5950095653533936, 7.87494421005249, 16.92253303527832, 1.141007661819458], step: 240600, lr: 9.699585829277933e-05 2023-03-18 07:47:07,747 44k INFO Train Epoch: 239 [42%] 2023-03-18 07:47:07,747 44k INFO Losses: [2.601142644882202, 2.028020143508911, 8.779674530029297, 16.175331115722656, 1.510420799255371], step: 240800, lr: 9.699585829277933e-05 2023-03-18 07:48:20,782 44k INFO Train Epoch: 239 [61%] 2023-03-18 07:48:20,782 44k INFO Losses: [2.3739144802093506, 2.333387851715088, 12.00101089477539, 21.01430320739746, 1.2120718955993652], step: 241000, lr: 9.699585829277933e-05 2023-03-18 07:48:24,007 44k INFO Saving model and optimizer state at iteration 239 to ./logs\44k\G_241000.pth 2023-03-18 07:48:24,709 44k INFO Saving model and optimizer state at iteration 239 to ./logs\44k\D_241000.pth 2023-03-18 07:48:25,335 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_238000.pth 2023-03-18 07:48:25,376 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_238000.pth 2023-03-18 07:49:37,940 44k INFO Train Epoch: 239 [81%] 2023-03-18 07:49:37,940 44k INFO Losses: [2.4419116973876953, 2.1992688179016113, 11.631134033203125, 23.135339736938477, 1.2260794639587402], step: 241200, lr: 9.699585829277933e-05 2023-03-18 07:50:46,944 44k INFO ====> Epoch: 239, cost 381.60 s 2023-03-18 07:51:00,661 44k INFO Train Epoch: 240 [1%] 2023-03-18 07:51:00,661 44k INFO Losses: [2.1207098960876465, 2.488795757293701, 10.635626792907715, 21.728713989257812, 1.4486995935440063], step: 241400, lr: 9.698373381049272e-05 2023-03-18 07:52:13,656 44k INFO Train Epoch: 240 [21%] 2023-03-18 07:52:13,657 44k INFO Losses: [2.5653064250946045, 2.4888737201690674, 7.330862998962402, 20.654428482055664, 1.3865220546722412], step: 241600, lr: 9.698373381049272e-05 2023-03-18 07:53:25,895 44k INFO Train Epoch: 240 [41%] 2023-03-18 07:53:25,895 44k INFO Losses: [2.7080485820770264, 2.0987260341644287, 7.500275135040283, 15.339323997497559, 1.4255449771881104], step: 241800, lr: 9.698373381049272e-05 2023-03-18 07:54:39,021 44k INFO Train Epoch: 240 [60%] 2023-03-18 07:54:39,021 44k INFO Losses: [2.378457546234131, 2.328004837036133, 10.05051326751709, 19.324460983276367, 1.217647671699524], step: 242000, lr: 9.698373381049272e-05 2023-03-18 07:54:42,261 44k INFO Saving model and optimizer state at iteration 240 to ./logs\44k\G_242000.pth 2023-03-18 07:54:43,027 44k INFO Saving model and optimizer state at iteration 240 to ./logs\44k\D_242000.pth 2023-03-18 07:54:43,659 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_239000.pth 2023-03-18 07:54:43,688 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_239000.pth 2023-03-18 07:55:56,183 44k INFO Train Epoch: 240 [80%] 2023-03-18 07:55:56,183 44k INFO Losses: [2.623335123062134, 2.4481406211853027, 11.538192749023438, 20.08252716064453, 1.1795402765274048], step: 242200, lr: 9.698373381049272e-05 2023-03-18 07:57:08,812 44k INFO ====> Epoch: 240, cost 381.87 s 2023-03-18 07:57:18,969 44k INFO Train Epoch: 241 [0%] 2023-03-18 07:57:18,969 44k INFO Losses: [2.421703815460205, 2.362501382827759, 8.211528778076172, 18.0399227142334, 1.1838186979293823], step: 242400, lr: 9.69716108437664e-05 2023-03-18 07:58:31,803 44k INFO Train Epoch: 241 [20%] 2023-03-18 07:58:31,803 44k INFO Losses: [2.5147671699523926, 2.384589195251465, 10.241972923278809, 22.77591323852539, 1.2402915954589844], step: 242600, lr: 9.69716108437664e-05 2023-03-18 07:59:43,968 44k INFO Train Epoch: 241 [40%] 2023-03-18 07:59:43,969 44k INFO Losses: [2.5771982669830322, 2.566288471221924, 7.843503952026367, 22.580894470214844, 1.2970625162124634], step: 242800, lr: 9.69716108437664e-05 2023-03-18 08:00:57,119 44k INFO Train Epoch: 241 [59%] 2023-03-18 08:00:57,119 44k INFO Losses: [2.3302817344665527, 2.542316198348999, 11.409634590148926, 20.483243942260742, 1.5791614055633545], step: 243000, lr: 9.69716108437664e-05 2023-03-18 08:01:00,289 44k INFO Saving model and optimizer state at iteration 241 to ./logs\44k\G_243000.pth 2023-03-18 08:01:01,001 44k INFO Saving model and optimizer state at iteration 241 to ./logs\44k\D_243000.pth 2023-03-18 08:01:01,623 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_240000.pth 2023-03-18 08:01:01,652 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_240000.pth 2023-03-18 08:02:14,032 44k INFO Train Epoch: 241 [79%] 2023-03-18 08:02:14,033 44k INFO Losses: [2.6424098014831543, 1.9531419277191162, 7.084299087524414, 18.123342514038086, 1.422184944152832], step: 243200, lr: 9.69716108437664e-05 2023-03-18 08:03:26,990 44k INFO Train Epoch: 241 [99%] 2023-03-18 08:03:26,990 44k INFO Losses: [2.598893880844116, 2.3142294883728027, 7.932101249694824, 18.348751068115234, 1.16374933719635], step: 243400, lr: 9.69716108437664e-05 2023-03-18 08:03:30,583 44k INFO ====> Epoch: 241, cost 381.77 s 2023-03-18 08:04:49,545 44k INFO Train Epoch: 242 [19%] 2023-03-18 08:04:49,545 44k INFO Losses: [2.6260411739349365, 2.2401466369628906, 8.536432266235352, 20.453475952148438, 1.1452836990356445], step: 243600, lr: 9.695948939241093e-05 2023-03-18 08:06:01,678 44k INFO Train Epoch: 242 [39%] 2023-03-18 08:06:01,678 44k INFO Losses: [2.4839272499084473, 2.4052603244781494, 15.41381549835205, 26.000749588012695, 0.809550940990448], step: 243800, lr: 9.695948939241093e-05 2023-03-18 08:07:14,904 44k INFO Train Epoch: 242 [58%] 2023-03-18 08:07:14,904 44k INFO Losses: [2.1998419761657715, 2.7951648235321045, 13.366369247436523, 21.333770751953125, 1.3360888957977295], step: 244000, lr: 9.695948939241093e-05 2023-03-18 08:07:18,147 44k INFO Saving model and optimizer state at iteration 242 to ./logs\44k\G_244000.pth 2023-03-18 08:07:18,925 44k INFO Saving model and optimizer state at iteration 242 to ./logs\44k\D_244000.pth 2023-03-18 08:07:19,630 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_241000.pth 2023-03-18 08:07:19,666 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_241000.pth 2023-03-18 08:08:32,235 44k INFO Train Epoch: 242 [78%] 2023-03-18 08:08:32,235 44k INFO Losses: [2.440828561782837, 2.614377737045288, 11.315389633178711, 17.85517120361328, 0.8854700326919556], step: 244200, lr: 9.695948939241093e-05 2023-03-18 08:09:45,206 44k INFO Train Epoch: 242 [98%] 2023-03-18 08:09:45,207 44k INFO Losses: [2.732917308807373, 2.0250861644744873, 5.798255443572998, 15.693681716918945, 1.1998460292816162], step: 244400, lr: 9.695948939241093e-05 2023-03-18 08:09:52,468 44k INFO ====> Epoch: 242, cost 381.88 s 2023-03-18 08:11:07,927 44k INFO Train Epoch: 243 [18%] 2023-03-18 08:11:07,927 44k INFO Losses: [2.542109489440918, 2.2825381755828857, 9.453187942504883, 16.435911178588867, 0.9418092966079712], step: 244600, lr: 9.694736945623688e-05 2023-03-18 08:12:20,161 44k INFO Train Epoch: 243 [38%] 2023-03-18 08:12:20,162 44k INFO Losses: [2.518245220184326, 2.3289449214935303, 11.402377128601074, 19.09004020690918, 1.2261574268341064], step: 244800, lr: 9.694736945623688e-05 2023-03-18 08:13:33,400 44k INFO Train Epoch: 243 [57%] 2023-03-18 08:13:33,401 44k INFO Losses: [2.4516258239746094, 2.4727821350097656, 9.145573616027832, 19.57111358642578, 1.6015262603759766], step: 245000, lr: 9.694736945623688e-05 2023-03-18 08:13:36,620 44k INFO Saving model and optimizer state at iteration 243 to ./logs\44k\G_245000.pth 2023-03-18 08:13:37,339 44k INFO Saving model and optimizer state at iteration 243 to ./logs\44k\D_245000.pth 2023-03-18 08:13:37,966 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_242000.pth 2023-03-18 08:13:37,994 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_242000.pth 2023-03-18 08:14:50,542 44k INFO Train Epoch: 243 [77%] 2023-03-18 08:14:50,542 44k INFO Losses: [2.5183959007263184, 2.1145877838134766, 10.656807899475098, 20.111392974853516, 1.420533299446106], step: 245200, lr: 9.694736945623688e-05 2023-03-18 08:16:03,389 44k INFO Train Epoch: 243 [97%] 2023-03-18 08:16:03,390 44k INFO Losses: [2.359656810760498, 2.3612418174743652, 9.307564735412598, 21.248493194580078, 1.3914144039154053], step: 245400, lr: 9.694736945623688e-05 2023-03-18 08:16:14,350 44k INFO ====> Epoch: 243, cost 381.88 s 2023-03-18 08:17:26,051 44k INFO Train Epoch: 244 [17%] 2023-03-18 08:17:26,051 44k INFO Losses: [2.414745807647705, 2.3575375080108643, 10.968599319458008, 18.174570083618164, 1.3621877431869507], step: 245600, lr: 9.693525103505484e-05 2023-03-18 08:18:38,304 44k INFO Train Epoch: 244 [37%] 2023-03-18 08:18:38,305 44k INFO Losses: [2.3389906883239746, 2.2993268966674805, 12.072164535522461, 21.821258544921875, 1.0902777910232544], step: 245800, lr: 9.693525103505484e-05 2023-03-18 08:19:51,455 44k INFO Train Epoch: 244 [56%] 2023-03-18 08:19:51,455 44k INFO Losses: [2.2846596240997314, 2.283956527709961, 10.409601211547852, 20.60080909729004, 1.3195343017578125], step: 246000, lr: 9.693525103505484e-05 2023-03-18 08:19:54,735 44k INFO Saving model and optimizer state at iteration 244 to ./logs\44k\G_246000.pth 2023-03-18 08:19:55,418 44k INFO Saving model and optimizer state at iteration 244 to ./logs\44k\D_246000.pth 2023-03-18 08:19:56,064 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_243000.pth 2023-03-18 08:19:56,093 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_243000.pth 2023-03-18 08:21:08,611 44k INFO Train Epoch: 244 [76%] 2023-03-18 08:21:08,611 44k INFO Losses: [2.3994503021240234, 2.3715786933898926, 15.345438957214355, 22.601037979125977, 1.0027860403060913], step: 246200, lr: 9.693525103505484e-05 2023-03-18 08:22:21,329 44k INFO Train Epoch: 244 [96%] 2023-03-18 08:22:21,329 44k INFO Losses: [2.551975727081299, 2.3302900791168213, 12.072283744812012, 18.234050750732422, 1.2505594491958618], step: 246400, lr: 9.693525103505484e-05 2023-03-18 08:22:35,977 44k INFO ====> Epoch: 244, cost 381.63 s 2023-03-18 08:23:44,108 44k INFO Train Epoch: 245 [16%] 2023-03-18 08:23:44,108 44k INFO Losses: [2.413804769515991, 2.42429256439209, 11.07409381866455, 21.636539459228516, 1.8014425039291382], step: 246600, lr: 9.692313412867544e-05 2023-03-18 08:24:56,333 44k INFO Train Epoch: 245 [36%] 2023-03-18 08:24:56,333 44k INFO Losses: [2.267308473587036, 2.41819429397583, 14.58601188659668, 22.688175201416016, 1.3203662633895874], step: 246800, lr: 9.692313412867544e-05 2023-03-18 08:26:09,275 44k INFO Train Epoch: 245 [55%] 2023-03-18 08:26:09,276 44k INFO Losses: [2.433079242706299, 1.969465970993042, 11.954817771911621, 22.321435928344727, 1.4878195524215698], step: 247000, lr: 9.692313412867544e-05 2023-03-18 08:26:12,461 44k INFO Saving model and optimizer state at iteration 245 to ./logs\44k\G_247000.pth 2023-03-18 08:26:13,140 44k INFO Saving model and optimizer state at iteration 245 to ./logs\44k\D_247000.pth 2023-03-18 08:26:13,779 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_244000.pth 2023-03-18 08:26:13,809 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_244000.pth 2023-03-18 08:27:26,306 44k INFO Train Epoch: 245 [75%] 2023-03-18 08:27:26,306 44k INFO Losses: [2.2079715728759766, 2.534409284591675, 14.035421371459961, 21.947399139404297, 1.4403718709945679], step: 247200, lr: 9.692313412867544e-05 2023-03-18 08:28:39,197 44k INFO Train Epoch: 245 [95%] 2023-03-18 08:28:39,197 44k INFO Losses: [2.575840950012207, 2.0737459659576416, 9.394519805908203, 18.80529022216797, 1.1047694683074951], step: 247400, lr: 9.692313412867544e-05 2023-03-18 08:28:57,481 44k INFO ====> Epoch: 245, cost 381.50 s 2023-03-18 08:30:01,944 44k INFO Train Epoch: 246 [15%] 2023-03-18 08:30:01,945 44k INFO Losses: [2.792095899581909, 2.3704609870910645, 7.729785919189453, 19.777074813842773, 1.1822928190231323], step: 247600, lr: 9.691101873690936e-05 2023-03-18 08:31:14,102 44k INFO Train Epoch: 246 [35%] 2023-03-18 08:31:14,103 44k INFO Losses: [2.3377022743225098, 2.2262420654296875, 7.614841938018799, 18.22621726989746, 1.333767294883728], step: 247800, lr: 9.691101873690936e-05 2023-03-18 08:32:27,150 44k INFO Train Epoch: 246 [54%] 2023-03-18 08:32:27,150 44k INFO Losses: [2.3974666595458984, 2.2372372150421143, 7.399135112762451, 15.629326820373535, 1.4588475227355957], step: 248000, lr: 9.691101873690936e-05 2023-03-18 08:32:30,308 44k INFO Saving model and optimizer state at iteration 246 to ./logs\44k\G_248000.pth 2023-03-18 08:32:31,003 44k INFO Saving model and optimizer state at iteration 246 to ./logs\44k\D_248000.pth 2023-03-18 08:32:31,645 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_245000.pth 2023-03-18 08:32:31,677 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_245000.pth 2023-03-18 08:33:44,293 44k INFO Train Epoch: 246 [74%] 2023-03-18 08:33:44,293 44k INFO Losses: [2.399606943130493, 2.226621389389038, 12.434996604919434, 17.325740814208984, 1.3651065826416016], step: 248200, lr: 9.691101873690936e-05 2023-03-18 08:34:57,211 44k INFO Train Epoch: 246 [94%] 2023-03-18 08:34:57,211 44k INFO Losses: [2.7781496047973633, 1.9602077007293701, 9.267704963684082, 16.880495071411133, 1.2029987573623657], step: 248400, lr: 9.691101873690936e-05 2023-03-18 08:35:19,040 44k INFO ====> Epoch: 246, cost 381.56 s 2023-03-18 08:36:19,853 44k INFO Train Epoch: 247 [14%] 2023-03-18 08:36:19,853 44k INFO Losses: [2.1813554763793945, 2.43989896774292, 13.676634788513184, 24.671293258666992, 1.0899513959884644], step: 248600, lr: 9.689890485956725e-05 2023-03-18 08:37:32,200 44k INFO Train Epoch: 247 [34%] 2023-03-18 08:37:32,200 44k INFO Losses: [2.515462636947632, 2.4560065269470215, 8.162273406982422, 21.607213973999023, 0.8136369585990906], step: 248800, lr: 9.689890485956725e-05 2023-03-18 08:38:44,973 44k INFO Train Epoch: 247 [53%] 2023-03-18 08:38:44,973 44k INFO Losses: [2.6006929874420166, 2.3393988609313965, 10.538350105285645, 17.392414093017578, 1.3686500787734985], step: 249000, lr: 9.689890485956725e-05 2023-03-18 08:38:48,141 44k INFO Saving model and optimizer state at iteration 247 to ./logs\44k\G_249000.pth 2023-03-18 08:38:48,863 44k INFO Saving model and optimizer state at iteration 247 to ./logs\44k\D_249000.pth 2023-03-18 08:38:49,494 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_246000.pth 2023-03-18 08:38:49,521 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_246000.pth 2023-03-18 08:40:02,218 44k INFO Train Epoch: 247 [73%] 2023-03-18 08:40:02,218 44k INFO Losses: [2.652850866317749, 2.2333548069000244, 10.679261207580566, 19.030860900878906, 0.9153780937194824], step: 249200, lr: 9.689890485956725e-05 2023-03-18 08:41:15,057 44k INFO Train Epoch: 247 [93%] 2023-03-18 08:41:15,057 44k INFO Losses: [2.574235439300537, 2.2778844833374023, 7.890625, 21.331905364990234, 1.1257044076919556], step: 249400, lr: 9.689890485956725e-05 2023-03-18 08:41:40,472 44k INFO ====> Epoch: 247, cost 381.43 s 2023-03-18 08:42:37,759 44k INFO Train Epoch: 248 [13%] 2023-03-18 08:42:37,760 44k INFO Losses: [2.5100314617156982, 2.1448819637298584, 8.93221664428711, 16.474727630615234, 1.1094597578048706], step: 249600, lr: 9.68867924964598e-05 2023-03-18 08:43:49,996 44k INFO Train Epoch: 248 [33%] 2023-03-18 08:43:49,997 44k INFO Losses: [2.3246307373046875, 2.237438440322876, 7.833333969116211, 17.21114158630371, 1.0810003280639648], step: 249800, lr: 9.68867924964598e-05 2023-03-18 08:45:02,743 44k INFO Train Epoch: 248 [52%] 2023-03-18 08:45:02,744 44k INFO Losses: [2.299355983734131, 2.447652578353882, 11.953178405761719, 19.048404693603516, 1.0326069593429565], step: 250000, lr: 9.68867924964598e-05 2023-03-18 08:45:05,996 44k INFO Saving model and optimizer state at iteration 248 to ./logs\44k\G_250000.pth 2023-03-18 08:45:06,718 44k INFO Saving model and optimizer state at iteration 248 to ./logs\44k\D_250000.pth 2023-03-18 08:45:07,350 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_247000.pth 2023-03-18 08:45:07,381 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_247000.pth 2023-03-18 08:46:20,184 44k INFO Train Epoch: 248 [72%] 2023-03-18 08:46:20,185 44k INFO Losses: [2.4488637447357178, 2.09261155128479, 9.881348609924316, 20.45525360107422, 1.4844290018081665], step: 250200, lr: 9.68867924964598e-05 2023-03-18 08:47:33,074 44k INFO Train Epoch: 248 [92%] 2023-03-18 08:47:33,074 44k INFO Losses: [2.5166773796081543, 2.5085198879241943, 12.256851196289062, 22.17918586730957, 1.4352320432662964], step: 250400, lr: 9.68867924964598e-05 2023-03-18 08:48:02,119 44k INFO ====> Epoch: 248, cost 381.65 s 2023-03-18 08:48:55,852 44k INFO Train Epoch: 249 [12%] 2023-03-18 08:48:55,852 44k INFO Losses: [2.53895902633667, 2.2199981212615967, 12.76373291015625, 20.259178161621094, 1.2147188186645508], step: 250600, lr: 9.687468164739773e-05 2023-03-18 08:50:08,040 44k INFO Train Epoch: 249 [32%] 2023-03-18 08:50:08,040 44k INFO Losses: [2.5350136756896973, 2.400178909301758, 9.363162994384766, 19.067001342773438, 1.3850778341293335], step: 250800, lr: 9.687468164739773e-05 2023-03-18 08:51:20,693 44k INFO Train Epoch: 249 [51%] 2023-03-18 08:51:20,694 44k INFO Losses: [2.4963769912719727, 2.2007157802581787, 11.194806098937988, 18.674898147583008, 1.4095735549926758], step: 251000, lr: 9.687468164739773e-05 2023-03-18 08:51:23,887 44k INFO Saving model and optimizer state at iteration 249 to ./logs\44k\G_251000.pth 2023-03-18 08:51:24,615 44k INFO Saving model and optimizer state at iteration 249 to ./logs\44k\D_251000.pth 2023-03-18 08:51:25,264 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_248000.pth 2023-03-18 08:51:25,310 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_248000.pth 2023-03-18 08:52:38,136 44k INFO Train Epoch: 249 [71%] 2023-03-18 08:52:38,137 44k INFO Losses: [2.5622119903564453, 1.986762285232544, 9.832114219665527, 14.965584754943848, 0.9729482531547546], step: 251200, lr: 9.687468164739773e-05 2023-03-18 08:53:51,160 44k INFO Train Epoch: 249 [91%] 2023-03-18 08:53:51,161 44k INFO Losses: [2.631925582885742, 2.0201616287231445, 14.088212966918945, 20.848636627197266, 1.5365430116653442], step: 251400, lr: 9.687468164739773e-05 2023-03-18 08:54:23,762 44k INFO ====> Epoch: 249, cost 381.64 s 2023-03-18 08:55:13,935 44k INFO Train Epoch: 250 [11%] 2023-03-18 08:55:13,935 44k INFO Losses: [2.259568214416504, 2.5274853706359863, 14.246437072753906, 17.829221725463867, 1.3332369327545166], step: 251600, lr: 9.68625723121918e-05 2023-03-18 08:56:26,039 44k INFO Train Epoch: 250 [31%] 2023-03-18 08:56:26,039 44k INFO Losses: [2.303274631500244, 2.1909542083740234, 11.05036449432373, 20.69980812072754, 1.1586024761199951], step: 251800, lr: 9.68625723121918e-05 2023-03-18 08:57:38,764 44k INFO Train Epoch: 250 [50%] 2023-03-18 08:57:38,765 44k INFO Losses: [2.2218637466430664, 2.476996898651123, 14.584376335144043, 21.017009735107422, 1.2576837539672852], step: 252000, lr: 9.68625723121918e-05 2023-03-18 08:57:41,931 44k INFO Saving model and optimizer state at iteration 250 to ./logs\44k\G_252000.pth 2023-03-18 08:57:42,703 44k INFO Saving model and optimizer state at iteration 250 to ./logs\44k\D_252000.pth 2023-03-18 08:57:43,363 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_249000.pth 2023-03-18 08:57:43,394 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_249000.pth 2023-03-18 08:58:56,268 44k INFO Train Epoch: 250 [70%] 2023-03-18 08:58:56,269 44k INFO Losses: [2.438119888305664, 2.206759452819824, 8.7505464553833, 20.598812103271484, 1.4918690919876099], step: 252200, lr: 9.68625723121918e-05 2023-03-18 09:00:09,155 44k INFO Train Epoch: 250 [90%] 2023-03-18 09:00:09,155 44k INFO Losses: [2.281118631362915, 2.3243696689605713, 8.829057693481445, 17.54267692565918, 1.493385672569275], step: 252400, lr: 9.68625723121918e-05 2023-03-18 09:00:45,494 44k INFO ====> Epoch: 250, cost 381.73 s 2023-03-18 09:01:31,990 44k INFO Train Epoch: 251 [10%] 2023-03-18 09:01:31,990 44k INFO Losses: [2.626612424850464, 2.433985710144043, 8.622821807861328, 17.3492374420166, 1.5426863431930542], step: 252600, lr: 9.685046449065278e-05 2023-03-18 09:02:44,338 44k INFO Train Epoch: 251 [30%] 2023-03-18 09:02:44,338 44k INFO Losses: [2.434884548187256, 2.3362677097320557, 12.98608112335205, 20.48481559753418, 1.307424545288086], step: 252800, lr: 9.685046449065278e-05 2023-03-18 09:03:57,118 44k INFO Train Epoch: 251 [50%] 2023-03-18 09:03:57,119 44k INFO Losses: [2.379218339920044, 2.816498041152954, 9.605239868164062, 20.381587982177734, 1.292375922203064], step: 253000, lr: 9.685046449065278e-05 2023-03-18 09:04:00,277 44k INFO Saving model and optimizer state at iteration 251 to ./logs\44k\G_253000.pth 2023-03-18 09:04:00,990 44k INFO Saving model and optimizer state at iteration 251 to ./logs\44k\D_253000.pth 2023-03-18 09:04:01,617 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_250000.pth 2023-03-18 09:04:01,647 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_250000.pth 2023-03-18 09:05:14,513 44k INFO Train Epoch: 251 [69%] 2023-03-18 09:05:14,513 44k INFO Losses: [2.474180221557617, 2.2589142322540283, 8.47376823425293, 12.810439109802246, 0.9772005081176758], step: 253200, lr: 9.685046449065278e-05 2023-03-18 09:06:27,537 44k INFO Train Epoch: 251 [89%] 2023-03-18 09:06:27,537 44k INFO Losses: [2.6033873558044434, 2.042515754699707, 9.714268684387207, 16.259326934814453, 1.49953031539917], step: 253400, lr: 9.685046449065278e-05 2023-03-18 09:07:07,452 44k INFO ====> Epoch: 251, cost 381.96 s 2023-03-18 09:07:50,399 44k INFO Train Epoch: 252 [9%] 2023-03-18 09:07:50,400 44k INFO Losses: [2.4546456336975098, 2.1858646869659424, 11.209453582763672, 19.772859573364258, 1.3955624103546143], step: 253600, lr: 9.683835818259144e-05 2023-03-18 09:09:02,896 44k INFO Train Epoch: 252 [29%] 2023-03-18 09:09:02,896 44k INFO Losses: [2.4003283977508545, 2.0603175163269043, 8.753173828125, 18.300884246826172, 1.042773723602295], step: 253800, lr: 9.683835818259144e-05 2023-03-18 09:10:15,401 44k INFO Train Epoch: 252 [49%] 2023-03-18 09:10:15,402 44k INFO Losses: [2.374217987060547, 2.1952061653137207, 8.2424898147583, 18.5125789642334, 0.7695099115371704], step: 254000, lr: 9.683835818259144e-05 2023-03-18 09:10:18,515 44k INFO Saving model and optimizer state at iteration 252 to ./logs\44k\G_254000.pth 2023-03-18 09:10:19,284 44k INFO Saving model and optimizer state at iteration 252 to ./logs\44k\D_254000.pth 2023-03-18 09:10:19,995 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_251000.pth 2023-03-18 09:10:20,039 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_251000.pth 2023-03-18 09:11:32,946 44k INFO Train Epoch: 252 [68%] 2023-03-18 09:11:32,946 44k INFO Losses: [2.435281991958618, 2.2858400344848633, 8.163786888122559, 19.859832763671875, 1.1785646677017212], step: 254200, lr: 9.683835818259144e-05 2023-03-18 09:12:45,831 44k INFO Train Epoch: 252 [88%] 2023-03-18 09:12:45,832 44k INFO Losses: [2.505737066268921, 2.2505266666412354, 8.299269676208496, 14.867690086364746, 1.582324743270874], step: 254400, lr: 9.683835818259144e-05 2023-03-18 09:13:29,388 44k INFO ====> Epoch: 252, cost 381.94 s 2023-03-18 09:14:08,328 44k INFO Train Epoch: 253 [8%] 2023-03-18 09:14:08,329 44k INFO Losses: [2.2143125534057617, 2.5923800468444824, 10.749336242675781, 18.321123123168945, 0.8893724679946899], step: 254600, lr: 9.68262533878186e-05 2023-03-18 09:15:20,761 44k INFO Train Epoch: 253 [28%] 2023-03-18 09:15:20,761 44k INFO Losses: [2.382950782775879, 2.1777713298797607, 10.411404609680176, 20.5659122467041, 1.3126310110092163], step: 254800, lr: 9.68262533878186e-05 2023-03-18 09:16:33,271 44k INFO Train Epoch: 253 [48%] 2023-03-18 09:16:33,271 44k INFO Losses: [2.5302915573120117, 2.1162126064300537, 11.149401664733887, 18.18644142150879, 1.3033106327056885], step: 255000, lr: 9.68262533878186e-05 2023-03-18 09:16:36,512 44k INFO Saving model and optimizer state at iteration 253 to ./logs\44k\G_255000.pth 2023-03-18 09:16:37,235 44k INFO Saving model and optimizer state at iteration 253 to ./logs\44k\D_255000.pth 2023-03-18 09:16:37,877 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_252000.pth 2023-03-18 09:16:37,905 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_252000.pth 2023-03-18 09:17:50,807 44k INFO Train Epoch: 253 [67%] 2023-03-18 09:17:50,807 44k INFO Losses: [2.382744312286377, 2.0778160095214844, 10.051019668579102, 18.913978576660156, 1.1098787784576416], step: 255200, lr: 9.68262533878186e-05 2023-03-18 09:19:03,710 44k INFO Train Epoch: 253 [87%] 2023-03-18 09:19:03,710 44k INFO Losses: [2.496880292892456, 2.18510103225708, 8.978373527526855, 18.34113121032715, 1.3866078853607178], step: 255400, lr: 9.68262533878186e-05 2023-03-18 09:19:50,866 44k INFO ====> Epoch: 253, cost 381.48 s 2023-03-18 09:20:26,326 44k INFO Train Epoch: 254 [7%] 2023-03-18 09:20:26,327 44k INFO Losses: [2.4198901653289795, 2.2639925479888916, 9.081136703491211, 17.467979431152344, 1.109919786453247], step: 255600, lr: 9.681415010614512e-05 2023-03-18 09:21:38,865 44k INFO Train Epoch: 254 [27%] 2023-03-18 09:21:38,866 44k INFO Losses: [2.581373929977417, 2.1790874004364014, 7.4498138427734375, 15.92393970489502, 0.8538084626197815], step: 255800, lr: 9.681415010614512e-05 2023-03-18 09:22:51,450 44k INFO Train Epoch: 254 [47%] 2023-03-18 09:22:51,450 44k INFO Losses: [2.459275722503662, 2.444404363632202, 9.379767417907715, 17.62710952758789, 1.491417646408081], step: 256000, lr: 9.681415010614512e-05 2023-03-18 09:22:54,625 44k INFO Saving model and optimizer state at iteration 254 to ./logs\44k\G_256000.pth 2023-03-18 09:22:55,347 44k INFO Saving model and optimizer state at iteration 254 to ./logs\44k\D_256000.pth 2023-03-18 09:22:56,019 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_253000.pth 2023-03-18 09:22:56,051 44k INFO .. 
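The recurring pattern above of "Saving model and optimizer state ..." followed by ".. Free up space by deleting ckpt ..." reflects the keep_ckpts: 3 setting in the run configuration: after each save, only the three most recent G_*.pth / D_*.pth files are retained. A rough sketch of that kind of rotation is below; prune_checkpoints is a hypothetical helper written for illustration, not the project's actual implementation:

```python
import os
import re
from glob import glob

def prune_checkpoints(model_dir: str, keep: int = 3) -> None:
    """Delete all but the `keep` newest G_*.pth / D_*.pth checkpoints (by step number)."""
    for prefix in ("G", "D"):
        def step_of(path: str) -> int:
            # e.g. "G_235000.pth" -> 235000
            m = re.search(rf"{prefix}_(\d+)\.pth$", os.path.basename(path))
            return int(m.group(1)) if m else -1

        ckpts = sorted(glob(os.path.join(model_dir, f"{prefix}_*.pth")), key=step_of)
        for old in ckpts[:-keep]:
            print(f".. Free up space by deleting ckpt {old}")
            os.remove(old)

# Usage (hypothetical): prune_checkpoints("./logs/44k", keep=3)
```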
Free up space by deleting ckpt ./logs\44k\D_253000.pth 2023-03-18 09:24:09,030 44k INFO Train Epoch: 254 [66%] 2023-03-18 09:24:09,030 44k INFO Losses: [2.737125873565674, 2.260420799255371, 8.229069709777832, 18.021198272705078, 1.014447569847107], step: 256200, lr: 9.681415010614512e-05 2023-03-18 09:25:21,948 44k INFO Train Epoch: 254 [86%] 2023-03-18 09:25:21,949 44k INFO Losses: [2.937962055206299, 1.916571855545044, 6.43010139465332, 14.102213859558105, 1.3753830194473267], step: 256400, lr: 9.681415010614512e-05 2023-03-18 09:26:12,746 44k INFO ====> Epoch: 254, cost 381.88 s 2023-03-18 09:26:44,601 44k INFO Train Epoch: 255 [6%] 2023-03-18 09:26:44,602 44k INFO Losses: [2.4092535972595215, 2.0987608432769775, 9.605754852294922, 20.828521728515625, 1.05930757522583], step: 256600, lr: 9.680204833738185e-05 2023-03-18 09:27:57,174 44k INFO Train Epoch: 255 [26%] 2023-03-18 09:27:57,175 44k INFO Losses: [2.270691156387329, 2.550675630569458, 14.29659652709961, 20.925373077392578, 1.502076268196106], step: 256800, lr: 9.680204833738185e-05 2023-03-18 09:29:09,822 44k INFO Train Epoch: 255 [46%] 2023-03-18 09:29:09,822 44k INFO Losses: [2.4305953979492188, 2.1076934337615967, 7.796544551849365, 19.284032821655273, 1.1374989748001099], step: 257000, lr: 9.680204833738185e-05 2023-03-18 09:29:13,036 44k INFO Saving model and optimizer state at iteration 255 to ./logs\44k\G_257000.pth 2023-03-18 09:29:13,799 44k INFO Saving model and optimizer state at iteration 255 to ./logs\44k\D_257000.pth 2023-03-18 09:29:14,423 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_254000.pth 2023-03-18 09:29:14,458 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_254000.pth 2023-03-18 09:30:27,277 44k INFO Train Epoch: 255 [65%] 2023-03-18 09:30:27,277 44k INFO Losses: [2.4759020805358887, 2.0829544067382812, 11.983040809631348, 20.86541748046875, 1.3286665678024292], step: 257200, lr: 9.680204833738185e-05 2023-03-18 09:31:40,022 44k INFO Train Epoch: 255 [85%] 2023-03-18 09:31:40,023 44k INFO Losses: [2.3059957027435303, 2.392235040664673, 11.505219459533691, 23.607227325439453, 1.313123106956482], step: 257400, lr: 9.680204833738185e-05 2023-03-18 09:32:34,428 44k INFO ====> Epoch: 255, cost 381.68 s 2023-03-18 09:33:02,650 44k INFO Train Epoch: 256 [5%] 2023-03-18 09:33:02,651 44k INFO Losses: [2.318875312805176, 2.367741823196411, 10.286545753479004, 16.128402709960938, 1.1669859886169434], step: 257600, lr: 9.678994808133967e-05 2023-03-18 09:34:15,220 44k INFO Train Epoch: 256 [25%] 2023-03-18 09:34:15,220 44k INFO Losses: [2.43446683883667, 2.319711208343506, 12.45263385772705, 20.262500762939453, 1.4164291620254517], step: 257800, lr: 9.678994808133967e-05 2023-03-18 09:35:27,703 44k INFO Train Epoch: 256 [45%] 2023-03-18 09:35:27,703 44k INFO Losses: [2.4715733528137207, 2.2039623260498047, 9.541393280029297, 21.591384887695312, 1.560873031616211], step: 258000, lr: 9.678994808133967e-05 2023-03-18 09:35:30,937 44k INFO Saving model and optimizer state at iteration 256 to ./logs\44k\G_258000.pth 2023-03-18 09:35:31,600 44k INFO Saving model and optimizer state at iteration 256 to ./logs\44k\D_258000.pth 2023-03-18 09:35:32,229 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_255000.pth 2023-03-18 09:35:32,257 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_255000.pth 2023-03-18 09:36:45,050 44k INFO Train Epoch: 256 [64%] 2023-03-18 09:36:45,050 44k INFO Losses: [2.4029998779296875, 2.4097042083740234, 9.449996948242188, 16.77958106994629, 0.835869312286377], step: 258200, lr: 9.678994808133967e-05 2023-03-18 09:37:57,850 44k INFO Train Epoch: 256 [84%] 2023-03-18 09:37:57,851 44k INFO Losses: [2.4529125690460205, 2.0709335803985596, 11.126363754272461, 21.03396987915039, 1.4689221382141113], step: 258400, lr: 9.678994808133967e-05 2023-03-18 09:38:55,965 44k INFO ====> Epoch: 256, cost 381.54 s 2023-03-18 09:39:20,375 44k INFO Train Epoch: 257 [4%] 2023-03-18 09:39:20,375 44k INFO Losses: [2.5618629455566406, 2.139629602432251, 11.11988353729248, 23.124378204345703, 1.271600604057312], step: 258600, lr: 9.67778493378295e-05 2023-03-18 09:40:33,091 44k INFO Train Epoch: 257 [24%] 2023-03-18 09:40:33,091 44k INFO Losses: [2.343031406402588, 2.4708845615386963, 10.293150901794434, 20.090959548950195, 1.2511394023895264], step: 258800, lr: 9.67778493378295e-05 2023-03-18 09:41:45,518 44k INFO Train Epoch: 257 [44%] 2023-03-18 09:41:45,519 44k INFO Losses: [2.3797669410705566, 2.382173538208008, 11.572092056274414, 18.887983322143555, 1.1752614974975586], step: 259000, lr: 9.67778493378295e-05 2023-03-18 09:41:48,678 44k INFO Saving model and optimizer state at iteration 257 to ./logs\44k\G_259000.pth 2023-03-18 09:41:49,353 44k INFO Saving model and optimizer state at iteration 257 to ./logs\44k\D_259000.pth 2023-03-18 09:41:49,984 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_256000.pth 2023-03-18 09:41:50,013 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_256000.pth 2023-03-18 09:43:03,023 44k INFO Train Epoch: 257 [63%] 2023-03-18 09:43:03,023 44k INFO Losses: [2.3747289180755615, 2.724156618118286, 16.274486541748047, 23.848041534423828, 1.2195459604263306], step: 259200, lr: 9.67778493378295e-05 2023-03-18 09:44:15,713 44k INFO Train Epoch: 257 [83%] 2023-03-18 09:44:15,713 44k INFO Losses: [1.9940415620803833, 2.4824881553649902, 13.898460388183594, 24.66304588317871, 1.6445965766906738], step: 259400, lr: 9.67778493378295e-05 2023-03-18 09:45:17,645 44k INFO ====> Epoch: 257, cost 381.68 s 2023-03-18 09:45:38,387 44k INFO Train Epoch: 258 [3%] 2023-03-18 09:45:38,387 44k INFO Losses: [2.4652185440063477, 2.0332376956939697, 8.526308059692383, 21.24884605407715, 1.1362745761871338], step: 259600, lr: 9.676575210666227e-05 2023-03-18 09:46:51,186 44k INFO Train Epoch: 258 [23%] 2023-03-18 09:46:51,186 44k INFO Losses: [2.4297549724578857, 2.4736745357513428, 8.427279472351074, 15.789267539978027, 1.1854524612426758], step: 259800, lr: 9.676575210666227e-05 2023-03-18 09:48:03,639 44k INFO Train Epoch: 258 [43%] 2023-03-18 09:48:03,639 44k INFO Losses: [2.660482883453369, 1.811849594116211, 8.416715621948242, 16.356464385986328, 1.2926148176193237], step: 260000, lr: 9.676575210666227e-05 2023-03-18 09:48:06,846 44k INFO Saving model and optimizer state at iteration 258 to ./logs\44k\G_260000.pth 2023-03-18 09:48:07,554 44k INFO Saving model and optimizer state at iteration 258 to ./logs\44k\D_260000.pth 2023-03-18 09:48:08,204 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_257000.pth 2023-03-18 09:48:08,243 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_257000.pth 2023-03-18 09:49:21,292 44k INFO Train Epoch: 258 [62%] 2023-03-18 09:49:21,292 44k INFO Losses: [2.417520761489868, 2.3465538024902344, 11.388467788696289, 21.296764373779297, 1.2293344736099243], step: 260200, lr: 9.676575210666227e-05 2023-03-18 09:50:34,078 44k INFO Train Epoch: 258 [82%] 2023-03-18 09:50:34,078 44k INFO Losses: [2.4127280712127686, 2.29659366607666, 7.893820285797119, 17.759544372558594, 1.2188642024993896], step: 260400, lr: 9.676575210666227e-05 2023-03-18 09:51:39,681 44k INFO ====> Epoch: 258, cost 382.04 s 2023-03-18 09:51:56,983 44k INFO Train Epoch: 259 [2%] 2023-03-18 09:51:56,983 44k INFO Losses: [2.4113802909851074, 2.112454891204834, 9.777240753173828, 19.03521728515625, 1.2678860425949097], step: 260600, lr: 9.675365638764893e-05 2023-03-18 09:53:09,984 44k INFO Train Epoch: 259 [22%] 2023-03-18 09:53:09,985 44k INFO Losses: [2.6094696521759033, 2.3804783821105957, 6.868930816650391, 17.86251449584961, 1.1080708503723145], step: 260800, lr: 9.675365638764893e-05 2023-03-18 09:54:22,303 44k INFO Train Epoch: 259 [42%] 2023-03-18 09:54:22,304 44k INFO Losses: [2.4433178901672363, 2.143448829650879, 9.458659172058105, 14.629308700561523, 1.1964508295059204], step: 261000, lr: 9.675365638764893e-05 2023-03-18 09:54:25,548 44k INFO Saving model and optimizer state at iteration 259 to ./logs\44k\G_261000.pth 2023-03-18 09:54:26,262 44k INFO Saving model and optimizer state at iteration 259 to ./logs\44k\D_261000.pth 2023-03-18 09:54:26,899 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_258000.pth 2023-03-18 09:54:26,927 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_258000.pth 2023-03-18 09:55:39,903 44k INFO Train Epoch: 259 [61%] 2023-03-18 09:55:39,903 44k INFO Losses: [2.459667682647705, 2.2401628494262695, 10.034034729003906, 19.82508087158203, 1.189092755317688], step: 261200, lr: 9.675365638764893e-05 2023-03-18 09:56:52,594 44k INFO Train Epoch: 259 [81%] 2023-03-18 09:56:52,595 44k INFO Losses: [2.451457977294922, 2.430536985397339, 12.077695846557617, 21.336233139038086, 0.9631905555725098], step: 261400, lr: 9.675365638764893e-05 2023-03-18 09:58:01,852 44k INFO ====> Epoch: 259, cost 382.17 s 2023-03-18 09:58:15,421 44k INFO Train Epoch: 260 [1%] 2023-03-18 09:58:15,421 44k INFO Losses: [2.2433271408081055, 2.811359167098999, 13.929890632629395, 24.692737579345703, 1.1705387830734253], step: 261600, lr: 9.674156218060047e-05 2023-03-18 09:59:28,368 44k INFO Train Epoch: 260 [21%] 2023-03-18 09:59:28,368 44k INFO Losses: [2.320080518722534, 2.5559515953063965, 9.182693481445312, 18.92251205444336, 1.2416419982910156], step: 261800, lr: 9.674156218060047e-05 2023-03-18 10:00:40,610 44k INFO Train Epoch: 260 [41%] 2023-03-18 10:00:40,611 44k INFO Losses: [2.229428291320801, 2.519676446914673, 12.346684455871582, 21.15596580505371, 1.2791599035263062], step: 262000, lr: 9.674156218060047e-05 2023-03-18 10:00:43,862 44k INFO Saving model and optimizer state at iteration 260 to ./logs\44k\G_262000.pth 2023-03-18 10:00:44,530 44k INFO Saving model and optimizer state at iteration 260 to ./logs\44k\D_262000.pth 2023-03-18 10:00:45,156 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_259000.pth 2023-03-18 10:00:45,184 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_259000.pth
2023-03-18 10:01:58,120 44k INFO Train Epoch: 260 [60%]
2023-03-18 10:01:58,120 44k INFO Losses: [2.429933547973633, 2.3672735691070557, 6.362410545349121, 17.271265029907227, 1.5991843938827515], step: 262200, lr: 9.674156218060047e-05
2023-03-18 10:03:10,829 44k INFO Train Epoch: 260 [80%]
2023-03-18 10:03:10,830 44k INFO Losses: [2.4051482677459717, 2.25038743019104, 7.4500932693481445, 18.98808479309082, 1.4735701084136963], step: 262400, lr: 9.674156218060047e-05
2023-03-18 10:04:23,578 44k INFO ====> Epoch: 260, cost 381.73 s
2023-03-19 01:59:55,759 44k INFO {'train': {'log_interval': 200, 'eval_interval': 1000, 'seed': 1234, 'epochs': 300, 'learning_rate': 0.0001, 'betas': [0.8, 0.99], 'eps': 1e-09, 'batch_size': 6, 'fp16_run': False, 'lr_decay': 0.999875, 'segment_size': 10240, 'init_lr_ratio': 1, 'warmup_epochs': 0, 'c_mel': 45, 'c_kl': 1.0, 'use_sr': True, 'max_speclen': 512, 'port': '8001', 'keep_ckpts': 3}, 'data': {'training_files': 'filelists/train.txt', 'validation_files': 'filelists/val.txt', 'max_wav_value': 32768.0, 'sampling_rate': 44100, 'filter_length': 2048, 'hop_length': 512, 'win_length': 2048, 'n_mel_channels': 80, 'mel_fmin': 0.0, 'mel_fmax': 22050}, 'model': {'inter_channels': 192, 'hidden_channels': 192, 'filter_channels': 768, 'n_heads': 2, 'n_layers': 6, 'kernel_size': 3, 'p_dropout': 0.1, 'resblock': '1', 'resblock_kernel_sizes': [3, 7, 11], 'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'upsample_rates': [8, 8, 2, 2, 2], 'upsample_initial_channel': 512, 'upsample_kernel_sizes': [16, 16, 4, 4, 4], 'n_layers_q': 3, 'use_spectral_norm': False, 'gin_channels': 256, 'ssl_dim': 256, 'n_speakers': 200}, 'spk': {'tubaki': 0}, 'model_dir': './logs\\44k'}
2023-03-19 01:59:55,795 44k WARNING git hash values are different. cea6df30(saved) != fd4d47fd(current)
2023-03-19 01:59:59,169 44k INFO Loaded checkpoint './logs\44k\G_262000.pth' (iteration 260)
2023-03-19 01:59:59,734 44k INFO Loaded checkpoint './logs\44k\D_262000.pth' (iteration 260)
2023-03-19 02:00:23,091 44k INFO Train Epoch: 260 [1%]
2023-03-19 02:00:23,091 44k INFO Losses: [2.3211514949798584, 2.5405774116516113, 10.594609260559082, 19.185115814208984, 1.0337001085281372], step: 261600, lr: 9.67294694853279e-05
2023-03-19 02:01:44,830 44k INFO Train Epoch: 260 [21%]
2023-03-19 02:01:44,831 44k INFO Losses: [2.6637802124023438, 2.350691795349121, 7.301225185394287, 18.115509033203125, 1.1006356477737427], step: 261800, lr: 9.67294694853279e-05
2023-03-19 02:03:02,965 44k INFO Train Epoch: 260 [41%]
2023-03-19 02:03:02,966 44k INFO Losses: [2.3746938705444336, 2.265259265899658, 10.869388580322266, 20.171342849731445, 1.405778169631958], step: 262000, lr: 9.67294694853279e-05
2023-03-19 02:03:06,984 44k INFO Saving model and optimizer state at iteration 260 to ./logs\44k\G_262000.pth
2023-03-19 02:03:07,804 44k INFO Saving model and optimizer state at iteration 260 to ./logs\44k\D_262000.pth
2023-03-19 02:04:26,429 44k INFO Train Epoch: 260 [60%]
2023-03-19 02:04:26,430 44k INFO Losses: [2.5825209617614746, 2.0350022315979004, 8.09071159362793, 15.090100288391113, 1.2028111219406128], step: 262200, lr: 9.67294694853279e-05
2023-03-19 02:05:42,925 44k INFO Train Epoch: 260 [80%]
2023-03-19 02:05:42,926 44k INFO Losses: [2.6121363639831543, 1.910766839981079, 8.228078842163086, 16.102209091186523, 1.0640830993652344], step: 262400, lr: 9.67294694853279e-05
2023-03-19 02:07:02,331 44k INFO ====> Epoch: 260, cost 426.57 s
2023-03-19 02:07:11,999 44k INFO Train Epoch: 261 [0%]
2023-03-19 02:07:12,000 44k INFO Losses: [2.422490358352661, 2.255605936050415, 6.600303649902344, 19.44715690612793, 1.1653083562850952], step: 262600, lr: 9.671737830164223e-05
2023-03-19 02:08:26,931 44k INFO Train Epoch: 261 [20%]
2023-03-19 02:08:26,931 44k INFO Losses: [2.2784781455993652, 2.4204587936401367, 10.786633491516113, 22.055896759033203, 1.3711544275283813], step: 262800, lr: 9.671737830164223e-05
2023-03-19 02:09:40,789 44k INFO Train Epoch: 261 [40%]
2023-03-19 02:09:40,790 44k INFO Losses: [2.38016414642334, 2.1829919815063477, 7.449249744415283, 19.413223266601562, 1.1498881578445435], step: 263000, lr: 9.671737830164223e-05
2023-03-19 02:09:43,994 44k INFO Saving model and optimizer state at iteration 261 to ./logs\44k\G_263000.pth
2023-03-19 02:09:44,891 44k INFO Saving model and optimizer state at iteration 261 to ./logs\44k\D_263000.pth
2023-03-19 02:09:45,737 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_260000.pth
2023-03-19 02:09:45,738 44k INFO .. 
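At this point the run has been restarted: the configuration is logged again (now with 'epochs': 300), the git-hash warning reappears, and training resumes from G_262000.pth / D_262000.pth at iteration 260, which is why epoch 260 (steps 261600 through 262400) is trained a second time, at a slightly smaller learning rate than the first pass (9.67294694853279e-05 vs 9.674156218060047e-05), presumably because the scheduler is stepped again on resume. A minimal way to peek at such a checkpoint's metadata, assuming the usual VITS-style dict with 'model', 'optimizer', 'iteration' and 'learning_rate' entries (an assumption, not verified against this repository):

```python
import torch

# Inspect the resumed generator checkpoint on CPU (dict keys are assumed, VITS-style).
ckpt = torch.load("./logs/44k/G_262000.pth", map_location="cpu")

print(ckpt["iteration"])       # expected 260, matching "Loaded checkpoint ... (iteration 260)"
print(ckpt["learning_rate"])   # learning rate stored at save time
print(len(ckpt["model"]))      # number of tensors in the generator state dict
```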
Free up space by deleting ckpt ./logs\44k\D_260000.pth 2023-03-19 02:11:00,752 44k INFO Train Epoch: 261 [59%] 2023-03-19 02:11:00,752 44k INFO Losses: [2.491502046585083, 2.414815902709961, 13.502904891967773, 21.442289352416992, 1.0283231735229492], step: 263200, lr: 9.671737830164223e-05 2023-03-19 02:12:13,373 44k INFO Train Epoch: 261 [79%] 2023-03-19 02:12:13,373 44k INFO Losses: [2.600492000579834, 2.006051778793335, 5.73085880279541, 16.408422470092773, 1.2707939147949219], step: 263400, lr: 9.671737830164223e-05 2023-03-19 02:13:26,400 44k INFO Train Epoch: 261 [99%] 2023-03-19 02:13:26,401 44k INFO Losses: [2.387883186340332, 2.2035417556762695, 8.994551658630371, 18.550861358642578, 1.2615712881088257], step: 263600, lr: 9.671737830164223e-05 2023-03-19 02:13:30,056 44k INFO ====> Epoch: 261, cost 387.73 s 2023-03-19 02:14:49,301 44k INFO Train Epoch: 262 [19%] 2023-03-19 02:14:49,301 44k INFO Losses: [2.4111647605895996, 2.4754996299743652, 6.927050590515137, 14.130660057067871, 1.0071910619735718], step: 263800, lr: 9.670528862935451e-05 2023-03-19 02:16:02,626 44k INFO Train Epoch: 262 [39%] 2023-03-19 02:16:02,626 44k INFO Losses: [2.5450007915496826, 2.3395144939422607, 10.965438842773438, 20.434831619262695, 1.4163835048675537], step: 264000, lr: 9.670528862935451e-05 2023-03-19 02:16:05,816 44k INFO Saving model and optimizer state at iteration 262 to ./logs\44k\G_264000.pth 2023-03-19 02:16:06,590 44k INFO Saving model and optimizer state at iteration 262 to ./logs\44k\D_264000.pth 2023-03-19 02:16:07,272 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_261000.pth 2023-03-19 02:16:07,273 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_261000.pth 2023-03-19 02:17:20,497 44k INFO Train Epoch: 262 [58%] 2023-03-19 02:17:20,497 44k INFO Losses: [2.471515417098999, 2.1920788288116455, 10.81757926940918, 20.5076847076416, 1.3632688522338867], step: 264200, lr: 9.670528862935451e-05 2023-03-19 02:18:34,413 44k INFO Train Epoch: 262 [78%] 2023-03-19 02:18:34,413 44k INFO Losses: [2.1657161712646484, 2.5312416553497314, 13.060168266296387, 18.855056762695312, 0.9994138479232788], step: 264400, lr: 9.670528862935451e-05 2023-03-19 02:19:49,082 44k INFO Train Epoch: 262 [98%] 2023-03-19 02:19:49,082 44k INFO Losses: [2.5518157482147217, 2.0398318767547607, 9.223038673400879, 19.826202392578125, 1.246078610420227], step: 264600, lr: 9.670528862935451e-05 2023-03-19 02:19:56,517 44k INFO ====> Epoch: 262, cost 386.46 s 2023-03-19 02:21:11,637 44k INFO Train Epoch: 263 [18%] 2023-03-19 02:21:11,637 44k INFO Losses: [2.583767890930176, 2.0606112480163574, 10.790326118469238, 19.742599487304688, 1.4583714008331299], step: 264800, lr: 9.669320046827584e-05 2023-03-19 02:22:24,693 44k INFO Train Epoch: 263 [38%] 2023-03-19 02:22:24,693 44k INFO Losses: [2.3968074321746826, 2.3754870891571045, 11.224483489990234, 19.464847564697266, 1.0073120594024658], step: 265000, lr: 9.669320046827584e-05 2023-03-19 02:22:27,793 44k INFO Saving model and optimizer state at iteration 263 to ./logs\44k\G_265000.pth 2023-03-19 02:22:28,526 44k INFO Saving model and optimizer state at iteration 263 to ./logs\44k\D_265000.pth 2023-03-19 02:22:29,222 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_262000.pth 2023-03-19 02:22:29,253 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_262000.pth 2023-03-19 02:23:42,447 44k INFO Train Epoch: 263 [57%] 2023-03-19 02:23:42,447 44k INFO Losses: [2.5012118816375732, 2.4175941944122314, 10.368206024169922, 18.34642791748047, 1.1876918077468872], step: 265200, lr: 9.669320046827584e-05 2023-03-19 02:24:55,400 44k INFO Train Epoch: 263 [77%] 2023-03-19 02:24:55,400 44k INFO Losses: [2.5349106788635254, 2.2554664611816406, 8.00788688659668, 14.268106460571289, 1.333648681640625], step: 265400, lr: 9.669320046827584e-05 2023-03-19 02:26:08,443 44k INFO Train Epoch: 263 [97%] 2023-03-19 02:26:08,444 44k INFO Losses: [2.4380109310150146, 2.546419143676758, 11.57227611541748, 20.759374618530273, 1.1405391693115234], step: 265600, lr: 9.669320046827584e-05 2023-03-19 02:26:19,443 44k INFO ====> Epoch: 263, cost 382.93 s 2023-03-19 02:27:30,963 44k INFO Train Epoch: 264 [17%] 2023-03-19 02:27:30,963 44k INFO Losses: [2.4457125663757324, 2.2385342121124268, 11.62179183959961, 21.246217727661133, 1.1906036138534546], step: 265800, lr: 9.668111381821731e-05 2023-03-19 02:28:43,679 44k INFO Train Epoch: 264 [37%] 2023-03-19 02:28:43,679 44k INFO Losses: [2.672743320465088, 2.143559217453003, 8.775983810424805, 16.63150405883789, 1.3546695709228516], step: 266000, lr: 9.668111381821731e-05 2023-03-19 02:28:46,776 44k INFO Saving model and optimizer state at iteration 264 to ./logs\44k\G_266000.pth 2023-03-19 02:28:47,572 44k INFO Saving model and optimizer state at iteration 264 to ./logs\44k\D_266000.pth 2023-03-19 02:28:48,263 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_263000.pth 2023-03-19 02:28:48,293 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_263000.pth 2023-03-19 02:30:01,951 44k INFO Train Epoch: 264 [56%] 2023-03-19 02:30:01,952 44k INFO Losses: [2.4720072746276855, 2.241614580154419, 11.464181900024414, 20.94580078125, 1.339444875717163], step: 266200, lr: 9.668111381821731e-05 2023-03-19 02:31:15,664 44k INFO Train Epoch: 264 [76%] 2023-03-19 02:31:15,664 44k INFO Losses: [2.2868282794952393, 2.3788790702819824, 14.4191255569458, 21.03072166442871, 1.2259149551391602], step: 266400, lr: 9.668111381821731e-05 2023-03-19 02:32:28,872 44k INFO Train Epoch: 264 [96%] 2023-03-19 02:32:28,872 44k INFO Losses: [2.4144997596740723, 2.1124157905578613, 9.136685371398926, 19.31741714477539, 1.3940439224243164], step: 266600, lr: 9.668111381821731e-05 2023-03-19 02:32:43,579 44k INFO ====> Epoch: 264, cost 384.14 s 2023-03-19 02:33:51,586 44k INFO Train Epoch: 265 [16%] 2023-03-19 02:33:51,586 44k INFO Losses: [2.378944158554077, 2.4741129875183105, 8.082807540893555, 17.234331130981445, 1.4621081352233887], step: 266800, lr: 9.666902867899003e-05 2023-03-19 02:35:03,908 44k INFO Train Epoch: 265 [36%] 2023-03-19 02:35:03,908 44k INFO Losses: [2.299959897994995, 2.384162187576294, 9.134665489196777, 18.174819946289062, 1.2169592380523682], step: 267000, lr: 9.666902867899003e-05 2023-03-19 02:35:07,016 44k INFO Saving model and optimizer state at iteration 265 to ./logs\44k\G_267000.pth 2023-03-19 02:35:07,820 44k INFO Saving model and optimizer state at iteration 265 to ./logs\44k\D_267000.pth 2023-03-19 02:35:08,520 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_264000.pth 2023-03-19 02:35:08,567 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_264000.pth 2023-03-19 02:36:21,915 44k INFO Train Epoch: 265 [55%] 2023-03-19 02:36:21,916 44k INFO Losses: [2.458735466003418, 2.189445972442627, 11.577871322631836, 22.164913177490234, 1.361952304840088], step: 267200, lr: 9.666902867899003e-05 2023-03-19 02:37:34,838 44k INFO Train Epoch: 265 [75%] 2023-03-19 02:37:34,838 44k INFO Losses: [2.1394054889678955, 2.608342409133911, 16.84982681274414, 24.437557220458984, 1.3146775960922241], step: 267400, lr: 9.666902867899003e-05 2023-03-19 02:38:47,842 44k INFO Train Epoch: 265 [95%] 2023-03-19 02:38:47,843 44k INFO Losses: [2.3801000118255615, 2.2831053733825684, 10.533075332641602, 20.56385612487793, 1.179853916168213], step: 267600, lr: 9.666902867899003e-05 2023-03-19 02:39:06,178 44k INFO ====> Epoch: 265, cost 382.60 s 2023-03-19 02:40:10,642 44k INFO Train Epoch: 266 [15%] 2023-03-19 02:40:10,643 44k INFO Losses: [2.6372432708740234, 2.3864753246307373, 6.219926357269287, 20.209857940673828, 1.2090160846710205], step: 267800, lr: 9.665694505040515e-05 2023-03-19 02:41:23,042 44k INFO Train Epoch: 266 [35%] 2023-03-19 02:41:23,042 44k INFO Losses: [2.362396717071533, 2.3295323848724365, 10.959969520568848, 19.592910766601562, 1.4852442741394043], step: 268000, lr: 9.665694505040515e-05 2023-03-19 02:41:26,129 44k INFO Saving model and optimizer state at iteration 266 to ./logs\44k\G_268000.pth 2023-03-19 02:41:26,886 44k INFO Saving model and optimizer state at iteration 266 to ./logs\44k\D_268000.pth 2023-03-19 02:41:27,616 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_265000.pth 2023-03-19 02:41:27,658 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_265000.pth 2023-03-19 02:42:40,757 44k INFO Train Epoch: 266 [54%] 2023-03-19 02:42:40,758 44k INFO Losses: [2.654064416885376, 1.9229744672775269, 5.6208577156066895, 14.225379943847656, 1.4120049476623535], step: 268200, lr: 9.665694505040515e-05 2023-03-19 02:43:54,044 44k INFO Train Epoch: 266 [74%] 2023-03-19 02:43:54,045 44k INFO Losses: [2.7579963207244873, 1.9401808977127075, 8.828824996948242, 16.585468292236328, 0.955632209777832], step: 268400, lr: 9.665694505040515e-05 2023-03-19 02:45:07,179 44k INFO Train Epoch: 266 [94%] 2023-03-19 02:45:07,179 44k INFO Losses: [2.6352992057800293, 2.0629770755767822, 11.406524658203125, 19.15121841430664, 1.349000334739685], step: 268600, lr: 9.665694505040515e-05 2023-03-19 02:45:29,088 44k INFO ====> Epoch: 266, cost 382.91 s 2023-03-19 02:46:29,869 44k INFO Train Epoch: 267 [14%] 2023-03-19 02:46:29,870 44k INFO Losses: [2.3119091987609863, 2.739515781402588, 14.103116989135742, 23.091737747192383, 1.242667555809021], step: 268800, lr: 9.664486293227385e-05 2023-03-19 02:47:42,281 44k INFO Train Epoch: 267 [34%] 2023-03-19 02:47:42,282 44k INFO Losses: [2.520034074783325, 2.6470906734466553, 8.503702163696289, 18.483768463134766, 1.1412053108215332], step: 269000, lr: 9.664486293227385e-05 2023-03-19 02:47:45,386 44k INFO Saving model and optimizer state at iteration 267 to ./logs\44k\G_269000.pth 2023-03-19 02:47:46,126 44k INFO Saving model and optimizer state at iteration 267 to ./logs\44k\D_269000.pth 2023-03-19 02:47:46,835 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_266000.pth 2023-03-19 02:47:46,865 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_266000.pth 2023-03-19 02:48:59,930 44k INFO Train Epoch: 267 [53%] 2023-03-19 02:48:59,931 44k INFO Losses: [2.371253728866577, 2.431213855743408, 13.446754455566406, 20.611413955688477, 1.2169926166534424], step: 269200, lr: 9.664486293227385e-05 2023-03-19 02:50:12,952 44k INFO Train Epoch: 267 [73%] 2023-03-19 02:50:12,952 44k INFO Losses: [2.4924771785736084, 2.343754291534424, 11.405534744262695, 19.440702438354492, 0.6485458612442017], step: 269400, lr: 9.664486293227385e-05 2023-03-19 02:51:26,116 44k INFO Train Epoch: 267 [93%] 2023-03-19 02:51:26,117 44k INFO Losses: [2.3594284057617188, 2.1304407119750977, 10.304728507995605, 20.6300106048584, 1.2147891521453857], step: 269600, lr: 9.664486293227385e-05 2023-03-19 02:51:51,630 44k INFO ====> Epoch: 267, cost 382.54 s 2023-03-19 02:52:48,708 44k INFO Train Epoch: 268 [13%] 2023-03-19 02:52:48,709 44k INFO Losses: [2.466020107269287, 2.1371212005615234, 11.686795234680176, 17.619367599487305, 1.2933294773101807], step: 269800, lr: 9.663278232440732e-05 2023-03-19 02:54:01,104 44k INFO Train Epoch: 268 [33%] 2023-03-19 02:54:01,105 44k INFO Losses: [2.727344036102295, 2.093787431716919, 9.448195457458496, 19.572751998901367, 1.0479401350021362], step: 270000, lr: 9.663278232440732e-05 2023-03-19 02:54:04,245 44k INFO Saving model and optimizer state at iteration 268 to ./logs\44k\G_270000.pth 2023-03-19 02:54:04,979 44k INFO Saving model and optimizer state at iteration 268 to ./logs\44k\D_270000.pth 2023-03-19 02:54:05,699 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_267000.pth 2023-03-19 02:54:05,732 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_267000.pth 2023-03-19 02:55:18,609 44k INFO Train Epoch: 268 [52%] 2023-03-19 02:55:18,610 44k INFO Losses: [2.526318073272705, 2.511281967163086, 7.923502445220947, 15.128816604614258, 1.3552827835083008], step: 270200, lr: 9.663278232440732e-05 2023-03-19 02:56:31,783 44k INFO Train Epoch: 268 [72%] 2023-03-19 02:56:31,783 44k INFO Losses: [2.3967037200927734, 2.3303709030151367, 8.66872787475586, 21.470630645751953, 1.4043810367584229], step: 270400, lr: 9.663278232440732e-05 2023-03-19 02:57:44,937 44k INFO Train Epoch: 268 [92%] 2023-03-19 02:57:44,937 44k INFO Losses: [2.2978219985961914, 2.454057455062866, 16.02345848083496, 24.1951904296875, 1.3730522394180298], step: 270600, lr: 9.663278232440732e-05 2023-03-19 02:58:14,064 44k INFO ====> Epoch: 268, cost 382.43 s 2023-03-19 02:59:07,543 44k INFO Train Epoch: 269 [12%] 2023-03-19 02:59:07,543 44k INFO Losses: [2.566420316696167, 2.246849536895752, 11.360777854919434, 18.59880828857422, 1.243043065071106], step: 270800, lr: 9.662070322661676e-05 2023-03-19 03:00:19,862 44k INFO Train Epoch: 269 [32%] 2023-03-19 03:00:19,862 44k INFO Losses: [2.289071559906006, 2.42862606048584, 14.418990135192871, 22.48687171936035, 1.3247634172439575], step: 271000, lr: 9.662070322661676e-05 2023-03-19 03:00:23,172 44k INFO Saving model and optimizer state at iteration 269 to ./logs\44k\G_271000.pth 2023-03-19 03:00:23,874 44k INFO Saving model and optimizer state at iteration 269 to ./logs\44k\D_271000.pth 2023-03-19 03:00:24,611 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_268000.pth 2023-03-19 03:00:24,639 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_268000.pth 2023-03-19 03:01:38,268 44k INFO Train Epoch: 269 [51%] 2023-03-19 03:01:38,268 44k INFO Losses: [2.1587436199188232, 2.478172779083252, 11.66782283782959, 22.798128128051758, 1.487646222114563], step: 271200, lr: 9.662070322661676e-05 2023-03-19 03:02:51,465 44k INFO Train Epoch: 269 [71%] 2023-03-19 03:02:51,466 44k INFO Losses: [2.383314847946167, 2.2130095958709717, 12.284841537475586, 19.310985565185547, 0.9509387016296387], step: 271400, lr: 9.662070322661676e-05 2023-03-19 03:04:04,680 44k INFO Train Epoch: 269 [91%] 2023-03-19 03:04:04,680 44k INFO Losses: [2.148740768432617, 2.2806949615478516, 12.513569831848145, 22.737735748291016, 1.1289342641830444], step: 271600, lr: 9.662070322661676e-05 2023-03-19 03:04:37,405 44k INFO ====> Epoch: 269, cost 383.34 s 2023-03-19 03:05:27,377 44k INFO Train Epoch: 270 [11%] 2023-03-19 03:05:27,378 44k INFO Losses: [2.5963804721832275, 2.2491917610168457, 8.273581504821777, 17.768510818481445, 1.8133206367492676], step: 271800, lr: 9.660862563871342e-05 2023-03-19 03:06:39,638 44k INFO Train Epoch: 270 [31%] 2023-03-19 03:06:39,638 44k INFO Losses: [2.3684442043304443, 2.292511224746704, 13.740839958190918, 21.580236434936523, 1.077987551689148], step: 272000, lr: 9.660862563871342e-05 2023-03-19 03:06:42,780 44k INFO Saving model and optimizer state at iteration 270 to ./logs\44k\G_272000.pth 2023-03-19 03:06:43,525 44k INFO Saving model and optimizer state at iteration 270 to ./logs\44k\D_272000.pth 2023-03-19 03:06:44,226 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_269000.pth 2023-03-19 03:06:44,255 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_269000.pth 2023-03-19 03:07:57,226 44k INFO Train Epoch: 270 [50%] 2023-03-19 03:07:57,226 44k INFO Losses: [2.687365770339966, 1.9801768064498901, 9.645769119262695, 17.63253402709961, 0.956916093826294], step: 272200, lr: 9.660862563871342e-05 2023-03-19 03:09:10,826 44k INFO Train Epoch: 270 [70%] 2023-03-19 03:09:10,826 44k INFO Losses: [2.299865484237671, 2.757887363433838, 11.205260276794434, 21.33222198486328, 1.5419161319732666], step: 272400, lr: 9.660862563871342e-05 2023-03-19 03:10:24,623 44k INFO Train Epoch: 270 [90%] 2023-03-19 03:10:24,624 44k INFO Losses: [2.6075451374053955, 2.248753547668457, 12.23843002319336, 19.954669952392578, 1.199111819267273], step: 272600, lr: 9.660862563871342e-05 2023-03-19 03:11:01,068 44k INFO ====> Epoch: 270, cost 383.66 s 2023-03-19 03:11:47,479 44k INFO Train Epoch: 271 [10%] 2023-03-19 03:11:47,479 44k INFO Losses: [2.3339056968688965, 2.511117696762085, 12.289216041564941, 21.648052215576172, 1.2231645584106445], step: 272800, lr: 9.659654956050859e-05 2023-03-19 03:12:59,760 44k INFO Train Epoch: 271 [30%] 2023-03-19 03:12:59,760 44k INFO Losses: [2.637054443359375, 2.3030049800872803, 13.3821382522583, 20.36765480041504, 1.207653284072876], step: 273000, lr: 9.659654956050859e-05 2023-03-19 03:13:02,989 44k INFO Saving model and optimizer state at iteration 271 to ./logs\44k\G_273000.pth 2023-03-19 03:13:03,736 44k INFO Saving model and optimizer state at iteration 271 to ./logs\44k\D_273000.pth 2023-03-19 03:13:04,435 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_270000.pth 2023-03-19 03:13:04,464 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_270000.pth 2023-03-19 03:14:17,282 44k INFO Train Epoch: 271 [50%] 2023-03-19 03:14:17,282 44k INFO Losses: [2.744676351547241, 2.414555788040161, 9.120198249816895, 18.580041885375977, 1.2802554368972778], step: 273200, lr: 9.659654956050859e-05 2023-03-19 03:15:30,405 44k INFO Train Epoch: 271 [69%] 2023-03-19 03:15:30,406 44k INFO Losses: [2.5093798637390137, 2.2438595294952393, 9.288117408752441, 16.23870277404785, 0.7893856763839722], step: 273400, lr: 9.659654956050859e-05 2023-03-19 03:16:43,553 44k INFO Train Epoch: 271 [89%] 2023-03-19 03:16:43,553 44k INFO Losses: [2.503312587738037, 2.331045627593994, 8.428394317626953, 15.536552429199219, 1.2716102600097656], step: 273600, lr: 9.659654956050859e-05 2023-03-19 03:17:23,541 44k INFO ====> Epoch: 271, cost 382.47 s 2023-03-19 03:18:06,091 44k INFO Train Epoch: 272 [9%] 2023-03-19 03:18:06,092 44k INFO Losses: [2.368056535720825, 2.267660617828369, 9.146784782409668, 21.67991065979004, 1.6815484762191772], step: 273800, lr: 9.658447499181352e-05 2023-03-19 03:19:18,711 44k INFO Train Epoch: 272 [29%] 2023-03-19 03:19:18,711 44k INFO Losses: [2.513354539871216, 2.556419849395752, 9.422351837158203, 18.12743377685547, 1.2388702630996704], step: 274000, lr: 9.658447499181352e-05 2023-03-19 03:19:21,777 44k INFO Saving model and optimizer state at iteration 272 to ./logs\44k\G_274000.pth 2023-03-19 03:19:22,528 44k INFO Saving model and optimizer state at iteration 272 to ./logs\44k\D_274000.pth 2023-03-19 03:19:23,225 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_271000.pth 2023-03-19 03:19:23,254 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_271000.pth 2023-03-19 03:20:35,968 44k INFO Train Epoch: 272 [49%] 2023-03-19 03:20:35,968 44k INFO Losses: [2.6381328105926514, 2.286409378051758, 8.734260559082031, 20.219284057617188, 1.330569863319397], step: 274200, lr: 9.658447499181352e-05 2023-03-19 03:21:49,472 44k INFO Train Epoch: 272 [68%] 2023-03-19 03:21:49,473 44k INFO Losses: [2.6514594554901123, 2.1952085494995117, 7.599758625030518, 17.58383560180664, 1.1117513179779053], step: 274400, lr: 9.658447499181352e-05 2023-03-19 03:23:02,751 44k INFO Train Epoch: 272 [88%] 2023-03-19 03:23:02,752 44k INFO Losses: [2.466177225112915, 2.46227765083313, 6.995729446411133, 16.50790023803711, 1.0571486949920654], step: 274600, lr: 9.658447499181352e-05 2023-03-19 03:23:46,408 44k INFO ====> Epoch: 272, cost 382.87 s 2023-03-19 03:24:25,295 44k INFO Train Epoch: 273 [8%] 2023-03-19 03:24:25,296 44k INFO Losses: [2.6838433742523193, 2.1520040035247803, 12.36935043334961, 16.02272605895996, 1.5284144878387451], step: 274800, lr: 9.657240193243954e-05 2023-03-19 03:25:37,891 44k INFO Train Epoch: 273 [28%] 2023-03-19 03:25:37,891 44k INFO Losses: [2.513544797897339, 2.2743701934814453, 5.2136712074279785, 15.798548698425293, 1.2450908422470093], step: 275000, lr: 9.657240193243954e-05 2023-03-19 03:25:40,960 44k INFO Saving model and optimizer state at iteration 273 to ./logs\44k\G_275000.pth 2023-03-19 03:25:41,752 44k INFO Saving model and optimizer state at iteration 273 to ./logs\44k\D_275000.pth 2023-03-19 03:25:42,448 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_272000.pth 2023-03-19 03:25:42,477 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_272000.pth 2023-03-19 03:26:54,990 44k INFO Train Epoch: 273 [48%] 2023-03-19 03:26:54,991 44k INFO Losses: [2.2263877391815186, 2.6394426822662354, 13.079300880432129, 21.216609954833984, 1.5885274410247803], step: 275200, lr: 9.657240193243954e-05 2023-03-19 03:28:08,438 44k INFO Train Epoch: 273 [67%] 2023-03-19 03:28:08,439 44k INFO Losses: [2.796905040740967, 2.0150084495544434, 8.962186813354492, 16.487958908081055, 0.7820655703544617], step: 275400, lr: 9.657240193243954e-05 2023-03-19 03:29:22,144 44k INFO Train Epoch: 273 [87%] 2023-03-19 03:29:22,144 44k INFO Losses: [2.3702049255371094, 2.1334588527679443, 10.048635482788086, 15.309821128845215, 1.5071865320205688], step: 275600, lr: 9.657240193243954e-05 2023-03-19 03:30:09,456 44k INFO ====> Epoch: 273, cost 383.05 s 2023-03-19 03:30:44,869 44k INFO Train Epoch: 274 [7%] 2023-03-19 03:30:44,869 44k INFO Losses: [2.198003053665161, 2.5612826347351074, 11.386404037475586, 18.73196029663086, 0.8876563310623169], step: 275800, lr: 9.656033038219798e-05 2023-03-19 03:31:58,112 44k INFO Train Epoch: 274 [27%] 2023-03-19 03:31:58,112 44k INFO Losses: [2.454402446746826, 2.2617809772491455, 12.367864608764648, 18.886520385742188, 1.477707862854004], step: 276000, lr: 9.656033038219798e-05 2023-03-19 03:32:01,281 44k INFO Saving model and optimizer state at iteration 274 to ./logs\44k\G_276000.pth 2023-03-19 03:32:02,040 44k INFO Saving model and optimizer state at iteration 274 to ./logs\44k\D_276000.pth 2023-03-19 03:32:02,743 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_273000.pth 2023-03-19 03:32:02,772 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_273000.pth 2023-03-19 03:33:15,351 44k INFO Train Epoch: 274 [47%] 2023-03-19 03:33:15,352 44k INFO Losses: [2.3763129711151123, 2.3956832885742188, 13.391439437866211, 22.720840454101562, 1.0427846908569336], step: 276200, lr: 9.656033038219798e-05 2023-03-19 03:34:28,918 44k INFO Train Epoch: 274 [66%] 2023-03-19 03:34:28,918 44k INFO Losses: [2.3566641807556152, 2.574570417404175, 11.44519329071045, 20.01541519165039, 0.9240890145301819], step: 276400, lr: 9.656033038219798e-05 2023-03-19 03:35:42,040 44k INFO Train Epoch: 274 [86%] 2023-03-19 03:35:42,040 44k INFO Losses: [2.473428726196289, 2.4536633491516113, 8.117868423461914, 17.406993865966797, 1.1994836330413818], step: 276600, lr: 9.656033038219798e-05 2023-03-19 03:36:33,045 44k INFO ====> Epoch: 274, cost 383.59 s 2023-03-19 03:37:04,987 44k INFO Train Epoch: 275 [6%] 2023-03-19 03:37:04,987 44k INFO Losses: [2.3544535636901855, 2.399888515472412, 10.84757137298584, 20.05695343017578, 1.1736810207366943], step: 276800, lr: 9.65482603409002e-05 2023-03-19 03:38:18,914 44k INFO Train Epoch: 275 [26%] 2023-03-19 03:38:18,915 44k INFO Losses: [2.4706544876098633, 2.212352991104126, 10.495268821716309, 16.422710418701172, 1.1730018854141235], step: 277000, lr: 9.65482603409002e-05 2023-03-19 03:38:22,126 44k INFO Saving model and optimizer state at iteration 275 to ./logs\44k\G_277000.pth 2023-03-19 03:38:22,884 44k INFO Saving model and optimizer state at iteration 275 to ./logs\44k\D_277000.pth 2023-03-19 03:38:23,589 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_274000.pth 2023-03-19 03:38:23,621 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_274000.pth 2023-03-19 03:39:36,622 44k INFO Train Epoch: 275 [46%] 2023-03-19 03:39:36,622 44k INFO Losses: [2.7392115592956543, 2.1714887619018555, 7.160915851593018, 16.647790908813477, 1.0157612562179565], step: 277200, lr: 9.65482603409002e-05 2023-03-19 03:40:49,892 44k INFO Train Epoch: 275 [65%] 2023-03-19 03:40:49,892 44k INFO Losses: [2.3676366806030273, 2.4089982509613037, 11.877680778503418, 20.291662216186523, 1.191890835762024], step: 277400, lr: 9.65482603409002e-05 2023-03-19 03:42:03,406 44k INFO Train Epoch: 275 [85%] 2023-03-19 03:42:03,406 44k INFO Losses: [2.40444278717041, 2.3687262535095215, 8.447147369384766, 21.371349334716797, 1.5928936004638672], step: 277600, lr: 9.65482603409002e-05 2023-03-19 03:42:58,030 44k INFO ====> Epoch: 275, cost 384.98 s 2023-03-19 03:43:26,117 44k INFO Train Epoch: 276 [5%] 2023-03-19 03:43:26,118 44k INFO Losses: [2.2617127895355225, 2.353736400604248, 15.213428497314453, 21.57476806640625, 1.1509915590286255], step: 277800, lr: 9.653619180835758e-05 2023-03-19 03:44:38,811 44k INFO Train Epoch: 276 [25%] 2023-03-19 03:44:38,812 44k INFO Losses: [2.6004862785339355, 2.2608375549316406, 11.685988426208496, 19.66523551940918, 0.9222537279129028], step: 278000, lr: 9.653619180835758e-05 2023-03-19 03:44:41,981 44k INFO Saving model and optimizer state at iteration 276 to ./logs\44k\G_278000.pth 2023-03-19 03:44:42,679 44k INFO Saving model and optimizer state at iteration 276 to ./logs\44k\D_278000.pth 2023-03-19 03:44:43,378 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_275000.pth 2023-03-19 03:44:43,408 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_275000.pth 2023-03-19 03:45:56,443 44k INFO Train Epoch: 276 [45%] 2023-03-19 03:45:56,443 44k INFO Losses: [2.311063289642334, 2.458470582962036, 10.318071365356445, 22.54372215270996, 1.1250410079956055], step: 278200, lr: 9.653619180835758e-05 2023-03-19 03:47:10,154 44k INFO Train Epoch: 276 [64%] 2023-03-19 03:47:10,154 44k INFO Losses: [2.287294864654541, 2.353163003921509, 15.682244300842285, 21.037050247192383, 1.04291570186615], step: 278400, lr: 9.653619180835758e-05 2023-03-19 03:48:23,310 44k INFO Train Epoch: 276 [84%] 2023-03-19 03:48:23,311 44k INFO Losses: [2.4393579959869385, 2.4422144889831543, 11.849480628967285, 21.567909240722656, 1.2683885097503662], step: 278600, lr: 9.653619180835758e-05 2023-03-19 03:49:21,710 44k INFO ====> Epoch: 276, cost 383.68 s 2023-03-19 03:49:46,293 44k INFO Train Epoch: 277 [4%] 2023-03-19 03:49:46,293 44k INFO Losses: [2.442868232727051, 2.3959178924560547, 11.519576072692871, 21.574291229248047, 0.552722692489624], step: 278800, lr: 9.652412478438153e-05 2023-03-19 03:50:59,258 44k INFO Train Epoch: 277 [24%] 2023-03-19 03:50:59,259 44k INFO Losses: [2.5538434982299805, 2.3003509044647217, 9.591710090637207, 20.635168075561523, 1.1744695901870728], step: 279000, lr: 9.652412478438153e-05 2023-03-19 03:51:02,469 44k INFO Saving model and optimizer state at iteration 277 to ./logs\44k\G_279000.pth 2023-03-19 03:51:03,178 44k INFO Saving model and optimizer state at iteration 277 to ./logs\44k\D_279000.pth 2023-03-19 03:51:03,882 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_276000.pth 2023-03-19 03:51:03,910 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_276000.pth 2023-03-19 03:52:16,739 44k INFO Train Epoch: 277 [44%] 2023-03-19 03:52:16,740 44k INFO Losses: [2.4501659870147705, 2.20281982421875, 9.633475303649902, 15.171695709228516, 0.9724370241165161], step: 279200, lr: 9.652412478438153e-05 2023-03-19 03:53:30,587 44k INFO Train Epoch: 277 [63%] 2023-03-19 03:53:30,587 44k INFO Losses: [2.387249231338501, 2.0532569885253906, 8.126180648803711, 17.6947078704834, 1.219207763671875], step: 279400, lr: 9.652412478438153e-05 2023-03-19 03:54:43,578 44k INFO Train Epoch: 277 [83%] 2023-03-19 03:54:43,579 44k INFO Losses: [2.5228781700134277, 2.351898670196533, 8.925992965698242, 19.443132400512695, 1.4955692291259766], step: 279600, lr: 9.652412478438153e-05 2023-03-19 03:55:45,629 44k INFO ====> Epoch: 277, cost 383.92 s 2023-03-19 03:56:06,234 44k INFO Train Epoch: 278 [3%] 2023-03-19 03:56:06,234 44k INFO Losses: [2.320244073867798, 2.1177620887756348, 9.68830680847168, 17.549638748168945, 1.321323037147522], step: 279800, lr: 9.651205926878348e-05 2023-03-19 03:57:19,221 44k INFO Train Epoch: 278 [23%] 2023-03-19 03:57:19,221 44k INFO Losses: [2.3503224849700928, 2.679189443588257, 13.201372146606445, 21.89756202697754, 0.9688938856124878], step: 280000, lr: 9.651205926878348e-05 2023-03-19 03:57:22,342 44k INFO Saving model and optimizer state at iteration 278 to ./logs\44k\G_280000.pth 2023-03-19 03:57:23,031 44k INFO Saving model and optimizer state at iteration 278 to ./logs\44k\D_280000.pth 2023-03-19 03:57:23,740 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_277000.pth 2023-03-19 03:57:23,769 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_277000.pth 2023-03-19 03:58:36,369 44k INFO Train Epoch: 278 [43%] 2023-03-19 03:58:36,369 44k INFO Losses: [2.3476595878601074, 2.2870676517486572, 11.04446792602539, 21.70940589904785, 1.265101671218872], step: 280200, lr: 9.651205926878348e-05 2023-03-19 03:59:49,637 44k INFO Train Epoch: 278 [62%] 2023-03-19 03:59:49,638 44k INFO Losses: [2.200672149658203, 2.4884278774261475, 11.401790618896484, 20.02988624572754, 1.2452495098114014], step: 280400, lr: 9.651205926878348e-05 2023-03-19 04:01:02,710 44k INFO Train Epoch: 278 [82%] 2023-03-19 04:01:02,711 44k INFO Losses: [2.563042640686035, 2.1421918869018555, 11.95141887664795, 21.75998878479004, 1.4173517227172852], step: 280600, lr: 9.651205926878348e-05 2023-03-19 04:02:08,377 44k INFO ====> Epoch: 278, cost 382.75 s 2023-03-19 04:02:25,426 44k INFO Train Epoch: 279 [2%] 2023-03-19 04:02:25,427 44k INFO Losses: [2.270941972732544, 2.3202857971191406, 13.483175277709961, 22.8260498046875, 1.315244436264038], step: 280800, lr: 9.649999526137489e-05 2023-03-19 04:03:38,576 44k INFO Train Epoch: 279 [22%] 2023-03-19 04:03:38,576 44k INFO Losses: [2.5308032035827637, 2.3926491737365723, 8.944249153137207, 20.646886825561523, 1.1271865367889404], step: 281000, lr: 9.649999526137489e-05 2023-03-19 04:03:41,809 44k INFO Saving model and optimizer state at iteration 279 to ./logs\44k\G_281000.pth 2023-03-19 04:03:42,510 44k INFO Saving model and optimizer state at iteration 279 to ./logs\44k\D_281000.pth 2023-03-19 04:03:43,210 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_278000.pth 2023-03-19 04:03:43,239 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_278000.pth 2023-03-19 04:04:55,640 44k INFO Train Epoch: 279 [42%] 2023-03-19 04:04:55,641 44k INFO Losses: [2.505892753601074, 2.4444639682769775, 6.413759231567383, 14.431293487548828, 1.1265945434570312], step: 281200, lr: 9.649999526137489e-05 2023-03-19 04:06:09,003 44k INFO Train Epoch: 279 [61%] 2023-03-19 04:06:09,003 44k INFO Losses: [2.2127861976623535, 2.508963108062744, 15.640935897827148, 20.276609420776367, 1.0112017393112183], step: 281400, lr: 9.649999526137489e-05 2023-03-19 04:07:21,886 44k INFO Train Epoch: 279 [81%] 2023-03-19 04:07:21,886 44k INFO Losses: [2.376890182495117, 2.3687639236450195, 9.419426918029785, 18.240352630615234, 0.9085153937339783], step: 281600, lr: 9.649999526137489e-05 2023-03-19 04:08:31,295 44k INFO ====> Epoch: 279, cost 382.92 s 2023-03-19 04:08:44,633 44k INFO Train Epoch: 280 [1%] 2023-03-19 04:08:44,633 44k INFO Losses: [2.4425673484802246, 2.2133541107177734, 8.892147064208984, 18.680265426635742, 0.8623195886611938], step: 281800, lr: 9.64879327619672e-05 2023-03-19 04:09:57,869 44k INFO Train Epoch: 280 [21%] 2023-03-19 04:09:57,870 44k INFO Losses: [2.6913633346557617, 2.096186876296997, 6.323838233947754, 17.4659481048584, 1.1994298696517944], step: 282000, lr: 9.64879327619672e-05 2023-03-19 04:10:01,025 44k INFO Saving model and optimizer state at iteration 280 to ./logs\44k\G_282000.pth 2023-03-19 04:10:01,766 44k INFO Saving model and optimizer state at iteration 280 to ./logs\44k\D_282000.pth 2023-03-19 04:10:02,464 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_279000.pth 2023-03-19 04:10:02,496 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_279000.pth 2023-03-19 04:11:14,842 44k INFO Train Epoch: 280 [41%] 2023-03-19 04:11:14,843 44k INFO Losses: [2.6384260654449463, 2.3982715606689453, 11.897112846374512, 19.82377052307129, 1.4077749252319336], step: 282200, lr: 9.64879327619672e-05 2023-03-19 04:12:28,296 44k INFO Train Epoch: 280 [60%] 2023-03-19 04:12:28,297 44k INFO Losses: [2.4448459148406982, 2.4187145233154297, 11.03907299041748, 20.441497802734375, 1.388116717338562], step: 282400, lr: 9.64879327619672e-05 2023-03-19 04:13:41,253 44k INFO Train Epoch: 280 [80%] 2023-03-19 04:13:41,253 44k INFO Losses: [2.4228458404541016, 2.3996472358703613, 11.916937828063965, 18.024635314941406, 1.2051351070404053], step: 282600, lr: 9.64879327619672e-05 2023-03-19 04:14:54,164 44k INFO ====> Epoch: 280, cost 382.87 s 2023-03-19 04:15:03,922 44k INFO Train Epoch: 281 [0%] 2023-03-19 04:15:03,922 44k INFO Losses: [2.55098819732666, 2.18677020072937, 5.960976600646973, 17.07769203186035, 1.3457586765289307], step: 282800, lr: 9.647587177037196e-05 2023-03-19 04:16:17,035 44k INFO Train Epoch: 281 [20%] 2023-03-19 04:16:17,035 44k INFO Losses: [2.276949405670166, 2.6204984188079834, 12.732853889465332, 22.173553466796875, 1.287111520767212], step: 283000, lr: 9.647587177037196e-05 2023-03-19 04:16:20,201 44k INFO Saving model and optimizer state at iteration 281 to ./logs\44k\G_283000.pth 2023-03-19 04:16:20,924 44k INFO Saving model and optimizer state at iteration 281 to ./logs\44k\D_283000.pth 2023-03-19 04:16:21,626 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_280000.pth 2023-03-19 04:16:21,656 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_280000.pth 2023-03-19 04:17:33,814 44k INFO Train Epoch: 281 [40%] 2023-03-19 04:17:33,814 44k INFO Losses: [2.570189952850342, 2.2866005897521973, 8.323099136352539, 22.687421798706055, 1.6649329662322998], step: 283200, lr: 9.647587177037196e-05 2023-03-19 04:18:47,332 44k INFO Train Epoch: 281 [59%] 2023-03-19 04:18:47,333 44k INFO Losses: [2.387420654296875, 2.3895516395568848, 11.779617309570312, 19.95624351501465, 0.982094407081604], step: 283400, lr: 9.647587177037196e-05 2023-03-19 04:20:00,239 44k INFO Train Epoch: 281 [79%] 2023-03-19 04:20:00,239 44k INFO Losses: [2.5293192863464355, 2.234461545944214, 8.363504409790039, 19.372112274169922, 1.1929693222045898], step: 283600, lr: 9.647587177037196e-05 2023-03-19 04:21:13,326 44k INFO Train Epoch: 281 [99%] 2023-03-19 04:21:13,326 44k INFO Losses: [2.568899631500244, 1.9609322547912598, 7.6941375732421875, 18.067607879638672, 1.1617987155914307], step: 283800, lr: 9.647587177037196e-05 2023-03-19 04:21:16,985 44k INFO ====> Epoch: 281, cost 382.82 s 2023-03-19 04:22:35,806 44k INFO Train Epoch: 282 [19%] 2023-03-19 04:22:35,806 44k INFO Losses: [2.274350166320801, 2.5502095222473145, 8.118173599243164, 20.310279846191406, 1.125897765159607], step: 284000, lr: 9.646381228640066e-05 2023-03-19 04:22:39,003 44k INFO Saving model and optimizer state at iteration 282 to ./logs\44k\G_284000.pth 2023-03-19 04:22:39,746 44k INFO Saving model and optimizer state at iteration 282 to ./logs\44k\D_284000.pth 2023-03-19 04:22:40,453 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_281000.pth 2023-03-19 04:22:40,482 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_281000.pth 2023-03-19 04:23:52,726 44k INFO Train Epoch: 282 [39%] 2023-03-19 04:23:52,727 44k INFO Losses: [2.1035189628601074, 2.3709895610809326, 14.004048347473145, 26.588788986206055, 1.0410330295562744], step: 284200, lr: 9.646381228640066e-05 2023-03-19 04:25:06,185 44k INFO Train Epoch: 282 [58%] 2023-03-19 04:25:06,185 44k INFO Losses: [2.211176633834839, 2.5907492637634277, 16.008350372314453, 23.976253509521484, 1.3352323770523071], step: 284400, lr: 9.646381228640066e-05 2023-03-19 04:26:19,075 44k INFO Train Epoch: 282 [78%] 2023-03-19 04:26:19,076 44k INFO Losses: [2.3802855014801025, 2.113701581954956, 10.281096458435059, 15.070961952209473, 1.3594555854797363], step: 284600, lr: 9.646381228640066e-05 2023-03-19 04:27:32,167 44k INFO Train Epoch: 282 [98%] 2023-03-19 04:27:32,168 44k INFO Losses: [2.8180882930755615, 2.213209629058838, 9.59243106842041, 20.017597198486328, 0.9650906920433044], step: 284800, lr: 9.646381228640066e-05 2023-03-19 04:27:39,439 44k INFO ====> Epoch: 282, cost 382.45 s 2023-03-19 04:28:54,700 44k INFO Train Epoch: 283 [18%] 2023-03-19 04:28:54,700 44k INFO Losses: [2.513207197189331, 2.225459098815918, 14.535089492797852, 20.899688720703125, 0.799858808517456], step: 285000, lr: 9.645175430986486e-05 2023-03-19 04:28:57,838 44k INFO Saving model and optimizer state at iteration 283 to ./logs\44k\G_285000.pth 2023-03-19 04:28:58,553 44k INFO Saving model and optimizer state at iteration 283 to ./logs\44k\D_285000.pth 2023-03-19 04:28:59,250 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_282000.pth 2023-03-19 04:28:59,279 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_282000.pth 2023-03-19 04:30:11,519 44k INFO Train Epoch: 283 [38%] 2023-03-19 04:30:11,519 44k INFO Losses: [2.616781711578369, 2.2581655979156494, 11.303821563720703, 20.766843795776367, 0.9658932685852051], step: 285200, lr: 9.645175430986486e-05 2023-03-19 04:31:24,953 44k INFO Train Epoch: 283 [57%] 2023-03-19 04:31:24,953 44k INFO Losses: [2.413766384124756, 2.347011089324951, 8.94131088256836, 19.267284393310547, 1.4877933263778687], step: 285400, lr: 9.645175430986486e-05 2023-03-19 04:32:37,977 44k INFO Train Epoch: 283 [77%] 2023-03-19 04:32:37,978 44k INFO Losses: [2.474992036819458, 2.328104257583618, 10.626486778259277, 18.73859405517578, 0.7000563144683838], step: 285600, lr: 9.645175430986486e-05 2023-03-19 04:33:51,069 44k INFO Train Epoch: 283 [97%] 2023-03-19 04:33:51,069 44k INFO Losses: [2.488048791885376, 2.1781649589538574, 9.74384880065918, 19.443950653076172, 1.1259227991104126], step: 285800, lr: 9.645175430986486e-05 2023-03-19 04:34:02,238 44k INFO ====> Epoch: 283, cost 382.80 s 2023-03-19 04:35:13,946 44k INFO Train Epoch: 284 [17%] 2023-03-19 04:35:13,947 44k INFO Losses: [2.2441608905792236, 2.2969212532043457, 11.162654876708984, 22.2508487701416, 1.109834909439087], step: 286000, lr: 9.643969784057613e-05 2023-03-19 04:35:17,076 44k INFO Saving model and optimizer state at iteration 284 to ./logs\44k\G_286000.pth 2023-03-19 04:35:17,825 44k INFO Saving model and optimizer state at iteration 284 to ./logs\44k\D_286000.pth 2023-03-19 04:35:18,532 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_283000.pth 2023-03-19 04:35:18,568 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_283000.pth 2023-03-19 04:36:30,944 44k INFO Train Epoch: 284 [37%] 2023-03-19 04:36:30,945 44k INFO Losses: [2.3246307373046875, 2.333706855773926, 9.034671783447266, 17.94074058532715, 1.2631568908691406], step: 286200, lr: 9.643969784057613e-05 2023-03-19 04:37:44,339 44k INFO Train Epoch: 284 [56%] 2023-03-19 04:37:44,339 44k INFO Losses: [2.424689292907715, 2.168078899383545, 10.684359550476074, 20.74469757080078, 1.2105023860931396], step: 286400, lr: 9.643969784057613e-05 2023-03-19 04:38:57,316 44k INFO Train Epoch: 284 [76%] 2023-03-19 04:38:57,316 44k INFO Losses: [2.1857783794403076, 2.463939666748047, 13.856277465820312, 19.65338706970215, 1.3142435550689697], step: 286600, lr: 9.643969784057613e-05 2023-03-19 04:40:10,335 44k INFO Train Epoch: 284 [96%] 2023-03-19 04:40:10,336 44k INFO Losses: [2.8088040351867676, 2.2309439182281494, 10.63555908203125, 16.79025650024414, 0.9551410675048828], step: 286800, lr: 9.643969784057613e-05 2023-03-19 04:40:24,981 44k INFO ====> Epoch: 284, cost 382.74 s 2023-03-19 04:41:32,914 44k INFO Train Epoch: 285 [16%] 2023-03-19 04:41:32,915 44k INFO Losses: [2.4699089527130127, 2.411487340927124, 6.5932464599609375, 16.748254776000977, 1.1519222259521484], step: 287000, lr: 9.642764287834605e-05 2023-03-19 04:41:36,037 44k INFO Saving model and optimizer state at iteration 285 to ./logs\44k\G_287000.pth 2023-03-19 04:41:36,746 44k INFO Saving model and optimizer state at iteration 285 to ./logs\44k\D_287000.pth 2023-03-19 04:41:37,444 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_284000.pth 2023-03-19 04:41:37,472 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_284000.pth 2023-03-19 04:42:49,448 44k INFO Train Epoch: 285 [36%] 2023-03-19 04:42:49,448 44k INFO Losses: [2.5226974487304688, 2.2957100868225098, 8.43271541595459, 17.03009033203125, 1.4445157051086426], step: 287200, lr: 9.642764287834605e-05 2023-03-19 04:44:02,532 44k INFO Train Epoch: 285 [55%] 2023-03-19 04:44:02,532 44k INFO Losses: [2.65705943107605, 2.330120086669922, 8.153820991516113, 19.664735794067383, 1.28411865234375], step: 287400, lr: 9.642764287834605e-05 2023-03-19 04:45:15,288 44k INFO Train Epoch: 285 [75%] 2023-03-19 04:45:15,289 44k INFO Losses: [2.3091201782226562, 2.4248597621917725, 12.359339714050293, 23.422727584838867, 1.220099687576294], step: 287600, lr: 9.642764287834605e-05 2023-03-19 04:46:28,213 44k INFO Train Epoch: 285 [95%] 2023-03-19 04:46:28,213 44k INFO Losses: [2.6543688774108887, 2.5803141593933105, 8.711217880249023, 18.202396392822266, 1.0679558515548706], step: 287800, lr: 9.642764287834605e-05 2023-03-19 04:46:46,462 44k INFO ====> Epoch: 285, cost 381.48 s 2023-03-19 04:47:50,790 44k INFO Train Epoch: 286 [15%] 2023-03-19 04:47:50,790 44k INFO Losses: [2.538494110107422, 2.3235161304473877, 6.772620677947998, 18.171159744262695, 1.366495966911316], step: 288000, lr: 9.641558942298625e-05 2023-03-19 04:47:53,879 44k INFO Saving model and optimizer state at iteration 286 to ./logs\44k\G_288000.pth 2023-03-19 04:47:54,615 44k INFO Saving model and optimizer state at iteration 286 to ./logs\44k\D_288000.pth 2023-03-19 04:47:55,307 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_285000.pth 2023-03-19 04:47:55,338 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_285000.pth 2023-03-19 04:49:07,522 44k INFO Train Epoch: 286 [35%] 2023-03-19 04:49:07,523 44k INFO Losses: [2.3765480518341064, 2.1782894134521484, 11.070967674255371, 17.89308738708496, 1.377540111541748], step: 288200, lr: 9.641558942298625e-05 2023-03-19 04:50:20,616 44k INFO Train Epoch: 286 [54%] 2023-03-19 04:50:20,616 44k INFO Losses: [2.2789485454559326, 2.4490764141082764, 10.99787712097168, 20.70478057861328, 1.5375165939331055], step: 288400, lr: 9.641558942298625e-05 2023-03-19 04:51:33,457 44k INFO Train Epoch: 286 [74%] 2023-03-19 04:51:33,457 44k INFO Losses: [2.5836029052734375, 2.326846122741699, 9.018221855163574, 19.830299377441406, 0.7653000950813293], step: 288600, lr: 9.641558942298625e-05 2023-03-19 04:52:46,435 44k INFO Train Epoch: 286 [94%] 2023-03-19 04:52:46,436 44k INFO Losses: [2.6866235733032227, 2.3248348236083984, 11.227510452270508, 19.750898361206055, 1.3304716348648071], step: 288800, lr: 9.641558942298625e-05 2023-03-19 04:53:08,247 44k INFO ====> Epoch: 286, cost 381.79 s 2023-03-19 04:54:08,876 44k INFO Train Epoch: 287 [14%] 2023-03-19 04:54:08,877 44k INFO Losses: [2.372108221054077, 2.361734628677368, 12.05683708190918, 23.538354873657227, 0.9588818550109863], step: 289000, lr: 9.640353747430838e-05 2023-03-19 04:54:11,987 44k INFO Saving model and optimizer state at iteration 287 to ./logs\44k\G_289000.pth 2023-03-19 04:54:12,682 44k INFO Saving model and optimizer state at iteration 287 to ./logs\44k\D_289000.pth 2023-03-19 04:54:13,376 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_286000.pth 2023-03-19 04:54:13,404 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_286000.pth 2023-03-19 04:55:25,558 44k INFO Train Epoch: 287 [34%] 2023-03-19 04:55:25,558 44k INFO Losses: [2.874606132507324, 2.326791524887085, 7.5316338539123535, 16.211814880371094, 0.7741466760635376], step: 289200, lr: 9.640353747430838e-05 2023-03-19 04:56:38,424 44k INFO Train Epoch: 287 [53%] 2023-03-19 04:56:38,425 44k INFO Losses: [2.372384548187256, 2.2299611568450928, 9.139547348022461, 14.223345756530762, 1.5106658935546875], step: 289400, lr: 9.640353747430838e-05 2023-03-19 04:57:51,303 44k INFO Train Epoch: 287 [73%] 2023-03-19 04:57:51,303 44k INFO Losses: [2.6008479595184326, 2.243004322052002, 7.862316131591797, 15.033079147338867, 1.1443034410476685], step: 289600, lr: 9.640353747430838e-05 2023-03-19 04:59:04,381 44k INFO Train Epoch: 287 [93%] 2023-03-19 04:59:04,382 44k INFO Losses: [2.400128126144409, 2.3758130073547363, 8.55626106262207, 15.537242889404297, 1.4833239316940308], step: 289800, lr: 9.640353747430838e-05 2023-03-19 04:59:29,780 44k INFO ====> Epoch: 287, cost 381.53 s 2023-03-19 05:00:26,819 44k INFO Train Epoch: 288 [13%] 2023-03-19 05:00:26,819 44k INFO Losses: [2.4829320907592773, 2.2471790313720703, 12.708600044250488, 21.75237274169922, 0.8115440607070923], step: 290000, lr: 9.639148703212408e-05 2023-03-19 05:00:29,972 44k INFO Saving model and optimizer state at iteration 288 to ./logs\44k\G_290000.pth 2023-03-19 05:00:30,683 44k INFO Saving model and optimizer state at iteration 288 to ./logs\44k\D_290000.pth 2023-03-19 05:00:31,373 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_287000.pth 2023-03-19 05:00:31,404 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_287000.pth 2023-03-19 05:01:43,535 44k INFO Train Epoch: 288 [33%] 2023-03-19 05:01:43,536 44k INFO Losses: [2.2925376892089844, 2.4413344860076904, 10.514029502868652, 21.389068603515625, 1.0821139812469482], step: 290200, lr: 9.639148703212408e-05 2023-03-19 05:02:56,276 44k INFO Train Epoch: 288 [52%] 2023-03-19 05:02:56,277 44k INFO Losses: [2.377732753753662, 2.381941795349121, 10.839522361755371, 18.078651428222656, 1.2941498756408691], step: 290400, lr: 9.639148703212408e-05 2023-03-19 05:04:09,362 44k INFO Train Epoch: 288 [72%] 2023-03-19 05:04:09,362 44k INFO Losses: [2.293163776397705, 2.468888282775879, 12.820639610290527, 23.883914947509766, 1.1474876403808594], step: 290600, lr: 9.639148703212408e-05 2023-03-19 05:05:22,328 44k INFO Train Epoch: 288 [92%] 2023-03-19 05:05:22,329 44k INFO Losses: [2.4649789333343506, 2.272773504257202, 11.385313034057617, 20.9810848236084, 1.117942452430725], step: 290800, lr: 9.639148703212408e-05 2023-03-19 05:05:51,340 44k INFO ====> Epoch: 288, cost 381.56 s 2023-03-19 05:06:44,785 44k INFO Train Epoch: 289 [12%] 2023-03-19 05:06:44,786 44k INFO Losses: [2.426103115081787, 2.48097562789917, 10.955981254577637, 21.11809730529785, 1.0999583005905151], step: 291000, lr: 9.637943809624507e-05 2023-03-19 05:06:47,933 44k INFO Saving model and optimizer state at iteration 289 to ./logs\44k\G_291000.pth 2023-03-19 05:06:48,636 44k INFO Saving model and optimizer state at iteration 289 to ./logs\44k\D_291000.pth 2023-03-19 05:06:49,330 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_288000.pth 2023-03-19 05:06:49,361 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_288000.pth 2023-03-19 05:08:01,421 44k INFO Train Epoch: 289 [32%] 2023-03-19 05:08:01,422 44k INFO Losses: [2.3417422771453857, 2.2823219299316406, 13.395572662353516, 21.17780876159668, 1.0718668699264526], step: 291200, lr: 9.637943809624507e-05 2023-03-19 05:09:14,266 44k INFO Train Epoch: 289 [51%] 2023-03-19 05:09:14,267 44k INFO Losses: [2.3160154819488525, 2.7257297039031982, 11.42202091217041, 22.23776626586914, 1.2467055320739746], step: 291400, lr: 9.637943809624507e-05 2023-03-19 05:10:27,343 44k INFO Train Epoch: 289 [71%] 2023-03-19 05:10:27,343 44k INFO Losses: [2.4847447872161865, 2.2255334854125977, 13.638755798339844, 19.761913299560547, 1.1151857376098633], step: 291600, lr: 9.637943809624507e-05 2023-03-19 05:11:40,285 44k INFO Train Epoch: 289 [91%] 2023-03-19 05:11:40,286 44k INFO Losses: [2.3755455017089844, 2.0804686546325684, 10.77182388305664, 19.114301681518555, 1.1881662607192993], step: 291800, lr: 9.637943809624507e-05 2023-03-19 05:12:13,054 44k INFO ====> Epoch: 289, cost 381.71 s 2023-03-19 05:13:03,074 44k INFO Train Epoch: 290 [11%] 2023-03-19 05:13:03,075 44k INFO Losses: [2.526630401611328, 2.1565136909484863, 13.384556770324707, 21.920635223388672, 1.3151726722717285], step: 292000, lr: 9.636739066648303e-05 2023-03-19 05:13:06,169 44k INFO Saving model and optimizer state at iteration 290 to ./logs\44k\G_292000.pth 2023-03-19 05:13:06,921 44k INFO Saving model and optimizer state at iteration 290 to ./logs\44k\D_292000.pth 2023-03-19 05:13:07,617 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_289000.pth 2023-03-19 05:13:07,646 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_289000.pth 2023-03-19 05:14:19,656 44k INFO Train Epoch: 290 [31%] 2023-03-19 05:14:19,657 44k INFO Losses: [2.6784605979919434, 2.1349546909332275, 13.275276184082031, 20.251766204833984, 1.0212132930755615], step: 292200, lr: 9.636739066648303e-05 2023-03-19 05:15:32,467 44k INFO Train Epoch: 290 [50%] 2023-03-19 05:15:32,467 44k INFO Losses: [2.485191583633423, 2.0955872535705566, 8.767870903015137, 16.929790496826172, 1.3607667684555054], step: 292400, lr: 9.636739066648303e-05 2023-03-19 05:16:45,497 44k INFO Train Epoch: 290 [70%] 2023-03-19 05:16:45,497 44k INFO Losses: [2.3550570011138916, 2.6084890365600586, 10.138739585876465, 21.573070526123047, 1.412053108215332], step: 292600, lr: 9.636739066648303e-05 2023-03-19 05:17:58,408 44k INFO Train Epoch: 290 [90%] 2023-03-19 05:17:58,408 44k INFO Losses: [2.3764395713806152, 2.2535240650177, 15.253986358642578, 20.969266891479492, 1.0740611553192139], step: 292800, lr: 9.636739066648303e-05 2023-03-19 05:18:34,867 44k INFO ====> Epoch: 290, cost 381.81 s 2023-03-19 05:19:21,133 44k INFO Train Epoch: 291 [10%] 2023-03-19 05:19:21,134 44k INFO Losses: [2.371422052383423, 2.4844369888305664, 12.894975662231445, 24.55255699157715, 1.3137545585632324], step: 293000, lr: 9.635534474264972e-05 2023-03-19 05:19:24,279 44k INFO Saving model and optimizer state at iteration 291 to ./logs\44k\G_293000.pth 2023-03-19 05:19:25,021 44k INFO Saving model and optimizer state at iteration 291 to ./logs\44k\D_293000.pth 2023-03-19 05:19:25,714 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_290000.pth 2023-03-19 05:19:25,742 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_290000.pth 2023-03-19 05:20:37,891 44k INFO Train Epoch: 291 [30%] 2023-03-19 05:20:37,892 44k INFO Losses: [2.346627950668335, 2.338118553161621, 11.717251777648926, 21.423940658569336, 1.5258749723434448], step: 293200, lr: 9.635534474264972e-05 2023-03-19 05:21:50,632 44k INFO Train Epoch: 291 [50%] 2023-03-19 05:21:50,633 44k INFO Losses: [2.5018038749694824, 2.3134171962738037, 7.639124870300293, 16.92024803161621, 1.3369805812835693], step: 293400, lr: 9.635534474264972e-05 2023-03-19 05:23:03,627 44k INFO Train Epoch: 291 [69%] 2023-03-19 05:23:03,627 44k INFO Losses: [2.3916149139404297, 2.1741583347320557, 11.023260116577148, 17.260353088378906, 1.3251681327819824], step: 293600, lr: 9.635534474264972e-05 2023-03-19 05:24:16,634 44k INFO Train Epoch: 291 [89%] 2023-03-19 05:24:16,634 44k INFO Losses: [2.5193657875061035, 2.3322110176086426, 10.607394218444824, 18.366466522216797, 1.1270793676376343], step: 293800, lr: 9.635534474264972e-05 2023-03-19 05:24:56,581 44k INFO ====> Epoch: 291, cost 381.71 s 2023-03-19 05:25:39,154 44k INFO Train Epoch: 292 [9%] 2023-03-19 05:25:39,154 44k INFO Losses: [2.5226104259490967, 2.3341681957244873, 11.910983085632324, 19.03371238708496, 1.4938966035842896], step: 294000, lr: 9.634330032455689e-05 2023-03-19 05:25:42,260 44k INFO Saving model and optimizer state at iteration 292 to ./logs\44k\G_294000.pth 2023-03-19 05:25:42,996 44k INFO Saving model and optimizer state at iteration 292 to ./logs\44k\D_294000.pth 2023-03-19 05:25:43,704 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_291000.pth 2023-03-19 05:25:43,732 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_291000.pth 2023-03-19 05:26:55,924 44k INFO Train Epoch: 292 [29%] 2023-03-19 05:26:55,925 44k INFO Losses: [2.4770541191101074, 2.4023122787475586, 13.043848991394043, 21.049278259277344, 1.1694024801254272], step: 294200, lr: 9.634330032455689e-05 2023-03-19 05:28:08,463 44k INFO Train Epoch: 292 [49%] 2023-03-19 05:28:08,463 44k INFO Losses: [2.563185930252075, 2.3399100303649902, 11.648331642150879, 19.40500259399414, 1.5104721784591675], step: 294400, lr: 9.634330032455689e-05 2023-03-19 05:29:21,598 44k INFO Train Epoch: 292 [68%] 2023-03-19 05:29:21,598 44k INFO Losses: [2.5072181224823, 2.4021878242492676, 8.043902397155762, 19.02021598815918, 1.0766503810882568], step: 294600, lr: 9.634330032455689e-05 2023-03-19 05:30:34,476 44k INFO Train Epoch: 292 [88%] 2023-03-19 05:30:34,477 44k INFO Losses: [2.530237913131714, 2.1530275344848633, 9.760208129882812, 14.808650016784668, 0.7375126481056213], step: 294800, lr: 9.634330032455689e-05 2023-03-19 05:31:18,037 44k INFO ====> Epoch: 292, cost 381.46 s 2023-03-19 05:31:56,971 44k INFO Train Epoch: 293 [8%] 2023-03-19 05:31:56,971 44k INFO Losses: [2.496915340423584, 2.0729124546051025, 11.283163070678711, 15.050057411193848, 0.8533767461776733], step: 295000, lr: 9.633125741201631e-05 2023-03-19 05:32:00,127 44k INFO Saving model and optimizer state at iteration 293 to ./logs\44k\G_295000.pth 2023-03-19 05:32:00,828 44k INFO Saving model and optimizer state at iteration 293 to ./logs\44k\D_295000.pth 2023-03-19 05:32:01,521 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_292000.pth 2023-03-19 05:32:01,551 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_292000.pth 2023-03-19 05:33:13,901 44k INFO Train Epoch: 293 [28%] 2023-03-19 05:33:13,902 44k INFO Losses: [2.524256467819214, 2.272778272628784, 8.502959251403809, 21.35285758972168, 1.051665186882019], step: 295200, lr: 9.633125741201631e-05 2023-03-19 05:34:26,496 44k INFO Train Epoch: 293 [48%] 2023-03-19 05:34:26,496 44k INFO Losses: [2.3550655841827393, 2.528852939605713, 13.748939514160156, 20.0826473236084, 1.1001648902893066], step: 295400, lr: 9.633125741201631e-05 2023-03-19 05:35:39,688 44k INFO Train Epoch: 293 [67%] 2023-03-19 05:35:39,688 44k INFO Losses: [2.4765117168426514, 2.0691919326782227, 12.357734680175781, 21.256071090698242, 0.8628212809562683], step: 295600, lr: 9.633125741201631e-05 2023-03-19 05:36:52,771 44k INFO Train Epoch: 293 [87%] 2023-03-19 05:36:52,771 44k INFO Losses: [2.36820125579834, 2.302945613861084, 10.222594261169434, 19.159250259399414, 1.1087803840637207], step: 295800, lr: 9.633125741201631e-05 2023-03-19 05:37:39,908 44k INFO ====> Epoch: 293, cost 381.87 s 2023-03-19 05:38:15,249 44k INFO Train Epoch: 294 [7%] 2023-03-19 05:38:15,250 44k INFO Losses: [2.307727336883545, 2.572909355163574, 10.454960823059082, 20.535768508911133, 1.1856873035430908], step: 296000, lr: 9.631921600483981e-05 2023-03-19 05:38:18,418 44k INFO Saving model and optimizer state at iteration 294 to ./logs\44k\G_296000.pth 2023-03-19 05:38:19,116 44k INFO Saving model and optimizer state at iteration 294 to ./logs\44k\D_296000.pth 2023-03-19 05:38:19,830 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_293000.pth 2023-03-19 05:38:19,858 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_293000.pth 2023-03-19 05:39:32,353 44k INFO Train Epoch: 294 [27%] 2023-03-19 05:39:32,353 44k INFO Losses: [2.4130828380584717, 2.36647891998291, 12.301198959350586, 18.803415298461914, 1.0760365724563599], step: 296200, lr: 9.631921600483981e-05 2023-03-19 05:40:44,988 44k INFO Train Epoch: 294 [47%] 2023-03-19 05:40:44,989 44k INFO Losses: [2.3966336250305176, 2.4798500537872314, 9.285295486450195, 23.684595108032227, 1.3838318586349487], step: 296400, lr: 9.631921600483981e-05 2023-03-19 05:41:58,169 44k INFO Train Epoch: 294 [66%] 2023-03-19 05:41:58,169 44k INFO Losses: [2.650820255279541, 2.1864774227142334, 9.3277587890625, 15.634380340576172, 1.2046231031417847], step: 296600, lr: 9.631921600483981e-05 2023-03-19 05:43:11,127 44k INFO Train Epoch: 294 [86%] 2023-03-19 05:43:11,127 44k INFO Losses: [2.55839204788208, 2.169550657272339, 7.523523330688477, 19.800487518310547, 1.0720404386520386], step: 296800, lr: 9.631921600483981e-05 2023-03-19 05:44:01,884 44k INFO ====> Epoch: 294, cost 381.98 s 2023-03-19 05:44:33,376 44k INFO Train Epoch: 295 [6%] 2023-03-19 05:44:33,376 44k INFO Losses: [2.2766573429107666, 2.2842915058135986, 10.26395034790039, 21.778053283691406, 1.23930823802948], step: 297000, lr: 9.63071761028392e-05 2023-03-19 05:44:36,563 44k INFO Saving model and optimizer state at iteration 295 to ./logs\44k\G_297000.pth 2023-03-19 05:44:37,250 44k INFO Saving model and optimizer state at iteration 295 to ./logs\44k\D_297000.pth 2023-03-19 05:44:37,946 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_294000.pth 2023-03-19 05:44:37,975 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_294000.pth 2023-03-19 05:45:50,339 44k INFO Train Epoch: 295 [26%] 2023-03-19 05:45:50,340 44k INFO Losses: [2.410383701324463, 2.3124921321868896, 10.421703338623047, 13.787405014038086, 1.1468799114227295], step: 297200, lr: 9.63071761028392e-05 2023-03-19 05:47:02,990 44k INFO Train Epoch: 295 [46%] 2023-03-19 05:47:02,990 44k INFO Losses: [2.6204752922058105, 2.118741512298584, 7.826787948608398, 18.963651657104492, 1.291804552078247], step: 297400, lr: 9.63071761028392e-05 2023-03-19 05:48:16,064 44k INFO Train Epoch: 295 [65%] 2023-03-19 05:48:16,064 44k INFO Losses: [2.539644718170166, 2.3774707317352295, 9.819436073303223, 19.67876625061035, 1.4576747417449951], step: 297600, lr: 9.63071761028392e-05 2023-03-19 05:49:28,914 44k INFO Train Epoch: 295 [85%] 2023-03-19 05:49:28,914 44k INFO Losses: [2.4844210147857666, 2.249117374420166, 8.901260375976562, 18.434995651245117, 1.163118839263916], step: 297800, lr: 9.63071761028392e-05 2023-03-19 05:50:23,460 44k INFO ====> Epoch: 295, cost 381.58 s 2023-03-19 05:50:51,384 44k INFO Train Epoch: 296 [5%] 2023-03-19 05:50:51,384 44k INFO Losses: [2.356710433959961, 2.4940693378448486, 13.692913055419922, 21.106998443603516, 0.7786864638328552], step: 298000, lr: 9.629513770582634e-05 2023-03-19 05:50:54,563 44k INFO Saving model and optimizer state at iteration 296 to ./logs\44k\G_298000.pth 2023-03-19 05:50:55,320 44k INFO Saving model and optimizer state at iteration 296 to ./logs\44k\D_298000.pth 2023-03-19 05:50:56,022 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_295000.pth 2023-03-19 05:50:56,051 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_295000.pth 2023-03-19 05:52:08,547 44k INFO Train Epoch: 296 [25%] 2023-03-19 05:52:08,547 44k INFO Losses: [2.531388521194458, 2.193697929382324, 10.335982322692871, 19.41373062133789, 1.1615749597549438], step: 298200, lr: 9.629513770582634e-05 2023-03-19 05:53:21,068 44k INFO Train Epoch: 296 [45%] 2023-03-19 05:53:21,069 44k INFO Losses: [2.038114070892334, 2.8352179527282715, 11.242876052856445, 17.492053985595703, 1.4033020734786987], step: 298400, lr: 9.629513770582634e-05 2023-03-19 05:54:34,164 44k INFO Train Epoch: 296 [64%] 2023-03-19 05:54:34,165 44k INFO Losses: [2.4437875747680664, 2.4612720012664795, 11.855488777160645, 19.379554748535156, 0.9415152072906494], step: 298600, lr: 9.629513770582634e-05 2023-03-19 05:55:46,990 44k INFO Train Epoch: 296 [84%] 2023-03-19 05:55:46,991 44k INFO Losses: [2.411179304122925, 2.114713191986084, 12.275195121765137, 21.627111434936523, 1.1645004749298096], step: 298800, lr: 9.629513770582634e-05 2023-03-19 05:56:45,198 44k INFO ====> Epoch: 296, cost 381.74 s 2023-03-19 05:57:09,381 44k INFO Train Epoch: 297 [4%] 2023-03-19 05:57:09,382 44k INFO Losses: [2.413172483444214, 2.292495012283325, 9.854723930358887, 20.097890853881836, 1.0784988403320312], step: 299000, lr: 9.628310081361311e-05 2023-03-19 05:57:12,458 44k INFO Saving model and optimizer state at iteration 297 to ./logs\44k\G_299000.pth 2023-03-19 05:57:13,200 44k INFO Saving model and optimizer state at iteration 297 to ./logs\44k\D_299000.pth 2023-03-19 05:57:13,901 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_296000.pth 2023-03-19 05:57:13,930 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_296000.pth 2023-03-19 05:58:26,595 44k INFO Train Epoch: 297 [24%] 2023-03-19 05:58:26,595 44k INFO Losses: [2.590606212615967, 1.9765299558639526, 9.846349716186523, 18.993465423583984, 1.1218935251235962], step: 299200, lr: 9.628310081361311e-05 2023-03-19 05:59:39,087 44k INFO Train Epoch: 297 [44%] 2023-03-19 05:59:39,087 44k INFO Losses: [2.4165544509887695, 2.4097487926483154, 9.052254676818848, 16.999366760253906, 0.9259368777275085], step: 299400, lr: 9.628310081361311e-05 2023-03-19 06:00:52,205 44k INFO Train Epoch: 297 [63%] 2023-03-19 06:00:52,206 44k INFO Losses: [2.440910816192627, 2.3593406677246094, 12.647468566894531, 18.502155303955078, 1.1721043586730957], step: 299600, lr: 9.628310081361311e-05 2023-03-19 06:02:04,944 44k INFO Train Epoch: 297 [83%] 2023-03-19 06:02:04,945 44k INFO Losses: [2.5381593704223633, 2.4057154655456543, 8.928377151489258, 23.998186111450195, 1.4931552410125732], step: 299800, lr: 9.628310081361311e-05 2023-03-19 06:03:06,862 44k INFO ====> Epoch: 297, cost 381.66 s 2023-03-19 06:03:27,581 44k INFO Train Epoch: 298 [3%] 2023-03-19 06:03:27,582 44k INFO Losses: [2.5335206985473633, 2.4393692016601562, 10.921186447143555, 19.922853469848633, 1.0403211116790771], step: 300000, lr: 9.627106542601141e-05 2023-03-19 06:03:30,737 44k INFO Saving model and optimizer state at iteration 298 to ./logs\44k\G_300000.pth 2023-03-19 06:03:31,480 44k INFO Saving model and optimizer state at iteration 298 to ./logs\44k\D_300000.pth 2023-03-19 06:03:32,174 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_297000.pth 2023-03-19 06:03:32,202 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_297000.pth 2023-03-19 06:04:44,935 44k INFO Train Epoch: 298 [23%] 2023-03-19 06:04:44,936 44k INFO Losses: [2.5168702602386475, 2.3179566860198975, 10.024338722229004, 17.908615112304688, 1.3760931491851807], step: 300200, lr: 9.627106542601141e-05 2023-03-19 06:05:57,354 44k INFO Train Epoch: 298 [43%] 2023-03-19 06:05:57,354 44k INFO Losses: [2.374986410140991, 2.435373306274414, 11.697335243225098, 20.721189498901367, 0.978035569190979], step: 300400, lr: 9.627106542601141e-05 2023-03-19 06:07:10,527 44k INFO Train Epoch: 298 [62%] 2023-03-19 06:07:10,528 44k INFO Losses: [2.6629762649536133, 2.3976385593414307, 9.780478477478027, 19.76105499267578, 1.464755892753601], step: 300600, lr: 9.627106542601141e-05 2023-03-19 06:08:23,303 44k INFO Train Epoch: 298 [82%] 2023-03-19 06:08:23,303 44k INFO Losses: [2.511948585510254, 2.3419318199157715, 10.608918190002441, 20.408233642578125, 1.196950912475586], step: 300800, lr: 9.627106542601141e-05 2023-03-19 06:09:28,842 44k INFO ====> Epoch: 298, cost 381.98 s 2023-03-19 06:09:45,754 44k INFO Train Epoch: 299 [2%] 2023-03-19 06:09:45,755 44k INFO Losses: [2.476038932800293, 2.372464656829834, 11.352453231811523, 21.092975616455078, 1.0079160928726196], step: 301000, lr: 9.625903154283315e-05 2023-03-19 06:09:48,810 44k INFO Saving model and optimizer state at iteration 299 to ./logs\44k\G_301000.pth 2023-03-19 06:09:49,512 44k INFO Saving model and optimizer state at iteration 299 to ./logs\44k\D_301000.pth 2023-03-19 06:09:50,210 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_298000.pth 2023-03-19 06:09:50,239 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_298000.pth 2023-03-19 06:11:03,282 44k INFO Train Epoch: 299 [22%] 2023-03-19 06:11:03,282 44k INFO Losses: [2.622191905975342, 2.2337636947631836, 6.851716995239258, 21.98033332824707, 1.3363298177719116], step: 301200, lr: 9.625903154283315e-05 2023-03-19 06:12:15,546 44k INFO Train Epoch: 299 [42%] 2023-03-19 06:12:15,546 44k INFO Losses: [2.3392527103424072, 2.7367897033691406, 9.122644424438477, 16.36308479309082, 1.1410976648330688], step: 301400, lr: 9.625903154283315e-05 2023-03-19 06:13:28,817 44k INFO Train Epoch: 299 [61%] 2023-03-19 06:13:28,818 44k INFO Losses: [2.600703716278076, 2.3225960731506348, 10.988908767700195, 20.19976234436035, 1.0950205326080322], step: 301600, lr: 9.625903154283315e-05 2023-03-19 06:14:41,615 44k INFO Train Epoch: 299 [81%] 2023-03-19 06:14:41,615 44k INFO Losses: [2.5605034828186035, 2.395315408706665, 9.380200386047363, 20.876789093017578, 1.3205935955047607], step: 301800, lr: 9.625903154283315e-05 2023-03-19 06:15:50,808 44k INFO ====> Epoch: 299, cost 381.97 s 2023-03-19 06:16:04,186 44k INFO Train Epoch: 300 [1%] 2023-03-19 06:16:04,186 44k INFO Losses: [2.6398797035217285, 2.0802934169769287, 11.860674858093262, 21.181856155395508, 1.2678302526474], step: 302000, lr: 9.62469991638903e-05 2023-03-19 06:16:07,258 44k INFO Saving model and optimizer state at iteration 300 to ./logs\44k\G_302000.pth 2023-03-19 06:16:07,993 44k INFO Saving model and optimizer state at iteration 300 to ./logs\44k\D_302000.pth 2023-03-19 06:16:08,681 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_299000.pth 2023-03-19 06:16:08,712 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_299000.pth 2023-03-19 06:17:21,762 44k INFO Train Epoch: 300 [21%] 2023-03-19 06:17:21,762 44k INFO Losses: [2.44287109375, 2.339365005493164, 5.467808723449707, 18.5118350982666, 1.214806318283081], step: 302200, lr: 9.62469991638903e-05 2023-03-19 06:18:33,994 44k INFO Train Epoch: 300 [41%] 2023-03-19 06:18:33,995 44k INFO Losses: [2.523818016052246, 1.9876346588134766, 11.496214866638184, 18.519207000732422, 1.6754769086837769], step: 302400, lr: 9.62469991638903e-05 2023-03-19 06:19:47,302 44k INFO Train Epoch: 300 [60%] 2023-03-19 06:19:47,302 44k INFO Losses: [2.5739831924438477, 2.406487464904785, 10.81116008758545, 19.159873962402344, 1.1755107641220093], step: 302600, lr: 9.62469991638903e-05 2023-03-19 06:21:00,109 44k INFO Train Epoch: 300 [80%] 2023-03-19 06:21:00,109 44k INFO Losses: [2.406280040740967, 2.475963830947876, 7.3182806968688965, 16.511987686157227, 1.6205320358276367], step: 302800, lr: 9.62469991638903e-05 2023-03-19 06:22:12,847 44k INFO ====> Epoch: 300, cost 382.04 s
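
The records above follow a fixed pattern: a "Losses: [...]" entry every 200 steps, a "Saving model and optimizer state" pair every 1000 steps, and, as the "Free up space by deleting ckpt" lines show, deletion of the G_/D_ pair saved three checkpoints earlier, so only the most recent few checkpoint pairs survive on disk. To work with the loss numbers rather than the raw text, the sketch below extracts each record from a log like this one and writes it to a CSV. It is a minimal illustration under stated assumptions, not part of the trainer: the file names train.log and losses.csv are hypothetical, and the five values per record are emitted as generic loss_0, loss_1, ... columns (the usual order in VITS-style trainers is discriminator, generator, feature-matching, mel, and KL losses, but that should be confirmed against the training script before labelling plots).

# Minimal sketch (assumptions: the log text is saved as "train.log"; the output
# file name "losses.csv" is likewise hypothetical). Records are matched with a
# regex over the whole text, so it does not matter that the pasted log runs
# several entries together on one line.
import csv
import re
from pathlib import Path

# Matches e.g.:
# Losses: [2.45, 2.18, 11.57, 22.16, 1.36], step: 267200, lr: 9.666902867899003e-05
LOSS_RE = re.compile(r"Losses: \[([^\]]+)\], step: (\d+), lr: ([0-9eE.+-]+)")

def extract_records(log_text: str):
    """Yield (step, lr, [losses...]) for every loss record found in the text."""
    for m in LOSS_RE.finditer(log_text):
        losses = [float(x) for x in m.group(1).split(",")]
        yield int(m.group(2)), float(m.group(3)), losses

def main() -> None:
    text = Path("train.log").read_text(encoding="utf-8")
    records = list(extract_records(text))
    with open("losses.csv", "w", newline="") as f:
        writer = csv.writer(f)
        n_losses = len(records[0][2]) if records else 0
        writer.writerow(["step", "lr", *[f"loss_{i}" for i in range(n_losses)]])
        for step, lr, losses in records:
            writer.writerow([step, lr, *losses])

if __name__ == "__main__":
    main()

The resulting CSV loads directly into pandas or a spreadsheet, which makes it easy to plot each loss term against step or against the slowly decaying learning rate shown in the log.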