dimitarrskv committed on
Commit
3bfcfcd
1 Parent(s): c9d1664

Upload folder using huggingface_hub

.summary/0/events.out.tfevents.1693871676.921002c1ac2b ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5499ea5afeccc4c858c9ad281a9f5786cb730ddc150ec3b42cb71791b7fe70e4
+ size 245865
README.md CHANGED
@@ -15,7 +15,7 @@ model-index:
  type: doom_health_gathering_supreme
  metrics:
  - type: mean_reward
- value: 4.01 +/- 0.80
+ value: 7.10 +/- 4.31
  name: mean_reward
  verified: false
  ---
checkpoint_p0/best_000000514_2105344_reward_14.841.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:63bfe354aeaffb22a182a5c7cf61c083caeb45afd94374c620215dd67092b377
+ size 34928806
checkpoint_p0/checkpoint_000000431_1765376.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1d11e78c15e68001796f67c0007f462e5e1842b486b305c5c56a6eb2c899bac0
+ size 34929220
checkpoint_p0/checkpoint_000000514_2105344.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8aae112b7ebcf9c6242ff83b185bcccb9de84e20d0d1cb58eba359bb33bcbb3a
+ size 34929220
config.json CHANGED
@@ -65,7 +65,7 @@
  "summaries_use_frameskip": true,
  "heartbeat_interval": 20,
  "heartbeat_reporting_interval": 600,
- "train_for_env_steps": 1100000,
+ "train_for_env_steps": 2100000,
  "train_for_seconds": 10000000000,
  "save_every_sec": 120,
  "keep_checkpoints": 2,
replay.mp4 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:c93677db54aaffed5cc8540f77e7a09646150117b252792ce0d99792eab13e96
- size 5919946
+ oid sha256:5e442a57ca2fbf0638e395909b8ae77502f375f113b0f3f2bce70a17b74b8512
+ size 13169990
sf_log.txt CHANGED
@@ -679,3 +679,832 @@ main_loop: 391.3949
  [2023-09-04 23:53:10,904][01455] Avg episode rewards: #0: 4.512, true rewards: #0: 4.012
  [2023-09-04 23:53:10,909][01455] Avg episode reward: 4.512, avg true_objective: 4.012
  [2023-09-04 23:53:33,988][01455] Replay video saved to /content/train_dir/default_experiment/replay.mp4!
+ [2023-09-04 23:53:37,685][01455] The model has been pushed to https://huggingface.co/dimitarrskv/rl_course_vizdoom_health_gathering_supreme
+ [2023-09-04 23:54:36,291][01455] Environment doom_basic already registered, overwriting...
+ [2023-09-04 23:54:36,294][01455] Environment doom_two_colors_easy already registered, overwriting...
+ [2023-09-04 23:54:36,297][01455] Environment doom_two_colors_hard already registered, overwriting...
+ [2023-09-04 23:54:36,298][01455] Environment doom_dm already registered, overwriting...
+ [2023-09-04 23:54:36,301][01455] Environment doom_dwango5 already registered, overwriting...
+ [2023-09-04 23:54:36,302][01455] Environment doom_my_way_home_flat_actions already registered, overwriting...
+ [2023-09-04 23:54:36,303][01455] Environment doom_defend_the_center_flat_actions already registered, overwriting...
+ [2023-09-04 23:54:36,304][01455] Environment doom_my_way_home already registered, overwriting...
+ [2023-09-04 23:54:36,305][01455] Environment doom_deadly_corridor already registered, overwriting...
+ [2023-09-04 23:54:36,306][01455] Environment doom_defend_the_center already registered, overwriting...
+ [2023-09-04 23:54:36,307][01455] Environment doom_defend_the_line already registered, overwriting...
+ [2023-09-04 23:54:36,309][01455] Environment doom_health_gathering already registered, overwriting...
+ [2023-09-04 23:54:36,310][01455] Environment doom_health_gathering_supreme already registered, overwriting...
+ [2023-09-04 23:54:36,311][01455] Environment doom_battle already registered, overwriting...
+ [2023-09-04 23:54:36,313][01455] Environment doom_battle2 already registered, overwriting...
+ [2023-09-04 23:54:36,314][01455] Environment doom_duel_bots already registered, overwriting...
+ [2023-09-04 23:54:36,316][01455] Environment doom_deathmatch_bots already registered, overwriting...
+ [2023-09-04 23:54:36,317][01455] Environment doom_duel already registered, overwriting...
+ [2023-09-04 23:54:36,318][01455] Environment doom_deathmatch_full already registered, overwriting...
+ [2023-09-04 23:54:36,319][01455] Environment doom_benchmark already registered, overwriting...
+ [2023-09-04 23:54:36,321][01455] register_encoder_factory: <function make_vizdoom_encoder at 0x7e0fb9266440>
+ [2023-09-04 23:54:36,348][01455] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
+ [2023-09-04 23:54:36,356][01455] Overriding arg 'train_for_env_steps' with value 2100000 passed from command line
+ [2023-09-04 23:54:36,362][01455] Experiment dir /content/train_dir/default_experiment already exists!
+ [2023-09-04 23:54:36,365][01455] Resuming existing experiment from /content/train_dir/default_experiment...
+ [2023-09-04 23:54:36,368][01455] Weights and Biases integration disabled
+ [2023-09-04 23:54:36,372][01455] Environment var CUDA_VISIBLE_DEVICES is 0
+
+ [2023-09-04 23:54:38,496][01455] Starting experiment with the following configuration:
+ help=False
+ algo=APPO
+ env=doom_health_gathering_supreme
+ experiment=default_experiment
+ train_dir=/content/train_dir
+ restart_behavior=resume
+ device=gpu
+ seed=None
+ num_policies=1
+ async_rl=True
+ serial_mode=False
+ batched_sampling=False
+ num_batches_to_accumulate=2
+ worker_num_splits=2
+ policy_workers_per_policy=1
+ max_policy_lag=1000
+ num_workers=8
+ num_envs_per_worker=4
+ batch_size=1024
+ num_batches_per_epoch=1
+ num_epochs=1
+ rollout=32
+ recurrence=32
+ shuffle_minibatches=False
+ gamma=0.99
+ reward_scale=1.0
+ reward_clip=1000.0
+ value_bootstrap=False
+ normalize_returns=True
+ exploration_loss_coeff=0.001
+ value_loss_coeff=0.5
+ kl_loss_coeff=0.0
+ exploration_loss=symmetric_kl
+ gae_lambda=0.95
+ ppo_clip_ratio=0.1
+ ppo_clip_value=0.2
+ with_vtrace=False
+ vtrace_rho=1.0
+ vtrace_c=1.0
+ optimizer=adam
+ adam_eps=1e-06
+ adam_beta1=0.9
+ adam_beta2=0.999
+ max_grad_norm=4.0
+ learning_rate=0.0001
+ lr_schedule=constant
+ lr_schedule_kl_threshold=0.008
+ lr_adaptive_min=1e-06
+ lr_adaptive_max=0.01
+ obs_subtract_mean=0.0
+ obs_scale=255.0
+ normalize_input=True
+ normalize_input_keys=None
+ decorrelate_experience_max_seconds=0
+ decorrelate_envs_on_one_worker=True
+ actor_worker_gpus=[]
+ set_workers_cpu_affinity=True
+ force_envs_single_thread=False
+ default_niceness=0
+ log_to_file=True
+ experiment_summaries_interval=10
+ flush_summaries_interval=30
+ stats_avg=100
+ summaries_use_frameskip=True
+ heartbeat_interval=20
+ heartbeat_reporting_interval=600
+ train_for_env_steps=2100000
+ train_for_seconds=10000000000
+ save_every_sec=120
+ keep_checkpoints=2
+ load_checkpoint_kind=latest
+ save_milestones_sec=-1
+ save_best_every_sec=5
+ save_best_metric=reward
+ save_best_after=100000
+ benchmark=False
+ encoder_mlp_layers=[512, 512]
+ encoder_conv_architecture=convnet_simple
+ encoder_conv_mlp_layers=[512]
+ use_rnn=True
+ rnn_size=512
+ rnn_type=gru
+ rnn_num_layers=1
+ decoder_mlp_layers=[]
+ nonlinearity=elu
+ policy_initialization=orthogonal
+ policy_init_gain=1.0
+ actor_critic_share_weights=True
+ adaptive_stddev=True
+ continuous_tanh_scale=0.0
+ initial_stddev=1.0
+ use_env_info_cache=False
+ env_gpu_actions=False
+ env_gpu_observations=True
+ env_frameskip=4
+ env_framestack=1
+ pixel_format=CHW
+ use_record_episode_statistics=False
+ with_wandb=False
+ wandb_user=None
+ wandb_project=sample_factory
+ wandb_group=None
+ wandb_job_type=SF
+ wandb_tags=[]
+ with_pbt=False
+ pbt_mix_policies_in_one_env=True
+ pbt_period_env_steps=5000000
+ pbt_start_mutation=20000000
+ pbt_replace_fraction=0.3
+ pbt_mutation_rate=0.15
+ pbt_replace_reward_gap=0.1
+ pbt_replace_reward_gap_absolute=1e-06
+ pbt_optimize_gamma=False
+ pbt_target_objective=true_objective
+ pbt_perturb_min=1.1
+ pbt_perturb_max=1.5
+ num_agents=-1
+ num_humans=0
+ num_bots=-1
+ start_bot_difficulty=None
+ timelimit=None
+ res_w=128
+ res_h=72
+ wide_aspect_ratio=False
+ eval_env_frameskip=1
+ fps=35
+ command_line=--env=doom_health_gathering_supreme --num_workers=8 --num_envs_per_worker=4 --train_for_env_steps=1100000
+ cli_args={'env': 'doom_health_gathering_supreme', 'num_workers': 8, 'num_envs_per_worker': 4, 'train_for_env_steps': 1100000}
+ git_hash=unknown
+ git_repo_name=not a git repository
+ [2023-09-04 23:54:38,500][01455] Saving configuration to /content/train_dir/default_experiment/config.json...
+ [2023-09-04 23:54:38,505][01455] Rollout worker 0 uses device cpu
+ [2023-09-04 23:54:38,506][01455] Rollout worker 1 uses device cpu
+ [2023-09-04 23:54:38,509][01455] Rollout worker 2 uses device cpu
+ [2023-09-04 23:54:38,510][01455] Rollout worker 3 uses device cpu
+ [2023-09-04 23:54:38,512][01455] Rollout worker 4 uses device cpu
+ [2023-09-04 23:54:38,513][01455] Rollout worker 5 uses device cpu
+ [2023-09-04 23:54:38,514][01455] Rollout worker 6 uses device cpu
+ [2023-09-04 23:54:38,515][01455] Rollout worker 7 uses device cpu
+ [2023-09-04 23:54:38,591][01455] Using GPUs [0] for process 0 (actually maps to GPUs [0])
+ [2023-09-04 23:54:38,593][01455] InferenceWorker_p0-w0: min num requests: 2
+ [2023-09-04 23:54:38,623][01455] Starting all processes...
+ [2023-09-04 23:54:38,626][01455] Starting process learner_proc0
+ [2023-09-04 23:54:38,677][01455] Starting all processes...
+ [2023-09-04 23:54:38,691][01455] Starting process inference_proc0-0
+ [2023-09-04 23:54:38,691][01455] Starting process rollout_proc0
+ [2023-09-04 23:54:38,696][01455] Starting process rollout_proc1
+ [2023-09-04 23:54:38,696][01455] Starting process rollout_proc2
+ [2023-09-04 23:54:38,696][01455] Starting process rollout_proc3
+ [2023-09-04 23:54:38,696][01455] Starting process rollout_proc4
+ [2023-09-04 23:54:38,696][01455] Starting process rollout_proc5
+ [2023-09-04 23:54:38,696][01455] Starting process rollout_proc6
+ [2023-09-04 23:54:38,696][01455] Starting process rollout_proc7
+ [2023-09-04 23:54:54,927][14762] Worker 7 uses CPU cores [1]
+ [2023-09-04 23:54:55,027][14754] Using GPUs [0] for process 0 (actually maps to GPUs [0])
+ [2023-09-04 23:54:55,033][14754] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0
+ [2023-09-04 23:54:55,161][14754] Num visible devices: 1
+ [2023-09-04 23:54:55,218][14756] Worker 1 uses CPU cores [1]
+ [2023-09-04 23:54:55,223][14755] Worker 0 uses CPU cores [0]
+ [2023-09-04 23:54:55,333][14761] Worker 6 uses CPU cores [0]
+ [2023-09-04 23:54:55,355][14758] Worker 3 uses CPU cores [1]
+ [2023-09-04 23:54:55,368][14739] Using GPUs [0] for process 0 (actually maps to GPUs [0])
+ [2023-09-04 23:54:55,369][14739] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0
+ [2023-09-04 23:54:55,383][14760] Worker 4 uses CPU cores [0]
+ [2023-09-04 23:54:55,390][14759] Worker 5 uses CPU cores [1]
+ [2023-09-04 23:54:55,391][14739] Num visible devices: 1
+ [2023-09-04 23:54:55,403][14757] Worker 2 uses CPU cores [0]
+ [2023-09-04 23:54:55,414][14739] Starting seed is not provided
+ [2023-09-04 23:54:55,415][14739] Using GPUs [0] for process 0 (actually maps to GPUs [0])
+ [2023-09-04 23:54:55,415][14739] Initializing actor-critic model on device cuda:0
+ [2023-09-04 23:54:55,416][14739] RunningMeanStd input shape: (3, 72, 128)
+ [2023-09-04 23:54:55,417][14739] RunningMeanStd input shape: (1,)
+ [2023-09-04 23:54:55,430][14739] ConvEncoder: input_channels=3
+ [2023-09-04 23:54:55,555][14739] Conv encoder output size: 512
+ [2023-09-04 23:54:55,555][14739] Policy head output size: 512
+ [2023-09-04 23:54:55,570][14739] Created Actor Critic model with architecture:
+ [2023-09-04 23:54:55,570][14739] ActorCriticSharedWeights(
+ (obs_normalizer): ObservationNormalizer(
+ (running_mean_std): RunningMeanStdDictInPlace(
+ (running_mean_std): ModuleDict(
+ (obs): RunningMeanStdInPlace()
+ )
+ )
+ )
+ (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace)
+ (encoder): VizdoomEncoder(
+ (basic_encoder): ConvEncoder(
+ (enc): RecursiveScriptModule(
+ original_name=ConvEncoderImpl
+ (conv_head): RecursiveScriptModule(
+ original_name=Sequential
+ (0): RecursiveScriptModule(original_name=Conv2d)
+ (1): RecursiveScriptModule(original_name=ELU)
+ (2): RecursiveScriptModule(original_name=Conv2d)
+ (3): RecursiveScriptModule(original_name=ELU)
+ (4): RecursiveScriptModule(original_name=Conv2d)
+ (5): RecursiveScriptModule(original_name=ELU)
+ )
+ (mlp_layers): RecursiveScriptModule(
+ original_name=Sequential
+ (0): RecursiveScriptModule(original_name=Linear)
+ (1): RecursiveScriptModule(original_name=ELU)
+ )
+ )
+ )
+ )
+ (core): ModelCoreRNN(
+ (core): GRU(512, 512)
+ )
+ (decoder): MlpDecoder(
+ (mlp): Identity()
+ )
+ (critic_linear): Linear(in_features=512, out_features=1, bias=True)
+ (action_parameterization): ActionParameterizationDefault(
+ (distribution_linear): Linear(in_features=512, out_features=5, bias=True)
+ )
+ )
+ [2023-09-04 23:54:55,798][14739] Using optimizer <class 'torch.optim.adam.Adam'>
+ [2023-09-04 23:54:55,799][14739] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000270_1105920.pth...
+ [2023-09-04 23:54:55,834][14739] Loading model from checkpoint
+ [2023-09-04 23:54:55,839][14739] Loaded experiment state at self.train_step=270, self.env_steps=1105920
+ [2023-09-04 23:54:55,839][14739] Initialized policy 0 weights for model version 270
+ [2023-09-04 23:54:55,842][14739] LearnerWorker_p0 finished initialization!
+ [2023-09-04 23:54:55,843][14739] Using GPUs [0] for process 0 (actually maps to GPUs [0])
+ [2023-09-04 23:54:56,042][14754] RunningMeanStd input shape: (3, 72, 128)
+ [2023-09-04 23:54:56,044][14754] RunningMeanStd input shape: (1,)
+ [2023-09-04 23:54:56,056][14754] ConvEncoder: input_channels=3
+ [2023-09-04 23:54:56,161][14754] Conv encoder output size: 512
+ [2023-09-04 23:54:56,161][14754] Policy head output size: 512
+ [2023-09-04 23:54:56,223][01455] Inference worker 0-0 is ready!
+ [2023-09-04 23:54:56,224][01455] All inference workers are ready! Signal rollout workers to start!
+ [2023-09-04 23:54:56,372][01455] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 1105920. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
+ [2023-09-04 23:54:56,541][14762] Doom resolution: 160x120, resize resolution: (128, 72)
+ [2023-09-04 23:54:56,565][14760] Doom resolution: 160x120, resize resolution: (128, 72)
+ [2023-09-04 23:54:56,569][14761] Doom resolution: 160x120, resize resolution: (128, 72)
+ [2023-09-04 23:54:56,585][14757] Doom resolution: 160x120, resize resolution: (128, 72)
+ [2023-09-04 23:54:56,607][14755] Doom resolution: 160x120, resize resolution: (128, 72)
+ [2023-09-04 23:54:56,609][14756] Doom resolution: 160x120, resize resolution: (128, 72)
+ [2023-09-04 23:54:56,612][14758] Doom resolution: 160x120, resize resolution: (128, 72)
+ [2023-09-04 23:54:56,616][14759] Doom resolution: 160x120, resize resolution: (128, 72)
+ [2023-09-04 23:54:58,439][14762] Decorrelating experience for 0 frames...
+ [2023-09-04 23:54:58,450][14759] Decorrelating experience for 0 frames...
+ [2023-09-04 23:54:58,469][14756] Decorrelating experience for 0 frames...
+ [2023-09-04 23:54:58,583][01455] Heartbeat connected on Batcher_0
+ [2023-09-04 23:54:58,587][01455] Heartbeat connected on LearnerWorker_p0
+ [2023-09-04 23:54:58,635][01455] Heartbeat connected on InferenceWorker_p0-w0
+ [2023-09-04 23:54:59,362][14761] Decorrelating experience for 0 frames...
+ [2023-09-04 23:54:59,365][14760] Decorrelating experience for 0 frames...
+ [2023-09-04 23:54:59,360][14757] Decorrelating experience for 0 frames...
+ [2023-09-04 23:54:59,393][14755] Decorrelating experience for 0 frames...
+ [2023-09-04 23:55:00,542][14762] Decorrelating experience for 32 frames...
+ [2023-09-04 23:55:00,547][14759] Decorrelating experience for 32 frames...
+ [2023-09-04 23:55:00,625][14756] Decorrelating experience for 32 frames...
+ [2023-09-04 23:55:01,213][14757] Decorrelating experience for 32 frames...
+ [2023-09-04 23:55:01,312][14755] Decorrelating experience for 32 frames...
+ [2023-09-04 23:55:01,372][01455] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 1105920. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
+ [2023-09-04 23:55:01,459][14758] Decorrelating experience for 0 frames...
+ [2023-09-04 23:55:02,321][14760] Decorrelating experience for 32 frames...
+ [2023-09-04 23:55:02,874][14761] Decorrelating experience for 32 frames...
+ [2023-09-04 23:55:03,204][14762] Decorrelating experience for 64 frames...
+ [2023-09-04 23:55:03,386][14758] Decorrelating experience for 32 frames...
+ [2023-09-04 23:55:03,400][14759] Decorrelating experience for 64 frames...
+ [2023-09-04 23:55:03,717][14755] Decorrelating experience for 64 frames...
+ [2023-09-04 23:55:04,603][14760] Decorrelating experience for 64 frames...
+ [2023-09-04 23:55:04,616][14756] Decorrelating experience for 64 frames...
+ [2023-09-04 23:55:04,775][14757] Decorrelating experience for 64 frames...
+ [2023-09-04 23:55:04,839][14762] Decorrelating experience for 96 frames...
+ [2023-09-04 23:55:05,027][14761] Decorrelating experience for 64 frames...
+ [2023-09-04 23:55:05,084][01455] Heartbeat connected on RolloutWorker_w7
+ [2023-09-04 23:55:06,054][14757] Decorrelating experience for 96 frames...
+ [2023-09-04 23:55:06,173][14759] Decorrelating experience for 96 frames...
+ [2023-09-04 23:55:06,304][01455] Heartbeat connected on RolloutWorker_w2
+ [2023-09-04 23:55:06,372][01455] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 1105920. Throughput: 0: 1.2. Samples: 12. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
+ [2023-09-04 23:55:06,379][01455] Avg episode reward: [(0, '1.280')]
+ [2023-09-04 23:55:06,483][01455] Heartbeat connected on RolloutWorker_w5
+ [2023-09-04 23:55:06,553][14761] Decorrelating experience for 96 frames...
+ [2023-09-04 23:55:06,607][14756] Decorrelating experience for 96 frames...
+ [2023-09-04 23:55:06,891][14758] Decorrelating experience for 64 frames...
+ [2023-09-04 23:55:06,969][01455] Heartbeat connected on RolloutWorker_w1
+ [2023-09-04 23:55:07,001][01455] Heartbeat connected on RolloutWorker_w6
+ [2023-09-04 23:55:08,700][14760] Decorrelating experience for 96 frames...
+ [2023-09-04 23:55:09,064][14755] Decorrelating experience for 96 frames...
+ [2023-09-04 23:55:09,505][01455] Heartbeat connected on RolloutWorker_w4
+ [2023-09-04 23:55:09,937][01455] Heartbeat connected on RolloutWorker_w0
+ [2023-09-04 23:55:10,464][14739] Signal inference workers to stop experience collection...
+ [2023-09-04 23:55:10,482][14754] InferenceWorker_p0-w0: stopping experience collection
+ [2023-09-04 23:55:10,518][14758] Decorrelating experience for 96 frames...
+ [2023-09-04 23:55:10,604][01455] Heartbeat connected on RolloutWorker_w3
+ [2023-09-04 23:55:11,372][01455] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 1105920. Throughput: 0: 160.7. Samples: 2410. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
+ [2023-09-04 23:55:11,374][01455] Avg episode reward: [(0, '3.729')]
+ [2023-09-04 23:55:11,597][14739] Signal inference workers to resume experience collection...
+ [2023-09-04 23:55:11,598][14754] InferenceWorker_p0-w0: resuming experience collection
+ [2023-09-04 23:55:16,374][01455] Fps is (10 sec: 1638.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 1122304. Throughput: 0: 199.0. Samples: 3980. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
+ [2023-09-04 23:55:16,381][01455] Avg episode reward: [(0, '4.079')]
+ [2023-09-04 23:55:21,372][01455] Fps is (10 sec: 2867.2, 60 sec: 1146.9, 300 sec: 1146.9). Total num frames: 1134592. Throughput: 0: 304.8. Samples: 7620. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+ [2023-09-04 23:55:21,378][01455] Avg episode reward: [(0, '4.961')]
+ [2023-09-04 23:55:25,220][14754] Updated weights for policy 0, policy_version 280 (0.0017)
+ [2023-09-04 23:55:26,373][01455] Fps is (10 sec: 2457.8, 60 sec: 1365.3, 300 sec: 1365.3). Total num frames: 1146880. Throughput: 0: 385.3. Samples: 11560. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+ [2023-09-04 23:55:26,378][01455] Avg episode reward: [(0, '5.128')]
+ [2023-09-04 23:55:31,372][01455] Fps is (10 sec: 3276.8, 60 sec: 1755.4, 300 sec: 1755.4). Total num frames: 1167360. Throughput: 0: 408.6. Samples: 14300. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+ [2023-09-04 23:55:31,375][01455] Avg episode reward: [(0, '5.277')]
+ [2023-09-04 23:55:36,378][01455] Fps is (10 sec: 3684.5, 60 sec: 1945.3, 300 sec: 1945.3). Total num frames: 1183744. Throughput: 0: 504.7. Samples: 20192. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+ [2023-09-04 23:55:36,387][01455] Avg episode reward: [(0, '5.129')]
+ [2023-09-04 23:55:36,443][14754] Updated weights for policy 0, policy_version 290 (0.0027)
+ [2023-09-04 23:55:41,373][01455] Fps is (10 sec: 3276.8, 60 sec: 2093.5, 300 sec: 2093.5). Total num frames: 1200128. Throughput: 0: 532.9. Samples: 23980. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+ [2023-09-04 23:55:41,382][01455] Avg episode reward: [(0, '5.248')]
+ [2023-09-04 23:55:46,372][01455] Fps is (10 sec: 2868.7, 60 sec: 2129.9, 300 sec: 2129.9). Total num frames: 1212416. Throughput: 0: 574.4. Samples: 25848. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+ [2023-09-04 23:55:46,375][01455] Avg episode reward: [(0, '5.641')]
+ [2023-09-04 23:55:46,382][14739] Saving new best policy, reward=5.641!
+ [2023-09-04 23:55:50,067][14754] Updated weights for policy 0, policy_version 300 (0.0033)
+ [2023-09-04 23:55:51,372][01455] Fps is (10 sec: 3276.8, 60 sec: 2308.7, 300 sec: 2308.7). Total num frames: 1232896. Throughput: 0: 697.1. Samples: 31380. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+ [2023-09-04 23:55:51,375][01455] Avg episode reward: [(0, '5.893')]
+ [2023-09-04 23:55:51,383][14739] Saving new best policy, reward=5.893!
+ [2023-09-04 23:55:56,378][01455] Fps is (10 sec: 3684.5, 60 sec: 2389.1, 300 sec: 2389.1). Total num frames: 1249280. Throughput: 0: 771.0. Samples: 37110. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+ [2023-09-04 23:55:56,388][01455] Avg episode reward: [(0, '5.916')]
+ [2023-09-04 23:55:56,400][14739] Saving new best policy, reward=5.916!
+ [2023-09-04 23:56:01,373][01455] Fps is (10 sec: 2867.1, 60 sec: 2594.1, 300 sec: 2394.6). Total num frames: 1261568. Throughput: 0: 776.1. Samples: 38902. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+ [2023-09-04 23:56:01,379][01455] Avg episode reward: [(0, '5.739')]
+ [2023-09-04 23:56:03,239][14754] Updated weights for policy 0, policy_version 310 (0.0028)
+ [2023-09-04 23:56:06,372][01455] Fps is (10 sec: 2458.9, 60 sec: 2798.9, 300 sec: 2399.1). Total num frames: 1273856. Throughput: 0: 779.2. Samples: 42686. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+ [2023-09-04 23:56:06,375][01455] Avg episode reward: [(0, '5.661')]
+ [2023-09-04 23:56:11,372][01455] Fps is (10 sec: 3276.9, 60 sec: 3140.3, 300 sec: 2512.2). Total num frames: 1294336. Throughput: 0: 817.6. Samples: 48350. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+ [2023-09-04 23:56:11,378][01455] Avg episode reward: [(0, '5.821')]
+ [2023-09-04 23:56:14,615][14754] Updated weights for policy 0, policy_version 320 (0.0043)
+ [2023-09-04 23:56:16,373][01455] Fps is (10 sec: 4096.0, 60 sec: 3208.6, 300 sec: 2611.2). Total num frames: 1314816. Throughput: 0: 822.0. Samples: 51292. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+ [2023-09-04 23:56:16,374][01455] Avg episode reward: [(0, '5.625')]
+ [2023-09-04 23:56:21,373][01455] Fps is (10 sec: 3276.7, 60 sec: 3208.5, 300 sec: 2602.2). Total num frames: 1327104. Throughput: 0: 789.1. Samples: 55696. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+ [2023-09-04 23:56:21,378][01455] Avg episode reward: [(0, '5.429')]
+ [2023-09-04 23:56:26,377][01455] Fps is (10 sec: 2456.6, 60 sec: 3208.3, 300 sec: 2594.0). Total num frames: 1339392. Throughput: 0: 790.1. Samples: 59536. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+ [2023-09-04 23:56:26,379][01455] Avg episode reward: [(0, '5.610')]
+ [2023-09-04 23:56:28,717][14754] Updated weights for policy 0, policy_version 330 (0.0013)
+ [2023-09-04 23:56:31,372][01455] Fps is (10 sec: 3276.9, 60 sec: 3208.5, 300 sec: 2673.2). Total num frames: 1359872. Throughput: 0: 810.6. Samples: 62324. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
+ [2023-09-04 23:56:31,380][01455] Avg episode reward: [(0, '5.613')]
+ [2023-09-04 23:56:36,372][01455] Fps is (10 sec: 4097.8, 60 sec: 3277.1, 300 sec: 2744.3). Total num frames: 1380352. Throughput: 0: 823.1. Samples: 68420. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+ [2023-09-04 23:56:36,380][01455] Avg episode reward: [(0, '6.203')]
+ [2023-09-04 23:56:36,394][14739] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000337_1380352.pth...
+ [2023-09-04 23:56:36,525][14739] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000241_987136.pth
+ [2023-09-04 23:56:36,534][14739] Saving new best policy, reward=6.203!
+ [2023-09-04 23:56:40,644][14754] Updated weights for policy 0, policy_version 340 (0.0016)
+ [2023-09-04 23:56:41,372][01455] Fps is (10 sec: 3276.8, 60 sec: 3208.5, 300 sec: 2730.7). Total num frames: 1392640. Throughput: 0: 789.6. Samples: 72636. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+ [2023-09-04 23:56:41,376][01455] Avg episode reward: [(0, '6.473')]
+ [2023-09-04 23:56:41,379][14739] Saving new best policy, reward=6.473!
+ [2023-09-04 23:56:46,373][01455] Fps is (10 sec: 2457.5, 60 sec: 3208.5, 300 sec: 2718.2). Total num frames: 1404928. Throughput: 0: 790.4. Samples: 74470. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+ [2023-09-04 23:56:46,378][01455] Avg episode reward: [(0, '6.912')]
+ [2023-09-04 23:56:46,390][14739] Saving new best policy, reward=6.912!
+ [2023-09-04 23:56:51,372][01455] Fps is (10 sec: 2867.2, 60 sec: 3140.3, 300 sec: 2742.5). Total num frames: 1421312. Throughput: 0: 814.4. Samples: 79332. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+ [2023-09-04 23:56:51,378][01455] Avg episode reward: [(0, '6.891')]
+ [2023-09-04 23:56:53,375][14754] Updated weights for policy 0, policy_version 350 (0.0030)
+ [2023-09-04 23:56:56,372][01455] Fps is (10 sec: 4096.1, 60 sec: 3277.1, 300 sec: 2833.1). Total num frames: 1445888. Throughput: 0: 823.9. Samples: 85424. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+ [2023-09-04 23:56:56,379][01455] Avg episode reward: [(0, '7.131')]
+ [2023-09-04 23:56:56,389][14739] Saving new best policy, reward=7.131!
+ [2023-09-04 23:57:01,372][01455] Fps is (10 sec: 3686.4, 60 sec: 3276.8, 300 sec: 2818.0). Total num frames: 1458176. Throughput: 0: 805.8. Samples: 87554. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+ [2023-09-04 23:57:01,382][01455] Avg episode reward: [(0, '7.265')]
+ [2023-09-04 23:57:01,385][14739] Saving new best policy, reward=7.265!
+ [2023-09-04 23:57:06,373][01455] Fps is (10 sec: 2457.4, 60 sec: 3276.8, 300 sec: 2804.2). Total num frames: 1470464. Throughput: 0: 789.9. Samples: 91244. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+ [2023-09-04 23:57:06,380][01455] Avg episode reward: [(0, '7.116')]
+ [2023-09-04 23:57:07,440][14754] Updated weights for policy 0, policy_version 360 (0.0020)
+ [2023-09-04 23:57:11,373][01455] Fps is (10 sec: 2867.2, 60 sec: 3208.5, 300 sec: 2821.7). Total num frames: 1486848. Throughput: 0: 817.6. Samples: 96326. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+ [2023-09-04 23:57:11,380][01455] Avg episode reward: [(0, '7.060')]
+ [2023-09-04 23:57:16,372][01455] Fps is (10 sec: 3686.7, 60 sec: 3208.5, 300 sec: 2867.2). Total num frames: 1507328. Throughput: 0: 823.5. Samples: 99382. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+ [2023-09-04 23:57:16,376][01455] Avg episode reward: [(0, '7.042')]
+ [2023-09-04 23:57:17,683][14754] Updated weights for policy 0, policy_version 370 (0.0015)
+ [2023-09-04 23:57:21,377][01455] Fps is (10 sec: 3684.6, 60 sec: 3276.5, 300 sec: 2881.2). Total num frames: 1523712. Throughput: 0: 805.1. Samples: 104652. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+ [2023-09-04 23:57:21,383][01455] Avg episode reward: [(0, '7.738')]
+ [2023-09-04 23:57:21,390][14739] Saving new best policy, reward=7.738!
+ [2023-09-04 23:57:26,372][01455] Fps is (10 sec: 2867.2, 60 sec: 3277.0, 300 sec: 2867.2). Total num frames: 1536000. Throughput: 0: 796.5. Samples: 108478. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+ [2023-09-04 23:57:26,378][01455] Avg episode reward: [(0, '7.998')]
+ [2023-09-04 23:57:26,396][14739] Saving new best policy, reward=7.998!
+ [2023-09-04 23:57:31,372][01455] Fps is (10 sec: 2868.6, 60 sec: 3208.5, 300 sec: 2880.4). Total num frames: 1552384. Throughput: 0: 799.7. Samples: 110458. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+ [2023-09-04 23:57:31,376][01455] Avg episode reward: [(0, '8.727')]
+ [2023-09-04 23:57:31,378][14739] Saving new best policy, reward=8.727!
+ [2023-09-04 23:57:31,967][14754] Updated weights for policy 0, policy_version 380 (0.0018)
+ [2023-09-04 23:57:36,373][01455] Fps is (10 sec: 3686.4, 60 sec: 3208.5, 300 sec: 2918.4). Total num frames: 1572864. Throughput: 0: 825.2. Samples: 116464. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+ [2023-09-04 23:57:36,375][01455] Avg episode reward: [(0, '8.839')]
+ [2023-09-04 23:57:36,385][14739] Saving new best policy, reward=8.839!
+ [2023-09-04 23:57:41,372][01455] Fps is (10 sec: 3686.4, 60 sec: 3276.8, 300 sec: 2929.3). Total num frames: 1589248. Throughput: 0: 799.4. Samples: 121396. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+ [2023-09-04 23:57:41,375][01455] Avg episode reward: [(0, '8.963')]
+ [2023-09-04 23:57:41,383][14739] Saving new best policy, reward=8.963!
+ [2023-09-04 23:57:44,534][14754] Updated weights for policy 0, policy_version 390 (0.0023)
+ [2023-09-04 23:57:46,378][01455] Fps is (10 sec: 2865.6, 60 sec: 3276.5, 300 sec: 2915.3). Total num frames: 1601536. Throughput: 0: 792.7. Samples: 123228. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+ [2023-09-04 23:57:46,384][01455] Avg episode reward: [(0, '8.799')]
+ [2023-09-04 23:57:51,372][01455] Fps is (10 sec: 2457.6, 60 sec: 3208.5, 300 sec: 2902.3). Total num frames: 1613824. Throughput: 0: 804.4. Samples: 127442. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+ [2023-09-04 23:57:51,380][01455] Avg episode reward: [(0, '8.514')]
+ [2023-09-04 23:57:56,376][01455] Fps is (10 sec: 3277.6, 60 sec: 3140.1, 300 sec: 2935.4). Total num frames: 1634304. Throughput: 0: 826.4. Samples: 133518. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+ [2023-09-04 23:57:56,378][01455] Avg episode reward: [(0, '9.051')]
+ [2023-09-04 23:57:56,468][14739] Saving new best policy, reward=9.051!
+ [2023-09-04 23:57:56,468][14754] Updated weights for policy 0, policy_version 400 (0.0029)
+ [2023-09-04 23:58:01,375][01455] Fps is (10 sec: 3685.6, 60 sec: 3208.4, 300 sec: 2944.7). Total num frames: 1650688. Throughput: 0: 820.8. Samples: 136320. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+ [2023-09-04 23:58:01,377][01455] Avg episode reward: [(0, '9.667')]
+ [2023-09-04 23:58:01,380][14739] Saving new best policy, reward=9.667!
+ [2023-09-04 23:58:06,372][01455] Fps is (10 sec: 2868.1, 60 sec: 3208.6, 300 sec: 2931.9). Total num frames: 1662976. Throughput: 0: 786.4. Samples: 140038. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+ [2023-09-04 23:58:06,377][01455] Avg episode reward: [(0, '9.737')]
+ [2023-09-04 23:58:06,394][14739] Saving new best policy, reward=9.737!
+ [2023-09-04 23:58:10,713][14754] Updated weights for policy 0, policy_version 410 (0.0017)
+ [2023-09-04 23:58:11,372][01455] Fps is (10 sec: 2867.8, 60 sec: 3208.5, 300 sec: 2940.7). Total num frames: 1679360. Throughput: 0: 798.6. Samples: 144414. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+ [2023-09-04 23:58:11,375][01455] Avg episode reward: [(0, '10.161')]
+ [2023-09-04 23:58:11,387][14739] Saving new best policy, reward=10.161!
+ [2023-09-04 23:58:16,372][01455] Fps is (10 sec: 3686.4, 60 sec: 3208.5, 300 sec: 2969.6). Total num frames: 1699840. Throughput: 0: 818.2. Samples: 147278. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+ [2023-09-04 23:58:16,378][01455] Avg episode reward: [(0, '9.869')]
+ [2023-09-04 23:58:21,375][01455] Fps is (10 sec: 3685.4, 60 sec: 3208.6, 300 sec: 2977.1). Total num frames: 1716224. Throughput: 0: 814.7. Samples: 153128. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+ [2023-09-04 23:58:21,378][01455] Avg episode reward: [(0, '9.733')]
+ [2023-09-04 23:58:22,077][14754] Updated weights for policy 0, policy_version 420 (0.0017)
+ [2023-09-04 23:58:26,374][01455] Fps is (10 sec: 2866.6, 60 sec: 3208.4, 300 sec: 2964.7). Total num frames: 1728512. Throughput: 0: 788.7. Samples: 156888. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+ [2023-09-04 23:58:26,380][01455] Avg episode reward: [(0, '10.604')]
+ [2023-09-04 23:58:26,390][14739] Saving new best policy, reward=10.604!
+ [2023-09-04 23:58:31,372][01455] Fps is (10 sec: 2868.0, 60 sec: 3208.5, 300 sec: 2972.0). Total num frames: 1744896. Throughput: 0: 788.4. Samples: 158700. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+ [2023-09-04 23:58:31,375][01455] Avg episode reward: [(0, '10.637')]
+ [2023-09-04 23:58:31,382][14739] Saving new best policy, reward=10.637!
+ [2023-09-04 23:58:35,443][14754] Updated weights for policy 0, policy_version 430 (0.0019)
+ [2023-09-04 23:58:36,376][01455] Fps is (10 sec: 3276.4, 60 sec: 3140.1, 300 sec: 2978.9). Total num frames: 1761280. Throughput: 0: 815.2. Samples: 164130. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+ [2023-09-04 23:58:36,382][01455] Avg episode reward: [(0, '11.111')]
+ [2023-09-04 23:58:36,427][14739] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000431_1765376.pth...
+ [2023-09-04 23:58:36,554][14739] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000270_1105920.pth
+ [2023-09-04 23:58:36,563][14739] Saving new best policy, reward=11.111!
+ [2023-09-04 23:58:41,372][01455] Fps is (10 sec: 3686.4, 60 sec: 3208.5, 300 sec: 3003.7). Total num frames: 1781760. Throughput: 0: 804.4. Samples: 169712. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+ [2023-09-04 23:58:41,380][01455] Avg episode reward: [(0, '11.784')]
+ [2023-09-04 23:58:41,385][14739] Saving new best policy, reward=11.784!
+ [2023-09-04 23:58:46,372][01455] Fps is (10 sec: 3277.9, 60 sec: 3208.8, 300 sec: 2991.9). Total num frames: 1794048. Throughput: 0: 782.9. Samples: 171550. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+ [2023-09-04 23:58:46,377][01455] Avg episode reward: [(0, '11.139')]
+ [2023-09-04 23:58:49,137][14754] Updated weights for policy 0, policy_version 440 (0.0014)
+ [2023-09-04 23:58:51,372][01455] Fps is (10 sec: 2457.6, 60 sec: 3208.5, 300 sec: 2980.5). Total num frames: 1806336. Throughput: 0: 785.0. Samples: 175362. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+ [2023-09-04 23:58:51,381][01455] Avg episode reward: [(0, '11.389')]
+ [2023-09-04 23:58:56,372][01455] Fps is (10 sec: 3276.8, 60 sec: 3208.7, 300 sec: 3003.7). Total num frames: 1826816. Throughput: 0: 812.7. Samples: 180984. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+ [2023-09-04 23:58:56,379][01455] Avg episode reward: [(0, '10.801')]
+ [2023-09-04 23:59:00,326][14754] Updated weights for policy 0, policy_version 450 (0.0026)
+ [2023-09-04 23:59:01,373][01455] Fps is (10 sec: 4096.0, 60 sec: 3276.9, 300 sec: 3026.0). Total num frames: 1847296. Throughput: 0: 815.0. Samples: 183954. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+ [2023-09-04 23:59:01,375][01455] Avg episode reward: [(0, '10.566')]
+ [2023-09-04 23:59:06,372][01455] Fps is (10 sec: 3276.8, 60 sec: 3276.8, 300 sec: 3014.7). Total num frames: 1859584. Throughput: 0: 784.2. Samples: 188416. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+ [2023-09-04 23:59:06,378][01455] Avg episode reward: [(0, '10.313')]
+ [2023-09-04 23:59:11,373][01455] Fps is (10 sec: 2457.6, 60 sec: 3208.5, 300 sec: 3003.7). Total num frames: 1871872. Throughput: 0: 783.4. Samples: 192138. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+ [2023-09-04 23:59:11,380][01455] Avg episode reward: [(0, '10.777')]
+ [2023-09-04 23:59:14,601][14754] Updated weights for policy 0, policy_version 460 (0.0015)
+ [2023-09-04 23:59:16,373][01455] Fps is (10 sec: 2867.1, 60 sec: 3140.2, 300 sec: 3009.0). Total num frames: 1888256. Throughput: 0: 804.5. Samples: 194902. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+ [2023-09-04 23:59:16,376][01455] Avg episode reward: [(0, '11.765')]
+ [2023-09-04 23:59:21,372][01455] Fps is (10 sec: 3686.5, 60 sec: 3208.7, 300 sec: 3029.5). Total num frames: 1908736. Throughput: 0: 818.6. Samples: 200966. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+ [2023-09-04 23:59:21,374][01455] Avg episode reward: [(0, '11.503')]
+ [2023-09-04 23:59:26,373][01455] Fps is (10 sec: 3276.9, 60 sec: 3208.6, 300 sec: 3018.9). Total num frames: 1921024. Throughput: 0: 791.5. Samples: 205328. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+ [2023-09-04 23:59:26,377][01455] Avg episode reward: [(0, '12.490')]
+ [2023-09-04 23:59:26,403][14739] Saving new best policy, reward=12.490!
+ [2023-09-04 23:59:26,410][14754] Updated weights for policy 0, policy_version 470 (0.0021)
+ [2023-09-04 23:59:31,373][01455] Fps is (10 sec: 2457.4, 60 sec: 3140.2, 300 sec: 3008.7). Total num frames: 1933312. Throughput: 0: 790.9. Samples: 207140. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+ [2023-09-04 23:59:31,376][01455] Avg episode reward: [(0, '12.634')]
+ [2023-09-04 23:59:31,383][14739] Saving new best policy, reward=12.634!
+ [2023-09-04 23:59:36,373][01455] Fps is (10 sec: 3276.8, 60 sec: 3208.7, 300 sec: 3028.1). Total num frames: 1953792. Throughput: 0: 810.0. Samples: 211810. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+ [2023-09-04 23:59:36,375][01455] Avg episode reward: [(0, '12.658')]
+ [2023-09-04 23:59:36,392][14739] Saving new best policy, reward=12.658!
+ [2023-09-04 23:59:39,120][14754] Updated weights for policy 0, policy_version 480 (0.0024)
+ [2023-09-04 23:59:41,372][01455] Fps is (10 sec: 4096.3, 60 sec: 3208.5, 300 sec: 3046.8). Total num frames: 1974272. Throughput: 0: 813.6. Samples: 217598. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+ [2023-09-04 23:59:41,374][01455] Avg episode reward: [(0, '13.624')]
+ [2023-09-04 23:59:41,379][14739] Saving new best policy, reward=13.624!
+ [2023-09-04 23:59:46,373][01455] Fps is (10 sec: 3276.5, 60 sec: 3208.5, 300 sec: 3036.7). Total num frames: 1986560. Throughput: 0: 797.5. Samples: 219844. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+ [2023-09-04 23:59:46,378][01455] Avg episode reward: [(0, '13.888')]
+ [2023-09-04 23:59:46,395][14739] Saving new best policy, reward=13.888!
+ [2023-09-04 23:59:51,373][01455] Fps is (10 sec: 2457.4, 60 sec: 3208.5, 300 sec: 3026.9). Total num frames: 1998848. Throughput: 0: 781.1. Samples: 223564. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+ [2023-09-04 23:59:51,379][01455] Avg episode reward: [(0, '13.813')]
+ [2023-09-04 23:59:53,792][14754] Updated weights for policy 0, policy_version 490 (0.0022)
+ [2023-09-04 23:59:56,372][01455] Fps is (10 sec: 2867.5, 60 sec: 3140.3, 300 sec: 3082.4). Total num frames: 2015232. Throughput: 0: 806.6. Samples: 228436. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+ [2023-09-04 23:59:56,375][01455] Avg episode reward: [(0, '14.494')]
+ [2023-09-04 23:59:56,387][14739] Saving new best policy, reward=14.494!
+ [2023-09-05 00:00:01,373][01455] Fps is (10 sec: 3686.7, 60 sec: 3140.3, 300 sec: 3151.8). Total num frames: 2035712. Throughput: 0: 809.3. Samples: 231322. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+ [2023-09-05 00:00:01,376][01455] Avg episode reward: [(0, '13.699')]
+ [2023-09-05 00:00:04,458][14754] Updated weights for policy 0, policy_version 500 (0.0017)
+ [2023-09-05 00:00:06,373][01455] Fps is (10 sec: 3686.3, 60 sec: 3208.5, 300 sec: 3207.4). Total num frames: 2052096. Throughput: 0: 793.9. Samples: 236692. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+ [2023-09-05 00:00:06,377][01455] Avg episode reward: [(0, '13.461')]
+ [2023-09-05 00:00:11,372][01455] Fps is (10 sec: 2867.2, 60 sec: 3208.5, 300 sec: 3193.5). Total num frames: 2064384. Throughput: 0: 779.3. Samples: 240398. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+ [2023-09-05 00:00:11,379][01455] Avg episode reward: [(0, '13.877')]
+ [2023-09-05 00:00:16,372][01455] Fps is (10 sec: 2457.6, 60 sec: 3140.3, 300 sec: 3193.5). Total num frames: 2076672. Throughput: 0: 781.2. Samples: 242294. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+ [2023-09-05 00:00:16,380][01455] Avg episode reward: [(0, '13.705')]
+ [2023-09-05 00:00:18,437][14754] Updated weights for policy 0, policy_version 510 (0.0017)
+ [2023-09-05 00:00:21,372][01455] Fps is (10 sec: 3686.4, 60 sec: 3208.5, 300 sec: 3235.1). Total num frames: 2101248. Throughput: 0: 810.0. Samples: 248258. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+ [2023-09-05 00:00:21,375][01455] Avg episode reward: [(0, '14.528')]
+ [2023-09-05 00:00:21,381][14739] Saving new best policy, reward=14.528!
+ [2023-09-05 00:00:22,422][14739] Stopping Batcher_0...
+ [2023-09-05 00:00:22,423][14739] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000514_2105344.pth...
+ [2023-09-05 00:00:22,424][01455] Component Batcher_0 stopped!
+ [2023-09-05 00:00:22,424][14739] Loop batcher_evt_loop terminating...
+ [2023-09-05 00:00:22,498][01455] Component RolloutWorker_w3 stopped!
+ [2023-09-05 00:00:22,504][14758] Stopping RolloutWorker_w3...
+ [2023-09-05 00:00:22,505][14758] Loop rollout_proc3_evt_loop terminating...
+ [2023-09-05 00:00:22,513][14757] Stopping RolloutWorker_w2...
+ [2023-09-05 00:00:22,514][01455] Component RolloutWorker_w2 stopped!
+ [2023-09-05 00:00:22,520][01455] Component RolloutWorker_w7 stopped!
+ [2023-09-05 00:00:22,524][01455] Component RolloutWorker_w6 stopped!
+ [2023-09-05 00:00:22,523][14761] Stopping RolloutWorker_w6...
+ [2023-09-05 00:00:22,514][14757] Loop rollout_proc2_evt_loop terminating...
+ [2023-09-05 00:00:22,529][14762] Stopping RolloutWorker_w7...
+ [2023-09-05 00:00:22,530][14762] Loop rollout_proc7_evt_loop terminating...
+ [2023-09-05 00:00:22,527][14761] Loop rollout_proc6_evt_loop terminating...
+ [2023-09-05 00:00:22,539][14755] Stopping RolloutWorker_w0...
+ [2023-09-05 00:00:22,540][01455] Component RolloutWorker_w0 stopped!
+ [2023-09-05 00:00:22,544][14755] Loop rollout_proc0_evt_loop terminating...
+ [2023-09-05 00:00:22,553][14754] Weights refcount: 2 0
+ [2023-09-05 00:00:22,565][14754] Stopping InferenceWorker_p0-w0...
+ [2023-09-05 00:00:22,567][14754] Loop inference_proc0-0_evt_loop terminating...
+ [2023-09-05 00:00:22,571][01455] Component InferenceWorker_p0-w0 stopped!
+ [2023-09-05 00:00:22,593][14759] Stopping RolloutWorker_w5...
+ [2023-09-05 00:00:22,589][01455] Component RolloutWorker_w5 stopped!
+ [2023-09-05 00:00:22,602][14759] Loop rollout_proc5_evt_loop terminating...
+ [2023-09-05 00:00:22,613][14739] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000337_1380352.pth
+ [2023-09-05 00:00:22,612][01455] Component RolloutWorker_w1 stopped!
+ [2023-09-05 00:00:22,618][14756] Stopping RolloutWorker_w1...
+ [2023-09-05 00:00:22,619][14760] Stopping RolloutWorker_w4...
+ [2023-09-05 00:00:22,620][01455] Component RolloutWorker_w4 stopped!
+ [2023-09-05 00:00:22,625][14756] Loop rollout_proc1_evt_loop terminating...
+ [2023-09-05 00:00:22,621][14760] Loop rollout_proc4_evt_loop terminating...
+ [2023-09-05 00:00:22,644][14739] Saving new best policy, reward=14.841!
+ [2023-09-05 00:00:22,818][14739] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000514_2105344.pth...
+ [2023-09-05 00:00:22,984][14739] Stopping LearnerWorker_p0...
+ [2023-09-05 00:00:22,990][14739] Loop learner_proc0_evt_loop terminating...
+ [2023-09-05 00:00:22,985][01455] Component LearnerWorker_p0 stopped!
+ [2023-09-05 00:00:22,991][01455] Waiting for process learner_proc0 to stop...
+ [2023-09-05 00:00:24,305][01455] Waiting for process inference_proc0-0 to join...
+ [2023-09-05 00:00:24,308][01455] Waiting for process rollout_proc0 to join...
+ [2023-09-05 00:00:27,411][01455] Waiting for process rollout_proc1 to join...
+ [2023-09-05 00:00:27,417][01455] Waiting for process rollout_proc2 to join...
+ [2023-09-05 00:00:27,419][01455] Waiting for process rollout_proc3 to join...
+ [2023-09-05 00:00:27,420][01455] Waiting for process rollout_proc4 to join...
+ [2023-09-05 00:00:27,422][01455] Waiting for process rollout_proc5 to join...
+ [2023-09-05 00:00:27,426][01455] Waiting for process rollout_proc6 to join...
+ [2023-09-05 00:00:27,427][01455] Waiting for process rollout_proc7 to join...
+ [2023-09-05 00:00:27,429][01455] Batcher 0 profile tree view:
+ batching: 7.1021, releasing_batches: 0.0060
+ [2023-09-05 00:00:27,432][01455] InferenceWorker_p0-w0 profile tree view:
+ wait_policy: 0.0001
+ wait_policy_total: 149.1708
+ update_model: 2.1895
+ weight_update: 0.0017
+ one_step: 0.0103
+ handle_policy_step: 161.4464
+ deserialize: 4.2840, stack: 0.8391, obs_to_device_normalize: 30.8101, forward: 88.9750, send_messages: 7.6103
+ prepare_outputs: 21.0363
+ to_cpu: 12.1065
+ [2023-09-05 00:00:27,434][01455] Learner 0 profile tree view:
+ misc: 0.0014, prepare_batch: 8.0998
+ train: 19.9241
+ epoch_init: 0.0016, minibatch_init: 0.0018, losses_postprocess: 0.1377, kl_divergence: 0.1630, after_optimizer: 1.0744
+ calculate_losses: 6.8144
+ losses_init: 0.0009, forward_head: 0.5335, bptt_initial: 4.3388, tail: 0.2769, advantages_returns: 0.0504, losses: 1.0191
+ bptt: 0.5203
+ bptt_forward_core: 0.5046
+ update: 11.5625
+ clip: 8.1099
+ [2023-09-05 00:00:27,435][01455] RolloutWorker_w0 profile tree view:
+ wait_for_trajectories: 0.1325, enqueue_policy_requests: 42.8028, env_step: 232.9382, overhead: 6.6466, complete_rollouts: 2.2017
+ save_policy_outputs: 5.7486
+ split_output_tensors: 2.5590
+ [2023-09-05 00:00:27,436][01455] RolloutWorker_w7 profile tree view:
+ wait_for_trajectories: 0.0825, enqueue_policy_requests: 40.3923, env_step: 238.9885, overhead: 6.5488, complete_rollouts: 1.7473
+ save_policy_outputs: 5.7772
+ split_output_tensors: 2.9102
+ [2023-09-05 00:00:27,441][01455] Loop Runner_EvtLoop terminating...
+ [2023-09-05 00:00:27,442][01455] Runner profile tree view:
+ main_loop: 348.8191
+ [2023-09-05 00:00:27,443][01455] Collected {0: 2105344}, FPS: 2865.2
+ [2023-09-05 00:00:27,497][01455] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
+ [2023-09-05 00:00:27,500][01455] Overriding arg 'num_workers' with value 1 passed from command line
+ [2023-09-05 00:00:27,503][01455] Adding new argument 'no_render'=True that is not in the saved config file!
+ [2023-09-05 00:00:27,504][01455] Adding new argument 'save_video'=True that is not in the saved config file!
+ [2023-09-05 00:00:27,506][01455] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
+ [2023-09-05 00:00:27,509][01455] Adding new argument 'video_name'=None that is not in the saved config file!
+ [2023-09-05 00:00:27,513][01455] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file!
+ [2023-09-05 00:00:27,516][01455] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
+ [2023-09-05 00:00:27,517][01455] Adding new argument 'push_to_hub'=False that is not in the saved config file!
+ [2023-09-05 00:00:27,520][01455] Adding new argument 'hf_repository'=None that is not in the saved config file!
+ [2023-09-05 00:00:27,525][01455] Adding new argument 'policy_index'=0 that is not in the saved config file!
+ [2023-09-05 00:00:27,526][01455] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
+ [2023-09-05 00:00:27,527][01455] Adding new argument 'train_script'=None that is not in the saved config file!
+ [2023-09-05 00:00:27,529][01455] Adding new argument 'enjoy_script'=None that is not in the saved config file!
+ [2023-09-05 00:00:27,535][01455] Using frameskip 1 and render_action_repeat=4 for evaluation
+ [2023-09-05 00:00:27,595][01455] RunningMeanStd input shape: (3, 72, 128)
+ [2023-09-05 00:00:27,598][01455] RunningMeanStd input shape: (1,)
+ [2023-09-05 00:00:27,617][01455] ConvEncoder: input_channels=3
+ [2023-09-05 00:00:27,687][01455] Conv encoder output size: 512
+ [2023-09-05 00:00:27,690][01455] Policy head output size: 512
+ [2023-09-05 00:00:27,719][01455] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000514_2105344.pth...
+ [2023-09-05 00:00:28,489][01455] Num frames 100...
+ [2023-09-05 00:00:28,692][01455] Num frames 200...
+ [2023-09-05 00:00:28,892][01455] Num frames 300...
+ [2023-09-05 00:00:29,097][01455] Num frames 400...
+ [2023-09-05 00:00:29,298][01455] Num frames 500...
+ [2023-09-05 00:00:29,497][01455] Num frames 600...
+ [2023-09-05 00:00:29,696][01455] Num frames 700...
+ [2023-09-05 00:00:29,887][01455] Avg episode rewards: #0: 13.680, true rewards: #0: 7.680
+ [2023-09-05 00:00:29,889][01455] Avg episode reward: 13.680, avg true_objective: 7.680
+ [2023-09-05 00:00:29,957][01455] Num frames 800...
+ [2023-09-05 00:00:30,096][01455] Num frames 900...
+ [2023-09-05 00:00:30,235][01455] Num frames 1000...
+ [2023-09-05 00:00:30,384][01455] Num frames 1100...
+ [2023-09-05 00:00:30,528][01455] Num frames 1200...
+ [2023-09-05 00:00:30,651][01455] Avg episode rewards: #0: 10.240, true rewards: #0: 6.240
+ [2023-09-05 00:00:30,652][01455] Avg episode reward: 10.240, avg true_objective: 6.240
+ [2023-09-05 00:00:30,730][01455] Num frames 1300...
+ [2023-09-05 00:00:30,864][01455] Num frames 1400...
+ [2023-09-05 00:00:31,014][01455] Num frames 1500...
+ [2023-09-05 00:00:31,147][01455] Num frames 1600...
+ [2023-09-05 00:00:31,281][01455] Num frames 1700...
+ [2023-09-05 00:00:31,419][01455] Num frames 1800...
+ [2023-09-05 00:00:31,557][01455] Num frames 1900...
+ [2023-09-05 00:00:31,689][01455] Num frames 2000...
+ [2023-09-05 00:00:31,830][01455] Num frames 2100...
+ [2023-09-05 00:00:31,969][01455] Num frames 2200...
+ [2023-09-05 00:00:32,107][01455] Num frames 2300...
+ [2023-09-05 00:00:32,242][01455] Num frames 2400...
+ [2023-09-05 00:00:32,388][01455] Num frames 2500...
+ [2023-09-05 00:00:32,520][01455] Num frames 2600...
+ [2023-09-05 00:00:32,660][01455] Num frames 2700...
+ [2023-09-05 00:00:32,798][01455] Avg episode rewards: #0: 17.530, true rewards: #0: 9.197
+ [2023-09-05 00:00:32,800][01455] Avg episode reward: 17.530, avg true_objective: 9.197
+ [2023-09-05 00:00:32,856][01455] Num frames 2800...
+ [2023-09-05 00:00:32,988][01455] Num frames 2900...
+ [2023-09-05 00:00:33,120][01455] Num frames 3000...
+ [2023-09-05 00:00:33,260][01455] Num frames 3100...
+ [2023-09-05 00:00:33,425][01455] Num frames 3200...
+ [2023-09-05 00:00:33,556][01455] Num frames 3300...
+ [2023-09-05 00:00:33,695][01455] Num frames 3400...
+ [2023-09-05 00:00:33,795][01455] Avg episode rewards: #0: 15.828, true rewards: #0: 8.577
+ [2023-09-05 00:00:33,797][01455] Avg episode reward: 15.828, avg true_objective: 8.577
+ [2023-09-05 00:00:33,894][01455] Num frames 3500...
+ [2023-09-05 00:00:34,029][01455] Num frames 3600...
+ [2023-09-05 00:00:34,167][01455] Num frames 3700...
+ [2023-09-05 00:00:34,296][01455] Num frames 3800...
+ [2023-09-05 00:00:34,429][01455] Num frames 3900...
+ [2023-09-05 00:00:34,567][01455] Num frames 4000...
+ [2023-09-05 00:00:34,635][01455] Avg episode rewards: #0: 14.614, true rewards: #0: 8.014
+ [2023-09-05 00:00:34,637][01455] Avg episode reward: 14.614, avg true_objective: 8.014
+ [2023-09-05 00:00:34,767][01455] Num frames 4100...
+ [2023-09-05 00:00:34,901][01455] Num frames 4200...
+ [2023-09-05 00:00:35,045][01455] Num frames 4300...
+ [2023-09-05 00:00:35,178][01455] Num frames 4400...
+ [2023-09-05 00:00:35,313][01455] Num frames 4500...
+ [2023-09-05 00:00:35,455][01455] Num frames 4600...
+ [2023-09-05 00:00:35,596][01455] Num frames 4700...
+ [2023-09-05 00:00:35,754][01455] Avg episode rewards: #0: 14.792, true rewards: #0: 7.958
+ [2023-09-05 00:00:35,755][01455] Avg episode reward: 14.792, avg true_objective: 7.958
+ [2023-09-05 00:00:35,793][01455] Num frames 4800...
+ [2023-09-05 00:00:35,927][01455] Num frames 4900...
+ [2023-09-05 00:00:36,058][01455] Num frames 5000...
+ [2023-09-05 00:00:36,199][01455] Num frames 5100...
+ [2023-09-05 00:00:36,331][01455] Num frames 5200...
+ [2023-09-05 00:00:36,478][01455] Num frames 5300...
+ [2023-09-05 00:00:36,608][01455] Num frames 5400...
+ [2023-09-05 00:00:36,744][01455] Num frames 5500...
+ [2023-09-05 00:00:36,878][01455] Num frames 5600...
1355
+ [2023-09-05 00:00:37,010][01455] Num frames 5700...
1356
+ [2023-09-05 00:00:37,151][01455] Num frames 5800...
1357
+ [2023-09-05 00:00:37,299][01455] Num frames 5900...
1358
+ [2023-09-05 00:00:37,437][01455] Num frames 6000...
1359
+ [2023-09-05 00:00:37,574][01455] Num frames 6100...
1360
+ [2023-09-05 00:00:37,717][01455] Num frames 6200...
1361
+ [2023-09-05 00:00:37,796][01455] Avg episode rewards: #0: 18.164, true rewards: #0: 8.879
1362
+ [2023-09-05 00:00:37,797][01455] Avg episode reward: 18.164, avg true_objective: 8.879
1363
+ [2023-09-05 00:00:37,918][01455] Num frames 6300...
1364
+ [2023-09-05 00:00:38,057][01455] Num frames 6400...
1365
+ [2023-09-05 00:00:38,199][01455] Num frames 6500...
1366
+ [2023-09-05 00:00:38,327][01455] Num frames 6600...
1367
+ [2023-09-05 00:00:38,481][01455] Num frames 6700...
1368
+ [2023-09-05 00:00:38,627][01455] Num frames 6800...
1369
+ [2023-09-05 00:00:38,767][01455] Num frames 6900...
1370
+ [2023-09-05 00:00:38,935][01455] Avg episode rewards: #0: 17.720, true rewards: #0: 8.720
1371
+ [2023-09-05 00:00:38,936][01455] Avg episode reward: 17.720, avg true_objective: 8.720
1372
+ [2023-09-05 00:00:38,972][01455] Num frames 7000...
1373
+ [2023-09-05 00:00:39,104][01455] Num frames 7100...
1374
+ [2023-09-05 00:00:39,246][01455] Num frames 7200...
1375
+ [2023-09-05 00:00:39,384][01455] Num frames 7300...
1376
+ [2023-09-05 00:00:39,530][01455] Num frames 7400...
1377
+ [2023-09-05 00:00:39,679][01455] Num frames 7500...
1378
+ [2023-09-05 00:00:39,815][01455] Num frames 7600...
1379
+ [2023-09-05 00:00:39,951][01455] Num frames 7700...
1380
+ [2023-09-05 00:00:40,150][01455] Num frames 7800...
1381
+ [2023-09-05 00:00:40,337][01455] Num frames 7900...
1382
+ [2023-09-05 00:00:40,527][01455] Num frames 8000...
1383
+ [2023-09-05 00:00:40,716][01455] Num frames 8100...
1384
+ [2023-09-05 00:00:40,910][01455] Num frames 8200...
1385
+ [2023-09-05 00:00:41,116][01455] Num frames 8300...
1386
+ [2023-09-05 00:00:41,308][01455] Num frames 8400...
1387
+ [2023-09-05 00:00:41,500][01455] Num frames 8500...
1388
+ [2023-09-05 00:00:41,695][01455] Num frames 8600...
1389
+ [2023-09-05 00:00:41,808][01455] Avg episode rewards: #0: 19.806, true rewards: #0: 9.583
1390
+ [2023-09-05 00:00:41,811][01455] Avg episode reward: 19.806, avg true_objective: 9.583
1391
+ [2023-09-05 00:00:41,954][01455] Num frames 8700...
1392
+ [2023-09-05 00:00:42,151][01455] Num frames 8800...
1393
+ [2023-09-05 00:00:42,341][01455] Num frames 8900...
1394
+ [2023-09-05 00:00:42,544][01455] Num frames 9000...
1395
+ [2023-09-05 00:00:42,689][01455] Avg episode rewards: #0: 18.441, true rewards: #0: 9.041
1396
+ [2023-09-05 00:00:42,691][01455] Avg episode reward: 18.441, avg true_objective: 9.041
1397
+ [2023-09-05 00:01:41,615][01455] Replay video saved to /content/train_dir/default_experiment/replay.mp4!
+ [2023-09-05 00:01:42,185][01455] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
+ [2023-09-05 00:01:42,190][01455] Overriding arg 'num_workers' with value 1 passed from command line
+ [2023-09-05 00:01:42,193][01455] Adding new argument 'no_render'=True that is not in the saved config file!
+ [2023-09-05 00:01:42,196][01455] Adding new argument 'save_video'=True that is not in the saved config file!
+ [2023-09-05 00:01:42,199][01455] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
+ [2023-09-05 00:01:42,201][01455] Adding new argument 'video_name'=None that is not in the saved config file!
+ [2023-09-05 00:01:42,204][01455] Adding new argument 'max_num_frames'=100000 that is not in the saved config file!
+ [2023-09-05 00:01:42,207][01455] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
+ [2023-09-05 00:01:42,209][01455] Adding new argument 'push_to_hub'=True that is not in the saved config file!
+ [2023-09-05 00:01:42,211][01455] Adding new argument 'hf_repository'='dimitarrskv/rl_course_vizdoom_health_gathering_supreme' that is not in the saved config file!
+ [2023-09-05 00:01:42,212][01455] Adding new argument 'policy_index'=0 that is not in the saved config file!
+ [2023-09-05 00:01:42,214][01455] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
+ [2023-09-05 00:01:42,216][01455] Adding new argument 'train_script'=None that is not in the saved config file!
+ [2023-09-05 00:01:42,218][01455] Adding new argument 'enjoy_script'=None that is not in the saved config file!
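The override lines above show the evaluation script merging the saved config.json with arguments passed on the command line: 'num_workers' replaces a saved value, while the rest are evaluation-only flags absent from the training config. A hedged sketch of the kind of invocation that would produce these overrides follows; the module path is an assumption (it varies across Sample Factory versions), but every flag name appears verbatim in the log.

import subprocess
import sys

# Assumed entry point -- the exact module path depends on the Sample Factory
# version installed; the flags themselves are the ones logged above.
cmd = [
    sys.executable, "-m", "sf_examples.vizdoom.enjoy_vizdoom",
    "--env=doom_health_gathering_supreme",   # assumed env name for this experiment
    "--train_dir=/content/train_dir",
    "--experiment=default_experiment",
    "--num_workers=1",                       # "Overriding arg 'num_workers' with value 1"
    "--no_render",
    "--save_video",
    "--max_num_episodes=10",
    "--push_to_hub",
    "--hf_repository=dimitarrskv/rl_course_vizdoom_health_gathering_supreme",
]
subprocess.run(cmd, check=True)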
+ [2023-09-05 00:01:42,220][01455] Using frameskip 1 and render_action_repeat=4 for evaluation
+ [2023-09-05 00:01:42,277][01455] RunningMeanStd input shape: (3, 72, 128)
+ [2023-09-05 00:01:42,281][01455] RunningMeanStd input shape: (1,)
+ [2023-09-05 00:01:42,301][01455] ConvEncoder: input_channels=3
+ [2023-09-05 00:01:42,365][01455] Conv encoder output size: 512
+ [2023-09-05 00:01:42,368][01455] Policy head output size: 512
+ [2023-09-05 00:01:42,398][01455] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000514_2105344.pth...
+ [2023-09-05 00:01:43,397][01455] Num frames 100...
+ [2023-09-05 00:01:43,585][01455] Num frames 200...
+ [2023-09-05 00:01:43,793][01455] Num frames 300...
+ [2023-09-05 00:01:43,973][01455] Num frames 400...
+ [2023-09-05 00:01:44,116][01455] Avg episode rewards: #0: 5.480, true rewards: #0: 4.480
+ [2023-09-05 00:01:44,118][01455] Avg episode reward: 5.480, avg true_objective: 4.480
+ [2023-09-05 00:01:44,218][01455] Num frames 500...
+ [2023-09-05 00:01:44,403][01455] Num frames 600...
+ [2023-09-05 00:01:44,593][01455] Num frames 700...
+ [2023-09-05 00:01:44,782][01455] Num frames 800...
+ [2023-09-05 00:01:45,016][01455] Avg episode rewards: #0: 5.480, true rewards: #0: 4.480
+ [2023-09-05 00:01:45,018][01455] Avg episode reward: 5.480, avg true_objective: 4.480
+ [2023-09-05 00:01:45,029][01455] Num frames 900...
+ [2023-09-05 00:01:45,229][01455] Num frames 1000...
+ [2023-09-05 00:01:45,429][01455] Num frames 1100...
+ [2023-09-05 00:01:45,613][01455] Num frames 1200...
+ [2023-09-05 00:01:45,828][01455] Num frames 1300...
+ [2023-09-05 00:01:46,012][01455] Num frames 1400...
+ [2023-09-05 00:01:46,195][01455] Num frames 1500...
+ [2023-09-05 00:01:46,395][01455] Num frames 1600...
+ [2023-09-05 00:01:46,590][01455] Avg episode rewards: #0: 7.880, true rewards: #0: 5.547
+ [2023-09-05 00:01:46,592][01455] Avg episode reward: 7.880, avg true_objective: 5.547
+ [2023-09-05 00:01:46,696][01455] Num frames 1700...
+ [2023-09-05 00:01:46,913][01455] Num frames 1800...
+ [2023-09-05 00:01:47,106][01455] Num frames 1900...
+ [2023-09-05 00:01:47,291][01455] Num frames 2000...
+ [2023-09-05 00:01:47,486][01455] Num frames 2100...
+ [2023-09-05 00:01:47,701][01455] Avg episode rewards: #0: 7.690, true rewards: #0: 5.440
+ [2023-09-05 00:01:47,703][01455] Avg episode reward: 7.690, avg true_objective: 5.440
+ [2023-09-05 00:01:47,758][01455] Num frames 2200...
+ [2023-09-05 00:01:47,956][01455] Num frames 2300...
+ [2023-09-05 00:01:48,154][01455] Num frames 2400...
+ [2023-09-05 00:01:48,304][01455] Num frames 2500...
+ [2023-09-05 00:01:48,483][01455] Avg episode rewards: #0: 7.384, true rewards: #0: 5.184
+ [2023-09-05 00:01:48,485][01455] Avg episode reward: 7.384, avg true_objective: 5.184
+ [2023-09-05 00:01:48,503][01455] Num frames 2600...
+ [2023-09-05 00:01:48,638][01455] Num frames 2700...
+ [2023-09-05 00:01:48,774][01455] Num frames 2800...
+ [2023-09-05 00:01:48,918][01455] Num frames 2900...
+ [2023-09-05 00:01:49,063][01455] Num frames 3000...
+ [2023-09-05 00:01:49,202][01455] Num frames 3100...
+ [2023-09-05 00:01:49,307][01455] Avg episode rewards: #0: 7.393, true rewards: #0: 5.227
+ [2023-09-05 00:01:49,309][01455] Avg episode reward: 7.393, avg true_objective: 5.227
+ [2023-09-05 00:01:49,397][01455] Num frames 3200...
+ [2023-09-05 00:01:49,534][01455] Num frames 3300...
+ [2023-09-05 00:01:49,670][01455] Num frames 3400...
+ [2023-09-05 00:01:49,809][01455] Num frames 3500...
+ [2023-09-05 00:01:49,948][01455] Num frames 3600...
+ [2023-09-05 00:01:50,085][01455] Num frames 3700...
+ [2023-09-05 00:01:50,218][01455] Num frames 3800...
+ [2023-09-05 00:01:50,325][01455] Avg episode rewards: #0: 7.914, true rewards: #0: 5.486
+ [2023-09-05 00:01:50,327][01455] Avg episode reward: 7.914, avg true_objective: 5.486
+ [2023-09-05 00:01:50,416][01455] Num frames 3900...
+ [2023-09-05 00:01:50,556][01455] Num frames 4000...
+ [2023-09-05 00:01:50,707][01455] Num frames 4100...
+ [2023-09-05 00:01:50,839][01455] Num frames 4200...
+ [2023-09-05 00:01:50,981][01455] Num frames 4300...
+ [2023-09-05 00:01:51,118][01455] Num frames 4400...
+ [2023-09-05 00:01:51,251][01455] Num frames 4500...
+ [2023-09-05 00:01:51,383][01455] Num frames 4600...
+ [2023-09-05 00:01:51,522][01455] Num frames 4700...
+ [2023-09-05 00:01:51,662][01455] Num frames 4800...
+ [2023-09-05 00:01:51,804][01455] Num frames 4900...
+ [2023-09-05 00:01:51,944][01455] Num frames 5000...
+ [2023-09-05 00:01:52,077][01455] Num frames 5100...
+ [2023-09-05 00:01:52,212][01455] Num frames 5200...
+ [2023-09-05 00:01:52,341][01455] Num frames 5300...
+ [2023-09-05 00:01:52,475][01455] Num frames 5400...
+ [2023-09-05 00:01:52,608][01455] Num frames 5500...
+ [2023-09-05 00:01:52,744][01455] Num frames 5600...
+ [2023-09-05 00:01:52,872][01455] Num frames 5700...
+ [2023-09-05 00:01:53,019][01455] Avg episode rewards: #0: 13.075, true rewards: #0: 7.200
+ [2023-09-05 00:01:53,020][01455] Avg episode reward: 13.075, avg true_objective: 7.200
+ [2023-09-05 00:01:53,084][01455] Num frames 5800...
+ [2023-09-05 00:01:53,217][01455] Num frames 5900...
+ [2023-09-05 00:01:53,360][01455] Num frames 6000...
+ [2023-09-05 00:01:53,553][01455] Num frames 6100...
+ [2023-09-05 00:01:53,757][01455] Num frames 6200...
+ [2023-09-05 00:01:53,964][01455] Num frames 6300...
+ [2023-09-05 00:01:54,181][01455] Num frames 6400...
+ [2023-09-05 00:01:54,384][01455] Num frames 6500...
+ [2023-09-05 00:01:54,582][01455] Num frames 6600...
+ [2023-09-05 00:01:54,750][01455] Avg episode rewards: #0: 13.729, true rewards: #0: 7.396
+ [2023-09-05 00:01:54,757][01455] Avg episode reward: 13.729, avg true_objective: 7.396
+ [2023-09-05 00:01:54,851][01455] Num frames 6700...
+ [2023-09-05 00:01:55,057][01455] Num frames 6800...
+ [2023-09-05 00:01:55,257][01455] Num frames 6900...
+ [2023-09-05 00:01:55,468][01455] Num frames 7000...
+ [2023-09-05 00:01:55,667][01455] Num frames 7100...
+ [2023-09-05 00:01:55,734][01455] Avg episode rewards: #0: 12.904, true rewards: #0: 7.104
+ [2023-09-05 00:01:55,738][01455] Avg episode reward: 12.904, avg true_objective: 7.104
+ [2023-09-05 00:02:42,526][01455] Replay video saved to /content/train_dir/default_experiment/replay.mp4!
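Because the summary lines are running means, the per-episode true rewards of this final 10-episode run can be recovered by inverting the average: reward_k = k * avg_k - (k - 1) * avg_(k-1). A short sketch, using the avg true_objective values logged above (recovered values are exact only up to the log's rounding):

# Running means of "avg true_objective" after each of the 10 episodes above.
avgs = [4.480, 4.480, 5.547, 5.440, 5.184, 5.227, 5.486, 7.200, 7.396, 7.104]

# Invert the running mean to recover individual episode scores.
episodes = [round(k * a - (k - 1) * p, 3)
            for k, (a, p) in enumerate(zip(avgs, [0.0] + avgs[:-1]), start=1)]
print(episodes)  # episode 8's long run (approx 19.2) drives both mean and variance up

mean = sum(episodes) / len(episodes)
std = (sum((e - mean) ** 2 for e in episodes) / len(episodes)) ** 0.5
print(round(mean, 2), round(std, 2))  # approx 7.1 and 4.31 (population std) for this run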