[2023-03-08 13:37:19,359][669675] Saving configuration to /home/michal/programming/deep-rl-course/train_dir/default_experiment/config.json...
[2023-03-08 13:37:19,359][669675] Rollout worker 0 uses device cpu
[2023-03-08 13:37:19,360][669675] Rollout worker 1 uses device cpu
[2023-03-08 13:37:19,360][669675] Rollout worker 2 uses device cpu
[2023-03-08 13:37:19,361][669675] Rollout worker 3 uses device cpu
[2023-03-08 13:37:19,361][669675] Rollout worker 4 uses device cpu
[2023-03-08 13:37:19,361][669675] Rollout worker 5 uses device cpu
[2023-03-08 13:37:19,362][669675] Rollout worker 6 uses device cpu
[2023-03-08 13:37:19,362][669675] Rollout worker 7 uses device cpu
[2023-03-08 13:37:19,411][669675] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-03-08 13:37:19,412][669675] InferenceWorker_p0-w0: min num requests: 2
[2023-03-08 13:37:19,430][669675] Starting all processes...
[2023-03-08 13:37:19,430][669675] Starting process learner_proc0
[2023-03-08 13:37:19,480][669675] Starting all processes...
[2023-03-08 13:37:19,484][669675] Starting process inference_proc0-0
[2023-03-08 13:37:19,485][669675] Starting process rollout_proc0
[2023-03-08 13:37:19,485][669675] Starting process rollout_proc1
[2023-03-08 13:37:19,486][669675] Starting process rollout_proc2
[2023-03-08 13:37:19,486][669675] Starting process rollout_proc3
[2023-03-08 13:37:19,486][669675] Starting process rollout_proc4
[2023-03-08 13:37:19,486][669675] Starting process rollout_proc5
[2023-03-08 13:37:19,486][669675] Starting process rollout_proc6
[2023-03-08 13:37:19,491][669675] Starting process rollout_proc7
[2023-03-08 13:37:20,414][670949] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-03-08 13:37:20,414][670949] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0
[2023-03-08 13:37:20,419][670949] Num visible devices: 1
[2023-03-08 13:37:20,424][670962] Worker 0 uses CPU cores [0, 1]
[2023-03-08 13:37:20,444][670949] Starting seed is not provided
[2023-03-08 13:37:20,444][670949] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-03-08 13:37:20,445][670949] Initializing actor-critic model on device cuda:0
[2023-03-08 13:37:20,445][670949] RunningMeanStd input shape: (3, 72, 128)
[2023-03-08 13:37:20,445][670949] RunningMeanStd input shape: (1,)
[2023-03-08 13:37:20,462][670949] ConvEncoder: input_channels=3
[2023-03-08 13:37:20,464][670965] Worker 2 uses CPU cores [4, 5]
[2023-03-08 13:37:20,471][670963] Worker 1 uses CPU cores [2, 3]
[2023-03-08 13:37:20,556][670949] Conv encoder output size: 512
[2023-03-08 13:37:20,556][670949] Policy head output size: 512
[2023-03-08 13:37:20,565][670949] Created Actor Critic model with architecture:
[2023-03-08 13:37:20,566][670949] ActorCriticSharedWeights(
  (obs_normalizer): ObservationNormalizer(
    (running_mean_std): RunningMeanStdDictInPlace(
      (running_mean_std): ModuleDict(
        (obs): RunningMeanStdInPlace()
      )
    )
  )
  (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace)
  (encoder): VizdoomEncoder(
    (basic_encoder): ConvEncoder(
      (enc): RecursiveScriptModule(
        original_name=ConvEncoderImpl
        (conv_head): RecursiveScriptModule(
          original_name=Sequential
          (0): RecursiveScriptModule(original_name=Conv2d)
          (1): RecursiveScriptModule(original_name=ELU)
          (2): RecursiveScriptModule(original_name=Conv2d)
          (3): RecursiveScriptModule(original_name=ELU)
          (4): RecursiveScriptModule(original_name=Conv2d)
          (5): RecursiveScriptModule(original_name=ELU)
        )
        (mlp_layers): RecursiveScriptModule(
          original_name=Sequential
          (0): RecursiveScriptModule(original_name=Linear)
          (1): RecursiveScriptModule(original_name=ELU)
        )
      )
    )
  )
  (core): ModelCoreRNN(
    (core): GRU(512, 512)
  )
  (decoder): MlpDecoder(
    (mlp): Identity()
  )
  (critic_linear): Linear(in_features=512, out_features=1, bias=True)
  (action_parameterization): ActionParameterizationDefault(
    (distribution_linear): Linear(in_features=512, out_features=5, bias=True)
  )
)
[2023-03-08 13:37:20,578][670985] Worker 7 uses CPU cores [14, 15]
[2023-03-08 13:37:20,579][670967] Worker 4 uses CPU cores [8, 9]
[2023-03-08 13:37:20,583][670969] Worker 6 uses CPU cores [12, 13]
[2023-03-08 13:37:20,601][670964] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-03-08 13:37:20,601][670964] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0
[2023-03-08 13:37:20,608][670964] Num visible devices: 1
[2023-03-08 13:37:20,625][670968] Worker 5 uses CPU cores [10, 11]
[2023-03-08 13:37:20,652][670966] Worker 3 uses CPU cores [6, 7]
[2023-03-08 13:37:21,689][670949] Using optimizer
[2023-03-08 13:37:21,690][670949] No checkpoints found
[2023-03-08 13:37:21,690][670949] Did not load from checkpoint, starting from scratch!
[2023-03-08 13:37:21,690][670949] Initialized policy 0 weights for model version 0
[2023-03-08 13:37:21,692][670949] LearnerWorker_p0 finished initialization!
[2023-03-08 13:37:21,692][670949] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-03-08 13:37:21,762][670964] RunningMeanStd input shape: (3, 72, 128)
[2023-03-08 13:37:21,762][670964] RunningMeanStd input shape: (1,)
[2023-03-08 13:37:21,769][670964] ConvEncoder: input_channels=3
[2023-03-08 13:37:21,834][670964] Conv encoder output size: 512
[2023-03-08 13:37:21,834][670964] Policy head output size: 512
[2023-03-08 13:37:22,875][669675] Inference worker 0-0 is ready!
[2023-03-08 13:37:22,876][669675] All inference workers are ready! Signal rollout workers to start!
[2023-03-08 13:37:22,911][670963] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-03-08 13:37:22,914][670965] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-03-08 13:37:22,918][670969] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-03-08 13:37:22,919][670968] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-03-08 13:37:22,928][670985] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-03-08 13:37:22,928][670966] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-03-08 13:37:22,929][670962] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-03-08 13:37:22,929][670967] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-03-08 13:37:23,122][670962] Decorrelating experience for 0 frames...
[2023-03-08 13:37:23,122][670963] Decorrelating experience for 0 frames...
[2023-03-08 13:37:23,122][670985] Decorrelating experience for 0 frames...
[2023-03-08 13:37:23,123][670965] Decorrelating experience for 0 frames...
[2023-03-08 13:37:23,123][670969] Decorrelating experience for 0 frames...
[2023-03-08 13:37:23,257][670969] Decorrelating experience for 32 frames...
[2023-03-08 13:37:23,258][670965] Decorrelating experience for 32 frames...
[2023-03-08 13:37:23,266][670963] Decorrelating experience for 32 frames...
[2023-03-08 13:37:23,272][669675] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 0. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-08 13:37:23,306][670985] Decorrelating experience for 32 frames...
[2023-03-08 13:37:23,306][670967] Decorrelating experience for 0 frames...
[2023-03-08 13:37:23,346][670966] Decorrelating experience for 0 frames...
[2023-03-08 13:37:23,457][670965] Decorrelating experience for 64 frames...
[2023-03-08 13:37:23,458][670985] Decorrelating experience for 64 frames...
[2023-03-08 13:37:23,466][670967] Decorrelating experience for 32 frames...
[2023-03-08 13:37:23,466][670969] Decorrelating experience for 64 frames...
[2023-03-08 13:37:23,479][670966] Decorrelating experience for 32 frames...
[2023-03-08 13:37:23,544][670962] Decorrelating experience for 32 frames...
[2023-03-08 13:37:23,621][670985] Decorrelating experience for 96 frames...
[2023-03-08 13:37:23,627][670965] Decorrelating experience for 96 frames...
[2023-03-08 13:37:23,635][670967] Decorrelating experience for 64 frames...
[2023-03-08 13:37:23,666][670969] Decorrelating experience for 96 frames...
[2023-03-08 13:37:23,804][670963] Decorrelating experience for 64 frames...
[2023-03-08 13:37:23,829][670966] Decorrelating experience for 64 frames...
[2023-03-08 13:37:23,854][670967] Decorrelating experience for 96 frames...
[2023-03-08 13:37:23,882][670962] Decorrelating experience for 64 frames...
[2023-03-08 13:37:24,039][670968] Decorrelating experience for 0 frames...
[2023-03-08 13:37:24,042][670963] Decorrelating experience for 96 frames...
[2023-03-08 13:37:24,056][670966] Decorrelating experience for 96 frames...
[2023-03-08 13:37:24,099][670962] Decorrelating experience for 96 frames...
[2023-03-08 13:37:24,216][670949] Signal inference workers to stop experience collection...
[2023-03-08 13:37:24,218][670964] InferenceWorker_p0-w0: stopping experience collection
[2023-03-08 13:37:24,232][670968] Decorrelating experience for 32 frames...
[2023-03-08 13:37:24,388][670968] Decorrelating experience for 64 frames...
[2023-03-08 13:37:24,475][670949] Signal inference workers to resume experience collection...
[2023-03-08 13:37:24,475][670964] InferenceWorker_p0-w0: resuming experience collection
[2023-03-08 13:37:24,533][670968] Decorrelating experience for 96 frames...
[2023-03-08 13:37:25,922][670964] Updated weights for policy 0, policy_version 10 (0.0168)
[2023-03-08 13:37:27,103][670964] Updated weights for policy 0, policy_version 20 (0.0006)
[2023-03-08 13:37:28,272][669675] Fps is (10 sec: 23756.6, 60 sec: 23756.6, 300 sec: 23756.6). Total num frames: 118784. Throughput: 0: 2183.2. Samples: 10916. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0)
[2023-03-08 13:37:28,273][669675] Avg episode reward: [(0, '4.545')]
[2023-03-08 13:37:28,277][670949] Saving new best policy, reward=4.545!
[2023-03-08 13:37:28,372][670964] Updated weights for policy 0, policy_version 30 (0.0007)
[2023-03-08 13:37:29,596][670964] Updated weights for policy 0, policy_version 40 (0.0006)
[2023-03-08 13:37:30,793][670964] Updated weights for policy 0, policy_version 50 (0.0006)
[2023-03-08 13:37:32,060][670964] Updated weights for policy 0, policy_version 60 (0.0006)
[2023-03-08 13:37:33,231][670964] Updated weights for policy 0, policy_version 70 (0.0006)
[2023-03-08 13:37:33,272][669675] Fps is (10 sec: 28672.1, 60 sec: 28672.1, 300 sec: 28672.1). Total num frames: 286720. Throughput: 0: 6086.0. Samples: 60860. Policy #0 lag: (min: 0.0, avg: 0.9, max: 2.0)
[2023-03-08 13:37:33,273][669675] Avg episode reward: [(0, '4.602')]
[2023-03-08 13:37:33,274][670949] Saving new best policy, reward=4.602!
[2023-03-08 13:37:34,475][670964] Updated weights for policy 0, policy_version 80 (0.0006)
[2023-03-08 13:37:34,573][670969] EvtLoop [rollout_proc6_evt_loop, process=rollout_proc6] unhandled exception in slot='advance_rollouts' connected to emitter=Emitter(object_id='InferenceWorker_p0-w0', signal_name='advance6'), args=(0, 0)
Traceback (most recent call last):
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/signal_slot/signal_slot.py", line 355, in _process_signal
    slot_callable(*args)
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/sample_factory/algo/sampling/rollout_worker.py", line 241, in advance_rollouts
    complete_rollouts, episodic_stats = runner.advance_rollouts(policy_id, self.timing)
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/sample_factory/algo/sampling/non_batched_sampling.py", line 634, in advance_rollouts
    new_obs, rewards, terminated, truncated, infos = e.step(actions)
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/gym/core.py", line 319, in step
    return self.env.step(action)
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/sample_factory/algo/utils/make_env.py", line 129, in step
    obs, rew, terminated, truncated, info = self.env.step(action)
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/sample_factory/algo/utils/make_env.py", line 115, in step
    obs, rew, terminated, truncated, info = self.env.step(action)
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/sf_examples/vizdoom/doom/wrappers/scenario_wrappers/gathering_reward_shaping.py", line 33, in step
    observation, reward, terminated, truncated, info = self.env.step(action)
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/gym/core.py", line 384, in step
    observation, reward, terminated, truncated, info = self.env.step(action)
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/sample_factory/envs/env_wrappers.py", line 88, in step
    obs, reward, terminated, truncated, info = self.env.step(action)
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/gym/core.py", line 319, in step
    return self.env.step(action)
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/sf_examples/vizdoom/doom/wrappers/multiplayer_stats.py", line 54, in step
    obs, reward, terminated, truncated, info = self.env.step(action)
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/sf_examples/vizdoom/doom/doom_gym.py", line 452, in step
    reward = self.game.make_action(actions_flattened, self.skip_frames)
vizdoom.vizdoom.SignalException: Signal SIGINT received. ViZDoom instance has been closed.
[2023-03-08 13:37:34,575][670969] Unhandled exception Signal SIGINT received. ViZDoom instance has been closed. in evt loop rollout_proc6_evt_loop
[2023-03-08 13:37:34,577][670965] EvtLoop [rollout_proc2_evt_loop, process=rollout_proc2] unhandled exception in slot='advance_rollouts' connected to emitter=Emitter(object_id='InferenceWorker_p0-w0', signal_name='advance2'), args=(1, 0)
Traceback (most recent call last):
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/signal_slot/signal_slot.py", line 355, in _process_signal
    slot_callable(*args)
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/sample_factory/algo/sampling/rollout_worker.py", line 241, in advance_rollouts
    complete_rollouts, episodic_stats = runner.advance_rollouts(policy_id, self.timing)
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/sample_factory/algo/sampling/non_batched_sampling.py", line 634, in advance_rollouts
    new_obs, rewards, terminated, truncated, infos = e.step(actions)
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/gym/core.py", line 319, in step
    return self.env.step(action)
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/sample_factory/algo/utils/make_env.py", line 129, in step
    obs, rew, terminated, truncated, info = self.env.step(action)
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/sample_factory/algo/utils/make_env.py", line 115, in step
    obs, rew, terminated, truncated, info = self.env.step(action)
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/sf_examples/vizdoom/doom/wrappers/scenario_wrappers/gathering_reward_shaping.py", line 33, in step
    observation, reward, terminated, truncated, info = self.env.step(action)
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/gym/core.py", line 384, in step
    observation, reward, terminated, truncated, info = self.env.step(action)
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/sample_factory/envs/env_wrappers.py", line 88, in step
    obs, reward, terminated, truncated, info = self.env.step(action)
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/gym/core.py", line 319, in step
    return self.env.step(action)
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/sf_examples/vizdoom/doom/wrappers/multiplayer_stats.py", line 54, in step
    obs, reward, terminated, truncated, info = self.env.step(action)
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/sf_examples/vizdoom/doom/doom_gym.py", line 452, in step
    reward = self.game.make_action(actions_flattened, self.skip_frames)
vizdoom.vizdoom.SignalException: Signal SIGINT received. ViZDoom instance has been closed.
[2023-03-08 13:37:34,577][670963] EvtLoop [rollout_proc1_evt_loop, process=rollout_proc1] unhandled exception in slot='advance_rollouts' connected to emitter=Emitter(object_id='InferenceWorker_p0-w0', signal_name='advance1'), args=(1, 0)
Traceback (most recent call last):
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/signal_slot/signal_slot.py", line 355, in _process_signal
    slot_callable(*args)
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/sample_factory/algo/sampling/rollout_worker.py", line 241, in advance_rollouts
    complete_rollouts, episodic_stats = runner.advance_rollouts(policy_id, self.timing)
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/sample_factory/algo/sampling/non_batched_sampling.py", line 634, in advance_rollouts
    new_obs, rewards, terminated, truncated, infos = e.step(actions)
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/gym/core.py", line 319, in step
    return self.env.step(action)
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/sample_factory/algo/utils/make_env.py", line 129, in step
    obs, rew, terminated, truncated, info = self.env.step(action)
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/sample_factory/algo/utils/make_env.py", line 115, in step
    obs, rew, terminated, truncated, info = self.env.step(action)
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/sf_examples/vizdoom/doom/wrappers/scenario_wrappers/gathering_reward_shaping.py", line 33, in step
    observation, reward, terminated, truncated, info = self.env.step(action)
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/gym/core.py", line 384, in step
    observation, reward, terminated, truncated, info = self.env.step(action)
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/sample_factory/envs/env_wrappers.py", line 88, in step
    obs, reward, terminated, truncated, info = self.env.step(action)
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/gym/core.py", line 319, in step
    return self.env.step(action)
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/sf_examples/vizdoom/doom/wrappers/multiplayer_stats.py", line 54, in step
    obs, reward, terminated, truncated, info = self.env.step(action)
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/sf_examples/vizdoom/doom/doom_gym.py", line 452, in step
    reward = self.game.make_action(actions_flattened, self.skip_frames)
vizdoom.vizdoom.SignalException: Signal SIGINT received. ViZDoom instance has been closed.
[2023-03-08 13:37:34,579][670965] Unhandled exception Signal SIGINT received. ViZDoom instance has been closed. in evt loop rollout_proc2_evt_loop
[2023-03-08 13:37:34,579][670963] Unhandled exception Signal SIGINT received. ViZDoom instance has been closed. in evt loop rollout_proc1_evt_loop
[2023-03-08 13:37:34,579][670968] EvtLoop [rollout_proc5_evt_loop, process=rollout_proc5] unhandled exception in slot='advance_rollouts' connected to emitter=Emitter(object_id='InferenceWorker_p0-w0', signal_name='advance5'), args=(0, 0)
Traceback (most recent call last):
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/signal_slot/signal_slot.py", line 355, in _process_signal
    slot_callable(*args)
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/sample_factory/algo/sampling/rollout_worker.py", line 241, in advance_rollouts
    complete_rollouts, episodic_stats = runner.advance_rollouts(policy_id, self.timing)
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/sample_factory/algo/sampling/non_batched_sampling.py", line 634, in advance_rollouts
    new_obs, rewards, terminated, truncated, infos = e.step(actions)
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/gym/core.py", line 319, in step
    return self.env.step(action)
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/sample_factory/algo/utils/make_env.py", line 129, in step
    obs, rew, terminated, truncated, info = self.env.step(action)
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/sample_factory/algo/utils/make_env.py", line 115, in step
    obs, rew, terminated, truncated, info = self.env.step(action)
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/sf_examples/vizdoom/doom/wrappers/scenario_wrappers/gathering_reward_shaping.py", line 33, in step
    observation, reward, terminated, truncated, info = self.env.step(action)
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/gym/core.py", line 384, in step
    observation, reward, terminated, truncated, info = self.env.step(action)
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/sample_factory/envs/env_wrappers.py", line 88, in step
    obs, reward, terminated, truncated, info = self.env.step(action)
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/gym/core.py", line 319, in step
    return self.env.step(action)
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/sf_examples/vizdoom/doom/wrappers/multiplayer_stats.py", line 54, in step
    obs, reward, terminated, truncated, info = self.env.step(action)
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/sf_examples/vizdoom/doom/doom_gym.py", line 452, in step
    reward = self.game.make_action(actions_flattened, self.skip_frames)
vizdoom.vizdoom.SignalException: Signal SIGINT received. ViZDoom instance has been closed.
[2023-03-08 13:37:34,580][670985] EvtLoop [rollout_proc7_evt_loop, process=rollout_proc7] unhandled exception in slot='advance_rollouts' connected to emitter=Emitter(object_id='InferenceWorker_p0-w0', signal_name='advance7'), args=(0, 0)
Traceback (most recent call last):
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/signal_slot/signal_slot.py", line 355, in _process_signal
    slot_callable(*args)
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/sample_factory/algo/sampling/rollout_worker.py", line 241, in advance_rollouts
    complete_rollouts, episodic_stats = runner.advance_rollouts(policy_id, self.timing)
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/sample_factory/algo/sampling/non_batched_sampling.py", line 634, in advance_rollouts
    new_obs, rewards, terminated, truncated, infos = e.step(actions)
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/gym/core.py", line 319, in step
    return self.env.step(action)
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/sample_factory/algo/utils/make_env.py", line 129, in step
    obs, rew, terminated, truncated, info = self.env.step(action)
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/sample_factory/algo/utils/make_env.py", line 115, in step
    obs, rew, terminated, truncated, info = self.env.step(action)
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/sf_examples/vizdoom/doom/wrappers/scenario_wrappers/gathering_reward_shaping.py", line 33, in step
    observation, reward, terminated, truncated, info = self.env.step(action)
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/gym/core.py", line 384, in step
    observation, reward, terminated, truncated, info = self.env.step(action)
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/sample_factory/envs/env_wrappers.py", line 88, in step
    obs, reward, terminated, truncated, info = self.env.step(action)
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/gym/core.py", line 319, in step
    return self.env.step(action)
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/sf_examples/vizdoom/doom/wrappers/multiplayer_stats.py", line 54, in step
    obs, reward, terminated, truncated, info = self.env.step(action)
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/sf_examples/vizdoom/doom/doom_gym.py", line 452, in step
    reward = self.game.make_action(actions_flattened, self.skip_frames)
vizdoom.vizdoom.SignalException: Signal SIGINT received. ViZDoom instance has been closed.
[2023-03-08 13:37:34,581][670968] Unhandled exception Signal SIGINT received. ViZDoom instance has been closed. in evt loop rollout_proc5_evt_loop
[2023-03-08 13:37:34,581][670985] Unhandled exception Signal SIGINT received. ViZDoom instance has been closed. in evt loop rollout_proc7_evt_loop
[2023-03-08 13:37:34,580][669675] Keyboard interrupt detected in the event loop EvtLoop [Runner_EvtLoop, process=main process 669675], exiting...
[2023-03-08 13:37:34,580][670966] EvtLoop [rollout_proc3_evt_loop, process=rollout_proc3] unhandled exception in slot='advance_rollouts' connected to emitter=Emitter(object_id='InferenceWorker_p0-w0', signal_name='advance3'), args=(0, 0)
Traceback (most recent call last):
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/signal_slot/signal_slot.py", line 355, in _process_signal
    slot_callable(*args)
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/sample_factory/algo/sampling/rollout_worker.py", line 241, in advance_rollouts
    complete_rollouts, episodic_stats = runner.advance_rollouts(policy_id, self.timing)
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/sample_factory/algo/sampling/non_batched_sampling.py", line 634, in advance_rollouts
    new_obs, rewards, terminated, truncated, infos = e.step(actions)
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/gym/core.py", line 319, in step
    return self.env.step(action)
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/sample_factory/algo/utils/make_env.py", line 129, in step
    obs, rew, terminated, truncated, info = self.env.step(action)
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/sample_factory/algo/utils/make_env.py", line 115, in step
    obs, rew, terminated, truncated, info = self.env.step(action)
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/sf_examples/vizdoom/doom/wrappers/scenario_wrappers/gathering_reward_shaping.py", line 33, in step
    observation, reward, terminated, truncated, info = self.env.step(action)
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/gym/core.py", line 384, in step
    observation, reward, terminated, truncated, info = self.env.step(action)
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/sample_factory/envs/env_wrappers.py", line 88, in step
    obs, reward, terminated, truncated, info = self.env.step(action)
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/gym/core.py", line 319, in step
    return self.env.step(action)
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/sf_examples/vizdoom/doom/wrappers/multiplayer_stats.py", line 54, in step
    obs, reward, terminated, truncated, info = self.env.step(action)
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/sf_examples/vizdoom/doom/doom_gym.py", line 452, in step
    reward = self.game.make_action(actions_flattened, self.skip_frames)
vizdoom.vizdoom.SignalException: Signal SIGINT received. ViZDoom instance has been closed.
[2023-03-08 13:37:34,582][670966] Unhandled exception Signal SIGINT received. ViZDoom instance has been closed. in evt loop rollout_proc3_evt_loop
[2023-03-08 13:37:34,582][670967] EvtLoop [rollout_proc4_evt_loop, process=rollout_proc4] unhandled exception in slot='advance_rollouts' connected to emitter=Emitter(object_id='InferenceWorker_p0-w0', signal_name='advance4'), args=(1, 0)
Traceback (most recent call last):
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/signal_slot/signal_slot.py", line 355, in _process_signal
    slot_callable(*args)
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/sample_factory/algo/sampling/rollout_worker.py", line 241, in advance_rollouts
    complete_rollouts, episodic_stats = runner.advance_rollouts(policy_id, self.timing)
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/sample_factory/algo/sampling/non_batched_sampling.py", line 634, in advance_rollouts
    new_obs, rewards, terminated, truncated, infos = e.step(actions)
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/gym/core.py", line 319, in step
    return self.env.step(action)
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/sample_factory/algo/utils/make_env.py", line 129, in step
    obs, rew, terminated, truncated, info = self.env.step(action)
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/sample_factory/algo/utils/make_env.py", line 115, in step
    obs, rew, terminated, truncated, info = self.env.step(action)
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/sf_examples/vizdoom/doom/wrappers/scenario_wrappers/gathering_reward_shaping.py", line 33, in step
    observation, reward, terminated, truncated, info = self.env.step(action)
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/gym/core.py", line 384, in step
    observation, reward, terminated, truncated, info = self.env.step(action)
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/sample_factory/envs/env_wrappers.py", line 88, in step
    obs, reward, terminated, truncated, info = self.env.step(action)
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/gym/core.py", line 319, in step
    return self.env.step(action)
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/sf_examples/vizdoom/doom/wrappers/multiplayer_stats.py", line 54, in step
    obs, reward, terminated, truncated, info = self.env.step(action)
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/sf_examples/vizdoom/doom/doom_gym.py", line 452, in step
    reward = self.game.make_action(actions_flattened, self.skip_frames)
vizdoom.vizdoom.SignalException: Signal SIGINT received. ViZDoom instance has been closed.
[2023-03-08 13:37:34,585][670949] Stopping Batcher_0...
[2023-03-08 13:37:34,585][670949] Loop batcher_evt_loop terminating...
[2023-03-08 13:37:34,584][669675] Runner profile tree view:
main_loop: 15.1547
[2023-03-08 13:37:34,587][670967] Unhandled exception Signal SIGINT received. ViZDoom instance has been closed. in evt loop rollout_proc4_evt_loop
[2023-03-08 13:37:34,586][669675] Collected {0: 327680}, FPS: 21622.4
[2023-03-08 13:37:34,597][670949] Saving /home/michal/programming/deep-rl-course/train_dir/default_experiment/checkpoint_p0/checkpoint_000000081_331776.pth...
[2023-03-08 13:37:34,600][670962] EvtLoop [rollout_proc0_evt_loop, process=rollout_proc0] unhandled exception in slot='advance_rollouts' connected to emitter=Emitter(object_id='InferenceWorker_p0-w0', signal_name='advance0'), args=(1, 0)
Traceback (most recent call last):
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/signal_slot/signal_slot.py", line 355, in _process_signal
    slot_callable(*args)
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/sample_factory/algo/sampling/rollout_worker.py", line 241, in advance_rollouts
    complete_rollouts, episodic_stats = runner.advance_rollouts(policy_id, self.timing)
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/sample_factory/algo/sampling/non_batched_sampling.py", line 634, in advance_rollouts
    new_obs, rewards, terminated, truncated, infos = e.step(actions)
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/gym/core.py", line 319, in step
    return self.env.step(action)
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/sample_factory/algo/utils/make_env.py", line 129, in step
    obs, rew, terminated, truncated, info = self.env.step(action)
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/sample_factory/algo/utils/make_env.py", line 115, in step
    obs, rew, terminated, truncated, info = self.env.step(action)
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/sf_examples/vizdoom/doom/wrappers/scenario_wrappers/gathering_reward_shaping.py", line 33, in step
    observation, reward, terminated, truncated, info = self.env.step(action)
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/gym/core.py", line 384, in step
    observation, reward, terminated, truncated, info = self.env.step(action)
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/sample_factory/envs/env_wrappers.py", line 88, in step
    obs, reward, terminated, truncated, info = self.env.step(action)
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/gym/core.py", line 319, in step
    return self.env.step(action)
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/sf_examples/vizdoom/doom/wrappers/multiplayer_stats.py", line 54, in step
    obs, reward, terminated, truncated, info = self.env.step(action)
  File "/home/michal/anaconda3/envs/deep-rl/lib/python3.9/site-packages/sf_examples/vizdoom/doom/doom_gym.py", line 452, in step
    reward = self.game.make_action(actions_flattened, self.skip_frames)
vizdoom.vizdoom.SignalException: Signal SIGINT received. ViZDoom instance has been closed.
[2023-03-08 13:37:34,602][670962] Unhandled exception Signal SIGINT received. ViZDoom instance has been closed. in evt loop rollout_proc0_evt_loop
[2023-03-08 13:37:34,654][670964] Weights refcount: 2 0
[2023-03-08 13:37:34,661][670964] Stopping InferenceWorker_p0-w0...
[2023-03-08 13:37:34,662][670964] Loop inference_proc0-0_evt_loop terminating...
[2023-03-08 13:37:34,697][670949] Stopping LearnerWorker_p0...
[2023-03-08 13:37:34,697][670949] Loop learner_proc0_evt_loop terminating...
[2023-03-08 14:31:27,767][671990] Saving configuration to /home/michal/programming/deep-rl-course/train_dir/default_experiment/config.json...
[2023-03-08 14:31:27,768][671990] Rollout worker 0 uses device cpu
[2023-03-08 14:31:27,768][671990] Rollout worker 1 uses device cpu
[2023-03-08 14:31:27,768][671990] Rollout worker 2 uses device cpu
[2023-03-08 14:31:27,768][671990] Rollout worker 3 uses device cpu
[2023-03-08 14:31:27,769][671990] Rollout worker 4 uses device cpu
[2023-03-08 14:31:27,769][671990] Rollout worker 5 uses device cpu
[2023-03-08 14:31:27,769][671990] Rollout worker 6 uses device cpu
[2023-03-08 14:31:27,770][671990] Rollout worker 7 uses device cpu
[2023-03-08 14:31:27,820][671990] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-03-08 14:31:27,821][671990] InferenceWorker_p0-w0: min num requests: 2
[2023-03-08 14:31:27,841][671990] Starting all processes...
[2023-03-08 14:31:27,842][671990] Starting process learner_proc0
[2023-03-08 14:31:27,891][671990] Starting all processes...
[2023-03-08 14:31:27,895][671990] Starting process inference_proc0-0
[2023-03-08 14:31:27,896][671990] Starting process rollout_proc0
[2023-03-08 14:31:27,896][671990] Starting process rollout_proc1
[2023-03-08 14:31:27,896][671990] Starting process rollout_proc2
[2023-03-08 14:31:27,897][671990] Starting process rollout_proc3
[2023-03-08 14:31:27,899][671990] Starting process rollout_proc4
[2023-03-08 14:31:27,899][671990] Starting process rollout_proc5
[2023-03-08 14:31:27,899][671990] Starting process rollout_proc6
[2023-03-08 14:31:27,904][671990] Starting process rollout_proc7
[2023-03-08 14:31:28,692][682716] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-03-08 14:31:28,692][682716] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0
[2023-03-08 14:31:28,705][682716] Num visible devices: 1
[2023-03-08 14:31:28,732][682716] Starting seed is not provided
[2023-03-08 14:31:28,732][682716] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-03-08 14:31:28,732][682716] Initializing actor-critic model on device cuda:0
[2023-03-08 14:31:28,733][682716] RunningMeanStd input shape: (3, 72, 128)
[2023-03-08 14:31:28,733][682716] RunningMeanStd input shape: (1,)
[2023-03-08 14:31:28,745][682716] ConvEncoder: input_channels=3
[2023-03-08 14:31:28,799][682729] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-03-08 14:31:28,799][682729] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0
[2023-03-08 14:31:28,803][682729] Num visible devices: 1
[2023-03-08 14:31:28,820][682747] Worker 3 uses CPU cores [6, 7]
[2023-03-08 14:31:28,826][682746] Worker 1 uses CPU cores [2, 3]
[2023-03-08 14:31:28,828][682749] Worker 4 uses CPU cores [8, 9]
[2023-03-08 14:31:28,829][682730] Worker 0 uses CPU cores [0, 1]
[2023-03-08 14:31:28,829][682748] Worker 2 uses CPU cores [4, 5]
[2023-03-08 14:31:28,845][682716] Conv encoder output size: 512
[2023-03-08 14:31:28,846][682716] Policy head output size: 512
[2023-03-08 14:31:28,854][682716] Created Actor Critic model with architecture:
[2023-03-08 14:31:28,855][682716] ActorCriticSharedWeights(
  (obs_normalizer): ObservationNormalizer(
    (running_mean_std): RunningMeanStdDictInPlace(
      (running_mean_std): ModuleDict(
        (obs): RunningMeanStdInPlace()
      )
    )
  )
  (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace)
  (encoder): VizdoomEncoder(
    (basic_encoder): ConvEncoder(
      (enc): RecursiveScriptModule(
        original_name=ConvEncoderImpl
        (conv_head): RecursiveScriptModule(
          original_name=Sequential
          (0): RecursiveScriptModule(original_name=Conv2d)
          (1): RecursiveScriptModule(original_name=ELU)
          (2): RecursiveScriptModule(original_name=Conv2d)
          (3): RecursiveScriptModule(original_name=ELU)
          (4): RecursiveScriptModule(original_name=Conv2d)
          (5): RecursiveScriptModule(original_name=ELU)
        )
        (mlp_layers): RecursiveScriptModule(
          original_name=Sequential
          (0): RecursiveScriptModule(original_name=Linear)
          (1): RecursiveScriptModule(original_name=ELU)
        )
      )
    )
  )
  (core): ModelCoreRNN(
    (core): GRU(512, 512)
  )
  (decoder): MlpDecoder(
    (mlp): Identity()
  )
  (critic_linear): Linear(in_features=512, out_features=1, bias=True)
  (action_parameterization): ActionParameterizationDefault(
    (distribution_linear): Linear(in_features=512, out_features=5, bias=True)
  )
)
[2023-03-08 14:31:28,857][682752] Worker 7 uses CPU cores [14, 15]
[2023-03-08 14:31:28,878][682751] Worker 6 uses CPU cores [12, 13]
[2023-03-08 14:31:28,925][682750] Worker 5 uses CPU cores [10, 11]
[2023-03-08 14:31:29,909][682716] Using optimizer
[2023-03-08 14:31:29,910][682716] Loading state from checkpoint /home/michal/programming/deep-rl-course/train_dir/default_experiment/checkpoint_p0/checkpoint_000000081_331776.pth...
[2023-03-08 14:31:29,925][682716] Loading model from checkpoint
[2023-03-08 14:31:29,927][682716] Loaded experiment state at self.train_step=81, self.env_steps=331776
[2023-03-08 14:31:29,928][682716] Initialized policy 0 weights for model version 81
[2023-03-08 14:31:29,929][682716] LearnerWorker_p0 finished initialization!
[2023-03-08 14:31:29,929][682716] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-03-08 14:31:29,991][682729] RunningMeanStd input shape: (3, 72, 128)
[2023-03-08 14:31:29,992][682729] RunningMeanStd input shape: (1,)
[2023-03-08 14:31:29,998][682729] ConvEncoder: input_channels=3
[2023-03-08 14:31:30,058][682729] Conv encoder output size: 512
[2023-03-08 14:31:30,058][682729] Policy head output size: 512
[2023-03-08 14:31:30,993][671990] Inference worker 0-0 is ready!
[2023-03-08 14:31:30,994][671990] All inference workers are ready! Signal rollout workers to start!
[2023-03-08 14:31:31,028][682747] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-03-08 14:31:31,028][682746] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-03-08 14:31:31,029][682750] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-03-08 14:31:31,029][682752] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-03-08 14:31:31,040][682730] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-03-08 14:31:31,040][682749] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-03-08 14:31:31,040][682751] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-03-08 14:31:31,041][682748] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-03-08 14:31:31,217][682752] Decorrelating experience for 0 frames...
[2023-03-08 14:31:31,218][682748] Decorrelating experience for 0 frames...
[2023-03-08 14:31:31,220][682750] Decorrelating experience for 0 frames...
[2023-03-08 14:31:31,220][682746] Decorrelating experience for 0 frames...
[2023-03-08 14:31:31,345][682746] Decorrelating experience for 32 frames...
[2023-03-08 14:31:31,345][682752] Decorrelating experience for 32 frames...
[2023-03-08 14:31:31,345][682750] Decorrelating experience for 32 frames...
[2023-03-08 14:31:31,351][682751] Decorrelating experience for 0 frames...
[2023-03-08 14:31:31,409][682748] Decorrelating experience for 32 frames...
[2023-03-08 14:31:31,507][682751] Decorrelating experience for 32 frames...
[2023-03-08 14:31:31,508][682750] Decorrelating experience for 64 frames...
[2023-03-08 14:31:31,553][682747] Decorrelating experience for 0 frames...
[2023-03-08 14:31:31,563][682752] Decorrelating experience for 64 frames...
[2023-03-08 14:31:31,568][682746] Decorrelating experience for 64 frames...
[2023-03-08 14:31:31,661][682751] Decorrelating experience for 64 frames...
[2023-03-08 14:31:31,710][682752] Decorrelating experience for 96 frames...
[2023-03-08 14:31:31,710][682730] Decorrelating experience for 0 frames...
[2023-03-08 14:31:31,711][682750] Decorrelating experience for 96 frames...
[2023-03-08 14:31:31,725][682746] Decorrelating experience for 96 frames...
[2023-03-08 14:31:31,800][671990] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 331776. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-08 14:31:31,830][682751] Decorrelating experience for 96 frames...
[2023-03-08 14:31:31,864][682749] Decorrelating experience for 0 frames...
[2023-03-08 14:31:31,866][682747] Decorrelating experience for 32 frames...
[2023-03-08 14:31:32,054][682747] Decorrelating experience for 64 frames...
[2023-03-08 14:31:32,072][682730] Decorrelating experience for 32 frames...
[2023-03-08 14:31:32,077][682749] Decorrelating experience for 32 frames...
[2023-03-08 14:31:32,211][682716] Signal inference workers to stop experience collection...
[2023-03-08 14:31:32,214][682729] InferenceWorker_p0-w0: stopping experience collection
[2023-03-08 14:31:32,246][682747] Decorrelating experience for 96 frames...
[2023-03-08 14:31:32,253][682730] Decorrelating experience for 64 frames...
[2023-03-08 14:31:32,256][682749] Decorrelating experience for 64 frames...
[2023-03-08 14:31:32,286][682748] Decorrelating experience for 64 frames...
[2023-03-08 14:31:32,372][682716] Signal inference workers to resume experience collection...
[2023-03-08 14:31:32,372][682729] InferenceWorker_p0-w0: resuming experience collection
[2023-03-08 14:31:32,434][682748] Decorrelating experience for 96 frames...
[2023-03-08 14:31:32,453][682749] Decorrelating experience for 96 frames...
[2023-03-08 14:31:32,454][682730] Decorrelating experience for 96 frames...
[2023-03-08 14:31:33,655][682729] Updated weights for policy 0, policy_version 91 (0.0140)
[2023-03-08 14:31:34,792][682729] Updated weights for policy 0, policy_version 101 (0.0005)
[2023-03-08 14:31:35,898][682729] Updated weights for policy 0, policy_version 111 (0.0005)
[2023-03-08 14:31:36,800][671990] Fps is (10 sec: 31129.4, 60 sec: 31129.4, 300 sec: 31129.4). Total num frames: 487424. Throughput: 0: 2831.6. Samples: 14158. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-03-08 14:31:36,801][671990] Avg episode reward: [(0, '4.411')]
[2023-03-08 14:31:36,994][682729] Updated weights for policy 0, policy_version 121 (0.0006)
[2023-03-08 14:31:38,163][682729] Updated weights for policy 0, policy_version 131 (0.0006)
[2023-03-08 14:31:39,276][682729] Updated weights for policy 0, policy_version 141 (0.0006)
[2023-03-08 14:31:40,399][682729] Updated weights for policy 0, policy_version 151 (0.0005)
[2023-03-08 14:31:41,488][682729] Updated weights for policy 0, policy_version 161 (0.0006)
[2023-03-08 14:31:41,800][671990] Fps is (10 sec: 33587.2, 60 sec: 33587.2, 300 sec: 33587.2). Total num frames: 667648. Throughput: 0: 6863.0. Samples: 68630. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-03-08 14:31:41,801][671990] Avg episode reward: [(0, '4.328')]
[2023-03-08 14:31:42,632][682729] Updated weights for policy 0, policy_version 171 (0.0006)
[2023-03-08 14:31:43,767][682729] Updated weights for policy 0, policy_version 181 (0.0006)
[2023-03-08 14:31:44,894][682729] Updated weights for policy 0, policy_version 191 (0.0005)
[2023-03-08 14:31:46,000][682729] Updated weights for policy 0, policy_version 201 (0.0006)
[2023-03-08 14:31:46,800][671990] Fps is (10 sec: 36454.3, 60 sec: 34679.3, 300 sec: 34679.3). Total num frames: 851968. Throughput: 0: 8238.5. Samples: 123578. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0)
[2023-03-08 14:31:46,801][671990] Avg episode reward: [(0, '4.684')]
[2023-03-08 14:31:46,804][682716] Saving new best policy, reward=4.684!
[2023-03-08 14:31:47,159][682729] Updated weights for policy 0, policy_version 211 (0.0006)
[2023-03-08 14:31:47,814][671990] Heartbeat connected on Batcher_0
[2023-03-08 14:31:47,824][671990] Heartbeat connected on InferenceWorker_p0-w0
[2023-03-08 14:31:47,825][671990] Heartbeat connected on RolloutWorker_w0
[2023-03-08 14:31:47,828][671990] Heartbeat connected on RolloutWorker_w1
[2023-03-08 14:31:47,830][671990] Heartbeat connected on RolloutWorker_w2
[2023-03-08 14:31:47,832][671990] Heartbeat connected on RolloutWorker_w3
[2023-03-08 14:31:47,835][671990] Heartbeat connected on RolloutWorker_w4
[2023-03-08 14:31:47,837][671990] Heartbeat connected on RolloutWorker_w5
[2023-03-08 14:31:47,839][671990] Heartbeat connected on RolloutWorker_w6
[2023-03-08 14:31:47,840][671990] Heartbeat connected on LearnerWorker_p0
[2023-03-08 14:31:47,845][671990] Heartbeat connected on RolloutWorker_w7
[2023-03-08 14:31:48,287][682729] Updated weights for policy 0, policy_version 221 (0.0006)
[2023-03-08 14:31:49,409][682729] Updated weights for policy 0, policy_version 231 (0.0006)
[2023-03-08 14:31:50,502][682729] Updated weights for policy 0, policy_version 241 (0.0005)
[2023-03-08 14:31:51,642][682729] Updated weights for policy 0, policy_version 251 (0.0005)
[2023-03-08 14:31:51,800][671990] Fps is (10 sec: 36453.9, 60 sec: 35020.6, 300 sec: 35020.6). Total num frames: 1032192. Throughput: 0: 7529.3. Samples: 150588. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0)
[2023-03-08 14:31:51,801][671990] Avg episode reward: [(0, '4.603')]
[2023-03-08 14:31:52,750][682729] Updated weights for policy 0, policy_version 261 (0.0006)
[2023-03-08 14:31:53,928][682729] Updated weights for policy 0, policy_version 271 (0.0005)
[2023-03-08 14:31:55,044][682729] Updated weights for policy 0, policy_version 281 (0.0006)
[2023-03-08 14:31:56,141][682729] Updated weights for policy 0, policy_version 291 (0.0006)
[2023-03-08 14:31:56,800][671990] Fps is (10 sec: 36044.9, 60 sec: 35225.5, 300 sec: 35225.5). Total num frames: 1212416. Throughput: 0: 8215.5. Samples: 205388. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0)
[2023-03-08 14:31:56,801][671990] Avg episode reward: [(0, '4.862')]
[2023-03-08 14:31:56,808][682716] Saving new best policy, reward=4.862!
[2023-03-08 14:31:57,261][682729] Updated weights for policy 0, policy_version 301 (0.0006)
[2023-03-08 14:31:58,382][682729] Updated weights for policy 0, policy_version 311 (0.0005)
[2023-03-08 14:31:59,477][682729] Updated weights for policy 0, policy_version 321 (0.0005)
[2023-03-08 14:32:00,577][682729] Updated weights for policy 0, policy_version 331 (0.0006)
[2023-03-08 14:32:01,700][682729] Updated weights for policy 0, policy_version 341 (0.0005)
[2023-03-08 14:32:01,800][671990] Fps is (10 sec: 36454.7, 60 sec: 35498.6, 300 sec: 35498.6). Total num frames: 1396736. Throughput: 0: 8691.4. Samples: 260742. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0)
[2023-03-08 14:32:01,802][671990] Avg episode reward: [(0, '5.818')]
[2023-03-08 14:32:01,815][682716] Saving new best policy, reward=5.818!
[2023-03-08 14:32:02,784][682729] Updated weights for policy 0, policy_version 351 (0.0006)
[2023-03-08 14:32:03,870][682729] Updated weights for policy 0, policy_version 361 (0.0006)
[2023-03-08 14:32:04,975][682729] Updated weights for policy 0, policy_version 371 (0.0006)
[2023-03-08 14:32:06,069][682729] Updated weights for policy 0, policy_version 381 (0.0005)
[2023-03-08 14:32:06,800][671990] Fps is (10 sec: 37273.5, 60 sec: 35810.7, 300 sec: 35810.7). Total num frames: 1585152. Throughput: 0: 8248.6. Samples: 288702. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-03-08 14:32:06,801][671990] Avg episode reward: [(0, '6.887')]
[2023-03-08 14:32:06,819][682716] Saving new best policy, reward=6.887!
[2023-03-08 14:32:07,149][682729] Updated weights for policy 0, policy_version 391 (0.0006)
[2023-03-08 14:32:08,242][682729] Updated weights for policy 0, policy_version 401 (0.0006)
[2023-03-08 14:32:09,314][682729] Updated weights for policy 0, policy_version 411 (0.0005)
[2023-03-08 14:32:10,390][682729] Updated weights for policy 0, policy_version 421 (0.0005)
[2023-03-08 14:32:11,471][682729] Updated weights for policy 0, policy_version 431 (0.0005)
[2023-03-08 14:32:11,800][671990] Fps is (10 sec: 37683.4, 60 sec: 36044.8, 300 sec: 36044.8). Total num frames: 1773568. Throughput: 0: 8632.1. Samples: 345284. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0)
[2023-03-08 14:32:11,801][671990] Avg episode reward: [(0, '8.134')]
[2023-03-08 14:32:11,808][682716] Saving new best policy, reward=8.134!
[2023-03-08 14:32:12,592][682729] Updated weights for policy 0, policy_version 441 (0.0005)
[2023-03-08 14:32:13,668][682729] Updated weights for policy 0, policy_version 451 (0.0006)
[2023-03-08 14:32:14,758][682729] Updated weights for policy 0, policy_version 461 (0.0006)
[2023-03-08 14:32:15,855][682729] Updated weights for policy 0, policy_version 471 (0.0006)
[2023-03-08 14:32:16,800][671990] Fps is (10 sec: 37683.3, 60 sec: 36226.8, 300 sec: 36226.8). Total num frames: 1961984. Throughput: 0: 8923.7. Samples: 401566. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-03-08 14:32:16,801][671990] Avg episode reward: [(0, '10.635')]
[2023-03-08 14:32:16,829][682716] Saving new best policy, reward=10.635!
[2023-03-08 14:32:16,956][682729] Updated weights for policy 0, policy_version 481 (0.0006) [2023-03-08 14:32:18,149][682729] Updated weights for policy 0, policy_version 491 (0.0006) [2023-03-08 14:32:19,411][682729] Updated weights for policy 0, policy_version 501 (0.0006) [2023-03-08 14:32:20,531][682729] Updated weights for policy 0, policy_version 511 (0.0005) [2023-03-08 14:32:21,649][682729] Updated weights for policy 0, policy_version 521 (0.0005) [2023-03-08 14:32:21,800][671990] Fps is (10 sec: 36454.4, 60 sec: 36126.7, 300 sec: 36126.7). Total num frames: 2138112. Throughput: 0: 9176.1. Samples: 427084. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) [2023-03-08 14:32:21,801][671990] Avg episode reward: [(0, '14.048')] [2023-03-08 14:32:21,802][682716] Saving new best policy, reward=14.048! [2023-03-08 14:32:22,938][682729] Updated weights for policy 0, policy_version 531 (0.0006) [2023-03-08 14:32:24,157][682729] Updated weights for policy 0, policy_version 541 (0.0006) [2023-03-08 14:32:25,360][682729] Updated weights for policy 0, policy_version 551 (0.0006) [2023-03-08 14:32:26,488][682729] Updated weights for policy 0, policy_version 561 (0.0006) [2023-03-08 14:32:26,800][671990] Fps is (10 sec: 34406.1, 60 sec: 35895.8, 300 sec: 35895.8). Total num frames: 2306048. Throughput: 0: 9121.2. Samples: 479086. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) [2023-03-08 14:32:26,801][671990] Avg episode reward: [(0, '14.162')] [2023-03-08 14:32:26,805][682716] Saving new best policy, reward=14.162! [2023-03-08 14:32:27,751][682729] Updated weights for policy 0, policy_version 571 (0.0006) [2023-03-08 14:32:29,007][682729] Updated weights for policy 0, policy_version 581 (0.0005) [2023-03-08 14:32:30,571][682729] Updated weights for policy 0, policy_version 591 (0.0007) [2023-03-08 14:32:31,800][671990] Fps is (10 sec: 31538.7, 60 sec: 35362.1, 300 sec: 35362.1). Total num frames: 2453504. Throughput: 0: 8947.1. Samples: 526198. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) [2023-03-08 14:32:31,801][671990] Avg episode reward: [(0, '19.425')] [2023-03-08 14:32:31,803][682716] Saving new best policy, reward=19.425! [2023-03-08 14:32:32,014][682729] Updated weights for policy 0, policy_version 601 (0.0006) [2023-03-08 14:32:33,354][682729] Updated weights for policy 0, policy_version 611 (0.0006) [2023-03-08 14:32:34,646][682729] Updated weights for policy 0, policy_version 621 (0.0006) [2023-03-08 14:32:35,933][682729] Updated weights for policy 0, policy_version 631 (0.0006) [2023-03-08 14:32:36,800][671990] Fps is (10 sec: 30720.3, 60 sec: 35430.4, 300 sec: 35099.6). Total num frames: 2613248. Throughput: 0: 8840.9. Samples: 548426. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) [2023-03-08 14:32:36,802][671990] Avg episode reward: [(0, '16.826')] [2023-03-08 14:32:37,081][682729] Updated weights for policy 0, policy_version 641 (0.0006) [2023-03-08 14:32:38,247][682729] Updated weights for policy 0, policy_version 651 (0.0006) [2023-03-08 14:32:39,376][682729] Updated weights for policy 0, policy_version 661 (0.0006) [2023-03-08 14:32:40,533][682729] Updated weights for policy 0, policy_version 671 (0.0006) [2023-03-08 14:32:41,800][671990] Fps is (10 sec: 33177.9, 60 sec: 35293.8, 300 sec: 35050.0). Total num frames: 2785280. Throughput: 0: 8779.0. Samples: 600442. 
Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) [2023-03-08 14:32:41,802][671990] Avg episode reward: [(0, '17.019')] [2023-03-08 14:32:41,871][682729] Updated weights for policy 0, policy_version 681 (0.0006) [2023-03-08 14:32:43,289][682729] Updated weights for policy 0, policy_version 691 (0.0006) [2023-03-08 14:32:44,689][682729] Updated weights for policy 0, policy_version 701 (0.0006) [2023-03-08 14:32:45,913][682729] Updated weights for policy 0, policy_version 711 (0.0006) [2023-03-08 14:32:46,800][671990] Fps is (10 sec: 32358.4, 60 sec: 34747.8, 300 sec: 34734.1). Total num frames: 2936832. Throughput: 0: 8576.5. Samples: 646684. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) [2023-03-08 14:32:46,801][671990] Avg episode reward: [(0, '22.064')] [2023-03-08 14:32:46,804][682716] Saving new best policy, reward=22.064! [2023-03-08 14:32:47,146][682729] Updated weights for policy 0, policy_version 721 (0.0006) [2023-03-08 14:32:48,272][682729] Updated weights for policy 0, policy_version 731 (0.0006) [2023-03-08 14:32:49,718][682729] Updated weights for policy 0, policy_version 741 (0.0006) [2023-03-08 14:32:50,891][682729] Updated weights for policy 0, policy_version 751 (0.0006) [2023-03-08 14:32:51,800][671990] Fps is (10 sec: 31948.6, 60 sec: 34543.0, 300 sec: 34662.4). Total num frames: 3104768. Throughput: 0: 8507.7. Samples: 671550. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) [2023-03-08 14:32:51,801][671990] Avg episode reward: [(0, '18.601')] [2023-03-08 14:32:52,025][682729] Updated weights for policy 0, policy_version 761 (0.0006) [2023-03-08 14:32:53,121][682729] Updated weights for policy 0, policy_version 771 (0.0005) [2023-03-08 14:32:54,273][682729] Updated weights for policy 0, policy_version 781 (0.0006) [2023-03-08 14:32:55,412][682729] Updated weights for policy 0, policy_version 791 (0.0006) [2023-03-08 14:32:56,591][682729] Updated weights for policy 0, policy_version 801 (0.0006) [2023-03-08 14:32:56,800][671990] Fps is (10 sec: 34816.0, 60 sec: 34542.9, 300 sec: 34743.7). Total num frames: 3284992. Throughput: 0: 8419.5. Samples: 724162. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) [2023-03-08 14:32:56,801][671990] Avg episode reward: [(0, '19.367')] [2023-03-08 14:32:57,697][682729] Updated weights for policy 0, policy_version 811 (0.0006) [2023-03-08 14:32:58,813][682729] Updated weights for policy 0, policy_version 821 (0.0006) [2023-03-08 14:32:59,899][682729] Updated weights for policy 0, policy_version 831 (0.0006) [2023-03-08 14:33:00,985][682729] Updated weights for policy 0, policy_version 841 (0.0005) [2023-03-08 14:33:01,800][671990] Fps is (10 sec: 36864.3, 60 sec: 34611.2, 300 sec: 34907.0). Total num frames: 3473408. Throughput: 0: 8390.0. Samples: 779116. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) [2023-03-08 14:33:01,801][671990] Avg episode reward: [(0, '20.222')] [2023-03-08 14:33:02,102][682729] Updated weights for policy 0, policy_version 851 (0.0006) [2023-03-08 14:33:03,243][682729] Updated weights for policy 0, policy_version 861 (0.0006) [2023-03-08 14:33:04,400][682729] Updated weights for policy 0, policy_version 871 (0.0006) [2023-03-08 14:33:05,506][682729] Updated weights for policy 0, policy_version 881 (0.0005) [2023-03-08 14:33:06,672][682729] Updated weights for policy 0, policy_version 891 (0.0006) [2023-03-08 14:33:06,800][671990] Fps is (10 sec: 36863.6, 60 sec: 34474.6, 300 sec: 34966.9). Total num frames: 3653632. Throughput: 0: 8430.3. Samples: 806450. 
[2023-03-08 14:33:07,783][682729] Updated weights for policy 0, policy_version 901 (0.0005)
[2023-03-08 14:33:08,924][682729] Updated weights for policy 0, policy_version 911 (0.0006)
[2023-03-08 14:33:10,055][682729] Updated weights for policy 0, policy_version 921 (0.0005)
[2023-03-08 14:33:11,179][682729] Updated weights for policy 0, policy_version 931 (0.0006)
[2023-03-08 14:33:11,800][671990] Fps is (10 sec: 36045.0, 60 sec: 34338.2, 300 sec: 35020.8). Total num frames: 3833856. Throughput: 0: 8478.0. Samples: 860594. Policy #0 lag: (min: 0.0, avg: 0.9, max: 2.0)
[2023-03-08 14:33:11,801][671990] Avg episode reward: [(0, '20.605')]
[2023-03-08 14:33:12,236][682729] Updated weights for policy 0, policy_version 941 (0.0006)
[2023-03-08 14:33:13,344][682729] Updated weights for policy 0, policy_version 951 (0.0005)
[2023-03-08 14:33:14,453][682729] Updated weights for policy 0, policy_version 961 (0.0005)
[2023-03-08 14:33:15,616][682729] Updated weights for policy 0, policy_version 971 (0.0006)
[2023-03-08 14:33:16,402][682716] Stopping Batcher_0...
[2023-03-08 14:33:16,402][682716] Loop batcher_evt_loop terminating...
[2023-03-08 14:33:16,402][682716] Saving /home/michal/programming/deep-rl-course/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
[2023-03-08 14:33:16,402][671990] Component Batcher_0 stopped!
[2023-03-08 14:33:16,411][682751] Stopping RolloutWorker_w6...
[2023-03-08 14:33:16,411][682747] Stopping RolloutWorker_w3...
[2023-03-08 14:33:16,412][682751] Loop rollout_proc6_evt_loop terminating...
[2023-03-08 14:33:16,412][682747] Loop rollout_proc3_evt_loop terminating...
[2023-03-08 14:33:16,412][682746] Stopping RolloutWorker_w1...
[2023-03-08 14:33:16,411][671990] Component RolloutWorker_w6 stopped!
[2023-03-08 14:33:16,412][682748] Stopping RolloutWorker_w2...
[2023-03-08 14:33:16,413][682746] Loop rollout_proc1_evt_loop terminating...
[2023-03-08 14:33:16,413][682748] Loop rollout_proc2_evt_loop terminating...
[2023-03-08 14:33:16,413][671990] Component RolloutWorker_w3 stopped!
[2023-03-08 14:33:16,414][682750] Stopping RolloutWorker_w5...
[2023-03-08 14:33:16,415][682750] Loop rollout_proc5_evt_loop terminating...
[2023-03-08 14:33:16,415][682752] Stopping RolloutWorker_w7...
[2023-03-08 14:33:16,415][671990] Component RolloutWorker_w1 stopped!
[2023-03-08 14:33:16,415][682752] Loop rollout_proc7_evt_loop terminating...
[2023-03-08 14:33:16,415][671990] Component RolloutWorker_w2 stopped!
[2023-03-08 14:33:16,416][671990] Component RolloutWorker_w5 stopped!
[2023-03-08 14:33:16,416][671990] Component RolloutWorker_w7 stopped!
[2023-03-08 14:33:16,418][682729] Weights refcount: 2 0
[2023-03-08 14:33:16,421][682729] Stopping InferenceWorker_p0-w0...
[2023-03-08 14:33:16,421][671990] Component InferenceWorker_p0-w0 stopped!
[2023-03-08 14:33:16,422][682729] Loop inference_proc0-0_evt_loop terminating...
[2023-03-08 14:33:16,440][682749] Stopping RolloutWorker_w4...
[2023-03-08 14:33:16,441][682749] Loop rollout_proc4_evt_loop terminating...
[2023-03-08 14:33:16,441][671990] Component RolloutWorker_w4 stopped!
[2023-03-08 14:33:16,477][682730] Stopping RolloutWorker_w0...
[2023-03-08 14:33:16,478][682730] Loop rollout_proc0_evt_loop terminating...
[2023-03-08 14:33:16,477][671990] Component RolloutWorker_w0 stopped!
[2023-03-08 14:33:16,493][682716] Saving /home/michal/programming/deep-rl-course/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
[2023-03-08 14:33:16,584][682716] Stopping LearnerWorker_p0...
[2023-03-08 14:33:16,584][682716] Loop learner_proc0_evt_loop terminating...
[2023-03-08 14:33:16,584][671990] Component LearnerWorker_p0 stopped!
[2023-03-08 14:33:16,586][671990] Waiting for process learner_proc0 to stop...
[2023-03-08 14:33:16,976][671990] Waiting for process inference_proc0-0 to join...
[2023-03-08 14:33:16,977][671990] Waiting for process rollout_proc0 to join...
[2023-03-08 14:33:16,978][671990] Waiting for process rollout_proc1 to join...
[2023-03-08 14:33:16,978][671990] Waiting for process rollout_proc2 to join...
[2023-03-08 14:33:16,979][671990] Waiting for process rollout_proc3 to join...
[2023-03-08 14:33:16,979][671990] Waiting for process rollout_proc4 to join...
[2023-03-08 14:33:16,980][671990] Waiting for process rollout_proc5 to join...
[2023-03-08 14:33:16,980][671990] Waiting for process rollout_proc6 to join...
[2023-03-08 14:33:16,981][671990] Waiting for process rollout_proc7 to join...
[2023-03-08 14:33:16,981][671990] Batcher 0 profile tree view:
batching: 10.8707, releasing_batches: 0.0115
[2023-03-08 14:33:16,982][671990] InferenceWorker_p0-w0 profile tree view:
wait_policy: 0.0000
  wait_policy_total: 2.2257
update_model: 1.3996
  weight_update: 0.0006
one_step: 0.0009
  handle_policy_step: 96.0541
    deserialize: 5.3775, stack: 0.4795, obs_to_device_normalize: 29.8978, forward: 30.9313, send_messages: 5.9810
    prepare_outputs: 19.0649
      to_cpu: 14.4507
[2023-03-08 14:33:16,982][671990] Learner 0 profile tree view:
misc: 0.0051, prepare_batch: 6.8415
train: 21.3246
  epoch_init: 0.0031, minibatch_init: 0.0037, losses_postprocess: 0.3033, kl_divergence: 0.1304, after_optimizer: 10.3912
  calculate_losses: 6.6995
    losses_init: 0.0019, forward_head: 0.3111, bptt_initial: 4.7548, tail: 0.2907, advantages_returns: 0.0916, losses: 0.6107
    bptt: 0.5510
      bptt_forward_core: 0.5280
  update: 3.5779
    clip: 0.5862
[2023-03-08 14:33:16,983][671990] RolloutWorker_w0 profile tree view:
wait_for_trajectories: 0.0811, enqueue_policy_requests: 3.7469, env_step: 59.1615, overhead: 4.0872, complete_rollouts: 0.1260
save_policy_outputs: 4.2104
  split_output_tensors: 2.0489
[2023-03-08 14:33:16,983][671990] RolloutWorker_w7 profile tree view:
wait_for_trajectories: 0.0810, enqueue_policy_requests: 3.7304, env_step: 59.5087, overhead: 4.1451, complete_rollouts: 0.1230
save_policy_outputs: 4.2532
  split_output_tensors: 2.0561
[2023-03-08 14:33:16,984][671990] Loop Runner_EvtLoop terminating...
[2023-03-08 14:33:16,984][671990] Runner profile tree view:
main_loop: 109.1432
[2023-03-08 14:33:16,985][671990] Collected {0: 4005888}, FPS: 33663.2
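The run stops right after crossing 4,000,000 environment steps (4,005,888 frames collected), with eight rollout workers and the default experiment name, which matches the deep-rl-course recipe. A plausible reconstruction of the launch, not taken from the log itself: `run_rl` is Sample Factory 2.x's training entry point, while `register_vizdoom_components` and `parse_vizdoom_cfg` are assumed course-notebook helpers, not core Sample Factory API.

```python
from sample_factory.train import run_rl

# Register the VizDoom envs/models, build a config, and train.
register_vizdoom_components()  # assumed notebook helper
cfg = parse_vizdoom_cfg(argv=[  # assumed notebook wrapper around Sample Factory's arg parsing
    "--env=doom_health_gathering_supreme",  # matches the repo pushed at the end of this log
    "--num_workers=8",                      # RolloutWorker_w0..w7 above
    "--num_envs_per_worker=4",
    "--train_for_env_steps=4000000",        # run stopped just past this, at 4,005,888 frames
])
status = run_rl(cfg)
```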
[2023-03-08 14:40:07,216][671990] Loading existing experiment configuration from /home/michal/programming/deep-rl-course/train_dir/default_experiment/config.json
[2023-03-08 14:40:07,217][671990] Overriding arg 'num_workers' with value 1 passed from command line
[2023-03-08 14:40:07,217][671990] Adding new argument 'no_render'=True that is not in the saved config file!
[2023-03-08 14:40:07,217][671990] Adding new argument 'save_video'=True that is not in the saved config file!
[2023-03-08 14:40:07,218][671990] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
[2023-03-08 14:40:07,218][671990] Adding new argument 'video_name'=None that is not in the saved config file!
[2023-03-08 14:40:07,218][671990] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file!
[2023-03-08 14:40:07,219][671990] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
[2023-03-08 14:40:07,219][671990] Adding new argument 'push_to_hub'=False that is not in the saved config file!
[2023-03-08 14:40:07,219][671990] Adding new argument 'hf_repository'=None that is not in the saved config file!
[2023-03-08 14:40:07,219][671990] Adding new argument 'policy_index'=0 that is not in the saved config file!
[2023-03-08 14:40:07,220][671990] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
[2023-03-08 14:40:07,220][671990] Adding new argument 'train_script'=None that is not in the saved config file!
[2023-03-08 14:40:07,220][671990] Adding new argument 'enjoy_script'=None that is not in the saved config file!
[2023-03-08 14:40:07,221][671990] Using frameskip 1 and render_action_repeat=4 for evaluation
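The "Adding new argument …" lines above show evaluation-only options being grafted onto the saved training config: a single worker, no rendering, video capture on, and a cap of 10 episodes. A sketch of the presumed invocation (same assumed notebook helper as above; `enjoy` is Sample Factory's evaluation entry point):

```python
from sample_factory.enjoy import enjoy

cfg = parse_vizdoom_cfg(  # assumed notebook helper
    argv=[
        "--env=doom_health_gathering_supreme",
        "--num_workers=1",       # "Overriding arg 'num_workers' with value 1" above
        "--save_video",
        "--no_render",
        "--max_num_episodes=10",
    ],
    evaluation=True,
)
status = enjoy(cfg)  # rolls out episodes, producing the "Num frames ..." lines below
```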
[2023-03-08 14:40:07,228][671990] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-03-08 14:40:07,229][671990] RunningMeanStd input shape: (3, 72, 128)
[2023-03-08 14:40:07,229][671990] RunningMeanStd input shape: (1,)
[2023-03-08 14:40:07,237][671990] ConvEncoder: input_channels=3
[2023-03-08 14:40:07,303][671990] Conv encoder output size: 512
[2023-03-08 14:40:07,304][671990] Policy head output size: 512
[2023-03-08 14:40:08,412][671990] Loading state from checkpoint /home/michal/programming/deep-rl-course/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
[2023-03-08 14:40:08,706][671990] Num frames 100...
[2023-03-08 14:40:08,766][671990] Num frames 200...
[2023-03-08 14:40:08,828][671990] Num frames 300...
[2023-03-08 14:40:08,891][671990] Num frames 400...
[2023-03-08 14:40:08,951][671990] Num frames 500...
[2023-03-08 14:40:09,012][671990] Num frames 600...
[2023-03-08 14:40:09,071][671990] Num frames 700...
[2023-03-08 14:40:09,134][671990] Num frames 800...
[2023-03-08 14:40:09,206][671990] Avg episode rewards: #0: 19.320, true rewards: #0: 8.320
[2023-03-08 14:40:09,207][671990] Avg episode reward: 19.320, avg true_objective: 8.320
[2023-03-08 14:40:09,252][671990] Num frames 900...
[2023-03-08 14:40:09,312][671990] Num frames 1000...
[2023-03-08 14:40:09,370][671990] Num frames 1100...
[2023-03-08 14:40:09,428][671990] Num frames 1200...
[2023-03-08 14:40:09,487][671990] Num frames 1300...
[2023-03-08 14:40:09,548][671990] Num frames 1400...
[2023-03-08 14:40:09,609][671990] Num frames 1500...
[2023-03-08 14:40:09,674][671990] Num frames 1600...
[2023-03-08 14:40:09,784][671990] Avg episode rewards: #0: 17.480, true rewards: #0: 8.480
[2023-03-08 14:40:09,785][671990] Avg episode reward: 17.480, avg true_objective: 8.480
[2023-03-08 14:40:09,791][671990] Num frames 1700...
[2023-03-08 14:40:09,851][671990] Num frames 1800...
[2023-03-08 14:40:09,910][671990] Num frames 1900...
[2023-03-08 14:40:09,969][671990] Num frames 2000...
[2023-03-08 14:40:10,027][671990] Num frames 2100...
[2023-03-08 14:40:10,086][671990] Num frames 2200...
[2023-03-08 14:40:10,145][671990] Num frames 2300...
[2023-03-08 14:40:10,238][671990] Avg episode rewards: #0: 16.897, true rewards: #0: 7.897
[2023-03-08 14:40:10,239][671990] Avg episode reward: 16.897, avg true_objective: 7.897
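The "Avg episode rewards" lines are running means over the episodes completed so far, so individual episode rewards can be recovered from consecutive averages. Illustrative arithmetic on the three averages logged above:

```python
# Running averages after episodes 1, 2, 3 (from the log above).
avgs = [19.320, 17.480, 16.897]

episode_rewards = []
prev_total = 0.0
for n, avg in enumerate(avgs, start=1):
    total = avg * n                      # cumulative reward after n episodes
    episode_rewards.append(total - prev_total)
    prev_total = total

print(episode_rewards)  # ~[19.32, 15.64, 15.73]: episode 1 was the best of the three
```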
[2023-03-08 14:40:10,263][671990] Num frames 2400...
[2023-03-08 14:40:10,322][671990] Num frames 2500...
[2023-03-08 14:40:10,381][671990] Num frames 2600...
[2023-03-08 14:40:10,444][671990] Num frames 2700...
[2023-03-08 14:40:10,505][671990] Num frames 2800...
[2023-03-08 14:40:10,565][671990] Num frames 2900...
[2023-03-08 14:40:10,627][671990] Num frames 3000...
[2023-03-08 14:40:10,722][671990] Avg episode rewards: #0: 15.433, true rewards: #0: 7.682
[2023-03-08 14:40:10,723][671990] Avg episode reward: 15.433, avg true_objective: 7.682
[2023-03-08 14:40:10,743][671990] Num frames 3100...
[2023-03-08 14:40:10,802][671990] Num frames 3200...
[2023-03-08 14:40:10,862][671990] Num frames 3300...
[2023-03-08 14:40:10,920][671990] Num frames 3400...
[2023-03-08 14:40:10,979][671990] Num frames 3500...
[2023-03-08 14:40:11,041][671990] Num frames 3600...
[2023-03-08 14:40:11,103][671990] Num frames 3700...
[2023-03-08 14:40:11,161][671990] Num frames 3800...
[2023-03-08 14:40:11,227][671990] Num frames 3900...
[2023-03-08 14:40:11,287][671990] Num frames 4000...
[2023-03-08 14:40:11,350][671990] Num frames 4100...
[2023-03-08 14:40:11,411][671990] Num frames 4200...
[2023-03-08 14:40:11,469][671990] Num frames 4300...
[2023-03-08 14:40:11,531][671990] Num frames 4400...
[2023-03-08 14:40:11,596][671990] Num frames 4500...
[2023-03-08 14:40:11,655][671990] Num frames 4600...
[2023-03-08 14:40:11,715][671990] Num frames 4700...
[2023-03-08 14:40:11,775][671990] Num frames 4800...
[2023-03-08 14:40:11,835][671990] Num frames 4900...
[2023-03-08 14:40:11,896][671990] Num frames 5000...
[2023-03-08 14:40:11,957][671990] Num frames 5100...
[2023-03-08 14:40:12,056][671990] Avg episode rewards: #0: 23.346, true rewards: #0: 10.346
[2023-03-08 14:40:12,056][671990] Avg episode reward: 23.346, avg true_objective: 10.346
[2023-03-08 14:40:12,079][671990] Num frames 5200...
[2023-03-08 14:40:12,143][671990] Num frames 5300...
[2023-03-08 14:40:12,204][671990] Num frames 5400...
[2023-03-08 14:40:12,265][671990] Num frames 5500...
[2023-03-08 14:40:12,324][671990] Num frames 5600...
[2023-03-08 14:40:12,383][671990] Num frames 5700...
[2023-03-08 14:40:12,442][671990] Num frames 5800...
[2023-03-08 14:40:12,502][671990] Num frames 5900...
[2023-03-08 14:40:12,563][671990] Num frames 6000...
[2023-03-08 14:40:12,624][671990] Num frames 6100...
[2023-03-08 14:40:12,685][671990] Num frames 6200...
[2023-03-08 14:40:12,747][671990] Num frames 6300...
[2023-03-08 14:40:12,845][671990] Avg episode rewards: #0: 24.122, true rewards: #0: 10.622
[2023-03-08 14:40:12,846][671990] Avg episode reward: 24.122, avg true_objective: 10.622
[2023-03-08 14:40:12,867][671990] Num frames 6400...
[2023-03-08 14:40:12,933][671990] Num frames 6500...
[2023-03-08 14:40:13,000][671990] Num frames 6600...
[2023-03-08 14:40:13,059][671990] Num frames 6700...
[2023-03-08 14:40:13,122][671990] Num frames 6800...
[2023-03-08 14:40:13,182][671990] Num frames 6900...
[2023-03-08 14:40:13,241][671990] Num frames 7000...
[2023-03-08 14:40:13,302][671990] Num frames 7100...
[2023-03-08 14:40:13,361][671990] Num frames 7200...
[2023-03-08 14:40:13,422][671990] Num frames 7300...
[2023-03-08 14:40:13,484][671990] Num frames 7400...
[2023-03-08 14:40:13,556][671990] Avg episode rewards: #0: 23.613, true rewards: #0: 10.613
[2023-03-08 14:40:13,557][671990] Avg episode reward: 23.613, avg true_objective: 10.613
[2023-03-08 14:40:13,604][671990] Num frames 7500...
[2023-03-08 14:40:13,664][671990] Num frames 7600...
[2023-03-08 14:40:13,723][671990] Num frames 7700...
[2023-03-08 14:40:13,782][671990] Num frames 7800...
[2023-03-08 14:40:13,841][671990] Num frames 7900...
[2023-03-08 14:40:13,901][671990] Num frames 8000...
[2023-03-08 14:40:13,960][671990] Num frames 8100...
[2023-03-08 14:40:14,019][671990] Num frames 8200...
[2023-03-08 14:40:14,079][671990] Num frames 8300...
[2023-03-08 14:40:14,139][671990] Num frames 8400...
[2023-03-08 14:40:14,201][671990] Num frames 8500...
[2023-03-08 14:40:14,261][671990] Num frames 8600...
[2023-03-08 14:40:14,322][671990] Num frames 8700...
[2023-03-08 14:40:14,410][671990] Avg episode rewards: #0: 24.446, true rewards: #0: 10.946
[2023-03-08 14:40:14,411][671990] Avg episode reward: 24.446, avg true_objective: 10.946
[2023-03-08 14:40:14,439][671990] Num frames 8800...
[2023-03-08 14:40:14,499][671990] Num frames 8900...
[2023-03-08 14:40:14,559][671990] Num frames 9000...
[2023-03-08 14:40:14,618][671990] Num frames 9100...
[2023-03-08 14:40:14,678][671990] Num frames 9200...
[2023-03-08 14:40:14,737][671990] Num frames 9300...
[2023-03-08 14:40:14,796][671990] Num frames 9400...
[2023-03-08 14:40:14,856][671990] Num frames 9500...
[2023-03-08 14:40:14,916][671990] Num frames 9600...
[2023-03-08 14:40:14,976][671990] Num frames 9700...
[2023-03-08 14:40:15,039][671990] Num frames 9800...
[2023-03-08 14:40:15,099][671990] Num frames 9900...
[2023-03-08 14:40:15,161][671990] Num frames 10000...
[2023-03-08 14:40:15,237][671990] Avg episode rewards: #0: 24.819, true rewards: #0: 11.152
[2023-03-08 14:40:15,238][671990] Avg episode reward: 24.819, avg true_objective: 11.152
[2023-03-08 14:40:15,277][671990] Num frames 10100...
[2023-03-08 14:40:15,336][671990] Num frames 10200...
[2023-03-08 14:40:15,397][671990] Num frames 10300...
[2023-03-08 14:40:15,460][671990] Num frames 10400...
[2023-03-08 14:40:15,520][671990] Num frames 10500...
[2023-03-08 14:40:15,581][671990] Num frames 10600...
[2023-03-08 14:40:15,641][671990] Num frames 10700...
[2023-03-08 14:40:15,738][671990] Avg episode rewards: #0: 23.873, true rewards: #0: 10.773
[2023-03-08 14:40:15,739][671990] Avg episode reward: 23.873, avg true_objective: 10.773
[2023-03-08 14:40:26,813][671990] Replay video saved to /home/michal/programming/deep-rl-course/train_dir/default_experiment/replay.mp4!
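The second evaluation pass that follows is the Hub upload: the same enjoy flow re-runs with `push_to_hub=True` and a target repository, as the "Adding new argument …" overrides below record. A sketch of the presumed invocation (same assumed helpers as before):

```python
cfg = parse_vizdoom_cfg(  # assumed notebook helper
    argv=[
        "--env=doom_health_gathering_supreme",
        "--num_workers=1",
        "--save_video",
        "--no_render",
        "--max_num_frames=100000",
        "--max_num_episodes=10",
        "--push_to_hub",
        "--hf_repository=michal512/rl_course_vizdoom_health_gathering_supreme",
    ],
    evaluation=True,
)
status = enjoy(cfg)
```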
[2023-03-08 14:41:24,935][671990] Loading existing experiment configuration from /home/michal/programming/deep-rl-course/train_dir/default_experiment/config.json
[2023-03-08 14:41:24,936][671990] Overriding arg 'num_workers' with value 1 passed from command line
[2023-03-08 14:41:24,936][671990] Adding new argument 'no_render'=True that is not in the saved config file!
[2023-03-08 14:41:24,936][671990] Adding new argument 'save_video'=True that is not in the saved config file!
[2023-03-08 14:41:24,937][671990] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
[2023-03-08 14:41:24,937][671990] Adding new argument 'video_name'=None that is not in the saved config file!
[2023-03-08 14:41:24,937][671990] Adding new argument 'max_num_frames'=100000 that is not in the saved config file!
[2023-03-08 14:41:24,938][671990] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
[2023-03-08 14:41:24,938][671990] Adding new argument 'push_to_hub'=True that is not in the saved config file!
[2023-03-08 14:41:24,938][671990] Adding new argument 'hf_repository'='michal512/rl_course_vizdoom_health_gathering_supreme' that is not in the saved config file!
[2023-03-08 14:41:24,939][671990] Adding new argument 'policy_index'=0 that is not in the saved config file!
[2023-03-08 14:41:24,939][671990] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
[2023-03-08 14:41:24,940][671990] Adding new argument 'train_script'=None that is not in the saved config file!
[2023-03-08 14:41:24,941][671990] Adding new argument 'enjoy_script'=None that is not in the saved config file!
[2023-03-08 14:41:24,941][671990] Using frameskip 1 and render_action_repeat=4 for evaluation
[2023-03-08 14:41:24,948][671990] RunningMeanStd input shape: (3, 72, 128)
[2023-03-08 14:41:24,949][671990] RunningMeanStd input shape: (1,)
[2023-03-08 14:41:24,955][671990] ConvEncoder: input_channels=3
[2023-03-08 14:41:24,972][671990] Conv encoder output size: 512
[2023-03-08 14:41:24,972][671990] Policy head output size: 512
[2023-03-08 14:41:24,997][671990] Loading state from checkpoint /home/michal/programming/deep-rl-course/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
[2023-03-08 14:41:25,284][671990] Num frames 100...
[2023-03-08 14:41:25,343][671990] Num frames 200...
[2023-03-08 14:41:25,400][671990] Num frames 300...
[2023-03-08 14:41:25,463][671990] Num frames 400...
[2023-03-08 14:41:25,522][671990] Num frames 500...
[2023-03-08 14:41:25,580][671990] Num frames 600...
[2023-03-08 14:41:25,639][671990] Num frames 700...
[2023-03-08 14:41:25,698][671990] Num frames 800...
[2023-03-08 14:41:25,757][671990] Num frames 900...
[2023-03-08 14:41:25,827][671990] Avg episode rewards: #0: 18.280, true rewards: #0: 9.280
[2023-03-08 14:41:25,828][671990] Avg episode reward: 18.280, avg true_objective: 9.280
[2023-03-08 14:41:25,875][671990] Num frames 1000...
[2023-03-08 14:41:25,934][671990] Num frames 1100...
[2023-03-08 14:41:26,007][671990] Num frames 1200...
[2023-03-08 14:41:26,076][671990] Num frames 1300...
[2023-03-08 14:41:26,139][671990] Num frames 1400...
[2023-03-08 14:41:26,205][671990] Num frames 1500...
[2023-03-08 14:41:26,271][671990] Num frames 1600...
[2023-03-08 14:41:26,330][671990] Num frames 1700...
[2023-03-08 14:41:26,421][671990] Avg episode rewards: #0: 18.305, true rewards: #0: 8.805
[2023-03-08 14:41:26,422][671990] Avg episode reward: 18.305, avg true_objective: 8.805
[2023-03-08 14:41:26,449][671990] Num frames 1800...
[2023-03-08 14:41:26,507][671990] Num frames 1900...
[2023-03-08 14:41:26,569][671990] Num frames 2000...
[2023-03-08 14:41:26,628][671990] Num frames 2100...
[2023-03-08 14:41:26,687][671990] Num frames 2200...
[2023-03-08 14:41:26,745][671990] Num frames 2300...
[2023-03-08 14:41:26,808][671990] Num frames 2400...
[2023-03-08 14:41:26,869][671990] Num frames 2500...
[2023-03-08 14:41:26,928][671990] Num frames 2600...
[2023-03-08 14:41:26,987][671990] Num frames 2700...
[2023-03-08 14:41:27,046][671990] Num frames 2800...
[2023-03-08 14:41:27,105][671990] Num frames 2900...
[2023-03-08 14:41:27,163][671990] Num frames 3000...
[2023-03-08 14:41:27,259][671990] Avg episode rewards: #0: 20.577, true rewards: #0: 10.243
[2023-03-08 14:41:27,260][671990] Avg episode reward: 20.577, avg true_objective: 10.243
[2023-03-08 14:41:27,279][671990] Num frames 3100...
[2023-03-08 14:41:27,338][671990] Num frames 3200...
[2023-03-08 14:41:27,398][671990] Num frames 3300...
[2023-03-08 14:41:27,458][671990] Num frames 3400...
[2023-03-08 14:41:27,518][671990] Num frames 3500...
[2023-03-08 14:41:27,578][671990] Num frames 3600...
[2023-03-08 14:41:27,641][671990] Num frames 3700...
[2023-03-08 14:41:27,700][671990] Num frames 3800...
[2023-03-08 14:41:27,759][671990] Num frames 3900...
[2023-03-08 14:41:27,817][671990] Num frames 4000...
[2023-03-08 14:41:27,890][671990] Avg episode rewards: #0: 20.333, true rewards: #0: 10.082
[2023-03-08 14:41:27,891][671990] Avg episode reward: 20.333, avg true_objective: 10.082
[2023-03-08 14:41:27,935][671990] Num frames 4100...
[2023-03-08 14:41:27,996][671990] Num frames 4200...
[2023-03-08 14:41:28,056][671990] Num frames 4300...
[2023-03-08 14:41:28,116][671990] Num frames 4400...
[2023-03-08 14:41:28,218][671990] Avg episode rewards: #0: 17.362, true rewards: #0: 8.962
[2023-03-08 14:41:28,219][671990] Avg episode reward: 17.362, avg true_objective: 8.962
[2023-03-08 14:41:28,234][671990] Num frames 4500...
[2023-03-08 14:41:28,297][671990] Num frames 4600...
[2023-03-08 14:41:28,359][671990] Num frames 4700...
[2023-03-08 14:41:28,420][671990] Num frames 4800...
[2023-03-08 14:41:28,480][671990] Num frames 4900...
[2023-03-08 14:41:28,540][671990] Num frames 5000...
[2023-03-08 14:41:28,601][671990] Num frames 5100...
[2023-03-08 14:41:28,663][671990] Num frames 5200...
[2023-03-08 14:41:28,723][671990] Num frames 5300...
[2023-03-08 14:41:28,784][671990] Num frames 5400...
[2023-03-08 14:41:28,846][671990] Num frames 5500...
[2023-03-08 14:41:28,922][671990] Avg episode rewards: #0: 17.895, true rewards: #0: 9.228
[2023-03-08 14:41:28,923][671990] Avg episode reward: 17.895, avg true_objective: 9.228
[2023-03-08 14:41:28,965][671990] Num frames 5600...
[2023-03-08 14:41:29,033][671990] Num frames 5700...
[2023-03-08 14:41:29,092][671990] Num frames 5800...
[2023-03-08 14:41:29,151][671990] Num frames 5900...
[2023-03-08 14:41:29,211][671990] Num frames 6000...
[2023-03-08 14:41:29,271][671990] Num frames 6100...
[2023-03-08 14:41:29,329][671990] Num frames 6200...
[2023-03-08 14:41:29,387][671990] Num frames 6300...
[2023-03-08 14:41:29,445][671990] Num frames 6400...
[2023-03-08 14:41:29,502][671990] Num frames 6500...
[2023-03-08 14:41:29,561][671990] Num frames 6600...
[2023-03-08 14:41:29,624][671990] Num frames 6700...
[2023-03-08 14:41:29,684][671990] Num frames 6800...
[2023-03-08 14:41:29,740][671990] Avg episode rewards: #0: 19.579, true rewards: #0: 9.721
[2023-03-08 14:41:29,741][671990] Avg episode reward: 19.579, avg true_objective: 9.721
[2023-03-08 14:41:29,801][671990] Num frames 6900...
[2023-03-08 14:41:29,864][671990] Num frames 7000...
[2023-03-08 14:41:29,922][671990] Num frames 7100...
[2023-03-08 14:41:29,980][671990] Num frames 7200...
[2023-03-08 14:41:30,039][671990] Num frames 7300...
[2023-03-08 14:41:30,097][671990] Num frames 7400...
[2023-03-08 14:41:30,156][671990] Num frames 7500...
[2023-03-08 14:41:30,214][671990] Num frames 7600...
[2023-03-08 14:41:30,273][671990] Num frames 7700...
[2023-03-08 14:41:30,332][671990] Num frames 7800...
[2023-03-08 14:41:30,389][671990] Avg episode rewards: #0: 19.508, true rewards: #0: 9.757
[2023-03-08 14:41:30,390][671990] Avg episode reward: 19.508, avg true_objective: 9.757
[2023-03-08 14:41:30,448][671990] Num frames 7900...
[2023-03-08 14:41:30,506][671990] Num frames 8000...
[2023-03-08 14:41:30,567][671990] Num frames 8100...
[2023-03-08 14:41:30,627][671990] Num frames 8200...
[2023-03-08 14:41:30,718][671990] Avg episode rewards: #0: 17.949, true rewards: #0: 9.171
[2023-03-08 14:41:30,718][671990] Avg episode reward: 17.949, avg true_objective: 9.171
[2023-03-08 14:41:30,749][671990] Num frames 8300...
[2023-03-08 14:41:30,808][671990] Num frames 8400...
[2023-03-08 14:41:30,869][671990] Num frames 8500...
[2023-03-08 14:41:30,931][671990] Num frames 8600...
[2023-03-08 14:41:30,993][671990] Num frames 8700...
[2023-03-08 14:41:31,053][671990] Num frames 8800...
[2023-03-08 14:41:31,136][671990] Avg episode rewards: #0: 17.552, true rewards: #0: 8.852
[2023-03-08 14:41:31,137][671990] Avg episode reward: 17.552, avg true_objective: 8.852
[2023-03-08 14:41:40,764][671990] Replay video saved to /home/michal/programming/deep-rl-course/train_dir/default_experiment/replay.mp4!
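With the replay saved, the push itself amounts to uploading the experiment directory (config.json, the checkpoint_p0 checkpoints, and replay.mp4) to the named repo. A minimal sketch of that step using huggingface_hub directly, not Sample Factory's exact upload code:

```python
from huggingface_hub import HfApi

api = HfApi()  # assumes you are already authenticated (e.g. via `huggingface-cli login`)
api.upload_folder(
    folder_path="/home/michal/programming/deep-rl-course/train_dir/default_experiment",
    repo_id="michal512/rl_course_vizdoom_health_gathering_supreme",
    repo_type="model",
)
```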