mnavas committed on
Commit
2a246f1
1 Parent(s): 11ab216

Upload . with huggingface_hub

.summary/0/events.out.tfevents.1677248091.1b4f54364242 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:40eed92ff0c3aea026cc794369a29d0d3b5337b2a4e2bc36b93cbd90a89b374f
+size 4801
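
The `ADDED` entries in this commit are git-LFS pointer files, not the binaries themselves: each pointer records only the spec version, a `sha256` object id, and the byte size of the real file. As a rough illustration (the helper names `parse_lfs_pointer` and `verify_lfs_object` are hypothetical, not part of this repo or of git-lfs), a pointer could be parsed and a downloaded blob checked against it like this:

```python
import hashlib


def parse_lfs_pointer(text: str) -> dict:
    # A pointer file is "key value" lines: version, oid, size.
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    return {
        "version": fields["version"],
        "oid": fields["oid"].removeprefix("sha256:"),
        "size": int(fields["size"]),
    }


def verify_lfs_object(pointer: dict, payload: bytes) -> bool:
    # The blob matches if both its length and sha256 digest agree
    # with what the pointer recorded.
    return (
        len(payload) == pointer["size"]
        and hashlib.sha256(payload).hexdigest() == pointer["oid"]
    )
```

This mirrors what git-lfs itself does when it replaces a pointer with the stored object.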
README.md CHANGED
@@ -15,7 +15,7 @@ model-index:
   type: doom_health_gathering_supreme
   metrics:
   - type: mean_reward
-  value: 8.90 +/- 5.34
+  value: 9.90 +/- 5.25
   name: mean_reward
   verified: false
 ---
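
The `mean_reward` string updated above ("9.90 +/- 5.25") is the usual mean-plus/minus-standard-deviation summary over evaluation episodes. A minimal sketch of how such a string could be produced (the function name `format_mean_reward` is an assumption for illustration, not the actual sample-factory code):

```python
from statistics import mean, pstdev


def format_mean_reward(episode_rewards: list[float]) -> str:
    # "9.90 +/- 5.25"-style summary: mean and (population) std
    # of per-episode true rewards, rounded to two decimals.
    return f"{mean(episode_rewards):.2f} +/- {pstdev(episode_rewards):.2f}"
```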
checkpoint_p0/checkpoint_000000984_4030464.pth ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f0e8008e4bc275114f6c934935d1ee437f4fb0ca8db58f2795d18f5884c32b0b
+size 34929220
config.json CHANGED
@@ -65,7 +65,7 @@
   "summaries_use_frameskip": true,
   "heartbeat_interval": 20,
   "heartbeat_reporting_interval": 600,
-  "train_for_env_steps": 2000000,
+  "train_for_env_steps": 1000000,
   "train_for_seconds": 10000000000,
   "save_every_sec": 120,
   "keep_checkpoints": 2,
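
This `config.json` change corresponds to the CLI override recorded in `sf_log.txt` ("Overriding arg 'train_for_env_steps' with value 1000000 passed from command line"): on resume, the saved config is reloaded, command-line values win, and the merged config is written back. A sketch of that load-override-save cycle, assuming a hypothetical helper `override_config` rather than sample-factory's actual implementation:

```python
import json
from pathlib import Path


def override_config(path: str, **overrides) -> dict:
    """Load an experiment config.json, apply CLI-style overrides, save it back."""
    cfg = json.loads(Path(path).read_text())
    for key, value in overrides.items():
        if key in cfg and cfg[key] != value:
            # Mirrors the "Overriding arg ... passed from command line" log line.
            print(f"Overriding arg '{key}' with value {value}")
        cfg[key] = value
    Path(path).write_text(json.dumps(cfg, indent=2))
    return cfg
```

Used as `override_config("config.json", train_for_env_steps=1000000)`, this would produce exactly the one-line diff shown above.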
replay.mp4 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:6b55282d8cd2cab67d7d5d558bbfad544466278619d50dd5c36367a4071609c6
-size 16692227
+oid sha256:b1adecf1c1f0ddcff46a2bfe99613d3e75b615aed67d7e7e5150ed0c16c6f88c
+size 19357564
sf_log.txt CHANGED
@@ -2468,3 +2468,730 @@ main_loop: 38.5571
 [2023-02-24 14:11:36,650][00980] Avg episode rewards: #0: 20.196, true rewards: #0: 8.896
 [2023-02-24 14:11:36,652][00980] Avg episode reward: 20.196, avg true_objective: 8.896
 [2023-02-24 14:12:30,762][00980] Replay video saved to /content/train_dir/default_experiment/replay.mp4!
+[2023-02-24 14:12:33,506][00980] The model has been pushed to https://huggingface.co/mnavas/rl_course_vizdoom_health_gathering_supreme
+[2023-02-24 14:14:51,332][00980] Environment doom_basic already registered, overwriting...
+[2023-02-24 14:14:51,335][00980] Environment doom_two_colors_easy already registered, overwriting...
+[2023-02-24 14:14:51,337][00980] Environment doom_two_colors_hard already registered, overwriting...
+[2023-02-24 14:14:51,338][00980] Environment doom_dm already registered, overwriting...
+[2023-02-24 14:14:51,339][00980] Environment doom_dwango5 already registered, overwriting...
+[2023-02-24 14:14:51,341][00980] Environment doom_my_way_home_flat_actions already registered, overwriting...
+[2023-02-24 14:14:51,342][00980] Environment doom_defend_the_center_flat_actions already registered, overwriting...
+[2023-02-24 14:14:51,343][00980] Environment doom_my_way_home already registered, overwriting...
+[2023-02-24 14:14:51,344][00980] Environment doom_deadly_corridor already registered, overwriting...
+[2023-02-24 14:14:51,345][00980] Environment doom_defend_the_center already registered, overwriting...
+[2023-02-24 14:14:51,347][00980] Environment doom_defend_the_line already registered, overwriting...
+[2023-02-24 14:14:51,348][00980] Environment doom_health_gathering already registered, overwriting...
+[2023-02-24 14:14:51,349][00980] Environment doom_health_gathering_supreme already registered, overwriting...
+[2023-02-24 14:14:51,350][00980] Environment doom_battle already registered, overwriting...
+[2023-02-24 14:14:51,351][00980] Environment doom_battle2 already registered, overwriting...
+[2023-02-24 14:14:51,353][00980] Environment doom_duel_bots already registered, overwriting...
+[2023-02-24 14:14:51,354][00980] Environment doom_deathmatch_bots already registered, overwriting...
+[2023-02-24 14:14:51,356][00980] Environment doom_duel already registered, overwriting...
+[2023-02-24 14:14:51,357][00980] Environment doom_deathmatch_full already registered, overwriting...
+[2023-02-24 14:14:51,358][00980] Environment doom_benchmark already registered, overwriting...
+[2023-02-24 14:14:51,359][00980] register_encoder_factory: <function make_vizdoom_encoder at 0x7ff7e26f99d0>
+[2023-02-24 14:14:51,386][00980] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
+[2023-02-24 14:14:51,387][00980] Overriding arg 'train_for_env_steps' with value 1000000 passed from command line
+[2023-02-24 14:14:51,393][00980] Experiment dir /content/train_dir/default_experiment already exists!
+[2023-02-24 14:14:51,398][00980] Resuming existing experiment from /content/train_dir/default_experiment...
+[2023-02-24 14:14:51,399][00980] Weights and Biases integration disabled
+[2023-02-24 14:14:51,406][00980] Environment var CUDA_VISIBLE_DEVICES is 0
+
+[2023-02-24 14:14:53,401][00980] Starting experiment with the following configuration:
+help=False
+algo=APPO
+env=doom_health_gathering_supreme
+experiment=default_experiment
+train_dir=/content/train_dir
+restart_behavior=resume
+device=gpu
+seed=None
+num_policies=1
+async_rl=True
+serial_mode=False
+batched_sampling=False
+num_batches_to_accumulate=2
+worker_num_splits=2
+policy_workers_per_policy=1
+max_policy_lag=1000
+num_workers=8
+num_envs_per_worker=4
+batch_size=1024
+num_batches_per_epoch=1
+num_epochs=1
+rollout=32
+recurrence=32
+shuffle_minibatches=False
+gamma=0.99
+reward_scale=1.0
+reward_clip=1000.0
+value_bootstrap=False
+normalize_returns=True
+exploration_loss_coeff=0.001
+value_loss_coeff=0.5
+kl_loss_coeff=0.0
+exploration_loss=symmetric_kl
+gae_lambda=0.95
+ppo_clip_ratio=0.1
+ppo_clip_value=0.2
+with_vtrace=False
+vtrace_rho=1.0
+vtrace_c=1.0
+optimizer=adam
+adam_eps=1e-06
+adam_beta1=0.9
+adam_beta2=0.999
+max_grad_norm=4.0
+learning_rate=0.0001
+lr_schedule=constant
+lr_schedule_kl_threshold=0.008
+lr_adaptive_min=1e-06
+lr_adaptive_max=0.01
+obs_subtract_mean=0.0
+obs_scale=255.0
+normalize_input=True
+normalize_input_keys=None
+decorrelate_experience_max_seconds=0
+decorrelate_envs_on_one_worker=True
+actor_worker_gpus=[]
+set_workers_cpu_affinity=True
+force_envs_single_thread=False
+default_niceness=0
+log_to_file=True
+experiment_summaries_interval=10
+flush_summaries_interval=30
+stats_avg=100
+summaries_use_frameskip=True
+heartbeat_interval=20
+heartbeat_reporting_interval=600
+train_for_env_steps=1000000
+train_for_seconds=10000000000
+save_every_sec=120
+keep_checkpoints=2
+load_checkpoint_kind=latest
+save_milestones_sec=-1
+save_best_every_sec=5
+save_best_metric=reward
+save_best_after=100000
+benchmark=False
+encoder_mlp_layers=[512, 512]
+encoder_conv_architecture=convnet_simple
+encoder_conv_mlp_layers=[512]
+use_rnn=True
+rnn_size=512
+rnn_type=gru
+rnn_num_layers=1
+decoder_mlp_layers=[]
+nonlinearity=elu
+policy_initialization=orthogonal
+policy_init_gain=1.0
+actor_critic_share_weights=True
+adaptive_stddev=True
+continuous_tanh_scale=0.0
+initial_stddev=1.0
+use_env_info_cache=False
+env_gpu_actions=False
+env_gpu_observations=True
+env_frameskip=4
+env_framestack=1
+pixel_format=CHW
+use_record_episode_statistics=False
+with_wandb=False
+wandb_user=None
+wandb_project=sample_factory
+wandb_group=None
+wandb_job_type=SF
+wandb_tags=[]
+with_pbt=False
+pbt_mix_policies_in_one_env=True
+pbt_period_env_steps=5000000
+pbt_start_mutation=20000000
+pbt_replace_fraction=0.3
+pbt_mutation_rate=0.15
+pbt_replace_reward_gap=0.1
+pbt_replace_reward_gap_absolute=1e-06
+pbt_optimize_gamma=False
+pbt_target_objective=true_objective
+pbt_perturb_min=1.1
+pbt_perturb_max=1.5
+num_agents=-1
+num_humans=0
+num_bots=-1
+start_bot_difficulty=None
+timelimit=None
+res_w=128
+res_h=72
+wide_aspect_ratio=False
+eval_env_frameskip=1
+fps=35
+command_line=--env=doom_health_gathering_supreme --num_workers=8 --num_envs_per_worker=4 --train_for_env_steps=4000000
+cli_args={'env': 'doom_health_gathering_supreme', 'num_workers': 8, 'num_envs_per_worker': 4, 'train_for_env_steps': 4000000}
+git_hash=unknown
+git_repo_name=not a git repository
+[2023-02-24 14:14:53,404][00980] Saving configuration to /content/train_dir/default_experiment/config.json...
+[2023-02-24 14:14:53,407][00980] Rollout worker 0 uses device cpu
+[2023-02-24 14:14:53,408][00980] Rollout worker 1 uses device cpu
+[2023-02-24 14:14:53,409][00980] Rollout worker 2 uses device cpu
+[2023-02-24 14:14:53,414][00980] Rollout worker 3 uses device cpu
+[2023-02-24 14:14:53,415][00980] Rollout worker 4 uses device cpu
+[2023-02-24 14:14:53,416][00980] Rollout worker 5 uses device cpu
+[2023-02-24 14:14:53,417][00980] Rollout worker 6 uses device cpu
+[2023-02-24 14:14:53,418][00980] Rollout worker 7 uses device cpu
+[2023-02-24 14:14:53,580][00980] Using GPUs [0] for process 0 (actually maps to GPUs [0])
+[2023-02-24 14:14:53,582][00980] InferenceWorker_p0-w0: min num requests: 2
+[2023-02-24 14:14:53,630][00980] Starting all processes...
+[2023-02-24 14:14:53,634][00980] Starting process learner_proc0
+[2023-02-24 14:14:53,825][00980] Starting all processes...
+[2023-02-24 14:14:53,836][00980] Starting process inference_proc0-0
+[2023-02-24 14:14:53,837][00980] Starting process rollout_proc0
+[2023-02-24 14:14:53,837][00980] Starting process rollout_proc1
+[2023-02-24 14:14:53,837][00980] Starting process rollout_proc2
+[2023-02-24 14:14:53,837][00980] Starting process rollout_proc3
+[2023-02-24 14:14:53,968][00980] Starting process rollout_proc4
+[2023-02-24 14:14:53,982][00980] Starting process rollout_proc5
+[2023-02-24 14:14:53,987][00980] Starting process rollout_proc6
+[2023-02-24 14:14:53,993][00980] Starting process rollout_proc7
+[2023-02-24 14:15:03,371][26253] Using GPUs [0] for process 0 (actually maps to GPUs [0])
+[2023-02-24 14:15:03,375][26253] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0
+[2023-02-24 14:15:03,434][26253] Num visible devices: 1
+[2023-02-24 14:15:03,466][26253] Starting seed is not provided
+[2023-02-24 14:15:03,467][26253] Using GPUs [0] for process 0 (actually maps to GPUs [0])
+[2023-02-24 14:15:03,468][26253] Initializing actor-critic model on device cuda:0
+[2023-02-24 14:15:03,469][26253] RunningMeanStd input shape: (3, 72, 128)
+[2023-02-24 14:15:03,470][26253] RunningMeanStd input shape: (1,)
+[2023-02-24 14:15:03,546][26253] ConvEncoder: input_channels=3
+[2023-02-24 14:15:04,380][26267] Using GPUs [0] for process 0 (actually maps to GPUs [0])
+[2023-02-24 14:15:04,381][26267] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0
+[2023-02-24 14:15:04,438][26267] Num visible devices: 1
+[2023-02-24 14:15:04,447][26253] Conv encoder output size: 512
+[2023-02-24 14:15:04,451][26253] Policy head output size: 512
+[2023-02-24 14:15:04,535][26253] Created Actor Critic model with architecture:
+[2023-02-24 14:15:04,537][26253] ActorCriticSharedWeights(
+  (obs_normalizer): ObservationNormalizer(
+    (running_mean_std): RunningMeanStdDictInPlace(
+      (running_mean_std): ModuleDict(
+        (obs): RunningMeanStdInPlace()
+      )
+    )
+  )
+  (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace)
+  (encoder): VizdoomEncoder(
+    (basic_encoder): ConvEncoder(
+      (enc): RecursiveScriptModule(
+        original_name=ConvEncoderImpl
+        (conv_head): RecursiveScriptModule(
+          original_name=Sequential
+          (0): RecursiveScriptModule(original_name=Conv2d)
+          (1): RecursiveScriptModule(original_name=ELU)
+          (2): RecursiveScriptModule(original_name=Conv2d)
+          (3): RecursiveScriptModule(original_name=ELU)
+          (4): RecursiveScriptModule(original_name=Conv2d)
+          (5): RecursiveScriptModule(original_name=ELU)
+        )
+        (mlp_layers): RecursiveScriptModule(
+          original_name=Sequential
+          (0): RecursiveScriptModule(original_name=Linear)
+          (1): RecursiveScriptModule(original_name=ELU)
+        )
+      )
+    )
+  )
+  (core): ModelCoreRNN(
+    (core): GRU(512, 512)
+  )
+  (decoder): MlpDecoder(
+    (mlp): Identity()
+  )
+  (critic_linear): Linear(in_features=512, out_features=1, bias=True)
+  (action_parameterization): ActionParameterizationDefault(
+    (distribution_linear): Linear(in_features=512, out_features=5, bias=True)
+  )
+)
+[2023-02-24 14:15:04,916][26268] Worker 1 uses CPU cores [1]
+[2023-02-24 14:15:05,071][26270] Worker 0 uses CPU cores [0]
+[2023-02-24 14:15:05,141][26272] Worker 3 uses CPU cores [1]
+[2023-02-24 14:15:05,438][26278] Worker 2 uses CPU cores [0]
+[2023-02-24 14:15:05,710][26282] Worker 6 uses CPU cores [0]
+[2023-02-24 14:15:05,772][26280] Worker 4 uses CPU cores [0]
+[2023-02-24 14:15:05,851][26288] Worker 7 uses CPU cores [1]
+[2023-02-24 14:15:05,918][26290] Worker 5 uses CPU cores [1]
+[2023-02-24 14:15:08,019][26253] Using optimizer <class 'torch.optim.adam.Adam'>
+[2023-02-24 14:15:08,021][26253] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000982_4022272.pth...
+[2023-02-24 14:15:08,064][26253] Loading model from checkpoint
+[2023-02-24 14:15:08,071][26253] Loaded experiment state at self.train_step=982, self.env_steps=4022272
+[2023-02-24 14:15:08,072][26253] Initialized policy 0 weights for model version 982
+[2023-02-24 14:15:08,083][26253] Using GPUs [0] for process 0 (actually maps to GPUs [0])
+[2023-02-24 14:15:08,090][26253] LearnerWorker_p0 finished initialization!
+[2023-02-24 14:15:08,365][26267] RunningMeanStd input shape: (3, 72, 128)
+[2023-02-24 14:15:08,367][26267] RunningMeanStd input shape: (1,)
+[2023-02-24 14:15:08,389][26267] ConvEncoder: input_channels=3
+[2023-02-24 14:15:08,548][26267] Conv encoder output size: 512
+[2023-02-24 14:15:08,549][26267] Policy head output size: 512
+[2023-02-24 14:15:11,407][00980] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 4022272. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
+[2023-02-24 14:15:11,633][00980] Inference worker 0-0 is ready!
+[2023-02-24 14:15:11,635][00980] All inference workers are ready! Signal rollout workers to start!
+[2023-02-24 14:15:11,741][26272] Doom resolution: 160x120, resize resolution: (128, 72)
+[2023-02-24 14:15:11,744][26288] Doom resolution: 160x120, resize resolution: (128, 72)
+[2023-02-24 14:15:11,740][26290] Doom resolution: 160x120, resize resolution: (128, 72)
+[2023-02-24 14:15:11,738][26268] Doom resolution: 160x120, resize resolution: (128, 72)
+[2023-02-24 14:15:11,837][26270] Doom resolution: 160x120, resize resolution: (128, 72)
+[2023-02-24 14:15:11,846][26278] Doom resolution: 160x120, resize resolution: (128, 72)
+[2023-02-24 14:15:11,840][26282] Doom resolution: 160x120, resize resolution: (128, 72)
+[2023-02-24 14:15:11,854][26280] Doom resolution: 160x120, resize resolution: (128, 72)
+[2023-02-24 14:15:12,819][26280] Decorrelating experience for 0 frames...
+[2023-02-24 14:15:12,826][26270] Decorrelating experience for 0 frames...
+[2023-02-24 14:15:13,068][26288] Decorrelating experience for 0 frames...
+[2023-02-24 14:15:13,073][26290] Decorrelating experience for 0 frames...
+[2023-02-24 14:15:13,078][26272] Decorrelating experience for 0 frames...
+[2023-02-24 14:15:13,323][26280] Decorrelating experience for 32 frames...
+[2023-02-24 14:15:13,570][00980] Heartbeat connected on Batcher_0
+[2023-02-24 14:15:13,575][00980] Heartbeat connected on LearnerWorker_p0
+[2023-02-24 14:15:13,606][00980] Heartbeat connected on InferenceWorker_p0-w0
+[2023-02-24 14:15:13,949][26270] Decorrelating experience for 32 frames...
+[2023-02-24 14:15:13,964][26278] Decorrelating experience for 0 frames...
+[2023-02-24 14:15:14,383][26278] Decorrelating experience for 32 frames...
+[2023-02-24 14:15:14,500][26290] Decorrelating experience for 32 frames...
+[2023-02-24 14:15:14,514][26288] Decorrelating experience for 32 frames...
+[2023-02-24 14:15:14,521][26268] Decorrelating experience for 0 frames...
+[2023-02-24 14:15:14,519][26272] Decorrelating experience for 32 frames...
+[2023-02-24 14:15:15,359][26278] Decorrelating experience for 64 frames...
+[2023-02-24 14:15:15,390][26268] Decorrelating experience for 32 frames...
+[2023-02-24 14:15:15,574][26290] Decorrelating experience for 64 frames...
+[2023-02-24 14:15:15,614][26280] Decorrelating experience for 64 frames...
+[2023-02-24 14:15:15,688][26270] Decorrelating experience for 64 frames...
+[2023-02-24 14:15:16,407][00980] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 4022272. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
+[2023-02-24 14:15:16,520][26272] Decorrelating experience for 64 frames...
+[2023-02-24 14:15:16,649][26278] Decorrelating experience for 96 frames...
+[2023-02-24 14:15:16,700][26268] Decorrelating experience for 64 frames...
+[2023-02-24 14:15:16,752][26288] Decorrelating experience for 64 frames...
+[2023-02-24 14:15:16,844][00980] Heartbeat connected on RolloutWorker_w2
+[2023-02-24 14:15:16,853][26282] Decorrelating experience for 0 frames...
+[2023-02-24 14:15:16,994][26280] Decorrelating experience for 96 frames...
+[2023-02-24 14:15:17,218][00980] Heartbeat connected on RolloutWorker_w4
+[2023-02-24 14:15:17,608][26270] Decorrelating experience for 96 frames...
+[2023-02-24 14:15:17,961][00980] Heartbeat connected on RolloutWorker_w0
+[2023-02-24 14:15:18,366][26272] Decorrelating experience for 96 frames...
+[2023-02-24 14:15:18,585][26282] Decorrelating experience for 32 frames...
+[2023-02-24 14:15:18,665][00980] Heartbeat connected on RolloutWorker_w3
+[2023-02-24 14:15:18,671][26290] Decorrelating experience for 96 frames...
+[2023-02-24 14:15:18,677][26268] Decorrelating experience for 96 frames...
+[2023-02-24 14:15:18,724][26288] Decorrelating experience for 96 frames...
+[2023-02-24 14:15:19,012][00980] Heartbeat connected on RolloutWorker_w5
+[2023-02-24 14:15:19,020][00980] Heartbeat connected on RolloutWorker_w1
+[2023-02-24 14:15:19,070][00980] Heartbeat connected on RolloutWorker_w7
+[2023-02-24 14:15:21,407][00980] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 4022272. Throughput: 0: 175.6. Samples: 1756. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
+[2023-02-24 14:15:21,413][00980] Avg episode reward: [(0, '2.045')]
+[2023-02-24 14:15:21,489][26253] Signal inference workers to stop experience collection...
+[2023-02-24 14:15:21,510][26267] InferenceWorker_p0-w0: stopping experience collection
+[2023-02-24 14:15:21,564][26282] Decorrelating experience for 64 frames...
+[2023-02-24 14:15:22,135][26282] Decorrelating experience for 96 frames...
+[2023-02-24 14:15:22,232][00980] Heartbeat connected on RolloutWorker_w6
+[2023-02-24 14:15:24,615][26253] Signal inference workers to resume experience collection...
+[2023-02-24 14:15:24,647][26253] Stopping Batcher_0...
+[2023-02-24 14:15:24,648][26253] Loop batcher_evt_loop terminating...
+[2023-02-24 14:15:24,644][26267] Weights refcount: 2 0
+[2023-02-24 14:15:24,648][00980] Component Batcher_0 stopped!
+[2023-02-24 14:15:24,658][26267] Stopping InferenceWorker_p0-w0...
+[2023-02-24 14:15:24,659][26267] Loop inference_proc0-0_evt_loop terminating...
+[2023-02-24 14:15:24,658][00980] Component InferenceWorker_p0-w0 stopped!
+[2023-02-24 14:15:24,856][00980] Component RolloutWorker_w7 stopped!
+[2023-02-24 14:15:24,860][26272] Stopping RolloutWorker_w3...
+[2023-02-24 14:15:24,861][00980] Component RolloutWorker_w3 stopped!
+[2023-02-24 14:15:24,861][26288] Stopping RolloutWorker_w7...
+[2023-02-24 14:15:24,868][26288] Loop rollout_proc7_evt_loop terminating...
+[2023-02-24 14:15:24,871][26272] Loop rollout_proc3_evt_loop terminating...
+[2023-02-24 14:15:24,877][00980] Component RolloutWorker_w0 stopped!
+[2023-02-24 14:15:24,877][26268] Stopping RolloutWorker_w1...
+[2023-02-24 14:15:24,878][26290] Stopping RolloutWorker_w5...
+[2023-02-24 14:15:24,881][00980] Component RolloutWorker_w1 stopped!
+[2023-02-24 14:15:24,886][00980] Component RolloutWorker_w5 stopped!
+[2023-02-24 14:15:24,881][26268] Loop rollout_proc1_evt_loop terminating...
+[2023-02-24 14:15:24,881][26290] Loop rollout_proc5_evt_loop terminating...
+[2023-02-24 14:15:24,899][00980] Component RolloutWorker_w4 stopped!
+[2023-02-24 14:15:24,905][26280] Stopping RolloutWorker_w4...
+[2023-02-24 14:15:24,906][26280] Loop rollout_proc4_evt_loop terminating...
+[2023-02-24 14:15:24,911][26282] Stopping RolloutWorker_w6...
+[2023-02-24 14:15:24,911][26282] Loop rollout_proc6_evt_loop terminating...
+[2023-02-24 14:15:24,880][26270] Stopping RolloutWorker_w0...
+[2023-02-24 14:15:24,914][26270] Loop rollout_proc0_evt_loop terminating...
+[2023-02-24 14:15:24,910][00980] Component RolloutWorker_w6 stopped!
+[2023-02-24 14:15:24,931][26278] Stopping RolloutWorker_w2...
+[2023-02-24 14:15:24,932][26278] Loop rollout_proc2_evt_loop terminating...
+[2023-02-24 14:15:24,931][00980] Component RolloutWorker_w2 stopped!
+[2023-02-24 14:15:28,114][26253] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000984_4030464.pth...
+[2023-02-24 14:15:28,273][26253] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000980_4014080.pth
+[2023-02-24 14:15:28,279][26253] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000984_4030464.pth...
+[2023-02-24 14:15:28,479][00980] Component LearnerWorker_p0 stopped!
+[2023-02-24 14:15:28,483][00980] Waiting for process learner_proc0 to stop...
+[2023-02-24 14:15:28,485][26253] Stopping LearnerWorker_p0...
+[2023-02-24 14:15:28,486][26253] Loop learner_proc0_evt_loop terminating...
+[2023-02-24 14:15:29,668][00980] Waiting for process inference_proc0-0 to join...
+[2023-02-24 14:15:29,670][00980] Waiting for process rollout_proc0 to join...
+[2023-02-24 14:15:29,672][00980] Waiting for process rollout_proc1 to join...
+[2023-02-24 14:15:29,674][00980] Waiting for process rollout_proc2 to join...
+[2023-02-24 14:15:29,678][00980] Waiting for process rollout_proc3 to join...
+[2023-02-24 14:15:29,680][00980] Waiting for process rollout_proc4 to join...
+[2023-02-24 14:15:29,682][00980] Waiting for process rollout_proc5 to join...
+[2023-02-24 14:15:29,685][00980] Waiting for process rollout_proc6 to join...
+[2023-02-24 14:15:29,687][00980] Waiting for process rollout_proc7 to join...
+[2023-02-24 14:15:29,689][00980] Batcher 0 profile tree view:
+batching: 0.0539, releasing_batches: 0.0311
+[2023-02-24 14:15:29,691][00980] InferenceWorker_p0-w0 profile tree view:
+update_model: 0.0124
+wait_policy: 0.0012
+wait_policy_total: 6.5012
+one_step: 0.0023
+handle_policy_step: 3.1320
+deserialize: 0.0360, stack: 0.0068, obs_to_device_normalize: 0.2785, forward: 2.5249, send_messages: 0.0576
+prepare_outputs: 0.1663
+to_cpu: 0.0961
+[2023-02-24 14:15:29,693][00980] Learner 0 profile tree view:
+misc: 0.0000, prepare_batch: 6.1913
+train: 1.6359
+epoch_init: 0.0000, minibatch_init: 0.0000, losses_postprocess: 0.0004, kl_divergence: 0.0013, after_optimizer: 0.0296
+calculate_losses: 0.2418
+losses_init: 0.0000, forward_head: 0.1123, bptt_initial: 0.1049, tail: 0.0029, advantages_returns: 0.0009, losses: 0.0165
+bptt: 0.0040
+bptt_forward_core: 0.0039
+update: 1.3507
+clip: 0.0144
+[2023-02-24 14:15:29,696][00980] RolloutWorker_w0 profile tree view:
+wait_for_trajectories: 0.0009, enqueue_policy_requests: 0.4267, env_step: 2.1053, overhead: 0.0685, complete_rollouts: 0.0505
+save_policy_outputs: 0.0325
+split_output_tensors: 0.0155
+[2023-02-24 14:15:29,698][00980] RolloutWorker_w7 profile tree view:
+wait_for_trajectories: 0.0007, enqueue_policy_requests: 0.4189, env_step: 1.8447, overhead: 0.0353, complete_rollouts: 0.0094
+save_policy_outputs: 0.0314
+split_output_tensors: 0.0157
+[2023-02-24 14:15:29,702][00980] Loop Runner_EvtLoop terminating...
+[2023-02-24 14:15:29,706][00980] Runner profile tree view:
+main_loop: 36.0757
+[2023-02-24 14:15:29,709][00980] Collected {0: 4030464}, FPS: 227.1
+[2023-02-24 14:15:29,766][00980] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
+[2023-02-24 14:15:29,767][00980] Overriding arg 'num_workers' with value 1 passed from command line
+[2023-02-24 14:15:29,770][00980] Adding new argument 'no_render'=True that is not in the saved config file!
+[2023-02-24 14:15:29,772][00980] Adding new argument 'save_video'=True that is not in the saved config file!
+[2023-02-24 14:15:29,773][00980] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
+[2023-02-24 14:15:29,777][00980] Adding new argument 'video_name'=None that is not in the saved config file!
+[2023-02-24 14:15:29,779][00980] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file!
+[2023-02-24 14:15:29,785][00980] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
+[2023-02-24 14:15:29,787][00980] Adding new argument 'push_to_hub'=False that is not in the saved config file!
+[2023-02-24 14:15:29,790][00980] Adding new argument 'hf_repository'=None that is not in the saved config file!
+[2023-02-24 14:15:29,792][00980] Adding new argument 'policy_index'=0 that is not in the saved config file!
+[2023-02-24 14:15:29,794][00980] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
+[2023-02-24 14:15:29,796][00980] Adding new argument 'train_script'=None that is not in the saved config file!
+[2023-02-24 14:15:29,798][00980] Adding new argument 'enjoy_script'=None that is not in the saved config file!
+[2023-02-24 14:15:29,799][00980] Using frameskip 1 and render_action_repeat=4 for evaluation
+[2023-02-24 14:15:29,822][00980] RunningMeanStd input shape: (3, 72, 128)
+[2023-02-24 14:15:29,824][00980] RunningMeanStd input shape: (1,)
+[2023-02-24 14:15:29,839][00980] ConvEncoder: input_channels=3
+[2023-02-24 14:15:29,891][00980] Conv encoder output size: 512
+[2023-02-24 14:15:29,893][00980] Policy head output size: 512
+[2023-02-24 14:15:29,920][00980] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000984_4030464.pth...
+[2023-02-24 14:15:30,396][00980] Num frames 100...
+[2023-02-24 14:15:30,515][00980] Num frames 200...
+[2023-02-24 14:15:30,642][00980] Num frames 300...
+[2023-02-24 14:15:30,770][00980] Num frames 400...
+[2023-02-24 14:15:30,891][00980] Num frames 500...
+[2023-02-24 14:15:31,011][00980] Num frames 600...
+[2023-02-24 14:15:31,130][00980] Num frames 700...
+[2023-02-24 14:15:31,247][00980] Num frames 800...
+[2023-02-24 14:15:31,380][00980] Num frames 900...
+[2023-02-24 14:15:31,500][00980] Num frames 1000...
+[2023-02-24 14:15:31,622][00980] Num frames 1100...
+[2023-02-24 14:15:31,753][00980] Num frames 1200...
+[2023-02-24 14:15:31,868][00980] Num frames 1300...
+[2023-02-24 14:15:31,937][00980] Avg episode rewards: #0: 32.120, true rewards: #0: 13.120
+[2023-02-24 14:15:31,941][00980] Avg episode reward: 32.120, avg true_objective: 13.120
+[2023-02-24 14:15:32,038][00980] Num frames 1400...
+[2023-02-24 14:15:32,150][00980] Num frames 1500...
+[2023-02-24 14:15:32,267][00980] Num frames 1600...
+[2023-02-24 14:15:32,426][00980] Num frames 1700...
+[2023-02-24 14:15:32,547][00980] Num frames 1800...
+[2023-02-24 14:15:32,662][00980] Num frames 1900...
+[2023-02-24 14:15:32,779][00980] Num frames 2000...
+[2023-02-24 14:15:32,896][00980] Num frames 2100...
+[2023-02-24 14:15:33,028][00980] Num frames 2200...
+[2023-02-24 14:15:33,171][00980] Avg episode rewards: #0: 26.360, true rewards: #0: 11.360
+[2023-02-24 14:15:33,172][00980] Avg episode reward: 26.360, avg true_objective: 11.360
+[2023-02-24 14:15:33,210][00980] Num frames 2300...
+[2023-02-24 14:15:33,326][00980] Num frames 2400...
+[2023-02-24 14:15:33,457][00980] Num frames 2500...
+[2023-02-24 14:15:33,571][00980] Num frames 2600...
+[2023-02-24 14:15:33,687][00980] Num frames 2700...
+[2023-02-24 14:15:33,804][00980] Num frames 2800...
+[2023-02-24 14:15:33,921][00980] Num frames 2900...
+[2023-02-24 14:15:34,041][00980] Num frames 3000...
+[2023-02-24 14:15:34,159][00980] Num frames 3100...
+[2023-02-24 14:15:34,273][00980] Num frames 3200...
+[2023-02-24 14:15:34,400][00980] Num frames 3300...
+[2023-02-24 14:15:34,516][00980] Num frames 3400...
+[2023-02-24 14:15:34,640][00980] Num frames 3500...
+[2023-02-24 14:15:34,761][00980] Num frames 3600...
+[2023-02-24 14:15:34,880][00980] Num frames 3700...
+[2023-02-24 14:15:35,005][00980] Num frames 3800...
+[2023-02-24 14:15:35,126][00980] Num frames 3900...
+[2023-02-24 14:15:35,241][00980] Num frames 4000...
+[2023-02-24 14:15:35,361][00980] Num frames 4100...
+[2023-02-24 14:15:35,490][00980] Num frames 4200...
+[2023-02-24 14:15:35,608][00980] Num frames 4300...
+[2023-02-24 14:15:35,744][00980] Avg episode rewards: #0: 36.906, true rewards: #0: 14.573
+[2023-02-24 14:15:35,747][00980] Avg episode reward: 36.906, avg true_objective: 14.573
+[2023-02-24 14:15:35,781][00980] Num frames 4400...
+[2023-02-24 14:15:35,898][00980] Num frames 4500...
+[2023-02-24 14:15:36,012][00980] Num frames 4600...
+[2023-02-24 14:15:36,127][00980] Num frames 4700...
+[2023-02-24 14:15:36,241][00980] Num frames 4800...
+[2023-02-24 14:15:36,362][00980] Num frames 4900...
+[2023-02-24 14:15:36,488][00980] Num frames 5000...
+[2023-02-24 14:15:36,611][00980] Num frames 5100...
+[2023-02-24 14:15:36,733][00980] Num frames 5200...
+[2023-02-24 14:15:36,851][00980] Num frames 5300...
2950
+ [2023-02-24 14:15:36,973][00980] Num frames 5400...
2951
+ [2023-02-24 14:15:37,089][00980] Num frames 5500...
2952
+ [2023-02-24 14:15:37,211][00980] Num frames 5600...
2953
+ [2023-02-24 14:15:37,290][00980] Avg episode rewards: #0: 34.550, true rewards: #0: 14.050
2954
+ [2023-02-24 14:15:37,292][00980] Avg episode reward: 34.550, avg true_objective: 14.050
2955
+ [2023-02-24 14:15:37,387][00980] Num frames 5700...
2956
+ [2023-02-24 14:15:37,516][00980] Num frames 5800...
2957
+ [2023-02-24 14:15:37,633][00980] Num frames 5900...
2958
+ [2023-02-24 14:15:37,749][00980] Num frames 6000...
2959
+ [2023-02-24 14:15:37,864][00980] Num frames 6100...
2960
+ [2023-02-24 14:15:37,987][00980] Num frames 6200...
2961
+ [2023-02-24 14:15:38,103][00980] Num frames 6300...
2962
+ [2023-02-24 14:15:38,225][00980] Num frames 6400...
2963
+ [2023-02-24 14:15:38,353][00980] Num frames 6500...
2964
+ [2023-02-24 14:15:38,486][00980] Num frames 6600...
2965
+ [2023-02-24 14:15:38,604][00980] Num frames 6700...
2966
+ [2023-02-24 14:15:38,729][00980] Num frames 6800...
2967
+ [2023-02-24 14:15:38,852][00980] Num frames 6900...
2968
+ [2023-02-24 14:15:38,971][00980] Num frames 7000...
2969
+ [2023-02-24 14:15:39,085][00980] Num frames 7100...
2970
+ [2023-02-24 14:15:39,209][00980] Num frames 7200...
2971
+ [2023-02-24 14:15:39,329][00980] Num frames 7300...
2972
+ [2023-02-24 14:15:39,455][00980] Num frames 7400...
2973
+ [2023-02-24 14:15:39,636][00980] Num frames 7500...
2974
+ [2023-02-24 14:15:39,812][00980] Num frames 7600...
2975
+ [2023-02-24 14:15:39,987][00980] Num frames 7700...
2976
+ [2023-02-24 14:15:40,080][00980] Avg episode rewards: #0: 38.440, true rewards: #0: 15.440
2977
+ [2023-02-24 14:15:40,085][00980] Avg episode reward: 38.440, avg true_objective: 15.440
2978
+ [2023-02-24 14:15:40,231][00980] Num frames 7800...
2979
+ [2023-02-24 14:15:40,392][00980] Num frames 7900...
2980
+ [2023-02-24 14:15:40,558][00980] Num frames 8000...
2981
+ [2023-02-24 14:15:40,738][00980] Num frames 8100...
2982
+ [2023-02-24 14:15:40,913][00980] Num frames 8200...
2983
+ [2023-02-24 14:15:41,077][00980] Num frames 8300...
2984
+ [2023-02-24 14:15:41,241][00980] Num frames 8400...
2985
+ [2023-02-24 14:15:41,422][00980] Num frames 8500...
2986
+ [2023-02-24 14:15:41,602][00980] Num frames 8600...
2987
+ [2023-02-24 14:15:41,767][00980] Num frames 8700...
2988
+ [2023-02-24 14:15:41,940][00980] Num frames 8800...
2989
+ [2023-02-24 14:15:42,110][00980] Num frames 8900...
2990
+ [2023-02-24 14:15:42,289][00980] Num frames 9000...
2991
+ [2023-02-24 14:15:42,463][00980] Num frames 9100...
2992
+ [2023-02-24 14:15:42,635][00980] Num frames 9200...
2993
+ [2023-02-24 14:15:42,803][00980] Num frames 9300...
2994
+ [2023-02-24 14:15:42,946][00980] Avg episode rewards: #0: 39.086, true rewards: #0: 15.587
2995
+ [2023-02-24 14:15:42,949][00980] Avg episode reward: 39.086, avg true_objective: 15.587
2996
+ [2023-02-24 14:15:43,029][00980] Num frames 9400...
2997
+ [2023-02-24 14:15:43,183][00980] Num frames 9500...
2998
+ [2023-02-24 14:15:43,304][00980] Num frames 9600...
2999
+ [2023-02-24 14:15:43,420][00980] Num frames 9700...
3000
+ [2023-02-24 14:15:43,533][00980] Num frames 9800...
3001
+ [2023-02-24 14:15:43,655][00980] Num frames 9900...
3002
+ [2023-02-24 14:15:43,779][00980] Num frames 10000...
3003
+ [2023-02-24 14:15:43,897][00980] Num frames 10100...
3004
+ [2023-02-24 14:15:44,021][00980] Avg episode rewards: #0: 35.645, true rewards: #0: 14.503
3005
+ [2023-02-24 14:15:44,023][00980] Avg episode reward: 35.645, avg true_objective: 14.503
3006
+ [2023-02-24 14:15:44,080][00980] Num frames 10200...
3007
+ [2023-02-24 14:15:44,204][00980] Num frames 10300...
3008
+ [2023-02-24 14:15:44,320][00980] Num frames 10400...
3009
+ [2023-02-24 14:15:44,439][00980] Num frames 10500...
3010
+ [2023-02-24 14:15:44,561][00980] Num frames 10600...
3011
+ [2023-02-24 14:15:44,685][00980] Num frames 10700...
3012
+ [2023-02-24 14:15:44,805][00980] Num frames 10800...
3013
+ [2023-02-24 14:15:44,890][00980] Avg episode rewards: #0: 33.030, true rewards: #0: 13.530
3014
+ [2023-02-24 14:15:44,892][00980] Avg episode reward: 33.030, avg true_objective: 13.530
3015
+ [2023-02-24 14:15:44,983][00980] Num frames 10900...
3016
+ [2023-02-24 14:15:45,100][00980] Num frames 11000...
3017
+ [2023-02-24 14:15:45,217][00980] Num frames 11100...
3018
+ [2023-02-24 14:15:45,335][00980] Num frames 11200...
3019
+ [2023-02-24 14:15:45,451][00980] Num frames 11300...
3020
+ [2023-02-24 14:15:45,575][00980] Num frames 11400...
3021
+ [2023-02-24 14:15:45,700][00980] Num frames 11500...
3022
+ [2023-02-24 14:15:45,823][00980] Num frames 11600...
3023
+ [2023-02-24 14:15:45,942][00980] Num frames 11700...
3024
+ [2023-02-24 14:15:46,060][00980] Num frames 11800...
3025
+ [2023-02-24 14:15:46,186][00980] Num frames 11900...
3026
+ [2023-02-24 14:15:46,306][00980] Num frames 12000...
3027
+ [2023-02-24 14:15:46,428][00980] Num frames 12100...
3028
+ [2023-02-24 14:15:46,545][00980] Num frames 12200...
3029
+ [2023-02-24 14:15:46,675][00980] Num frames 12300...
3030
+ [2023-02-24 14:15:46,792][00980] Num frames 12400...
3031
+ [2023-02-24 14:15:46,916][00980] Num frames 12500...
3032
+ [2023-02-24 14:15:47,032][00980] Num frames 12600...
3033
+ [2023-02-24 14:15:47,152][00980] Num frames 12700...
3034
+ [2023-02-24 14:15:47,277][00980] Num frames 12800...
3035
+ [2023-02-24 14:15:47,398][00980] Num frames 12900...
3036
+ [2023-02-24 14:15:47,490][00980] Avg episode rewards: #0: 36.026, true rewards: #0: 14.360
3037
+ [2023-02-24 14:15:47,491][00980] Avg episode reward: 36.026, avg true_objective: 14.360
3038
+ [2023-02-24 14:15:47,582][00980] Num frames 13000...
3039
+ [2023-02-24 14:15:47,716][00980] Num frames 13100...
3040
+ [2023-02-24 14:15:47,833][00980] Num frames 13200...
3041
+ [2023-02-24 14:15:47,956][00980] Num frames 13300...
3042
+ [2023-02-24 14:15:48,076][00980] Num frames 13400...
3043
+ [2023-02-24 14:15:48,190][00980] Num frames 13500...
3044
+ [2023-02-24 14:15:48,313][00980] Num frames 13600...
3045
+ [2023-02-24 14:15:48,435][00980] Num frames 13700...
3046
+ [2023-02-24 14:15:48,560][00980] Num frames 13800...
3047
+ [2023-02-24 14:15:48,680][00980] Num frames 13900...
3048
+ [2023-02-24 14:15:48,804][00980] Num frames 14000...
3049
+ [2023-02-24 14:15:48,921][00980] Num frames 14100...
3050
+ [2023-02-24 14:15:49,040][00980] Num frames 14200...
3051
+ [2023-02-24 14:15:49,157][00980] Num frames 14300...
3052
+ [2023-02-24 14:15:49,279][00980] Num frames 14400...
3053
+ [2023-02-24 14:15:49,411][00980] Num frames 14500...
3054
+ [2023-02-24 14:15:49,532][00980] Avg episode rewards: #0: 36.556, true rewards: #0: 14.556
3055
+ [2023-02-24 14:15:49,535][00980] Avg episode reward: 36.556, avg true_objective: 14.556
3056
+ [2023-02-24 14:17:18,905][00980] Replay video saved to /content/train_dir/default_experiment/replay.mp4!
3057
+ [2023-02-24 14:17:18,985][00980] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
3058
+ [2023-02-24 14:17:18,989][00980] Overriding arg 'num_workers' with value 1 passed from command line
3059
+ [2023-02-24 14:17:18,992][00980] Adding new argument 'no_render'=True that is not in the saved config file!
3060
+ [2023-02-24 14:17:18,996][00980] Adding new argument 'save_video'=True that is not in the saved config file!
3061
+ [2023-02-24 14:17:18,999][00980] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
3062
+ [2023-02-24 14:17:19,001][00980] Adding new argument 'video_name'=None that is not in the saved config file!
3063
+ [2023-02-24 14:17:19,003][00980] Adding new argument 'max_num_frames'=100000 that is not in the saved config file!
3064
+ [2023-02-24 14:17:19,006][00980] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
3065
+ [2023-02-24 14:17:19,007][00980] Adding new argument 'push_to_hub'=True that is not in the saved config file!
3066
+ [2023-02-24 14:17:19,012][00980] Adding new argument 'hf_repository'='mnavas/rl_course_vizdoom_health_gathering_supreme' that is not in the saved config file!
3067
+ [2023-02-24 14:17:19,013][00980] Adding new argument 'policy_index'=0 that is not in the saved config file!
3068
+ [2023-02-24 14:17:19,014][00980] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
3069
+ [2023-02-24 14:17:19,015][00980] Adding new argument 'train_script'=None that is not in the saved config file!
3070
+ [2023-02-24 14:17:19,016][00980] Adding new argument 'enjoy_script'=None that is not in the saved config file!
3071
+ [2023-02-24 14:17:19,018][00980] Using frameskip 1 and render_action_repeat=4 for evaluation
3072
+ [2023-02-24 14:17:19,052][00980] RunningMeanStd input shape: (3, 72, 128)
3073
+ [2023-02-24 14:17:19,055][00980] RunningMeanStd input shape: (1,)
3074
+ [2023-02-24 14:17:19,074][00980] ConvEncoder: input_channels=3
3075
+ [2023-02-24 14:17:19,143][00980] Conv encoder output size: 512
3076
+ [2023-02-24 14:17:19,145][00980] Policy head output size: 512
3077
+ [2023-02-24 14:17:19,181][00980] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000984_4030464.pth...
3078
+ [2023-02-24 14:17:19,838][00980] Num frames 100...
3079
+ [2023-02-24 14:17:20,012][00980] Num frames 200...
3080
+ [2023-02-24 14:17:20,171][00980] Num frames 300...
3081
+ [2023-02-24 14:17:20,352][00980] Num frames 400...
3082
+ [2023-02-24 14:17:20,438][00980] Avg episode rewards: #0: 6.160, true rewards: #0: 4.160
3083
+ [2023-02-24 14:17:20,439][00980] Avg episode reward: 6.160, avg true_objective: 4.160
3084
+ [2023-02-24 14:17:20,580][00980] Num frames 500...
3085
+ [2023-02-24 14:17:20,745][00980] Num frames 600...
3086
+ [2023-02-24 14:17:20,914][00980] Num frames 700...
3087
+ [2023-02-24 14:17:21,087][00980] Num frames 800...
3088
+ [2023-02-24 14:17:21,258][00980] Num frames 900...
3089
+ [2023-02-24 14:17:21,441][00980] Num frames 1000...
3090
+ [2023-02-24 14:17:21,622][00980] Num frames 1100...
3091
+ [2023-02-24 14:17:21,792][00980] Num frames 1200...
3092
+ [2023-02-24 14:17:21,958][00980] Num frames 1300...
3093
+ [2023-02-24 14:17:22,089][00980] Num frames 1400...
3094
+ [2023-02-24 14:17:22,210][00980] Num frames 1500...
3095
+ [2023-02-24 14:17:22,333][00980] Num frames 1600...
3096
+ [2023-02-24 14:17:22,450][00980] Num frames 1700...
3097
+ [2023-02-24 14:17:22,567][00980] Num frames 1800...
3098
+ [2023-02-24 14:17:22,680][00980] Num frames 1900...
3099
+ [2023-02-24 14:17:22,797][00980] Num frames 2000...
3100
+ [2023-02-24 14:17:22,912][00980] Num frames 2100...
3101
+ [2023-02-24 14:17:23,025][00980] Num frames 2200...
3102
+ [2023-02-24 14:17:23,148][00980] Num frames 2300...
3103
+ [2023-02-24 14:17:23,270][00980] Num frames 2400...
3104
+ [2023-02-24 14:17:23,398][00980] Num frames 2500...
3105
+ [2023-02-24 14:17:23,475][00980] Avg episode rewards: #0: 29.079, true rewards: #0: 12.580
3106
+ [2023-02-24 14:17:23,478][00980] Avg episode reward: 29.079, avg true_objective: 12.580
3107
+ [2023-02-24 14:17:23,575][00980] Num frames 2600...
3108
+ [2023-02-24 14:17:23,691][00980] Num frames 2700...
3109
+ [2023-02-24 14:17:23,831][00980] Avg episode rewards: #0: 20.906, true rewards: #0: 9.240
3110
+ [2023-02-24 14:17:23,833][00980] Avg episode reward: 20.906, avg true_objective: 9.240
3111
+ [2023-02-24 14:17:23,868][00980] Num frames 2800...
3112
+ [2023-02-24 14:17:23,988][00980] Num frames 2900...
3113
+ [2023-02-24 14:17:24,105][00980] Num frames 3000...
3114
+ [2023-02-24 14:17:24,223][00980] Num frames 3100...
3115
+ [2023-02-24 14:17:24,347][00980] Num frames 3200...
3116
+ [2023-02-24 14:17:24,463][00980] Num frames 3300...
3117
+ [2023-02-24 14:17:24,578][00980] Num frames 3400...
3118
+ [2023-02-24 14:17:24,692][00980] Num frames 3500...
3119
+ [2023-02-24 14:17:24,812][00980] Num frames 3600...
3120
+ [2023-02-24 14:17:24,929][00980] Num frames 3700...
3121
+ [2023-02-24 14:17:25,002][00980] Avg episode rewards: #0: 21.285, true rewards: #0: 9.285
3122
+ [2023-02-24 14:17:25,003][00980] Avg episode reward: 21.285, avg true_objective: 9.285
3123
+ [2023-02-24 14:17:25,104][00980] Num frames 3800...
3124
+ [2023-02-24 14:17:25,229][00980] Num frames 3900...
3125
+ [2023-02-24 14:17:25,358][00980] Num frames 4000...
3126
+ [2023-02-24 14:17:25,475][00980] Num frames 4100...
3127
+ [2023-02-24 14:17:25,593][00980] Num frames 4200...
3128
+ [2023-02-24 14:17:25,712][00980] Num frames 4300...
3129
+ [2023-02-24 14:17:25,829][00980] Num frames 4400...
3130
+ [2023-02-24 14:17:25,953][00980] Num frames 4500...
3131
+ [2023-02-24 14:17:26,071][00980] Num frames 4600...
3132
+ [2023-02-24 14:17:26,194][00980] Num frames 4700...
3133
+ [2023-02-24 14:17:26,308][00980] Num frames 4800...
3134
+ [2023-02-24 14:17:26,431][00980] Num frames 4900...
3135
+ [2023-02-24 14:17:26,549][00980] Num frames 5000...
3136
+ [2023-02-24 14:17:26,623][00980] Avg episode rewards: #0: 24.032, true rewards: #0: 10.032
3137
+ [2023-02-24 14:17:26,627][00980] Avg episode reward: 24.032, avg true_objective: 10.032
3138
+ [2023-02-24 14:17:26,723][00980] Num frames 5100...
3139
+ [2023-02-24 14:17:26,837][00980] Num frames 5200...
3140
+ [2023-02-24 14:17:26,959][00980] Num frames 5300...
3141
+ [2023-02-24 14:17:27,078][00980] Num frames 5400...
3142
+ [2023-02-24 14:17:27,197][00980] Num frames 5500...
3143
+ [2023-02-24 14:17:27,320][00980] Num frames 5600...
3144
+ [2023-02-24 14:17:27,438][00980] Num frames 5700...
3145
+ [2023-02-24 14:17:27,560][00980] Num frames 5800...
3146
+ [2023-02-24 14:17:27,707][00980] Avg episode rewards: #0: 23.467, true rewards: #0: 9.800
3147
+ [2023-02-24 14:17:27,709][00980] Avg episode reward: 23.467, avg true_objective: 9.800
3148
+ [2023-02-24 14:17:27,735][00980] Num frames 5900...
3149
+ [2023-02-24 14:17:27,855][00980] Num frames 6000...
3150
+ [2023-02-24 14:17:27,971][00980] Num frames 6100...
3151
+ [2023-02-24 14:17:28,088][00980] Num frames 6200...
3152
+ [2023-02-24 14:17:28,208][00980] Num frames 6300...
3153
+ [2023-02-24 14:17:28,326][00980] Num frames 6400...
3154
+ [2023-02-24 14:17:28,447][00980] Num frames 6500...
3155
+ [2023-02-24 14:17:28,572][00980] Num frames 6600...
3156
+ [2023-02-24 14:17:28,748][00980] Avg episode rewards: #0: 22.854, true rewards: #0: 9.569
3157
+ [2023-02-24 14:17:28,751][00980] Avg episode reward: 22.854, avg true_objective: 9.569
3158
+ [2023-02-24 14:17:28,755][00980] Num frames 6700...
3159
+ [2023-02-24 14:17:28,880][00980] Num frames 6800...
3160
+ [2023-02-24 14:17:28,994][00980] Num frames 6900...
3161
+ [2023-02-24 14:17:29,113][00980] Num frames 7000...
3162
+ [2023-02-24 14:17:29,237][00980] Num frames 7100...
3163
+ [2023-02-24 14:17:29,355][00980] Num frames 7200...
3164
+ [2023-02-24 14:17:29,470][00980] Avg episode rewards: #0: 21.427, true rewards: #0: 9.052
3165
+ [2023-02-24 14:17:29,473][00980] Avg episode reward: 21.427, avg true_objective: 9.052
3166
+ [2023-02-24 14:17:29,547][00980] Num frames 7300...
3167
+ [2023-02-24 14:17:29,672][00980] Num frames 7400...
3168
+ [2023-02-24 14:17:29,787][00980] Num frames 7500...
3169
+ [2023-02-24 14:17:29,913][00980] Num frames 7600...
3170
+ [2023-02-24 14:17:30,036][00980] Num frames 7700...
3171
+ [2023-02-24 14:17:30,152][00980] Num frames 7800...
3172
+ [2023-02-24 14:17:30,268][00980] Num frames 7900...
3173
+ [2023-02-24 14:17:30,391][00980] Num frames 8000...
3174
+ [2023-02-24 14:17:30,516][00980] Num frames 8100...
3175
+ [2023-02-24 14:17:30,644][00980] Num frames 8200...
3176
+ [2023-02-24 14:17:30,764][00980] Num frames 8300...
3177
+ [2023-02-24 14:17:30,883][00980] Num frames 8400...
3178
+ [2023-02-24 14:17:31,008][00980] Num frames 8500...
3179
+ [2023-02-24 14:17:31,128][00980] Num frames 8600...
3180
+ [2023-02-24 14:17:31,250][00980] Num frames 8700...
3181
+ [2023-02-24 14:17:31,399][00980] Avg episode rewards: #0: 23.308, true rewards: #0: 9.752
3182
+ [2023-02-24 14:17:31,400][00980] Avg episode reward: 23.308, avg true_objective: 9.752
3183
+ [2023-02-24 14:17:31,435][00980] Num frames 8800...
3184
+ [2023-02-24 14:17:31,553][00980] Num frames 8900...
3185
+ [2023-02-24 14:17:31,671][00980] Num frames 9000...
3186
+ [2023-02-24 14:17:31,794][00980] Num frames 9100...
3187
+ [2023-02-24 14:17:31,913][00980] Num frames 9200...
3188
+ [2023-02-24 14:17:32,045][00980] Num frames 9300...
3189
+ [2023-02-24 14:17:32,220][00980] Num frames 9400...
3190
+ [2023-02-24 14:17:32,391][00980] Num frames 9500...
3191
+ [2023-02-24 14:17:32,585][00980] Num frames 9600...
3192
+ [2023-02-24 14:17:32,749][00980] Num frames 9700...
3193
+ [2023-02-24 14:17:32,909][00980] Num frames 9800...
3194
+ [2023-02-24 14:17:33,124][00980] Avg episode rewards: #0: 23.597, true rewards: #0: 9.897
3195
+ [2023-02-24 14:17:33,127][00980] Avg episode reward: 23.597, avg true_objective: 9.897
3196
+ [2023-02-24 14:17:33,136][00980] Num frames 9900...
3197
+ [2023-02-24 14:18:34,473][00980] Replay video saved to /content/train_dir/default_experiment/replay.mp4!