|
2023-03-03 13:59:33,312 - mmseg - INFO - Multi-processing start method is `None` |
|
2023-03-03 13:59:33,327 - mmseg - INFO - OpenCV num_threads is `128`
|
2023-03-03 13:59:33,327 - mmseg - INFO - OMP num threads is 1 |
|
2023-03-03 13:59:33,410 - mmseg - INFO - Environment info: |
|
------------------------------------------------------------ |
|
sys.platform: linux |
|
Python: 3.7.16 (default, Jan 17 2023, 22:20:44) [GCC 11.2.0] |
|
CUDA available: True |
|
GPU 0,1,2,3,4,5,6,7: NVIDIA A100-SXM4-80GB |
|
CUDA_HOME: /mnt/petrelfs/laizeqiang/miniconda3/envs/torch |
|
NVCC: Cuda compilation tools, release 11.6, V11.6.124 |
|
GCC: gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-44) |
|
PyTorch: 1.13.1 |
|
PyTorch compiling details: PyTorch built with: |
|
- GCC 9.3 |
|
- C++ Version: 201402 |
|
- Intel(R) oneAPI Math Kernel Library Version 2021.4-Product Build 20210904 for Intel(R) 64 architecture applications |
|
- Intel(R) MKL-DNN v2.6.0 (Git Hash 52b5f107dd9cf10910aaa19cb47f3abf9b349815) |
|
- OpenMP 201511 (a.k.a. OpenMP 4.5) |
|
- LAPACK is enabled (usually provided by MKL) |
|
- NNPACK is enabled |
|
- CPU capability usage: AVX2 |
|
- CUDA Runtime 11.6 |
|
- NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_37,code=compute_37 |
|
- CuDNN 8.3.2 (built against CUDA 11.5) |
|
- Magma 2.6.1 |
|
- Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.6, CUDNN_VERSION=8.3.2, CXX_COMPILER=/opt/rh/devtoolset-9/root/usr/bin/c++, CXX_FLAGS= -fabi-version=11 -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wunused-local-typedefs -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Werror=cast-function-type -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=1.13.1, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF, |
|
|
|
TorchVision: 0.14.1 |
|
OpenCV: 4.7.0 |
|
MMCV: 1.7.1 |
|
MMCV Compiler: GCC 9.3 |
|
MMCV CUDA Compiler: 11.6 |
|
MMSegmentation: 0.30.0+ad87029 |
|
------------------------------------------------------------ |
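
The environment table above is the standard banner mmseg prints at startup. A minimal sketch of reproducing it yourself, assuming mmcv 1.x (whose `mmcv.utils.collect_env` gathers these fields; mmseg extends it with its own version string):

```python
# Minimal sketch (assumes mmcv 1.x is installed). collect_env() returns an
# ordered dict of the same fields logged above: sys.platform, Python,
# CUDA/GPU info, PyTorch/TorchVision/OpenCV/MMCV versions, etc.
from mmcv.utils import collect_env

for name, value in collect_env().items():
    print(f'{name}: {value}')
```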
|
|
|
2023-03-03 13:59:33,411 - mmseg - INFO - Distributed training: True |
|
2023-03-03 13:59:34,043 - mmseg - INFO - Config: |
|
norm_cfg = dict(type='SyncBN', requires_grad=True) |
|
checkpoint = 'pretrained/segformer_mit-b2_512x512_160k_ade20k_20220620_114047-64e4feca.pth' |
|
model = dict( |
|
type='EncoderDecoderFreeze', |
|
freeze_parameters=['backbone', 'decode_head'], |
|
pretrained= |
|
'pretrained/segformer_mit-b2_512x512_160k_ade20k_20220620_114047-64e4feca.pth', |
|
backbone=dict( |
|
type='MixVisionTransformerCustomInitWeights', |
|
in_channels=3, |
|
embed_dims=64, |
|
num_stages=4, |
|
num_layers=[3, 4, 6, 3], |
|
num_heads=[1, 2, 5, 8], |
|
patch_sizes=[7, 3, 3, 3], |
|
sr_ratios=[8, 4, 2, 1], |
|
out_indices=(0, 1, 2, 3), |
|
mlp_ratio=4, |
|
qkv_bias=True, |
|
drop_rate=0.0, |
|
attn_drop_rate=0.0, |
|
drop_path_rate=0.1), |
|
decode_head=dict( |
|
type='SegformerHeadUnetFCHeadSingleStep', |
|
pretrained= |
|
'pretrained/segformer_mit-b2_512x512_160k_ade20k_20220620_114047-64e4feca.pth', |
|
dim=128, |
|
out_dim=256, |
|
unet_channels=272, |
|
dim_mults=[1, 1, 1], |
|
cat_embedding_dim=16, |
|
in_channels=[64, 128, 320, 512], |
|
in_index=[0, 1, 2, 3], |
|
channels=256, |
|
dropout_ratio=0.1, |
|
num_classes=151, |
|
norm_cfg=dict(type='SyncBN', requires_grad=True), |
|
align_corners=False, |
|
ignore_index=0, |
|
loss_decode=dict( |
|
type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)), |
|
train_cfg=dict(), |
|
test_cfg=dict(mode='whole')) |
|
dataset_type = 'ADE20K151Dataset' |
|
data_root = 'data/ade/ADEChallengeData2016' |
|
img_norm_cfg = dict( |
|
mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) |
|
crop_size = (512, 512) |
|
train_pipeline = [ |
|
dict(type='LoadImageFromFile'), |
|
dict(type='LoadAnnotations', reduce_zero_label=False), |
|
dict(type='Resize', img_scale=(2048, 512), ratio_range=(0.5, 2.0)), |
|
dict(type='RandomCrop', crop_size=(512, 512), cat_max_ratio=0.75), |
|
dict(type='RandomFlip', prob=0.5), |
|
dict(type='PhotoMetricDistortion'), |
|
dict( |
|
type='Normalize', |
|
mean=[123.675, 116.28, 103.53], |
|
std=[58.395, 57.12, 57.375], |
|
to_rgb=True), |
|
dict(type='Pad', size=(512, 512), pad_val=0, seg_pad_val=0), |
|
dict(type='DefaultFormatBundle'), |
|
dict(type='Collect', keys=['img', 'gt_semantic_seg']) |
|
] |
|
test_pipeline = [ |
|
dict(type='LoadImageFromFile'), |
|
dict( |
|
type='MultiScaleFlipAug', |
|
img_scale=(2048, 512), |
|
flip=False, |
|
transforms=[ |
|
dict(type='Resize', keep_ratio=True), |
|
dict(type='RandomFlip'), |
|
dict( |
|
type='Normalize', |
|
mean=[123.675, 116.28, 103.53], |
|
std=[58.395, 57.12, 57.375], |
|
to_rgb=True), |
|
dict(type='Pad', size_divisor=16, pad_val=0, seg_pad_val=0), |
|
dict(type='ImageToTensor', keys=['img']), |
|
dict(type='Collect', keys=['img']) |
|
]) |
|
] |
|
data = dict( |
|
samples_per_gpu=4, |
|
workers_per_gpu=4, |
|
train=dict( |
|
type='ADE20K151Dataset', |
|
data_root='data/ade/ADEChallengeData2016', |
|
img_dir='images/training', |
|
ann_dir='annotations/training', |
|
pipeline=[ |
|
dict(type='LoadImageFromFile'), |
|
dict(type='LoadAnnotations', reduce_zero_label=False), |
|
dict(type='Resize', img_scale=(2048, 512), ratio_range=(0.5, 2.0)), |
|
dict(type='RandomCrop', crop_size=(512, 512), cat_max_ratio=0.75), |
|
dict(type='RandomFlip', prob=0.5), |
|
dict(type='PhotoMetricDistortion'), |
|
dict( |
|
type='Normalize', |
|
mean=[123.675, 116.28, 103.53], |
|
std=[58.395, 57.12, 57.375], |
|
to_rgb=True), |
|
dict(type='Pad', size=(512, 512), pad_val=0, seg_pad_val=0), |
|
dict(type='DefaultFormatBundle'), |
|
dict(type='Collect', keys=['img', 'gt_semantic_seg']) |
|
]), |
|
val=dict( |
|
type='ADE20K151Dataset', |
|
data_root='data/ade/ADEChallengeData2016', |
|
img_dir='images/validation', |
|
ann_dir='annotations/validation', |
|
pipeline=[ |
|
dict(type='LoadImageFromFile'), |
|
dict( |
|
type='MultiScaleFlipAug', |
|
img_scale=(2048, 512), |
|
flip=False, |
|
transforms=[ |
|
dict(type='Resize', keep_ratio=True), |
|
dict(type='RandomFlip'), |
|
dict( |
|
type='Normalize', |
|
mean=[123.675, 116.28, 103.53], |
|
std=[58.395, 57.12, 57.375], |
|
to_rgb=True), |
|
dict( |
|
type='Pad', size_divisor=16, pad_val=0, seg_pad_val=0), |
|
dict(type='ImageToTensor', keys=['img']), |
|
dict(type='Collect', keys=['img']) |
|
]) |
|
]), |
|
test=dict( |
|
type='ADE20K151Dataset', |
|
data_root='data/ade/ADEChallengeData2016', |
|
img_dir='images/validation', |
|
ann_dir='annotations/validation', |
|
pipeline=[ |
|
dict(type='LoadImageFromFile'), |
|
dict( |
|
type='MultiScaleFlipAug', |
|
img_scale=(2048, 512), |
|
flip=False, |
|
transforms=[ |
|
dict(type='Resize', keep_ratio=True), |
|
dict(type='RandomFlip'), |
|
dict( |
|
type='Normalize', |
|
mean=[123.675, 116.28, 103.53], |
|
std=[58.395, 57.12, 57.375], |
|
to_rgb=True), |
|
dict( |
|
type='Pad', size_divisor=16, pad_val=0, seg_pad_val=0), |
|
dict(type='ImageToTensor', keys=['img']), |
|
dict(type='Collect', keys=['img']) |
|
]) |
|
])) |
|
log_config = dict( |
|
interval=50, hooks=[dict(type='TextLoggerHook', by_epoch=False)]) |
|
dist_params = dict(backend='nccl') |
|
log_level = 'INFO' |
|
load_from = None |
|
resume_from = None |
|
workflow = [('train', 1)] |
|
cudnn_benchmark = True |
|
optimizer = dict( |
|
type='AdamW', lr=0.00015, betas=[0.9, 0.96], weight_decay=0.045) |
|
optimizer_config = dict() |
|
lr_config = dict( |
|
policy='step', |
|
warmup='linear', |
|
warmup_iters=1000, |
|
warmup_ratio=1e-06, |
|
step=10000, |
|
gamma=0.5, |
|
min_lr=1e-06, |
|
by_epoch=False) |
|
runner = dict(type='IterBasedRunner', max_iters=80000) |
|
checkpoint_config = dict(by_epoch=False, interval=8000) |
|
evaluation = dict( |
|
interval=8000, metric='mIoU', pre_eval=True, save_best='mIoU') |
|
work_dir = './work_dirs/segformer_mit_b2_segformer_head_unet_fc_single_step_ade_pretrained_freeze_embed_80k_ade20k151' |
|
gpu_ids = range(0, 8) |
|
auto_resume = True |
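
Two practical readings of the config above: with samples_per_gpu=4 on 8 GPUs the effective batch size is 32, and lr_config describes a linear warmup over the first 1000 iterations followed by a halving of the learning rate every 10000 iterations, floored at min_lr. Below is a small, self-contained sketch of that schedule — a hypothetical helper that mirrors what mmcv's StepLrUpdaterHook computes for this config, not the hook itself:

```python
# Hypothetical helper (not part of mmseg/mmcv): learning rate at a given
# iteration under the lr_config above. Linear warmup for 1000 iters from
# base_lr * warmup_ratio, then step decay by gamma every `step` iters,
# clamped from below at min_lr.
def lr_at(iteration, base_lr=1.5e-4, warmup_iters=1000,
          warmup_ratio=1e-6, step=10000, gamma=0.5, min_lr=1e-6):
    if iteration < warmup_iters:
        # Linear ramp from base_lr * warmup_ratio up to base_lr.
        k = iteration / warmup_iters
        return base_lr * (warmup_ratio * (1 - k) + k)
    return max(base_lr * gamma ** (iteration // step), min_lr)

print(lr_at(0), lr_at(1000), lr_at(25000), lr_at(79999))
```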
|
|
|
2023-03-03 13:59:38,432 - mmseg - INFO - Set random seed to 97773280, deterministic: False |
|
2023-03-03 13:59:38,757 - mmseg - INFO - Parameters in backbone frozen!
|
2023-03-03 13:59:38,758 - mmseg - INFO - Trainable parameters in SegformerHeadUnetFCHeadSingleStep: ['unet.init_conv.weight', 'unet.init_conv.bias', 'unet.time_mlp.1.weight', 'unet.time_mlp.1.bias', 'unet.time_mlp.3.weight', 'unet.time_mlp.3.bias', 'unet.downs.0.0.mlp.1.weight', 'unet.downs.0.0.mlp.1.bias', 'unet.downs.0.0.block1.proj.weight', 'unet.downs.0.0.block1.proj.bias', 'unet.downs.0.0.block1.norm.weight', 'unet.downs.0.0.block1.norm.bias', 'unet.downs.0.0.block2.proj.weight', 'unet.downs.0.0.block2.proj.bias', 'unet.downs.0.0.block2.norm.weight', 'unet.downs.0.0.block2.norm.bias', 'unet.downs.0.1.mlp.1.weight', 'unet.downs.0.1.mlp.1.bias', 'unet.downs.0.1.block1.proj.weight', 'unet.downs.0.1.block1.proj.bias', 'unet.downs.0.1.block1.norm.weight', 'unet.downs.0.1.block1.norm.bias', 'unet.downs.0.1.block2.proj.weight', 'unet.downs.0.1.block2.proj.bias', 'unet.downs.0.1.block2.norm.weight', 'unet.downs.0.1.block2.norm.bias', 'unet.downs.0.2.fn.fn.to_qkv.weight', 'unet.downs.0.2.fn.fn.to_out.0.weight', 'unet.downs.0.2.fn.fn.to_out.0.bias', 'unet.downs.0.2.fn.fn.to_out.1.g', 'unet.downs.0.2.fn.norm.g', 'unet.downs.0.3.weight', 'unet.downs.0.3.bias', 'unet.downs.1.0.mlp.1.weight', 'unet.downs.1.0.mlp.1.bias', 'unet.downs.1.0.block1.proj.weight', 'unet.downs.1.0.block1.proj.bias', 'unet.downs.1.0.block1.norm.weight', 'unet.downs.1.0.block1.norm.bias', 'unet.downs.1.0.block2.proj.weight', 'unet.downs.1.0.block2.proj.bias', 'unet.downs.1.0.block2.norm.weight', 'unet.downs.1.0.block2.norm.bias', 'unet.downs.1.1.mlp.1.weight', 'unet.downs.1.1.mlp.1.bias', 'unet.downs.1.1.block1.proj.weight', 'unet.downs.1.1.block1.proj.bias', 'unet.downs.1.1.block1.norm.weight', 'unet.downs.1.1.block1.norm.bias', 'unet.downs.1.1.block2.proj.weight', 'unet.downs.1.1.block2.proj.bias', 'unet.downs.1.1.block2.norm.weight', 'unet.downs.1.1.block2.norm.bias', 'unet.downs.1.2.fn.fn.to_qkv.weight', 'unet.downs.1.2.fn.fn.to_out.0.weight', 'unet.downs.1.2.fn.fn.to_out.0.bias', 'unet.downs.1.2.fn.fn.to_out.1.g', 'unet.downs.1.2.fn.norm.g', 'unet.downs.1.3.weight', 'unet.downs.1.3.bias', 'unet.downs.2.0.mlp.1.weight', 'unet.downs.2.0.mlp.1.bias', 'unet.downs.2.0.block1.proj.weight', 'unet.downs.2.0.block1.proj.bias', 'unet.downs.2.0.block1.norm.weight', 'unet.downs.2.0.block1.norm.bias', 'unet.downs.2.0.block2.proj.weight', 'unet.downs.2.0.block2.proj.bias', 'unet.downs.2.0.block2.norm.weight', 'unet.downs.2.0.block2.norm.bias', 'unet.downs.2.1.mlp.1.weight', 'unet.downs.2.1.mlp.1.bias', 'unet.downs.2.1.block1.proj.weight', 'unet.downs.2.1.block1.proj.bias', 'unet.downs.2.1.block1.norm.weight', 'unet.downs.2.1.block1.norm.bias', 'unet.downs.2.1.block2.proj.weight', 'unet.downs.2.1.block2.proj.bias', 'unet.downs.2.1.block2.norm.weight', 'unet.downs.2.1.block2.norm.bias', 'unet.downs.2.2.fn.fn.to_qkv.weight', 'unet.downs.2.2.fn.fn.to_out.0.weight', 'unet.downs.2.2.fn.fn.to_out.0.bias', 'unet.downs.2.2.fn.fn.to_out.1.g', 'unet.downs.2.2.fn.norm.g', 'unet.downs.2.3.weight', 'unet.downs.2.3.bias', 'unet.ups.0.0.mlp.1.weight', 'unet.ups.0.0.mlp.1.bias', 'unet.ups.0.0.block1.proj.weight', 'unet.ups.0.0.block1.proj.bias', 'unet.ups.0.0.block1.norm.weight', 'unet.ups.0.0.block1.norm.bias', 'unet.ups.0.0.block2.proj.weight', 'unet.ups.0.0.block2.proj.bias', 'unet.ups.0.0.block2.norm.weight', 'unet.ups.0.0.block2.norm.bias', 'unet.ups.0.0.res_conv.weight', 'unet.ups.0.0.res_conv.bias', 'unet.ups.0.1.mlp.1.weight', 'unet.ups.0.1.mlp.1.bias', 'unet.ups.0.1.block1.proj.weight', 'unet.ups.0.1.block1.proj.bias', 
'unet.ups.0.1.block1.norm.weight', 'unet.ups.0.1.block1.norm.bias', 'unet.ups.0.1.block2.proj.weight', 'unet.ups.0.1.block2.proj.bias', 'unet.ups.0.1.block2.norm.weight', 'unet.ups.0.1.block2.norm.bias', 'unet.ups.0.1.res_conv.weight', 'unet.ups.0.1.res_conv.bias', 'unet.ups.0.2.fn.fn.to_qkv.weight', 'unet.ups.0.2.fn.fn.to_out.0.weight', 'unet.ups.0.2.fn.fn.to_out.0.bias', 'unet.ups.0.2.fn.fn.to_out.1.g', 'unet.ups.0.2.fn.norm.g', 'unet.ups.0.3.1.weight', 'unet.ups.0.3.1.bias', 'unet.ups.1.0.mlp.1.weight', 'unet.ups.1.0.mlp.1.bias', 'unet.ups.1.0.block1.proj.weight', 'unet.ups.1.0.block1.proj.bias', 'unet.ups.1.0.block1.norm.weight', 'unet.ups.1.0.block1.norm.bias', 'unet.ups.1.0.block2.proj.weight', 'unet.ups.1.0.block2.proj.bias', 'unet.ups.1.0.block2.norm.weight', 'unet.ups.1.0.block2.norm.bias', 'unet.ups.1.0.res_conv.weight', 'unet.ups.1.0.res_conv.bias', 'unet.ups.1.1.mlp.1.weight', 'unet.ups.1.1.mlp.1.bias', 'unet.ups.1.1.block1.proj.weight', 'unet.ups.1.1.block1.proj.bias', 'unet.ups.1.1.block1.norm.weight', 'unet.ups.1.1.block1.norm.bias', 'unet.ups.1.1.block2.proj.weight', 'unet.ups.1.1.block2.proj.bias', 'unet.ups.1.1.block2.norm.weight', 'unet.ups.1.1.block2.norm.bias', 'unet.ups.1.1.res_conv.weight', 'unet.ups.1.1.res_conv.bias', 'unet.ups.1.2.fn.fn.to_qkv.weight', 'unet.ups.1.2.fn.fn.to_out.0.weight', 'unet.ups.1.2.fn.fn.to_out.0.bias', 'unet.ups.1.2.fn.fn.to_out.1.g', 'unet.ups.1.2.fn.norm.g', 'unet.ups.1.3.1.weight', 'unet.ups.1.3.1.bias', 'unet.ups.2.0.mlp.1.weight', 'unet.ups.2.0.mlp.1.bias', 'unet.ups.2.0.block1.proj.weight', 'unet.ups.2.0.block1.proj.bias', 'unet.ups.2.0.block1.norm.weight', 'unet.ups.2.0.block1.norm.bias', 'unet.ups.2.0.block2.proj.weight', 'unet.ups.2.0.block2.proj.bias', 'unet.ups.2.0.block2.norm.weight', 'unet.ups.2.0.block2.norm.bias', 'unet.ups.2.0.res_conv.weight', 'unet.ups.2.0.res_conv.bias', 'unet.ups.2.1.mlp.1.weight', 'unet.ups.2.1.mlp.1.bias', 'unet.ups.2.1.block1.proj.weight', 'unet.ups.2.1.block1.proj.bias', 'unet.ups.2.1.block1.norm.weight', 'unet.ups.2.1.block1.norm.bias', 'unet.ups.2.1.block2.proj.weight', 'unet.ups.2.1.block2.proj.bias', 'unet.ups.2.1.block2.norm.weight', 'unet.ups.2.1.block2.norm.bias', 'unet.ups.2.1.res_conv.weight', 'unet.ups.2.1.res_conv.bias', 'unet.ups.2.2.fn.fn.to_qkv.weight', 'unet.ups.2.2.fn.fn.to_out.0.weight', 'unet.ups.2.2.fn.fn.to_out.0.bias', 'unet.ups.2.2.fn.fn.to_out.1.g', 'unet.ups.2.2.fn.norm.g', 'unet.ups.2.3.weight', 'unet.ups.2.3.bias', 'unet.mid_block1.mlp.1.weight', 'unet.mid_block1.mlp.1.bias', 'unet.mid_block1.block1.proj.weight', 'unet.mid_block1.block1.proj.bias', 'unet.mid_block1.block1.norm.weight', 'unet.mid_block1.block1.norm.bias', 'unet.mid_block1.block2.proj.weight', 'unet.mid_block1.block2.proj.bias', 'unet.mid_block1.block2.norm.weight', 'unet.mid_block1.block2.norm.bias', 'unet.mid_attn.fn.fn.to_qkv.weight', 'unet.mid_attn.fn.fn.to_out.weight', 'unet.mid_attn.fn.fn.to_out.bias', 'unet.mid_attn.fn.norm.g', 'unet.mid_block2.mlp.1.weight', 'unet.mid_block2.mlp.1.bias', 'unet.mid_block2.block1.proj.weight', 'unet.mid_block2.block1.proj.bias', 'unet.mid_block2.block1.norm.weight', 'unet.mid_block2.block1.norm.bias', 'unet.mid_block2.block2.proj.weight', 'unet.mid_block2.block2.proj.bias', 'unet.mid_block2.block2.norm.weight', 'unet.mid_block2.block2.norm.bias', 'unet.final_res_block.mlp.1.weight', 'unet.final_res_block.mlp.1.bias', 'unet.final_res_block.block1.proj.weight', 'unet.final_res_block.block1.proj.bias', 'unet.final_res_block.block1.norm.weight', 
'unet.final_res_block.block1.norm.bias', 'unet.final_res_block.block2.proj.weight', 'unet.final_res_block.block2.proj.bias', 'unet.final_res_block.block2.norm.weight', 'unet.final_res_block.block2.norm.bias', 'unet.final_res_block.res_conv.weight', 'unet.final_res_block.res_conv.bias', 'unet.final_conv.weight', 'unet.final_conv.bias', 'conv_seg_new.weight', 'conv_seg_new.bias'] |
|
2023-03-03 13:59:38,758 - mmseg - INFO - Parameters in decode_head frozen!
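
The three log lines above spell out the intended trainable set: the SegFormer backbone and the original decode-head weights are frozen, while the newly added UNet branch and `conv_seg_new` remain trainable. `EncoderDecoderFreeze` is custom to this repo, so the sketch below is only a plausible shape for its `freeze_parameters` step, not its actual code:

```python
import torch.nn as nn

def freeze_submodules(model: nn.Module, names, keep=('unet.', 'conv_seg_new')):
    """Hypothetical freeze step: disable grads for the named submodules,
    exempting parameters whose names match `keep` (here, the new head),
    which matches the 'Trainable parameters' list logged above."""
    for name in names:
        for pname, p in getattr(model, name).named_parameters():
            p.requires_grad = pname.startswith(keep)

# e.g. freeze_submodules(model, ['backbone', 'decode_head'])
```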
|
2023-03-03 13:59:38,778 - mmseg - INFO - load checkpoint from local path: pretrained/segformer_mit-b2_512x512_160k_ade20k_20220620_114047-64e4feca.pth |
|
2023-03-03 13:59:39,026 - mmseg - WARNING - The model and loaded state dict do not match exactly |
|
|
|
unexpected key in source state_dict: decode_head.conv_seg.weight, decode_head.conv_seg.bias, decode_head.convs.0.conv.weight, decode_head.convs.0.bn.weight, decode_head.convs.0.bn.bias, decode_head.convs.0.bn.running_mean, decode_head.convs.0.bn.running_var, decode_head.convs.0.bn.num_batches_tracked, decode_head.convs.1.conv.weight, decode_head.convs.1.bn.weight, decode_head.convs.1.bn.bias, decode_head.convs.1.bn.running_mean, decode_head.convs.1.bn.running_var, decode_head.convs.1.bn.num_batches_tracked, decode_head.convs.2.conv.weight, decode_head.convs.2.bn.weight, decode_head.convs.2.bn.bias, decode_head.convs.2.bn.running_mean, decode_head.convs.2.bn.running_var, decode_head.convs.2.bn.num_batches_tracked, decode_head.convs.3.conv.weight, decode_head.convs.3.bn.weight, decode_head.convs.3.bn.bias, decode_head.convs.3.bn.running_mean, decode_head.convs.3.bn.running_var, decode_head.convs.3.bn.num_batches_tracked, decode_head.fusion_conv.conv.weight, decode_head.fusion_conv.bn.weight, decode_head.fusion_conv.bn.bias, decode_head.fusion_conv.bn.running_mean, decode_head.fusion_conv.bn.running_var, decode_head.fusion_conv.bn.num_batches_tracked |
|
|
|
2023-03-03 13:59:39,040 - mmseg - INFO - load checkpoint from local path: pretrained/segformer_mit-b2_512x512_160k_ade20k_20220620_114047-64e4feca.pth |
|
2023-03-03 13:59:39,262 - mmseg - WARNING - The model and loaded state dict do not match exactly |
|
|
|
unexpected key in source state_dict: backbone.layers.0.0.projection.weight, backbone.layers.0.0.projection.bias, backbone.layers.0.0.norm.weight, backbone.layers.0.0.norm.bias, backbone.layers.0.1.0.norm1.weight, backbone.layers.0.1.0.norm1.bias, backbone.layers.0.1.0.attn.attn.in_proj_weight, backbone.layers.0.1.0.attn.attn.in_proj_bias, backbone.layers.0.1.0.attn.attn.out_proj.weight, backbone.layers.0.1.0.attn.attn.out_proj.bias, backbone.layers.0.1.0.attn.sr.weight, backbone.layers.0.1.0.attn.sr.bias, backbone.layers.0.1.0.attn.norm.weight, backbone.layers.0.1.0.attn.norm.bias, backbone.layers.0.1.0.norm2.weight, backbone.layers.0.1.0.norm2.bias, backbone.layers.0.1.0.ffn.layers.0.weight, backbone.layers.0.1.0.ffn.layers.0.bias, backbone.layers.0.1.0.ffn.layers.1.weight, backbone.layers.0.1.0.ffn.layers.1.bias, backbone.layers.0.1.0.ffn.layers.4.weight, backbone.layers.0.1.0.ffn.layers.4.bias, backbone.layers.0.1.1.norm1.weight, backbone.layers.0.1.1.norm1.bias, backbone.layers.0.1.1.attn.attn.in_proj_weight, backbone.layers.0.1.1.attn.attn.in_proj_bias, backbone.layers.0.1.1.attn.attn.out_proj.weight, backbone.layers.0.1.1.attn.attn.out_proj.bias, backbone.layers.0.1.1.attn.sr.weight, backbone.layers.0.1.1.attn.sr.bias, backbone.layers.0.1.1.attn.norm.weight, backbone.layers.0.1.1.attn.norm.bias, backbone.layers.0.1.1.norm2.weight, backbone.layers.0.1.1.norm2.bias, backbone.layers.0.1.1.ffn.layers.0.weight, backbone.layers.0.1.1.ffn.layers.0.bias, backbone.layers.0.1.1.ffn.layers.1.weight, backbone.layers.0.1.1.ffn.layers.1.bias, backbone.layers.0.1.1.ffn.layers.4.weight, backbone.layers.0.1.1.ffn.layers.4.bias, backbone.layers.0.1.2.norm1.weight, backbone.layers.0.1.2.norm1.bias, backbone.layers.0.1.2.attn.attn.in_proj_weight, backbone.layers.0.1.2.attn.attn.in_proj_bias, backbone.layers.0.1.2.attn.attn.out_proj.weight, backbone.layers.0.1.2.attn.attn.out_proj.bias, backbone.layers.0.1.2.attn.sr.weight, backbone.layers.0.1.2.attn.sr.bias, backbone.layers.0.1.2.attn.norm.weight, backbone.layers.0.1.2.attn.norm.bias, backbone.layers.0.1.2.norm2.weight, backbone.layers.0.1.2.norm2.bias, backbone.layers.0.1.2.ffn.layers.0.weight, backbone.layers.0.1.2.ffn.layers.0.bias, backbone.layers.0.1.2.ffn.layers.1.weight, backbone.layers.0.1.2.ffn.layers.1.bias, backbone.layers.0.1.2.ffn.layers.4.weight, backbone.layers.0.1.2.ffn.layers.4.bias, backbone.layers.0.2.weight, backbone.layers.0.2.bias, backbone.layers.1.0.projection.weight, backbone.layers.1.0.projection.bias, backbone.layers.1.0.norm.weight, backbone.layers.1.0.norm.bias, backbone.layers.1.1.0.norm1.weight, backbone.layers.1.1.0.norm1.bias, backbone.layers.1.1.0.attn.attn.in_proj_weight, backbone.layers.1.1.0.attn.attn.in_proj_bias, backbone.layers.1.1.0.attn.attn.out_proj.weight, backbone.layers.1.1.0.attn.attn.out_proj.bias, backbone.layers.1.1.0.attn.sr.weight, backbone.layers.1.1.0.attn.sr.bias, backbone.layers.1.1.0.attn.norm.weight, backbone.layers.1.1.0.attn.norm.bias, backbone.layers.1.1.0.norm2.weight, backbone.layers.1.1.0.norm2.bias, backbone.layers.1.1.0.ffn.layers.0.weight, backbone.layers.1.1.0.ffn.layers.0.bias, backbone.layers.1.1.0.ffn.layers.1.weight, backbone.layers.1.1.0.ffn.layers.1.bias, backbone.layers.1.1.0.ffn.layers.4.weight, backbone.layers.1.1.0.ffn.layers.4.bias, backbone.layers.1.1.1.norm1.weight, backbone.layers.1.1.1.norm1.bias, backbone.layers.1.1.1.attn.attn.in_proj_weight, backbone.layers.1.1.1.attn.attn.in_proj_bias, backbone.layers.1.1.1.attn.attn.out_proj.weight, 
backbone.layers.1.1.1.attn.attn.out_proj.bias, backbone.layers.1.1.1.attn.sr.weight, backbone.layers.1.1.1.attn.sr.bias, backbone.layers.1.1.1.attn.norm.weight, backbone.layers.1.1.1.attn.norm.bias, backbone.layers.1.1.1.norm2.weight, backbone.layers.1.1.1.norm2.bias, backbone.layers.1.1.1.ffn.layers.0.weight, backbone.layers.1.1.1.ffn.layers.0.bias, backbone.layers.1.1.1.ffn.layers.1.weight, backbone.layers.1.1.1.ffn.layers.1.bias, backbone.layers.1.1.1.ffn.layers.4.weight, backbone.layers.1.1.1.ffn.layers.4.bias, backbone.layers.1.1.2.norm1.weight, backbone.layers.1.1.2.norm1.bias, backbone.layers.1.1.2.attn.attn.in_proj_weight, backbone.layers.1.1.2.attn.attn.in_proj_bias, backbone.layers.1.1.2.attn.attn.out_proj.weight, backbone.layers.1.1.2.attn.attn.out_proj.bias, backbone.layers.1.1.2.attn.sr.weight, backbone.layers.1.1.2.attn.sr.bias, backbone.layers.1.1.2.attn.norm.weight, backbone.layers.1.1.2.attn.norm.bias, backbone.layers.1.1.2.norm2.weight, backbone.layers.1.1.2.norm2.bias, backbone.layers.1.1.2.ffn.layers.0.weight, backbone.layers.1.1.2.ffn.layers.0.bias, backbone.layers.1.1.2.ffn.layers.1.weight, backbone.layers.1.1.2.ffn.layers.1.bias, backbone.layers.1.1.2.ffn.layers.4.weight, backbone.layers.1.1.2.ffn.layers.4.bias, backbone.layers.1.1.3.norm1.weight, backbone.layers.1.1.3.norm1.bias, backbone.layers.1.1.3.attn.attn.in_proj_weight, backbone.layers.1.1.3.attn.attn.in_proj_bias, backbone.layers.1.1.3.attn.attn.out_proj.weight, backbone.layers.1.1.3.attn.attn.out_proj.bias, backbone.layers.1.1.3.attn.sr.weight, backbone.layers.1.1.3.attn.sr.bias, backbone.layers.1.1.3.attn.norm.weight, backbone.layers.1.1.3.attn.norm.bias, backbone.layers.1.1.3.norm2.weight, backbone.layers.1.1.3.norm2.bias, backbone.layers.1.1.3.ffn.layers.0.weight, backbone.layers.1.1.3.ffn.layers.0.bias, backbone.layers.1.1.3.ffn.layers.1.weight, backbone.layers.1.1.3.ffn.layers.1.bias, backbone.layers.1.1.3.ffn.layers.4.weight, backbone.layers.1.1.3.ffn.layers.4.bias, backbone.layers.1.2.weight, backbone.layers.1.2.bias, backbone.layers.2.0.projection.weight, backbone.layers.2.0.projection.bias, backbone.layers.2.0.norm.weight, backbone.layers.2.0.norm.bias, backbone.layers.2.1.0.norm1.weight, backbone.layers.2.1.0.norm1.bias, backbone.layers.2.1.0.attn.attn.in_proj_weight, backbone.layers.2.1.0.attn.attn.in_proj_bias, backbone.layers.2.1.0.attn.attn.out_proj.weight, backbone.layers.2.1.0.attn.attn.out_proj.bias, backbone.layers.2.1.0.attn.sr.weight, backbone.layers.2.1.0.attn.sr.bias, backbone.layers.2.1.0.attn.norm.weight, backbone.layers.2.1.0.attn.norm.bias, backbone.layers.2.1.0.norm2.weight, backbone.layers.2.1.0.norm2.bias, backbone.layers.2.1.0.ffn.layers.0.weight, backbone.layers.2.1.0.ffn.layers.0.bias, backbone.layers.2.1.0.ffn.layers.1.weight, backbone.layers.2.1.0.ffn.layers.1.bias, backbone.layers.2.1.0.ffn.layers.4.weight, backbone.layers.2.1.0.ffn.layers.4.bias, backbone.layers.2.1.1.norm1.weight, backbone.layers.2.1.1.norm1.bias, backbone.layers.2.1.1.attn.attn.in_proj_weight, backbone.layers.2.1.1.attn.attn.in_proj_bias, backbone.layers.2.1.1.attn.attn.out_proj.weight, backbone.layers.2.1.1.attn.attn.out_proj.bias, backbone.layers.2.1.1.attn.sr.weight, backbone.layers.2.1.1.attn.sr.bias, backbone.layers.2.1.1.attn.norm.weight, backbone.layers.2.1.1.attn.norm.bias, backbone.layers.2.1.1.norm2.weight, backbone.layers.2.1.1.norm2.bias, backbone.layers.2.1.1.ffn.layers.0.weight, backbone.layers.2.1.1.ffn.layers.0.bias, backbone.layers.2.1.1.ffn.layers.1.weight, 
backbone.layers.2.1.1.ffn.layers.1.bias, backbone.layers.2.1.1.ffn.layers.4.weight, backbone.layers.2.1.1.ffn.layers.4.bias, backbone.layers.2.1.2.norm1.weight, backbone.layers.2.1.2.norm1.bias, backbone.layers.2.1.2.attn.attn.in_proj_weight, backbone.layers.2.1.2.attn.attn.in_proj_bias, backbone.layers.2.1.2.attn.attn.out_proj.weight, backbone.layers.2.1.2.attn.attn.out_proj.bias, backbone.layers.2.1.2.attn.sr.weight, backbone.layers.2.1.2.attn.sr.bias, backbone.layers.2.1.2.attn.norm.weight, backbone.layers.2.1.2.attn.norm.bias, backbone.layers.2.1.2.norm2.weight, backbone.layers.2.1.2.norm2.bias, backbone.layers.2.1.2.ffn.layers.0.weight, backbone.layers.2.1.2.ffn.layers.0.bias, backbone.layers.2.1.2.ffn.layers.1.weight, backbone.layers.2.1.2.ffn.layers.1.bias, backbone.layers.2.1.2.ffn.layers.4.weight, backbone.layers.2.1.2.ffn.layers.4.bias, backbone.layers.2.1.3.norm1.weight, backbone.layers.2.1.3.norm1.bias, backbone.layers.2.1.3.attn.attn.in_proj_weight, backbone.layers.2.1.3.attn.attn.in_proj_bias, backbone.layers.2.1.3.attn.attn.out_proj.weight, backbone.layers.2.1.3.attn.attn.out_proj.bias, backbone.layers.2.1.3.attn.sr.weight, backbone.layers.2.1.3.attn.sr.bias, backbone.layers.2.1.3.attn.norm.weight, backbone.layers.2.1.3.attn.norm.bias, backbone.layers.2.1.3.norm2.weight, backbone.layers.2.1.3.norm2.bias, backbone.layers.2.1.3.ffn.layers.0.weight, backbone.layers.2.1.3.ffn.layers.0.bias, backbone.layers.2.1.3.ffn.layers.1.weight, backbone.layers.2.1.3.ffn.layers.1.bias, backbone.layers.2.1.3.ffn.layers.4.weight, backbone.layers.2.1.3.ffn.layers.4.bias, backbone.layers.2.1.4.norm1.weight, backbone.layers.2.1.4.norm1.bias, backbone.layers.2.1.4.attn.attn.in_proj_weight, backbone.layers.2.1.4.attn.attn.in_proj_bias, backbone.layers.2.1.4.attn.attn.out_proj.weight, backbone.layers.2.1.4.attn.attn.out_proj.bias, backbone.layers.2.1.4.attn.sr.weight, backbone.layers.2.1.4.attn.sr.bias, backbone.layers.2.1.4.attn.norm.weight, backbone.layers.2.1.4.attn.norm.bias, backbone.layers.2.1.4.norm2.weight, backbone.layers.2.1.4.norm2.bias, backbone.layers.2.1.4.ffn.layers.0.weight, backbone.layers.2.1.4.ffn.layers.0.bias, backbone.layers.2.1.4.ffn.layers.1.weight, backbone.layers.2.1.4.ffn.layers.1.bias, backbone.layers.2.1.4.ffn.layers.4.weight, backbone.layers.2.1.4.ffn.layers.4.bias, backbone.layers.2.1.5.norm1.weight, backbone.layers.2.1.5.norm1.bias, backbone.layers.2.1.5.attn.attn.in_proj_weight, backbone.layers.2.1.5.attn.attn.in_proj_bias, backbone.layers.2.1.5.attn.attn.out_proj.weight, backbone.layers.2.1.5.attn.attn.out_proj.bias, backbone.layers.2.1.5.attn.sr.weight, backbone.layers.2.1.5.attn.sr.bias, backbone.layers.2.1.5.attn.norm.weight, backbone.layers.2.1.5.attn.norm.bias, backbone.layers.2.1.5.norm2.weight, backbone.layers.2.1.5.norm2.bias, backbone.layers.2.1.5.ffn.layers.0.weight, backbone.layers.2.1.5.ffn.layers.0.bias, backbone.layers.2.1.5.ffn.layers.1.weight, backbone.layers.2.1.5.ffn.layers.1.bias, backbone.layers.2.1.5.ffn.layers.4.weight, backbone.layers.2.1.5.ffn.layers.4.bias, backbone.layers.2.2.weight, backbone.layers.2.2.bias, backbone.layers.3.0.projection.weight, backbone.layers.3.0.projection.bias, backbone.layers.3.0.norm.weight, backbone.layers.3.0.norm.bias, backbone.layers.3.1.0.norm1.weight, backbone.layers.3.1.0.norm1.bias, backbone.layers.3.1.0.attn.attn.in_proj_weight, backbone.layers.3.1.0.attn.attn.in_proj_bias, backbone.layers.3.1.0.attn.attn.out_proj.weight, backbone.layers.3.1.0.attn.attn.out_proj.bias, backbone.layers.3.1.0.norm2.weight, 
backbone.layers.3.1.0.norm2.bias, backbone.layers.3.1.0.ffn.layers.0.weight, backbone.layers.3.1.0.ffn.layers.0.bias, backbone.layers.3.1.0.ffn.layers.1.weight, backbone.layers.3.1.0.ffn.layers.1.bias, backbone.layers.3.1.0.ffn.layers.4.weight, backbone.layers.3.1.0.ffn.layers.4.bias, backbone.layers.3.1.1.norm1.weight, backbone.layers.3.1.1.norm1.bias, backbone.layers.3.1.1.attn.attn.in_proj_weight, backbone.layers.3.1.1.attn.attn.in_proj_bias, backbone.layers.3.1.1.attn.attn.out_proj.weight, backbone.layers.3.1.1.attn.attn.out_proj.bias, backbone.layers.3.1.1.norm2.weight, backbone.layers.3.1.1.norm2.bias, backbone.layers.3.1.1.ffn.layers.0.weight, backbone.layers.3.1.1.ffn.layers.0.bias, backbone.layers.3.1.1.ffn.layers.1.weight, backbone.layers.3.1.1.ffn.layers.1.bias, backbone.layers.3.1.1.ffn.layers.4.weight, backbone.layers.3.1.1.ffn.layers.4.bias, backbone.layers.3.1.2.norm1.weight, backbone.layers.3.1.2.norm1.bias, backbone.layers.3.1.2.attn.attn.in_proj_weight, backbone.layers.3.1.2.attn.attn.in_proj_bias, backbone.layers.3.1.2.attn.attn.out_proj.weight, backbone.layers.3.1.2.attn.attn.out_proj.bias, backbone.layers.3.1.2.norm2.weight, backbone.layers.3.1.2.norm2.bias, backbone.layers.3.1.2.ffn.layers.0.weight, backbone.layers.3.1.2.ffn.layers.0.bias, backbone.layers.3.1.2.ffn.layers.1.weight, backbone.layers.3.1.2.ffn.layers.1.bias, backbone.layers.3.1.2.ffn.layers.4.weight, backbone.layers.3.1.2.ffn.layers.4.bias, backbone.layers.3.2.weight, backbone.layers.3.2.bias |
|
|
|
missing keys in source state_dict: unet.init_conv.weight, unet.init_conv.bias, unet.time_mlp.1.weight, unet.time_mlp.1.bias, unet.time_mlp.3.weight, unet.time_mlp.3.bias, unet.downs.0.0.mlp.1.weight, unet.downs.0.0.mlp.1.bias, unet.downs.0.0.block1.proj.weight, unet.downs.0.0.block1.proj.bias, unet.downs.0.0.block1.norm.weight, unet.downs.0.0.block1.norm.bias, unet.downs.0.0.block2.proj.weight, unet.downs.0.0.block2.proj.bias, unet.downs.0.0.block2.norm.weight, unet.downs.0.0.block2.norm.bias, unet.downs.0.1.mlp.1.weight, unet.downs.0.1.mlp.1.bias, unet.downs.0.1.block1.proj.weight, unet.downs.0.1.block1.proj.bias, unet.downs.0.1.block1.norm.weight, unet.downs.0.1.block1.norm.bias, unet.downs.0.1.block2.proj.weight, unet.downs.0.1.block2.proj.bias, unet.downs.0.1.block2.norm.weight, unet.downs.0.1.block2.norm.bias, unet.downs.0.2.fn.fn.to_qkv.weight, unet.downs.0.2.fn.fn.to_out.0.weight, unet.downs.0.2.fn.fn.to_out.0.bias, unet.downs.0.2.fn.fn.to_out.1.g, unet.downs.0.2.fn.norm.g, unet.downs.0.3.weight, unet.downs.0.3.bias, unet.downs.1.0.mlp.1.weight, unet.downs.1.0.mlp.1.bias, unet.downs.1.0.block1.proj.weight, unet.downs.1.0.block1.proj.bias, unet.downs.1.0.block1.norm.weight, unet.downs.1.0.block1.norm.bias, unet.downs.1.0.block2.proj.weight, unet.downs.1.0.block2.proj.bias, unet.downs.1.0.block2.norm.weight, unet.downs.1.0.block2.norm.bias, unet.downs.1.1.mlp.1.weight, unet.downs.1.1.mlp.1.bias, unet.downs.1.1.block1.proj.weight, unet.downs.1.1.block1.proj.bias, unet.downs.1.1.block1.norm.weight, unet.downs.1.1.block1.norm.bias, unet.downs.1.1.block2.proj.weight, unet.downs.1.1.block2.proj.bias, unet.downs.1.1.block2.norm.weight, unet.downs.1.1.block2.norm.bias, unet.downs.1.2.fn.fn.to_qkv.weight, unet.downs.1.2.fn.fn.to_out.0.weight, unet.downs.1.2.fn.fn.to_out.0.bias, unet.downs.1.2.fn.fn.to_out.1.g, unet.downs.1.2.fn.norm.g, unet.downs.1.3.weight, unet.downs.1.3.bias, unet.downs.2.0.mlp.1.weight, unet.downs.2.0.mlp.1.bias, unet.downs.2.0.block1.proj.weight, unet.downs.2.0.block1.proj.bias, unet.downs.2.0.block1.norm.weight, unet.downs.2.0.block1.norm.bias, unet.downs.2.0.block2.proj.weight, unet.downs.2.0.block2.proj.bias, unet.downs.2.0.block2.norm.weight, unet.downs.2.0.block2.norm.bias, unet.downs.2.1.mlp.1.weight, unet.downs.2.1.mlp.1.bias, unet.downs.2.1.block1.proj.weight, unet.downs.2.1.block1.proj.bias, unet.downs.2.1.block1.norm.weight, unet.downs.2.1.block1.norm.bias, unet.downs.2.1.block2.proj.weight, unet.downs.2.1.block2.proj.bias, unet.downs.2.1.block2.norm.weight, unet.downs.2.1.block2.norm.bias, unet.downs.2.2.fn.fn.to_qkv.weight, unet.downs.2.2.fn.fn.to_out.0.weight, unet.downs.2.2.fn.fn.to_out.0.bias, unet.downs.2.2.fn.fn.to_out.1.g, unet.downs.2.2.fn.norm.g, unet.downs.2.3.weight, unet.downs.2.3.bias, unet.ups.0.0.mlp.1.weight, unet.ups.0.0.mlp.1.bias, unet.ups.0.0.block1.proj.weight, unet.ups.0.0.block1.proj.bias, unet.ups.0.0.block1.norm.weight, unet.ups.0.0.block1.norm.bias, unet.ups.0.0.block2.proj.weight, unet.ups.0.0.block2.proj.bias, unet.ups.0.0.block2.norm.weight, unet.ups.0.0.block2.norm.bias, unet.ups.0.0.res_conv.weight, unet.ups.0.0.res_conv.bias, unet.ups.0.1.mlp.1.weight, unet.ups.0.1.mlp.1.bias, unet.ups.0.1.block1.proj.weight, unet.ups.0.1.block1.proj.bias, unet.ups.0.1.block1.norm.weight, unet.ups.0.1.block1.norm.bias, unet.ups.0.1.block2.proj.weight, unet.ups.0.1.block2.proj.bias, unet.ups.0.1.block2.norm.weight, unet.ups.0.1.block2.norm.bias, unet.ups.0.1.res_conv.weight, unet.ups.0.1.res_conv.bias, unet.ups.0.2.fn.fn.to_qkv.weight, 
unet.ups.0.2.fn.fn.to_out.0.weight, unet.ups.0.2.fn.fn.to_out.0.bias, unet.ups.0.2.fn.fn.to_out.1.g, unet.ups.0.2.fn.norm.g, unet.ups.0.3.1.weight, unet.ups.0.3.1.bias, unet.ups.1.0.mlp.1.weight, unet.ups.1.0.mlp.1.bias, unet.ups.1.0.block1.proj.weight, unet.ups.1.0.block1.proj.bias, unet.ups.1.0.block1.norm.weight, unet.ups.1.0.block1.norm.bias, unet.ups.1.0.block2.proj.weight, unet.ups.1.0.block2.proj.bias, unet.ups.1.0.block2.norm.weight, unet.ups.1.0.block2.norm.bias, unet.ups.1.0.res_conv.weight, unet.ups.1.0.res_conv.bias, unet.ups.1.1.mlp.1.weight, unet.ups.1.1.mlp.1.bias, unet.ups.1.1.block1.proj.weight, unet.ups.1.1.block1.proj.bias, unet.ups.1.1.block1.norm.weight, unet.ups.1.1.block1.norm.bias, unet.ups.1.1.block2.proj.weight, unet.ups.1.1.block2.proj.bias, unet.ups.1.1.block2.norm.weight, unet.ups.1.1.block2.norm.bias, unet.ups.1.1.res_conv.weight, unet.ups.1.1.res_conv.bias, unet.ups.1.2.fn.fn.to_qkv.weight, unet.ups.1.2.fn.fn.to_out.0.weight, unet.ups.1.2.fn.fn.to_out.0.bias, unet.ups.1.2.fn.fn.to_out.1.g, unet.ups.1.2.fn.norm.g, unet.ups.1.3.1.weight, unet.ups.1.3.1.bias, unet.ups.2.0.mlp.1.weight, unet.ups.2.0.mlp.1.bias, unet.ups.2.0.block1.proj.weight, unet.ups.2.0.block1.proj.bias, unet.ups.2.0.block1.norm.weight, unet.ups.2.0.block1.norm.bias, unet.ups.2.0.block2.proj.weight, unet.ups.2.0.block2.proj.bias, unet.ups.2.0.block2.norm.weight, unet.ups.2.0.block2.norm.bias, unet.ups.2.0.res_conv.weight, unet.ups.2.0.res_conv.bias, unet.ups.2.1.mlp.1.weight, unet.ups.2.1.mlp.1.bias, unet.ups.2.1.block1.proj.weight, unet.ups.2.1.block1.proj.bias, unet.ups.2.1.block1.norm.weight, unet.ups.2.1.block1.norm.bias, unet.ups.2.1.block2.proj.weight, unet.ups.2.1.block2.proj.bias, unet.ups.2.1.block2.norm.weight, unet.ups.2.1.block2.norm.bias, unet.ups.2.1.res_conv.weight, unet.ups.2.1.res_conv.bias, unet.ups.2.2.fn.fn.to_qkv.weight, unet.ups.2.2.fn.fn.to_out.0.weight, unet.ups.2.2.fn.fn.to_out.0.bias, unet.ups.2.2.fn.fn.to_out.1.g, unet.ups.2.2.fn.norm.g, unet.ups.2.3.weight, unet.ups.2.3.bias, unet.mid_block1.mlp.1.weight, unet.mid_block1.mlp.1.bias, unet.mid_block1.block1.proj.weight, unet.mid_block1.block1.proj.bias, unet.mid_block1.block1.norm.weight, unet.mid_block1.block1.norm.bias, unet.mid_block1.block2.proj.weight, unet.mid_block1.block2.proj.bias, unet.mid_block1.block2.norm.weight, unet.mid_block1.block2.norm.bias, unet.mid_attn.fn.fn.to_qkv.weight, unet.mid_attn.fn.fn.to_out.weight, unet.mid_attn.fn.fn.to_out.bias, unet.mid_attn.fn.norm.g, unet.mid_block2.mlp.1.weight, unet.mid_block2.mlp.1.bias, unet.mid_block2.block1.proj.weight, unet.mid_block2.block1.proj.bias, unet.mid_block2.block1.norm.weight, unet.mid_block2.block1.norm.bias, unet.mid_block2.block2.proj.weight, unet.mid_block2.block2.proj.bias, unet.mid_block2.block2.norm.weight, unet.mid_block2.block2.norm.bias, unet.final_res_block.mlp.1.weight, unet.final_res_block.mlp.1.bias, unet.final_res_block.block1.proj.weight, unet.final_res_block.block1.proj.bias, unet.final_res_block.block1.norm.weight, unet.final_res_block.block1.norm.bias, unet.final_res_block.block2.proj.weight, unet.final_res_block.block2.proj.bias, unet.final_res_block.block2.norm.weight, unet.final_res_block.block2.norm.bias, unet.final_res_block.res_conv.weight, unet.final_res_block.res_conv.bias, unet.final_conv.weight, unet.final_conv.bias, conv_seg_new.weight, conv_seg_new.bias, embed.weight |
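
Both WARNING blocks are the expected outcome of a non-strict state-dict load: checkpoint keys with no counterpart in the module being loaded (the original SegFormer decode head in the first load, the prefixed backbone keys in the second) are ignored, while model keys absent from the checkpoint (the whole `unet.*` branch, `conv_seg_new`, `embed.weight`) keep their fresh initialization. A tiny, runnable illustration of the mechanism with plain PyTorch:

```python
import torch.nn as nn

# src stands in for the checkpointed model: it has a 'head' the new model lacks.
src = nn.ModuleDict({'backbone': nn.Linear(4, 4), 'head': nn.Linear(4, 8)})
# dst stands in for the new model: it shares 'backbone' but adds a 'unet'.
dst = nn.ModuleDict({'backbone': nn.Linear(4, 4), 'unet': nn.Linear(4, 2)})

result = dst.load_state_dict(src.state_dict(), strict=False)
print(result.unexpected_keys)  # ['head.weight', 'head.bias'] -> ignored
print(result.missing_keys)     # ['unet.weight', 'unet.bias'] -> keep random init
```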
|
|
|
2023-03-03 13:59:39,286 - mmseg - INFO - EncoderDecoderFreeze( |
|
(backbone): MixVisionTransformerCustomInitWeights( |
|
(layers): ModuleList( |
|
(0): ModuleList( |
|
(0): PatchEmbed( |
|
(projection): Conv2d(3, 64, kernel_size=(7, 7), stride=(4, 4), padding=(3, 3)) |
|
(norm): LayerNorm((64,), eps=1e-06, elementwise_affine=True) |
|
) |
|
(1): ModuleList( |
|
(0): TransformerEncoderLayer( |
|
(norm1): LayerNorm((64,), eps=1e-06, elementwise_affine=True) |
|
(attn): EfficientMultiheadAttention( |
|
(attn): MultiheadAttention( |
|
(out_proj): NonDynamicallyQuantizableLinear(in_features=64, out_features=64, bias=True) |
|
) |
|
(proj_drop): Dropout(p=0.0, inplace=False) |
|
(dropout_layer): DropPath() |
|
(sr): Conv2d(64, 64, kernel_size=(8, 8), stride=(8, 8)) |
|
(norm): LayerNorm((64,), eps=1e-06, elementwise_affine=True) |
|
) |
|
(norm2): LayerNorm((64,), eps=1e-06, elementwise_affine=True) |
|
(ffn): MixFFN( |
|
(activate): GELU(approximate='none') |
|
(layers): Sequential( |
|
(0): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1)) |
|
(1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=256) |
|
(2): GELU(approximate='none') |
|
(3): Dropout(p=0.0, inplace=False) |
|
(4): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1)) |
|
(5): Dropout(p=0.0, inplace=False) |
|
) |
|
(dropout_layer): DropPath() |
|
) |
|
) |
|
(1): TransformerEncoderLayer( |
|
(norm1): LayerNorm((64,), eps=1e-06, elementwise_affine=True) |
|
(attn): EfficientMultiheadAttention( |
|
(attn): MultiheadAttention( |
|
(out_proj): NonDynamicallyQuantizableLinear(in_features=64, out_features=64, bias=True) |
|
) |
|
(proj_drop): Dropout(p=0.0, inplace=False) |
|
(dropout_layer): DropPath() |
|
(sr): Conv2d(64, 64, kernel_size=(8, 8), stride=(8, 8)) |
|
(norm): LayerNorm((64,), eps=1e-06, elementwise_affine=True) |
|
) |
|
(norm2): LayerNorm((64,), eps=1e-06, elementwise_affine=True) |
|
(ffn): MixFFN( |
|
(activate): GELU(approximate='none') |
|
(layers): Sequential( |
|
(0): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1)) |
|
(1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=256) |
|
(2): GELU(approximate='none') |
|
(3): Dropout(p=0.0, inplace=False) |
|
(4): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1)) |
|
(5): Dropout(p=0.0, inplace=False) |
|
) |
|
(dropout_layer): DropPath() |
|
) |
|
) |
|
(2): TransformerEncoderLayer( |
|
(norm1): LayerNorm((64,), eps=1e-06, elementwise_affine=True) |
|
(attn): EfficientMultiheadAttention( |
|
(attn): MultiheadAttention( |
|
(out_proj): NonDynamicallyQuantizableLinear(in_features=64, out_features=64, bias=True) |
|
) |
|
(proj_drop): Dropout(p=0.0, inplace=False) |
|
(dropout_layer): DropPath() |
|
(sr): Conv2d(64, 64, kernel_size=(8, 8), stride=(8, 8)) |
|
(norm): LayerNorm((64,), eps=1e-06, elementwise_affine=True) |
|
) |
|
(norm2): LayerNorm((64,), eps=1e-06, elementwise_affine=True) |
|
(ffn): MixFFN( |
|
(activate): GELU(approximate='none') |
|
(layers): Sequential( |
|
(0): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1)) |
|
(1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=256) |
|
(2): GELU(approximate='none') |
|
(3): Dropout(p=0.0, inplace=False) |
|
(4): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1)) |
|
(5): Dropout(p=0.0, inplace=False) |
|
) |
|
(dropout_layer): DropPath() |
|
) |
|
) |
|
) |
|
(2): LayerNorm((64,), eps=1e-06, elementwise_affine=True) |
|
) |
|
(1): ModuleList( |
|
(0): PatchEmbed( |
|
(projection): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1)) |
|
(norm): LayerNorm((128,), eps=1e-06, elementwise_affine=True) |
|
) |
|
(1): ModuleList( |
|
(0): TransformerEncoderLayer( |
|
(norm1): LayerNorm((128,), eps=1e-06, elementwise_affine=True) |
|
(attn): EfficientMultiheadAttention( |
|
(attn): MultiheadAttention( |
|
(out_proj): NonDynamicallyQuantizableLinear(in_features=128, out_features=128, bias=True) |
|
) |
|
(proj_drop): Dropout(p=0.0, inplace=False) |
|
(dropout_layer): DropPath() |
|
(sr): Conv2d(128, 128, kernel_size=(4, 4), stride=(4, 4)) |
|
(norm): LayerNorm((128,), eps=1e-06, elementwise_affine=True) |
|
) |
|
(norm2): LayerNorm((128,), eps=1e-06, elementwise_affine=True) |
|
(ffn): MixFFN( |
|
(activate): GELU(approximate='none') |
|
(layers): Sequential( |
|
(0): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1)) |
|
(1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=512) |
|
(2): GELU(approximate='none') |
|
(3): Dropout(p=0.0, inplace=False) |
|
(4): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1)) |
|
(5): Dropout(p=0.0, inplace=False) |
|
) |
|
(dropout_layer): DropPath() |
|
) |
|
) |
|
(1): TransformerEncoderLayer( |
|
(norm1): LayerNorm((128,), eps=1e-06, elementwise_affine=True) |
|
(attn): EfficientMultiheadAttention( |
|
(attn): MultiheadAttention( |
|
(out_proj): NonDynamicallyQuantizableLinear(in_features=128, out_features=128, bias=True) |
|
) |
|
(proj_drop): Dropout(p=0.0, inplace=False) |
|
(dropout_layer): DropPath() |
|
(sr): Conv2d(128, 128, kernel_size=(4, 4), stride=(4, 4)) |
|
(norm): LayerNorm((128,), eps=1e-06, elementwise_affine=True) |
|
) |
|
(norm2): LayerNorm((128,), eps=1e-06, elementwise_affine=True) |
|
(ffn): MixFFN( |
|
(activate): GELU(approximate='none') |
|
(layers): Sequential( |
|
(0): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1)) |
|
(1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=512) |
|
(2): GELU(approximate='none') |
|
(3): Dropout(p=0.0, inplace=False) |
|
(4): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1)) |
|
(5): Dropout(p=0.0, inplace=False) |
|
) |
|
(dropout_layer): DropPath() |
|
) |
|
) |
|
(2): TransformerEncoderLayer( |
|
(norm1): LayerNorm((128,), eps=1e-06, elementwise_affine=True) |
|
(attn): EfficientMultiheadAttention( |
|
(attn): MultiheadAttention( |
|
(out_proj): NonDynamicallyQuantizableLinear(in_features=128, out_features=128, bias=True) |
|
) |
|
(proj_drop): Dropout(p=0.0, inplace=False) |
|
(dropout_layer): DropPath() |
|
(sr): Conv2d(128, 128, kernel_size=(4, 4), stride=(4, 4)) |
|
(norm): LayerNorm((128,), eps=1e-06, elementwise_affine=True) |
|
) |
|
(norm2): LayerNorm((128,), eps=1e-06, elementwise_affine=True) |
|
(ffn): MixFFN( |
|
(activate): GELU(approximate='none') |
|
(layers): Sequential( |
|
(0): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1)) |
|
(1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=512) |
|
(2): GELU(approximate='none') |
|
(3): Dropout(p=0.0, inplace=False) |
|
(4): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1)) |
|
(5): Dropout(p=0.0, inplace=False) |
|
) |
|
(dropout_layer): DropPath() |
|
) |
|
) |
|
(3): TransformerEncoderLayer( |
|
(norm1): LayerNorm((128,), eps=1e-06, elementwise_affine=True) |
|
(attn): EfficientMultiheadAttention( |
|
(attn): MultiheadAttention( |
|
(out_proj): NonDynamicallyQuantizableLinear(in_features=128, out_features=128, bias=True) |
|
) |
|
(proj_drop): Dropout(p=0.0, inplace=False) |
|
(dropout_layer): DropPath() |
|
(sr): Conv2d(128, 128, kernel_size=(4, 4), stride=(4, 4)) |
|
(norm): LayerNorm((128,), eps=1e-06, elementwise_affine=True) |
|
) |
|
(norm2): LayerNorm((128,), eps=1e-06, elementwise_affine=True) |
|
(ffn): MixFFN( |
|
(activate): GELU(approximate='none') |
|
(layers): Sequential( |
|
(0): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1)) |
|
(1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=512) |
|
(2): GELU(approximate='none') |
|
(3): Dropout(p=0.0, inplace=False) |
|
(4): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1)) |
|
(5): Dropout(p=0.0, inplace=False) |
|
) |
|
(dropout_layer): DropPath() |
|
) |
|
) |
|
) |
|
(2): LayerNorm((128,), eps=1e-06, elementwise_affine=True) |
|
) |
|
(2): ModuleList( |
|
(0): PatchEmbed( |
|
(projection): Conv2d(128, 320, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1)) |
|
(norm): LayerNorm((320,), eps=1e-06, elementwise_affine=True) |
|
) |
|
(1): ModuleList( |
|
(0): TransformerEncoderLayer( |
|
(norm1): LayerNorm((320,), eps=1e-06, elementwise_affine=True) |
|
(attn): EfficientMultiheadAttention( |
|
(attn): MultiheadAttention( |
|
(out_proj): NonDynamicallyQuantizableLinear(in_features=320, out_features=320, bias=True) |
|
) |
|
(proj_drop): Dropout(p=0.0, inplace=False) |
|
(dropout_layer): DropPath() |
|
(sr): Conv2d(320, 320, kernel_size=(2, 2), stride=(2, 2)) |
|
(norm): LayerNorm((320,), eps=1e-06, elementwise_affine=True) |
|
) |
|
(norm2): LayerNorm((320,), eps=1e-06, elementwise_affine=True) |
|
(ffn): MixFFN( |
|
(activate): GELU(approximate='none') |
|
(layers): Sequential( |
|
(0): Conv2d(320, 1280, kernel_size=(1, 1), stride=(1, 1)) |
|
(1): Conv2d(1280, 1280, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=1280) |
|
(2): GELU(approximate='none') |
|
(3): Dropout(p=0.0, inplace=False) |
|
(4): Conv2d(1280, 320, kernel_size=(1, 1), stride=(1, 1)) |
|
(5): Dropout(p=0.0, inplace=False) |
|
) |
|
(dropout_layer): DropPath() |
|
) |
|
) |
|
(1): TransformerEncoderLayer( |
|
(norm1): LayerNorm((320,), eps=1e-06, elementwise_affine=True) |
|
(attn): EfficientMultiheadAttention( |
|
(attn): MultiheadAttention( |
|
(out_proj): NonDynamicallyQuantizableLinear(in_features=320, out_features=320, bias=True) |
|
) |
|
(proj_drop): Dropout(p=0.0, inplace=False) |
|
(dropout_layer): DropPath() |
|
(sr): Conv2d(320, 320, kernel_size=(2, 2), stride=(2, 2)) |
|
(norm): LayerNorm((320,), eps=1e-06, elementwise_affine=True) |
|
) |
|
(norm2): LayerNorm((320,), eps=1e-06, elementwise_affine=True) |
|
(ffn): MixFFN( |
|
(activate): GELU(approximate='none') |
|
(layers): Sequential( |
|
(0): Conv2d(320, 1280, kernel_size=(1, 1), stride=(1, 1)) |
|
(1): Conv2d(1280, 1280, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=1280) |
|
(2): GELU(approximate='none') |
|
(3): Dropout(p=0.0, inplace=False) |
|
(4): Conv2d(1280, 320, kernel_size=(1, 1), stride=(1, 1)) |
|
(5): Dropout(p=0.0, inplace=False) |
|
) |
|
(dropout_layer): DropPath() |
|
) |
|
) |
|
(2): TransformerEncoderLayer( |
|
(norm1): LayerNorm((320,), eps=1e-06, elementwise_affine=True) |
|
(attn): EfficientMultiheadAttention( |
|
(attn): MultiheadAttention( |
|
(out_proj): NonDynamicallyQuantizableLinear(in_features=320, out_features=320, bias=True) |
|
) |
|
(proj_drop): Dropout(p=0.0, inplace=False) |
|
(dropout_layer): DropPath() |
|
(sr): Conv2d(320, 320, kernel_size=(2, 2), stride=(2, 2)) |
|
(norm): LayerNorm((320,), eps=1e-06, elementwise_affine=True) |
|
) |
|
(norm2): LayerNorm((320,), eps=1e-06, elementwise_affine=True) |
|
(ffn): MixFFN( |
|
(activate): GELU(approximate='none') |
|
(layers): Sequential( |
|
(0): Conv2d(320, 1280, kernel_size=(1, 1), stride=(1, 1)) |
|
(1): Conv2d(1280, 1280, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=1280) |
|
(2): GELU(approximate='none') |
|
(3): Dropout(p=0.0, inplace=False) |
|
(4): Conv2d(1280, 320, kernel_size=(1, 1), stride=(1, 1)) |
|
(5): Dropout(p=0.0, inplace=False) |
|
) |
|
(dropout_layer): DropPath() |
|
) |
|
) |
|
(3): TransformerEncoderLayer( |
|
(norm1): LayerNorm((320,), eps=1e-06, elementwise_affine=True) |
|
(attn): EfficientMultiheadAttention( |
|
(attn): MultiheadAttention( |
|
(out_proj): NonDynamicallyQuantizableLinear(in_features=320, out_features=320, bias=True) |
|
) |
|
(proj_drop): Dropout(p=0.0, inplace=False) |
|
(dropout_layer): DropPath() |
|
(sr): Conv2d(320, 320, kernel_size=(2, 2), stride=(2, 2)) |
|
(norm): LayerNorm((320,), eps=1e-06, elementwise_affine=True) |
|
) |
|
(norm2): LayerNorm((320,), eps=1e-06, elementwise_affine=True) |
|
(ffn): MixFFN( |
|
(activate): GELU(approximate='none') |
|
(layers): Sequential( |
|
(0): Conv2d(320, 1280, kernel_size=(1, 1), stride=(1, 1)) |
|
(1): Conv2d(1280, 1280, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=1280) |
|
(2): GELU(approximate='none') |
|
(3): Dropout(p=0.0, inplace=False) |
|
(4): Conv2d(1280, 320, kernel_size=(1, 1), stride=(1, 1)) |
|
(5): Dropout(p=0.0, inplace=False) |
|
) |
|
(dropout_layer): DropPath() |
|
) |
|
) |
|
(4): TransformerEncoderLayer( |
|
(norm1): LayerNorm((320,), eps=1e-06, elementwise_affine=True) |
|
(attn): EfficientMultiheadAttention( |
|
(attn): MultiheadAttention( |
|
(out_proj): NonDynamicallyQuantizableLinear(in_features=320, out_features=320, bias=True) |
|
) |
|
(proj_drop): Dropout(p=0.0, inplace=False) |
|
(dropout_layer): DropPath() |
|
(sr): Conv2d(320, 320, kernel_size=(2, 2), stride=(2, 2)) |
|
(norm): LayerNorm((320,), eps=1e-06, elementwise_affine=True) |
|
) |
|
(norm2): LayerNorm((320,), eps=1e-06, elementwise_affine=True) |
|
(ffn): MixFFN( |
|
(activate): GELU(approximate='none') |
|
(layers): Sequential( |
|
(0): Conv2d(320, 1280, kernel_size=(1, 1), stride=(1, 1)) |
|
(1): Conv2d(1280, 1280, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=1280) |
|
(2): GELU(approximate='none') |
|
(3): Dropout(p=0.0, inplace=False) |
|
(4): Conv2d(1280, 320, kernel_size=(1, 1), stride=(1, 1)) |
|
(5): Dropout(p=0.0, inplace=False) |
|
) |
|
(dropout_layer): DropPath() |
|
) |
|
) |
|
(5): TransformerEncoderLayer( |
|
(norm1): LayerNorm((320,), eps=1e-06, elementwise_affine=True) |
|
(attn): EfficientMultiheadAttention( |
|
(attn): MultiheadAttention( |
|
(out_proj): NonDynamicallyQuantizableLinear(in_features=320, out_features=320, bias=True) |
|
) |
|
(proj_drop): Dropout(p=0.0, inplace=False) |
|
(dropout_layer): DropPath() |
|
(sr): Conv2d(320, 320, kernel_size=(2, 2), stride=(2, 2)) |
|
(norm): LayerNorm((320,), eps=1e-06, elementwise_affine=True) |
|
) |
|
(norm2): LayerNorm((320,), eps=1e-06, elementwise_affine=True) |
|
(ffn): MixFFN( |
|
(activate): GELU(approximate='none') |
|
(layers): Sequential( |
|
(0): Conv2d(320, 1280, kernel_size=(1, 1), stride=(1, 1)) |
|
(1): Conv2d(1280, 1280, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=1280) |
|
(2): GELU(approximate='none') |
|
(3): Dropout(p=0.0, inplace=False) |
|
(4): Conv2d(1280, 320, kernel_size=(1, 1), stride=(1, 1)) |
|
(5): Dropout(p=0.0, inplace=False) |
|
) |
|
(dropout_layer): DropPath() |
|
) |
|
) |
|
) |
|
(2): LayerNorm((320,), eps=1e-06, elementwise_affine=True) |
|
) |
|
(3): ModuleList( |
|
(0): PatchEmbed( |
|
(projection): Conv2d(320, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1)) |
|
(norm): LayerNorm((512,), eps=1e-06, elementwise_affine=True) |
|
) |
|
(1): ModuleList( |
|
(0): TransformerEncoderLayer( |
|
(norm1): LayerNorm((512,), eps=1e-06, elementwise_affine=True) |
|
(attn): EfficientMultiheadAttention( |
|
(attn): MultiheadAttention( |
|
(out_proj): NonDynamicallyQuantizableLinear(in_features=512, out_features=512, bias=True) |
|
) |
|
(proj_drop): Dropout(p=0.0, inplace=False) |
|
(dropout_layer): DropPath() |
|
) |
|
(norm2): LayerNorm((512,), eps=1e-06, elementwise_affine=True) |
|
(ffn): MixFFN( |
|
(activate): GELU(approximate='none') |
|
(layers): Sequential( |
|
(0): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1)) |
|
(1): Conv2d(2048, 2048, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=2048) |
|
(2): GELU(approximate='none') |
|
(3): Dropout(p=0.0, inplace=False) |
|
(4): Conv2d(2048, 512, kernel_size=(1, 1), stride=(1, 1)) |
|
(5): Dropout(p=0.0, inplace=False) |
|
) |
|
(dropout_layer): DropPath() |
|
) |
|
) |
|
(1): TransformerEncoderLayer( |
|
(norm1): LayerNorm((512,), eps=1e-06, elementwise_affine=True) |
|
(attn): EfficientMultiheadAttention( |
|
(attn): MultiheadAttention( |
|
(out_proj): NonDynamicallyQuantizableLinear(in_features=512, out_features=512, bias=True) |
|
) |
|
(proj_drop): Dropout(p=0.0, inplace=False) |
|
(dropout_layer): DropPath() |
|
) |
|
(norm2): LayerNorm((512,), eps=1e-06, elementwise_affine=True) |
|
(ffn): MixFFN( |
|
(activate): GELU(approximate='none') |
|
(layers): Sequential( |
|
(0): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1)) |
|
(1): Conv2d(2048, 2048, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=2048) |
|
(2): GELU(approximate='none') |
|
(3): Dropout(p=0.0, inplace=False) |
|
(4): Conv2d(2048, 512, kernel_size=(1, 1), stride=(1, 1)) |
|
(5): Dropout(p=0.0, inplace=False) |
|
) |
|
(dropout_layer): DropPath() |
|
) |
|
) |
|
(2): TransformerEncoderLayer( |
|
(norm1): LayerNorm((512,), eps=1e-06, elementwise_affine=True) |
|
(attn): EfficientMultiheadAttention( |
|
(attn): MultiheadAttention( |
|
(out_proj): NonDynamicallyQuantizableLinear(in_features=512, out_features=512, bias=True) |
|
) |
|
(proj_drop): Dropout(p=0.0, inplace=False) |
|
(dropout_layer): DropPath() |
|
) |
|
(norm2): LayerNorm((512,), eps=1e-06, elementwise_affine=True) |
|
(ffn): MixFFN( |
|
(activate): GELU(approximate='none') |
|
(layers): Sequential( |
|
(0): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1)) |
|
(1): Conv2d(2048, 2048, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=2048) |
|
(2): GELU(approximate='none') |
|
(3): Dropout(p=0.0, inplace=False) |
|
(4): Conv2d(2048, 512, kernel_size=(1, 1), stride=(1, 1)) |
|
(5): Dropout(p=0.0, inplace=False) |
|
) |
|
(dropout_layer): DropPath() |
|
) |
|
) |
|
) |
|
(2): LayerNorm((512,), eps=1e-06, elementwise_affine=True) |
|
) |
|
) |
|
) |
|
init_cfg={'type': 'Pretrained', 'checkpoint': 'pretrained/segformer_mit-b2_512x512_160k_ade20k_20220620_114047-64e4feca.pth'} |
|
(decode_head): SegformerHeadUnetFCHeadSingleStep( |
|
input_transform=multiple_select, ignore_index=0, align_corners=False |
|
(loss_decode): CrossEntropyLoss(avg_non_ignore=False) |
|
(conv_seg): None |
|
(dropout): Dropout2d(p=0.1, inplace=False) |
|
(convs): ModuleList( |
|
(0): ConvModule( |
|
(conv): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) |
|
(bn): SyncBatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) |
|
(activate): ReLU(inplace=True) |
|
) |
|
(1): ConvModule( |
|
(conv): Conv2d(128, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) |
|
(bn): SyncBatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) |
|
(activate): ReLU(inplace=True) |
|
) |
|
(2): ConvModule( |
|
(conv): Conv2d(320, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) |
|
(bn): SyncBatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) |
|
(activate): ReLU(inplace=True) |
|
) |
|
(3): ConvModule( |
|
(conv): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) |
|
(bn): SyncBatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) |
|
(activate): ReLU(inplace=True) |
|
) |
|
) |
|
(fusion_conv): ConvModule( |
|
(conv): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) |
|
(bn): SyncBatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) |
|
(activate): ReLU(inplace=True) |
|
) |
|
(unet): Unet( |
|
(init_conv): Conv2d(272, 128, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3)) |
|
(time_mlp): Sequential( |
|
(0): SinusoidalPosEmb() |
|
(1): Linear(in_features=128, out_features=512, bias=True) |
|
(2): GELU(approximate='none') |
|
(3): Linear(in_features=512, out_features=512, bias=True) |
|
) |
|
      (downs): ModuleList(
        (0): ModuleList(
          (0): ResnetBlock(
            (mlp): Sequential(
              (0): SiLU()
              (1): Linear(in_features=512, out_features=256, bias=True)
            )
            (block1): Block(
              (proj): WeightStandardizedConv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
              (norm): GroupNorm(8, 128, eps=1e-05, affine=True)
              (act): SiLU()
            )
            (block2): Block(
              (proj): WeightStandardizedConv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
              (norm): GroupNorm(8, 128, eps=1e-05, affine=True)
              (act): SiLU()
            )
            (res_conv): Identity()
          )
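[Editor's note] Note the (mlp): SiLU + Linear(512 -> 256) inside each ResnetBlock: 256 is twice the 128-channel block width, which in lucidrains-style diffusion U-Nets means the time embedding is split into a per-channel scale and shift applied after block1's GroupNorm. A sketch of that forward pass (an inference from the printed shapes, not verified against this repo):

import torch

def resnet_block_forward(block, x, time_emb):
    # block: a ResnetBlock as printed above; time_emb: (N, 512) from time_mlp.
    scale_shift = block.mlp(time_emb)[:, :, None, None]  # (N, 256, 1, 1)
    scale, shift = scale_shift.chunk(2, dim=1)           # 2 x (N, 128, 1, 1)
    h = block.block1(x, scale_shift=(scale, shift))      # norm -> scale/shift -> act
    h = block.block2(h)
    return h + block.res_conv(x)  # res_conv is Identity() when channels match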
          (1): ResnetBlock(
            (mlp): Sequential(
              (0): SiLU()
              (1): Linear(in_features=512, out_features=256, bias=True)
            )
            (block1): Block(
              (proj): WeightStandardizedConv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
              (norm): GroupNorm(8, 128, eps=1e-05, affine=True)
              (act): SiLU()
            )
            (block2): Block(
              (proj): WeightStandardizedConv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
              (norm): GroupNorm(8, 128, eps=1e-05, affine=True)
              (act): SiLU()
            )
            (res_conv): Identity()
          )
          (2): Residual(
            (fn): PreNorm(
              (fn): LinearAttention(
                (to_qkv): Conv2d(128, 384, kernel_size=(1, 1), stride=(1, 1), bias=False)
                (to_out): Sequential(
                  (0): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1))
                  (1): LayerNorm()
                )
              )
              (norm): LayerNorm()
            )
          )
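[Editor's note] Each stage carries a pre-normed, residual LinearAttention (to_qkv: 128 -> 384 = 3 x 128, i.e. q, k, v; 4 heads x 32 dims is the usual configuration and is assumed here). Softmax is taken over q's feature axis and k's spatial axis, giving cost linear rather than quadratic in the number of positions. A sketch:

import torch
from torch import einsum

def linear_attention(qkv, heads=4):
    # qkv: output of the (to_qkv) 1x1 conv above, shape (N, 384, H, W).
    q, k, v = qkv.chunk(3, dim=1)               # each (N, 128, H, W)
    n, c, h, w = q.shape
    d = c // heads
    q, k, v = (t.reshape(n, heads, d, h * w) for t in (q, k, v))
    q = q.softmax(dim=-2) * d ** -0.5           # normalize over feature axis
    k = k.softmax(dim=-1)                       # normalize over spatial axis
    ctx = einsum('b h d n, b h e n -> b h d e', k, v)    # (N, heads, d, d)
    out = einsum('b h d e, b h d n -> b h e n', ctx, q)  # (N, heads, d, HW)
    return out.reshape(n, c, h, w)              # then (to_out) projects + norms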
          (3): Conv2d(128, 128, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
        )
        (1): ModuleList(
          (0): ResnetBlock(
            (mlp): Sequential(
              (0): SiLU()
              (1): Linear(in_features=512, out_features=256, bias=True)
            )
            (block1): Block(
              (proj): WeightStandardizedConv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
              (norm): GroupNorm(8, 128, eps=1e-05, affine=True)
              (act): SiLU()
            )
            (block2): Block(
              (proj): WeightStandardizedConv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
              (norm): GroupNorm(8, 128, eps=1e-05, affine=True)
              (act): SiLU()
            )
            (res_conv): Identity()
          )
          (1): ResnetBlock(
            (mlp): Sequential(
              (0): SiLU()
              (1): Linear(in_features=512, out_features=256, bias=True)
            )
            (block1): Block(
              (proj): WeightStandardizedConv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
              (norm): GroupNorm(8, 128, eps=1e-05, affine=True)
              (act): SiLU()
            )
            (block2): Block(
              (proj): WeightStandardizedConv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
              (norm): GroupNorm(8, 128, eps=1e-05, affine=True)
              (act): SiLU()
            )
            (res_conv): Identity()
          )
          (2): Residual(
            (fn): PreNorm(
              (fn): LinearAttention(
                (to_qkv): Conv2d(128, 384, kernel_size=(1, 1), stride=(1, 1), bias=False)
                (to_out): Sequential(
                  (0): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1))
                  (1): LayerNorm()
                )
              )
              (norm): LayerNorm()
            )
          )
          (3): Conv2d(128, 128, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
        )
        (2): ModuleList(
          (0): ResnetBlock(
            (mlp): Sequential(
              (0): SiLU()
              (1): Linear(in_features=512, out_features=256, bias=True)
            )
            (block1): Block(
              (proj): WeightStandardizedConv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
              (norm): GroupNorm(8, 128, eps=1e-05, affine=True)
              (act): SiLU()
            )
            (block2): Block(
              (proj): WeightStandardizedConv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
              (norm): GroupNorm(8, 128, eps=1e-05, affine=True)
              (act): SiLU()
            )
            (res_conv): Identity()
          )
          (1): ResnetBlock(
            (mlp): Sequential(
              (0): SiLU()
              (1): Linear(in_features=512, out_features=256, bias=True)
            )
            (block1): Block(
              (proj): WeightStandardizedConv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
              (norm): GroupNorm(8, 128, eps=1e-05, affine=True)
              (act): SiLU()
            )
            (block2): Block(
              (proj): WeightStandardizedConv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
              (norm): GroupNorm(8, 128, eps=1e-05, affine=True)
              (act): SiLU()
            )
            (res_conv): Identity()
          )
          (2): Residual(
            (fn): PreNorm(
              (fn): LinearAttention(
                (to_qkv): Conv2d(128, 384, kernel_size=(1, 1), stride=(1, 1), bias=False)
                (to_out): Sequential(
                  (0): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1))
                  (1): LayerNorm()
                )
              )
              (norm): LayerNorm()
            )
          )
          (3): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        )
      )
      (ups): ModuleList(
        (0): ModuleList(
          (0): ResnetBlock(
            (mlp): Sequential(
              (0): SiLU()
              (1): Linear(in_features=512, out_features=256, bias=True)
            )
            (block1): Block(
              (proj): WeightStandardizedConv2d(256, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
              (norm): GroupNorm(8, 128, eps=1e-05, affine=True)
              (act): SiLU()
            )
            (block2): Block(
              (proj): WeightStandardizedConv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
              (norm): GroupNorm(8, 128, eps=1e-05, affine=True)
              (act): SiLU()
            )
            (res_conv): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1))
          )
          (1): ResnetBlock(
            (mlp): Sequential(
              (0): SiLU()
              (1): Linear(in_features=512, out_features=256, bias=True)
            )
            (block1): Block(
              (proj): WeightStandardizedConv2d(256, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
              (norm): GroupNorm(8, 128, eps=1e-05, affine=True)
              (act): SiLU()
            )
            (block2): Block(
              (proj): WeightStandardizedConv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
              (norm): GroupNorm(8, 128, eps=1e-05, affine=True)
              (act): SiLU()
            )
            (res_conv): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1))
          )
          (2): Residual(
            (fn): PreNorm(
              (fn): LinearAttention(
                (to_qkv): Conv2d(128, 384, kernel_size=(1, 1), stride=(1, 1), bias=False)
                (to_out): Sequential(
                  (0): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1))
                  (1): LayerNorm()
                )
              )
              (norm): LayerNorm()
            )
          )
          (3): Sequential(
            (0): Upsample(scale_factor=2.0, mode=nearest)
            (1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
          )
        )
        (1): ModuleList(
          (0): ResnetBlock(
            (mlp): Sequential(
              (0): SiLU()
              (1): Linear(in_features=512, out_features=256, bias=True)
            )
            (block1): Block(
              (proj): WeightStandardizedConv2d(256, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
              (norm): GroupNorm(8, 128, eps=1e-05, affine=True)
              (act): SiLU()
            )
            (block2): Block(
              (proj): WeightStandardizedConv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
              (norm): GroupNorm(8, 128, eps=1e-05, affine=True)
              (act): SiLU()
            )
            (res_conv): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1))
          )
          (1): ResnetBlock(
            (mlp): Sequential(
              (0): SiLU()
              (1): Linear(in_features=512, out_features=256, bias=True)
            )
            (block1): Block(
              (proj): WeightStandardizedConv2d(256, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
              (norm): GroupNorm(8, 128, eps=1e-05, affine=True)
              (act): SiLU()
            )
            (block2): Block(
              (proj): WeightStandardizedConv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
              (norm): GroupNorm(8, 128, eps=1e-05, affine=True)
              (act): SiLU()
            )
            (res_conv): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1))
          )
          (2): Residual(
            (fn): PreNorm(
              (fn): LinearAttention(
                (to_qkv): Conv2d(128, 384, kernel_size=(1, 1), stride=(1, 1), bias=False)
                (to_out): Sequential(
                  (0): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1))
                  (1): LayerNorm()
                )
              )
              (norm): LayerNorm()
            )
          )
          (3): Sequential(
            (0): Upsample(scale_factor=2.0, mode=nearest)
            (1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
          )
        )
        (2): ModuleList(
          (0): ResnetBlock(
            (mlp): Sequential(
              (0): SiLU()
              (1): Linear(in_features=512, out_features=256, bias=True)
            )
            (block1): Block(
              (proj): WeightStandardizedConv2d(256, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
              (norm): GroupNorm(8, 128, eps=1e-05, affine=True)
              (act): SiLU()
            )
            (block2): Block(
              (proj): WeightStandardizedConv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
              (norm): GroupNorm(8, 128, eps=1e-05, affine=True)
              (act): SiLU()
            )
            (res_conv): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1))
          )
          (1): ResnetBlock(
            (mlp): Sequential(
              (0): SiLU()
              (1): Linear(in_features=512, out_features=256, bias=True)
            )
            (block1): Block(
              (proj): WeightStandardizedConv2d(256, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
              (norm): GroupNorm(8, 128, eps=1e-05, affine=True)
              (act): SiLU()
            )
            (block2): Block(
              (proj): WeightStandardizedConv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
              (norm): GroupNorm(8, 128, eps=1e-05, affine=True)
              (act): SiLU()
            )
            (res_conv): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1))
          )
          (2): Residual(
            (fn): PreNorm(
              (fn): LinearAttention(
                (to_qkv): Conv2d(128, 384, kernel_size=(1, 1), stride=(1, 1), bias=False)
                (to_out): Sequential(
                  (0): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1))
                  (1): LayerNorm()
                )
              )
              (norm): LayerNorm()
            )
          )
          (3): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        )
      )
      (mid_block1): ResnetBlock(
        (mlp): Sequential(
          (0): SiLU()
          (1): Linear(in_features=512, out_features=256, bias=True)
        )
        (block1): Block(
          (proj): WeightStandardizedConv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
          (norm): GroupNorm(8, 128, eps=1e-05, affine=True)
          (act): SiLU()
        )
        (block2): Block(
          (proj): WeightStandardizedConv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
          (norm): GroupNorm(8, 128, eps=1e-05, affine=True)
          (act): SiLU()
        )
        (res_conv): Identity()
      )
      (mid_attn): Residual(
        (fn): PreNorm(
          (fn): Attention(
            (to_qkv): Conv2d(128, 384, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (to_out): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1))
          )
          (norm): LayerNorm()
        )
      )
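[Editor's note] Unlike the LinearAttention used in the down/up stages, the bottleneck's mid_attn is full softmax attention over all spatial positions, affordable here because the feature map is at its coarsest. A sketch with the same 128 -> 384 qkv projection (4 heads x 32 dims assumed):

import torch

def full_attention(qkv, heads=4):
    # qkv: output of the (to_qkv) 1x1 conv above, shape (N, 384, H, W).
    q, k, v = qkv.chunk(3, dim=1)
    n, c, h, w = q.shape
    d = c // heads
    q, k, v = (t.reshape(n, heads, d, h * w) for t in (q, k, v))
    sim = torch.einsum('b h d i, b h d j -> b h i j', q * d ** -0.5, k)
    attn = sim.softmax(dim=-1)                  # quadratic in H*W positions
    out = torch.einsum('b h i j, b h d j -> b h d i', attn, v)
    return out.reshape(n, c, h, w)              # then (to_out) projects back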
      (mid_block2): ResnetBlock(
        (mlp): Sequential(
          (0): SiLU()
          (1): Linear(in_features=512, out_features=256, bias=True)
        )
        (block1): Block(
          (proj): WeightStandardizedConv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
          (norm): GroupNorm(8, 128, eps=1e-05, affine=True)
          (act): SiLU()
        )
        (block2): Block(
          (proj): WeightStandardizedConv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
          (norm): GroupNorm(8, 128, eps=1e-05, affine=True)
          (act): SiLU()
        )
        (res_conv): Identity()
      )
      (final_res_block): ResnetBlock(
        (mlp): Sequential(
          (0): SiLU()
          (1): Linear(in_features=512, out_features=256, bias=True)
        )
        (block1): Block(
          (proj): WeightStandardizedConv2d(256, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
          (norm): GroupNorm(8, 128, eps=1e-05, affine=True)
          (act): SiLU()
        )
        (block2): Block(
          (proj): WeightStandardizedConv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
          (norm): GroupNorm(8, 128, eps=1e-05, affine=True)
          (act): SiLU()
        )
        (res_conv): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1))
      )
      (final_conv): Conv2d(128, 256, kernel_size=(1, 1), stride=(1, 1))
    )
    (conv_seg_new): Conv2d(256, 151, kernel_size=(1, 1), stride=(1, 1))
    (embed): Embedding(151, 16)
  )
  init_cfg={'type': 'Pretrained', 'checkpoint': 'pretrained/segformer_mit-b2_512x512_160k_ade20k_20220620_114047-64e4feca.pth'}
)
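[Editor's note] The printed shapes suggest how the head's pieces connect (an inference from the dump, not verified against the training code): embed maps the 151-way label map (150 ADE20K classes plus the ignored index 0) to 16 channels, which concatenated with the 256-channel fused SegFormer features gives exactly the 272-channel input of the U-Net's init_conv; conv_seg_new projects the U-Net output back to 151 class logits. A shape check:

import torch
import torch.nn as nn

embed = nn.Embedding(151, 16)
conv_seg_new = nn.Conv2d(256, 151, kernel_size=1)

feats = torch.randn(2, 256, 128, 128)          # fused SegFormer features
labels = torch.randint(0, 151, (2, 128, 128))  # (noisy) label map, assumed
x = torch.cat([feats, embed(labels).permute(0, 3, 1, 2)], dim=1)
assert x.shape[1] == 272                       # matches init_conv's in_channels
# ... Unet(x, t) -> (2, 256, 128, 128) -> conv_seg_new -> (2, 151, 128, 128)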
2023-03-03 13:59:40,184 - mmseg - INFO - Loaded 20210 images
2023-03-03 13:59:41,189 - mmseg - INFO - Loaded 2000 images
2023-03-03 13:59:41,192 - mmseg - INFO - Start running, host: laizeqiang@SH-IDC1-10-140-37-124, work_dir: /mnt/petrelfs/laizeqiang/mmseg-baseline/work_dirs/segformer_mit_b2_segformer_head_unet_fc_single_step_ade_pretrained_freeze_embed_80k_ade20k151
2023-03-03 13:59:41,192 - mmseg - INFO - Hooks will be executed in the following order:
before_run:
(VERY_HIGH   ) StepLrUpdaterHook
(NORMAL      ) CheckpointHook
(LOW         ) DistEvalHook
(VERY_LOW    ) TextLoggerHook
--------------------
before_train_epoch:
(VERY_HIGH   ) StepLrUpdaterHook
(LOW         ) IterTimerHook
(LOW         ) DistEvalHook
(VERY_LOW    ) TextLoggerHook
--------------------
before_train_iter:
(VERY_HIGH   ) StepLrUpdaterHook
(LOW         ) IterTimerHook
(LOW         ) DistEvalHook
--------------------
after_train_iter:
(ABOVE_NORMAL) OptimizerHook
(NORMAL      ) CheckpointHook
(LOW         ) IterTimerHook
(LOW         ) DistEvalHook
(VERY_LOW    ) TextLoggerHook
--------------------
after_train_epoch:
(NORMAL      ) CheckpointHook
(LOW         ) DistEvalHook
(VERY_LOW    ) TextLoggerHook
--------------------
before_val_epoch:
(LOW         ) IterTimerHook
(VERY_LOW    ) TextLoggerHook
--------------------
before_val_iter:
(LOW         ) IterTimerHook
--------------------
after_val_iter:
(LOW         ) IterTimerHook
--------------------
after_val_epoch:
(VERY_LOW    ) TextLoggerHook
--------------------
after_run:
(VERY_LOW    ) TextLoggerHook
--------------------
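[Editor's note] This ordering follows mmcv's numeric hook priorities (VERY_HIGH=10, ABOVE_NORMAL=40, NORMAL=50, LOW=70, VERY_LOW=90; lower values run first within each stage). A custom hook can be slotted into the sequence like this (MyTimerHook is illustrative, not part of this run):

from mmcv.runner import HOOKS, Hook

@HOOKS.register_module()
class MyTimerHook(Hook):
    def before_train_iter(self, runner):
        pass  # e.g. record a timestamp per iteration

# in the config:
# custom_hooks = [dict(type='MyTimerHook', priority='LOW')]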
2023-03-03 13:59:41,192 - mmseg - INFO - workflow: [('train', 1)], max: 80000 iters
2023-03-03 13:59:41,192 - mmseg - INFO - Checkpoints will be saved to /mnt/petrelfs/laizeqiang/mmseg-baseline/work_dirs/segformer_mit_b2_segformer_head_unet_fc_single_step_ade_pretrained_freeze_embed_80k_ade20k151 by HardDiskBackend.