---
license: cc-by-4.0
pipeline_tag: image-to-image
tags:
- pytorch
- super-resolution
- pretrain
---
Since there are no official HMA model releases yet, I am releasing my `hma` and `hma_medium` mssim pretrains. These can be used to speed up and stabilize the early training stages when training new HMA models; see the loading sketch below. Both were trained with mssim loss on nomosv2.
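As a minimal sketch of how such a pretrain could be loaded into a network instance before fine-tuning: the snippet below assumes a BasicSR/neosr-style checkpoint layout where weights sit under a `params` or `params_ema` key, and an `HMA` architecture class from your own training code; these names are assumptions, not something defined by this release.

```python
import torch

# Hypothetical import: use the HMA architecture class from your training framework.
# from my_framework.archs.hma_arch import HMA

def load_pretrain(model: torch.nn.Module, path: str) -> torch.nn.Module:
    """Load a released pretrain checkpoint into a freshly constructed model."""
    checkpoint = torch.load(path, map_location="cpu")
    # Assumption: weights are stored under "params_ema" or "params";
    # fall back to treating the file as a bare state dict otherwise.
    state_dict = checkpoint.get("params_ema", checkpoint.get("params", checkpoint))
    model.load_state_dict(state_dict, strict=True)
    return model

# Usage sketch (constructor arguments are hypothetical):
# model = HMA(upscale=4)
# model = load_pretrain(model, "4xmssim_hma_pretrain.pth")
```

Alternatively, point your training config's `pretrain_network_g` (or equivalent) option at the `.pth` file and continue training from there.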
# 4xmssim_hma_pretrain

- Scale: 4
- Architecture: HMA
- Architecture Option: hma
- Author: Philip Hofmann
- License: CC-BY-4.0
- Purpose: Pretrained
- Subject: Photography
- Input Type: Images
- Release Date: 19.07.2024
- Dataset: nomosv2
- Dataset Size: 6000
- OTF (on-the-fly augmentations): No
- Pretrained Model: None (= from scratch)
- Iterations: 205,000
- Batch Size: 4
- Patch Size: 96
- Description: A pretrain to start hma model training.
# 4xmssim_hma_medium_pretrain

- Scale: 4
- Architecture: HMA
- Architecture Option: hma_medium
- Author: Philip Hofmann
- License: CC-BY-4.0
- Purpose: Pretrained
- Subject: Photography
- Input Type: Images
- Release Date: 19.07.2024
- Dataset: nomosv2
- Dataset Size: 6000
- OTF (on-the-fly augmentations): No
- Pretrained Model: None (= from scratch)
- Iterations: 150,000
- Batch Size: 4
- Patch Size: 48
- Description: A pretrain to start hma_medium model training.