---
license: apache-2.0
tags:
  - super-image
  - image-super-resolution
datasets:
  - eugenesiow/Div2k
  - eugenesiow/Set5
  - eugenesiow/Set14
  - eugenesiow/BSD100
  - eugenesiow/Urban100
metrics:
  - psnr
  - ssim
---

# Attention in Attention Network for Image Super-Resolution (A2N)

A2N model pre-trained on DIV2K (800 images training, augmented to 4000 images, 100 images validation) for 2x, 3x and 4x image super resolution. It was introduced in the paper Attention in Attention Network for Image Super-Resolution by Chen et al. (2021) and first released in this repository.

The goal of image super resolution is to restore a high resolution (HR) image from a single low resolution (LR) image. The image below shows the ground truth (HR), the bicubic upscaling x2 and model upscaling x2.

*Comparing bicubic upscaling against the model's 2x upscaling on Set5 image 4.*
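For reference, the bicubic baseline used in such comparisons can be produced with Pillow alone. This is a minimal sketch; the file names are hypothetical:

```python
from PIL import Image

# Hypothetical input file; any low-resolution image works.
lr = Image.open('low_res.png')

# Upscale 2x with bicubic interpolation, the standard baseline.
bicubic_x2 = lr.resize((lr.width * 2, lr.height * 2), Image.BICUBIC)
bicubic_x2.save('bicubic_2x.png')
```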

## Model description

The A2N model proposes an attention-in-attention network (A2N) for highly accurate image SR. Specifically, A2N consists of a non-attention branch and a coupling attention branch. An attention dropout module generates dynamic attention weights for these two branches based on the input features, suppressing unwanted attention adjustments. This allows the attention modules to specialize on beneficial examples without penalty elsewhere, greatly improving the capacity of the attention network with little parameter overhead.

More importantly, the model is lightweight and fast to train (~1.5M parameters, ~4 MB).
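To make the branch-mixing idea concrete, here is a minimal PyTorch sketch. It is not the authors' exact architecture: the layer choices, residual connection, and module name are assumptions. It only illustrates how per-input weights can gate a non-attention branch against an attention branch:

```python
import torch
import torch.nn as nn

class AttentionInAttentionBlock(nn.Module):
    """Illustrative sketch of A2N-style branch mixing (not the paper's exact layers)."""
    def __init__(self, channels: int):
        super().__init__()
        self.non_attn = nn.Conv2d(channels, channels, 3, padding=1)   # plain conv branch
        self.attn = nn.Sequential(                                    # attention branch
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.Sigmoid(),
        )
        # "Attention dropout" idea: predict two mixing weights from the input features.
        self.mix = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, 2, 1),
            nn.Softmax(dim=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = self.mix(x)                     # (B, 2, 1, 1) dynamic branch weights
        plain = self.non_attn(x)
        attended = self.attn(x) * x         # features modulated by an attention map
        return x + w[:, 0:1] * plain + w[:, 1:2] * attended

block = AttentionInAttentionBlock(channels=40)
out = block(torch.randn(1, 40, 64, 64))     # same shape in and out
```

When the predicted weight for the attention branch is low, the block falls back toward plain convolution, which is the sense in which unhelpful attention adjustments can be suppressed.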

## Intended uses & limitations

You can use the pre-trained models for upscaling your images 2x, 3x and 4x. You can also use the trainer to train a model on your own dataset.

## How to use

The model can be used with the `super_image` library:

```bash
pip install super-image
```

Here is how to use a pre-trained model to upscale your image:

```python
from super_image import A2nModel, ImageLoader
from PIL import Image
import requests

url = 'https://paperswithcode.com/media/datasets/Set5-0000002728-07a9793f_zA3bDjj.jpg'
image = Image.open(requests.get(url, stream=True).raw)

model = A2nModel.from_pretrained('eugenesiow/a2n', scale=2)      # scale 2, 3 and 4 models available
inputs = ImageLoader.load_image(image)
preds = model(inputs)

ImageLoader.save_image(preds, './scaled_2x.png')                        # save the output 2x scaled image to `./scaled_2x.png`
ImageLoader.save_compare(inputs, preds, './scaled_2x_compare.png')      # save an output comparing the super-image with a bicubic scaling
```


## Training data

The models for 2x, 3x and 4x image super resolution were pre-trained on DIV2K, a dataset of 800 high-quality (2K resolution) training images augmented to 4000 images, with a dev set of 100 validation images (images numbered 801 to 900).

## Training procedure

### Preprocessing

We follow the pre-processing and training method of Wang et al. Low-resolution (LR) images are created by bicubic interpolation, reducing the size of the high-resolution (HR) images by x2, x3 and x4. During training, RGB patches of size 64×64 from the LR input are used together with their corresponding HR patches. Data augmentation is applied to the training set in the pre-processing stage, where five images are created from the four corners and center of the original image.
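For illustration, the five-crop idea looks roughly like this with Pillow. This is a conceptual sketch, not the exact `augment_five_crop` implementation from `super_image`:

```python
from PIL import Image

def five_crop(img: Image.Image, size: int) -> list:
    """Crop the four corners and the center of an image (conceptual sketch)."""
    w, h = img.size
    boxes = [
        (0, 0, size, size),                    # top-left
        (w - size, 0, w, size),                # top-right
        (0, h - size, size, h),                # bottom-left
        (w - size, h - size, w, h),            # bottom-right
        ((w - size) // 2, (h - size) // 2,     # center
         (w + size) // 2, (h + size) // 2),
    ]
    return [img.crop(box) for box in boxes]
```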

We need the Hugging Face `datasets` library to download the data:

```bash
pip install datasets
```

The following code downloads the data and preprocesses/augments it:

```python
from datasets import load_dataset
from super_image.data import EvalDataset, TrainDataset, augment_five_crop

augmented_dataset = load_dataset('eugenesiow/Div2k', 'bicubic_x4', split='train')\
    .map(augment_five_crop, batched=True, desc="Augmenting Dataset")                                # download and augment the data with the five_crop method
train_dataset = TrainDataset(augmented_dataset)                                                     # prepare the train dataset for loading PyTorch DataLoader
eval_dataset = EvalDataset(load_dataset('eugenesiow/Div2k', 'bicubic_x4', split='validation'))      # prepare the eval dataset for the PyTorch DataLoader
```

### Pretraining

The model was trained on a GPU. The training code is provided below:

```python
from super_image import Trainer, TrainingArguments, A2nModel, A2nConfig

training_args = TrainingArguments(
    output_dir='./results',                 # output directory
    num_train_epochs=1000,                  # total number of training epochs
)

config = A2nConfig(
    scale=4,                                # train a model to upscale 4x
)
model = A2nModel(config)

trainer = Trainer(
    model=model,                         # the instantiated model to be trained
    args=training_args,                  # training arguments, defined above
    train_dataset=train_dataset,         # training dataset
    eval_dataset=eval_dataset            # evaluation dataset
)

trainer.train()
```


## Evaluation results

The evaluation metrics include PSNR and SSIM.
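For reference, PSNR is a log-scale function of the mean squared error between the upscaled and ground-truth images. A minimal NumPy sketch is below; the reported figures come from the `super_image` evaluation code, and SR benchmarks commonly compute PSNR on the luminance (Y) channel:

```python
import numpy as np

def psnr(hr: np.ndarray, sr: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between two same-sized uint8 images."""
    mse = np.mean((hr.astype(np.float64) - sr.astype(np.float64)) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)
```

SSIM measures structural similarity rather than per-pixel error; `skimage.metrics.structural_similarity` is one widely used implementation.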

Evaluation datasets include:

- Set5
- Set14
- BSD100
- Urban100

The results below are reported as PSNR/SSIM and compared against a bicubic baseline.

|Dataset |Scale |Bicubic |A2N |
|--- |--- |--- |--- |
|Set5 |2x |33.64/0.9292 |37.87/0.9602 |
|Set5 |3x |30.39/0.8678 |34.8/0.9387 |
|Set5 |4x |28.42/0.8101 |32.07/0.8933 |
|Set14 |2x |30.22/0.8683 |33.45/0.9162 |
|Set14 |3x |27.53/0.7737 |30.94/0.8568 |
|Set14 |4x |25.99/0.7023 |28.56/0.7801 |
|BSD100 |2x |29.55/0.8425 |32.11/0.8987 |
|BSD100 |3x |27.20/0.7382 |29.56/0.8173 |
|BSD100 |4x |25.96/0.6672 |27.54/0.7342 |
|Urban100 |2x |26.66/0.8408 |31.71/0.9240 |
|Urban100 |3x | |28.95/0.8671 |
|Urban100 |4x |23.14/0.6573 |25.89/0.7787 |

*Comparing bicubic upscaling against the model's 2x upscaling on Set5 image 2.*

A notebook to easily run evaluation on the pre-trained models is available on Google Colab.

## BibTeX entry and citation info

```bibtex
@misc{chen2021attention,
      title={Attention in Attention Network for Image Super-Resolution},
      author={Haoyu Chen and Jinjin Gu and Zhi Zhang},
      year={2021},
      eprint={2104.09497},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```