arXiv:2309.08113

MetaF2N: Blind Image Super-Resolution by Learning Efficient Model Adaptation from Faces

Published on Sep 15, 2023
Abstract

Due to their highly structured characteristics, faces are easier to recover than natural scenes for blind image super-resolution. Therefore, we can extract the degradation representation of an image from the low-quality and recovered face pairs. Using the degradation representation, realistic low-quality images can then be synthesized to fine-tune the super-resolution model for the real-world low-quality image. However, such a procedure is time-consuming and laborious, and the gaps between recovered faces and the ground-truths further increase the optimization uncertainty. To facilitate efficient model adaptation towards image-specific degradations, we propose a method dubbed MetaF2N, which leverages the contained Faces to fine-tune model parameters for adapting to the whole Natural image in a Meta-learning framework. The degradation extraction and low-quality image synthesis steps are thus circumvented in our MetaF2N, and it requires only one fine-tuning step to get decent performance. Considering the gaps between the recovered faces and ground-truths, we further deploy a MaskNet for adaptively predicting loss weights at different positions to reduce the impact of low-confidence areas. To evaluate our proposed MetaF2N, we have collected a real-world low-quality dataset with one or multiple faces in each image, and our MetaF2N achieves superior performance on both synthetic and real-world datasets. Source code, pre-trained models, and collected datasets are available at https://github.com/yinzhicun/MetaF2N.
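The following is a minimal sketch of the single-step, face-guided adaptation idea described in the abstract, not the authors' implementation. All names (sr_model, face_restorer, mask_net, adapt_one_step), the tiny stand-in architectures, the L1-style weighted loss, and the hyperparameters are illustrative assumptions; the meta-training stage that makes a single gradient step effective is omitted entirely.

```python
# Hypothetical sketch: adapt an SR model to one image using the faces it contains,
# with per-pixel loss weights down-weighting low-confidence regions of the
# restored faces. Assumed components, not the released MetaF2N code.
import torch
import torch.nn as nn

scale = 4

# Stand-ins for the pre-trained components (assumed architectures).
sr_model = nn.Sequential(                      # meta-learned SR backbone (placeholder)
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3 * scale**2, 3, padding=1), nn.PixelShuffle(scale),
)
face_restorer = nn.Upsample(scale_factor=scale, mode="bicubic")  # placeholder for a face restoration prior
mask_net = nn.Sequential(                      # predicts per-pixel loss weights (confidence)
    nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),
)

def adapt_one_step(lq_image, face_boxes, lr=1e-4):
    """One fine-tuning step on the faces contained in the low-quality image itself."""
    opt = torch.optim.Adam(sr_model.parameters(), lr=lr)
    loss = 0.0
    for (x0, y0, x1, y1) in face_boxes:
        lq_face = lq_image[:, :, y0:y1, x0:x1]
        with torch.no_grad():
            pseudo_gt = face_restorer(lq_face)              # recovered face as pseudo ground truth
        sr_face = sr_model(lq_face)
        weights = mask_net(torch.cat([sr_face, pseudo_gt], dim=1))
        loss = loss + (weights * (sr_face - pseudo_gt).abs()).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return sr_model(lq_image)                               # apply the adapted model to the whole image

# Toy usage with random tensors standing in for a real low-quality photo.
lq = torch.rand(1, 3, 64, 64)
out = adapt_one_step(lq, face_boxes=[(8, 8, 40, 40)])
print(out.shape)  # torch.Size([1, 3, 256, 256])
```

Under these assumptions, the adaptation needs no explicit degradation extraction or synthetic data: the restored face crops serve as in-image supervision, and the mask keeps unreliable face regions from dominating the update.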
