<div align="center">
# [IEEE TIP] Transforming Image Super-Resolution: A ConvFormer-based Efficient Approach
[Gang Wu](https://scholar.google.com/citations?user=JSqb7QIAAAAJ), [Junjun Jiang](http://homepage.hit.edu.cn/jiangjunjun), [Junpeng Jiang](), and [Xianming Liu](http://homepage.hit.edu.cn/xmliu)
[AIIA Lab](https://aiialabhit.github.io/team/), Harbin Institute of Technology.
[![Paper](https://img.shields.io/badge/Paper-arXiv-red)](https://arxiv.org/abs/2401.05633) [![Paper](https://img.shields.io/badge/Paper-IEEE%20TIP-blue)](https://github.com/Aitical/CFSR) [![Models](https://img.shields.io/badge/Models-Hugging%20Face-gold)](https://huggingface.co/GWu/CFSR/) [![Results](https://img.shields.io/badge/Results-GoogleDrive-brightgreen)](https://drive.google.com/drive/folders/1M55TvlSn1BJVJ4Go5uVkvHFhfwo7Z5ov?usp=sharing) [![Hits](https://hits.sh/github.com/Aitical/CFSR.svg)](https://hits.sh/github.com/Aitical/CFSR/)
</div>
This repository is the official PyTorch implementation of "Transforming Image Super-Resolution: A ConvFormer-based Efficient Approach" (IEEE TIP).
> Recent progress in single-image super-resolution (SISR) has achieved remarkable performance, yet the computational costs of these methods remain a challenge for deployment on resource-constrained devices. In particular, transformer-based methods, which leverage self-attention mechanisms, have led to significant breakthroughs but also introduce substantial computational costs. To tackle this issue, we introduce the Convolutional Transformer layer (ConvFormer) and propose a ConvFormer-based Super-Resolution network (CFSR), offering an effective and efficient solution for lightweight image super-resolution. The proposed method inherits the advantages of both convolution-based and transformer-based approaches. Specifically, CFSR utilizes large kernel convolutions as a feature mixer to replace the self-attention module, efficiently modeling long-range dependencies and extensive receptive fields with minimal computational overhead. Furthermore, we propose an edge-preserving feed-forward network (EFN) designed to achieve local feature aggregation while effectively preserving high-frequency information. Extensive experiments demonstrate that CFSR strikes an optimal balance between computational cost and performance compared to existing lightweight SR methods. When benchmarked against state-of-the-art methods such as ShuffleMixer, the proposed CFSR achieves a gain of 0.39 dB on the Urban100 dataset for the x2 super-resolution task while requiring 26% and 31% fewer parameters and FLOPs, respectively.
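
For readers who want a concrete picture of the idea, below is a minimal PyTorch sketch of a ConvFormer-style block in which a large-kernel depthwise convolution replaces self-attention as the token mixer. The class names (`LargeKernelMixer`, `ConvFormerBlock`) and all hyperparameters are illustrative assumptions, not the modules released in this repository; please refer to the official code for the exact architecture, including the edge-preserving feed-forward network (EFN).

```python
# Illustrative sketch only: a ConvFormer-style block that swaps self-attention
# for a large-kernel depthwise convolution, in the spirit of the abstract above.
# Module names and hyperparameters are hypothetical, not the repository's classes.
import torch
import torch.nn as nn


class LargeKernelMixer(nn.Module):
    """Depthwise large-kernel conv + pointwise conv as a cheap token mixer."""

    def __init__(self, dim: int, kernel_size: int = 9):
        super().__init__()
        self.dwconv = nn.Conv2d(
            dim, dim, kernel_size, padding=kernel_size // 2, groups=dim
        )
        self.pwconv = nn.Conv2d(dim, dim, 1)  # pointwise channel mixing

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.pwconv(self.dwconv(x))


class ConvFormerBlock(nn.Module):
    """Token mixer + feed-forward network, each with a residual connection."""

    def __init__(self, dim: int, kernel_size: int = 9, expansion: int = 2):
        super().__init__()
        self.norm1 = nn.GroupNorm(1, dim)  # channel-wise LayerNorm for NCHW tensors
        self.mixer = LargeKernelMixer(dim, kernel_size)
        self.norm2 = nn.GroupNorm(1, dim)
        self.ffn = nn.Sequential(  # stand-in for the EFN described in the paper
            nn.Conv2d(dim, dim * expansion, 1),
            nn.GELU(),
            nn.Conv2d(dim * expansion, dim, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x + self.mixer(self.norm1(x))
        x = x + self.ffn(self.norm2(x))
        return x


if __name__ == "__main__":
    block = ConvFormerBlock(dim=48)
    lr_feat = torch.randn(1, 48, 64, 64)
    print(block(lr_feat).shape)  # torch.Size([1, 48, 64, 64])
```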
## Results
Results for the x2, x3, and x4 SR tasks are available at [Google Drive](https://drive.google.com/drive/folders/1M55TvlSn1BJVJ4Go5uVkvHFhfwo7Z5ov?usp=sharing).