Learning Enriched Features for Fast Image Restoration and Enhancement (TPAMI 2022)

Syed Waqas Zamir, Aditya Arora, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, Ming-Hsuan Yang, and Ling Shao

Paper | Google Colab Demo

News

  • April 27, 2022: Code and pre-trained models are released!

Abstract: *Given a degraded input image, image restoration aims to recover the missing high-quality image content. Numerous applications demand effective image restoration, e.g., computational photography, surveillance, autonomous vehicles, and remote sensing. Significant advances in image restoration have been made in recent years, dominated by convolutional neural networks (CNNs). The widely-used CNN-based methods typically operate either on full-resolution or on progressively low-resolution representations. In the former case, spatial details are preserved but the contextual information cannot be precisely encoded. In the latter case, generated outputs are semantically reliable but spatially less accurate. This paper presents a new architecture with a holistic goal of maintaining spatially-precise high-resolution representations through the entire network, and receiving complementary contextual information from the low-resolution representations. The core of our approach is a multi-scale residual block containing the following key elements: (a) parallel multi-resolution convolution streams for extracting multi-scale features, (b) information exchange across the multi-resolution streams, (c) non-local attention mechanism for capturing contextual information, and (d) attention based multi-scale feature aggregation. Our approach learns an enriched set of features that combines contextual information from multiple scales, while simultaneously preserving the high-resolution spatial details. Extensive experiments on six real image benchmark datasets demonstrate that our method, named MIRNet-v2, achieves state-of-the-art results for a variety of image processing tasks, including defocus deblurring, image denoising, super-resolution, and image enhancement.*


Network Architecture

Architecture diagrams in the repository illustrate the following components (a rough sketch of SKFF follows the list):

  • Overall Framework of MIRNet_v2
  • Selective Kernel Feature Fusion (SKFF)
  • Residual Contextual Block (RCB)
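
To make the attention-based multi-scale aggregation concrete, below is a rough PyTorch sketch of the SKFF idea: features from the parallel resolution streams are fused with softmax attention over per-stream channel descriptors. This is not the official module; the class name, the reduction factor, and the exact layer choices are our assumptions, so refer to the repository code for the actual implementation.

```python
# Illustrative sketch of Selective Kernel Feature Fusion (SKFF); NOT the official code.
import torch
import torch.nn as nn


class SKFFSketch(nn.Module):
    def __init__(self, channels: int, num_streams: int = 3, reduction: int = 8):
        super().__init__()
        hidden = max(channels // reduction, 4)  # reduction factor is an assumption
        self.squeeze = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                      # global context per channel
            nn.Conv2d(channels, hidden, kernel_size=1),   # channel-downscaling conv
            nn.PReLU(),
        )
        # one channel-attention branch per input stream
        self.attn = nn.ModuleList(
            [nn.Conv2d(hidden, channels, kernel_size=1) for _ in range(num_streams)]
        )

    def forward(self, streams):
        # streams: list of tensors with identical shape (B, C, H, W)
        fused = torch.stack(streams, dim=1)               # (B, S, C, H, W)
        summary = self.squeeze(fused.sum(dim=1))          # (B, hidden, 1, 1)
        logits = torch.stack([a(summary) for a in self.attn], dim=1)  # (B, S, C, 1, 1)
        weights = torch.softmax(logits, dim=1)            # softmax across streams
        return (fused * weights).sum(dim=1)               # attention-weighted aggregation


# Example: fusing three streams of 64-channel features
x = [torch.randn(1, 64, 128, 128) for _ in range(3)]
out = SKFFSketch(64, num_streams=3)(x)
print(out.shape)  # torch.Size([1, 64, 128, 128])
```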

Installation

See INSTALL.md for the installation of dependencies required to run MIRNet_v2.
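
As a quick sanity check after following INSTALL.md, the snippet below verifies that a working PyTorch build and GPU are visible. It assumes a PyTorch-based environment (which the underlying BasicSR toolbox requires) and is not part of the official instructions.

```python
# Minimal environment check; assumes the PyTorch-based setup described in INSTALL.md.
import torch

print("PyTorch version:", torch.__version__)
print("CUDA available: ", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```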

Demo

To test the pre-trained MIRNet_v2 models for Real Denoising, Dual-Pixel Defocus Deblurring, Super-Resolution, and Image Enhancement on your own images, you can either use the Google Colab demo or run the following command:

python demo.py --task Task_Name --input_dir path_to_images --result_dir save_images_here

Example usage to perform Image Denoising on a directory of images:

python demo.py --task real_denoising --input_dir './demo/degraded/' --result_dir './demo/restored/'

Example usage to perform Image Denoising on an image directly:

python demo.py --task real_denoising --input_dir './demo/degraded/noisy.png' --result_dir './demo/restored/'
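
To run the demo over several tasks in one pass, a thin wrapper around the command line above is enough. Only real_denoising is spelled out in this README, so the other --task identifiers in this sketch are assumptions and may need to be adjusted to match the names used in the respective task directories.

```python
# Hypothetical batch driver around the demo.py CLI shown above.
import subprocess
from pathlib import Path

# 'real_denoising' is documented above; the remaining task names are assumptions.
tasks = ["real_denoising", "defocus_deblurring", "super_resolution", "lowlight_enhancement"]

for task in tasks:
    result_dir = Path("./demo/restored") / task
    result_dir.mkdir(parents=True, exist_ok=True)
    subprocess.run(
        ["python", "demo.py",
         "--task", task,
         "--input_dir", "./demo/degraded/",
         "--result_dir", str(result_dir)],
        check=True,
    )
```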

Training and Evaluation

Training and Testing instructions for Real Denoising, Defocus Deblurring, Super-Resolution, and Image Enhancement are provided in their respective directories. Here is a summary table containing hyperlinks for easy navigation:

| Task | Training Instructions | Testing Instructions | MIRNet_v2's Visual Results |
| --- | --- | --- | --- |
| Real Denoising | Link | Link | Download |
| Defocus Deblurring | Link | Link | Download |
| Super-Resolution | Link | Link | Download |
| Image Enhancement | Link | Link | Download |

Results

Experiments are performed for different image processing tasks; the corresponding result tables and figures are provided in the repository for:

  • Real Denoising
  • Defocus Deblurring
  • Super-Resolution
  • Image Enhancement

Citation

If you use MIRNet_v2, please consider citing:

@article{Zamir2022MIRNetv2,
  title={Learning Enriched Features for Fast Image Restoration and Enhancement},
  author={Syed Waqas Zamir and Aditya Arora and Salman Khan and Munawar Hayat
          and Fahad Shahbaz Khan and Ming-Hsuan Yang and Ling Shao},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)},
  year={2022}
}

Contact

Should you have any questions, please contact [email protected]

Acknowledgment: This code is based on the BasicSR toolbox.

Our Related Works

  • Restormer: Efficient Transformer for High-Resolution Image Restoration, CVPR 2022. Paper | Code
  • Multi-Stage Progressive Image Restoration, CVPR 2021. Paper | Code
  • Learning Enriched Features for Real Image Restoration and Enhancement, ECCV 2020. Paper | Code
  • CycleISP: Real Image Restoration via Improved Data Synthesis, CVPR 2020. Paper | Code