# WACV-2024-Papers

<table>
    <tr>
        <td><strong>Application</strong></td>
        <td>
            <a href="https://huggingface.co/spaces/DmitryRyumin/NewEraAI-Papers" style="float:left;">
                <img src="https://img.shields.io/badge/🤗-NewEraAI--Papers-FFD21F.svg" alt="App" />
            </a>
        </td>
    </tr>
</table>

<div align="center">
    <a href="https://github.com/DmitryRyumin/WACV-2024-Papers/blob/main/sections/2024/main/image_recognition_and_understanding.md">
        <img src="https://cdn.jsdelivr.net/gh/DmitryRyumin/NewEraAI-Papers@main/images/left.svg" width="40" alt="" />
    </a>
    <a href="https://github.com/DmitryRyumin/WACV-2024-Papers/">
        <img src="https://cdn.jsdelivr.net/gh/DmitryRyumin/NewEraAI-Papers@main/images/home.svg" width="40" alt="" />
    </a>
    <a href="https://github.com/DmitryRyumin/WACV-2024-Papers/blob/main/sections/2024/main/ml_afa.md">
        <img src="https://cdn.jsdelivr.net/gh/DmitryRyumin/NewEraAI-Papers@main/images/right.svg" width="40" alt="" />
    </a>
</div>

## Low-Level and Physics-based Vision

![Section Papers](https://img.shields.io/badge/Section%20Papers-21-42BA16) ![Preprint Papers](https://img.shields.io/badge/Preprint%20Papers-11-b31b1b) ![Papers with Open Code](https://img.shields.io/badge/Papers%20with%20Open%20Code-11-1D7FBF) ![Papers with Video](https://img.shields.io/badge/Papers%20with%20Video-18-FF0000)

| **Title** | **Repo** | **Paper** | **Video** |
|-----------|:--------:|:---------:|:---------:|
| [Beyond RGB: A Real World Dataset for Multispectral Imaging in Mobile Devices](https://openaccess.thecvf.com/content/WACV2024/html/Glatt_Beyond_RGB_A_Real_World_Dataset_for_Multispectral_Imaging_in_WACV_2024_paper.html) | [![GitHub](https://img.shields.io/github/stars/shirawerman/Beyond-RGB?style=flat)](https://github.com/shirawerman/Beyond-RGB) | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/WACV2024/papers/Glatt_Beyond_RGB_A_Real_World_Dataset_for_Multispectral_Imaging_in_WACV_2024_paper.pdf) | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/watch?v=_US_DTYmGqI) |
| [Self-Supervised Denoising Transformer with Gaussian Process](https://openaccess.thecvf.com/content/WACV2024/html/Yasarla_Self-Supervised_Denoising_Transformer_With_Gaussian_Process_WACV_2024_paper.html) | :heavy_minus_sign: | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/WACV2024/papers/Yasarla_Self-Supervised_Denoising_Transformer_With_Gaussian_Process_WACV_2024_paper.pdf) | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/watch?v=WT9cX2NmU1Y) |
| [Scene Text Image Super-Resolution based on Text-Conditional Diffusion Models](https://openaccess.thecvf.com/content/WACV2024/html/Noguchi_Scene_Text_Image_Super-Resolution_Based_on_Text-Conditional_Diffusion_Models_WACV_2024_paper.html) | :heavy_minus_sign: | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/WACV2024/papers/Noguchi_Scene_Text_Image_Super-Resolution_Based_on_Text-Conditional_Diffusion_Models_WACV_2024_paper.pdf) <br /> [![arXiv](https://img.shields.io/badge/arXiv-2311.09759-b31b1b.svg)](http://arxiv.org/abs/2311.09759) | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/watch?v=i3Zx3QT3tEM) |
| [Meta-Learned Kernel for Blind Super-Resolution Kernel Estimation](https://openaccess.thecvf.com/content/WACV2024/html/Lee_Meta-Learned_Kernel_for_Blind_Super-Resolution_Kernel_Estimation_WACV_2024_paper.html) | [![GitHub](https://img.shields.io/github/stars/royson/metakernelgan?style=flat)](https://github.com/royson/metakernelgan) | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/WACV2024/papers/Lee_Meta-Learned_Kernel_for_Blind_Super-Resolution_Kernel_Estimation_WACV_2024_paper.pdf) <br /> [![arXiv](https://img.shields.io/badge/arXiv-2212.07886-b31b1b.svg)](http://arxiv.org/abs/2212.07886) | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/watch?v=OPK316wxlC0) |
| [PhISH-Net: Physics Inspired System for High Resolution Underwater Image Enhancement](https://openaccess.thecvf.com/content/WACV2024/html/Chandrasekar_PhISH-Net_Physics_Inspired_System_for_High_Resolution_Underwater_Image_Enhancement_WACV_2024_paper.html) | :heavy_minus_sign: | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/WACV2024/papers/Chandrasekar_PhISH-Net_Physics_Inspired_System_for_High_Resolution_Underwater_Image_Enhancement_WACV_2024_paper.pdf) | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/watch?v=E68RBJUBoVU) |
| [Leveraging Bitstream Metadata for Fast, Accurate, Generalized Compressed Video Quality Enhancement](https://openaccess.thecvf.com/content/WACV2024/html/Ehrlich_Leveraging_Bitstream_Metadata_for_Fast_Accurate_Generalized_Compressed_Video_Quality_WACV_2024_paper.html) | :heavy_minus_sign: | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/WACV2024/papers/Ehrlich_Leveraging_Bitstream_Metadata_for_Fast_Accurate_Generalized_Compressed_Video_Quality_WACV_2024_paper.pdf) <br /> [![arXiv](https://img.shields.io/badge/arXiv-2202.00011-b31b1b.svg)](http://arxiv.org/abs/2202.00011) | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/watch?v=s3uk5ZL1xCM) |
| [Image Denoising and the Generative Accumulation of Photons](https://openaccess.thecvf.com/content/WACV2024/html/Krull_Image_Denoising_and_the_Generative_Accumulation_of_Photons_WACV_2024_paper.html) | [![GitHub](https://img.shields.io/github/stars/krulllab/GAP?style=flat)](https://github.com/krulllab/GAP) | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/WACV2024/papers/Krull_Image_Denoising_and_the_Generative_Accumulation_of_Photons_WACV_2024_paper.pdf) <br /> [![arXiv](https://img.shields.io/badge/arXiv-2307.06607-b31b1b.svg)](http://arxiv.org/abs/2307.06607) | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/watch?v=XePtgzOU708) |
| [Deep Plug-and-Play Nighttime Non-Blind Deblurring with Saturated Pixel Handling Schemes](https://openaccess.thecvf.com/content/WACV2024/html/Shu_Deep_Plug-and-Play_Nighttime_Non-Blind_Deblurring_With_Saturated_Pixel_Handling_Schemes_WACV_2024_paper.html) | :heavy_minus_sign: | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/WACV2024/papers/Shu_Deep_Plug-and-Play_Nighttime_Non-Blind_Deblurring_With_Saturated_Pixel_Handling_Schemes_WACV_2024_paper.pdf) | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/watch?v=r_pRcohh69I) |
| [Best of Both Worlds: Learning Arbitrary-Scale Blind Super-Resolution via Dual Degradation Representations and Cycle-Consistency](https://openaccess.thecvf.com/content/WACV2024/html/Weng_Best_of_Both_Worlds_Learning_Arbitrary-Scale_Blind_Super-Resolution_via_Dual_WACV_2024_paper.html) | [![GitHub](https://img.shields.io/github/stars/vivian210223/arbitrary-scale-blind-SR?style=flat)](https://github.com/vivian210223/arbitrary-scale-blind-SR) | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/WACV2024/papers/Weng_Best_of_Both_Worlds_Learning_Arbitrary-Scale_Blind_Super-Resolution_via_Dual_WACV_2024_paper.pdf) | :heavy_minus_sign: |
| [ICF-SRSR: Invertible Scale-Conditional Function for Self-Supervised Real-World Single Image Super-Resolution](https://openaccess.thecvf.com/content/WACV2024/html/Neshatavar_ICF-SRSR_Invertible_Scale-Conditional_Function_for_Self-Supervised_Real-World_Single_Image_Super-Resolution_WACV_2024_paper.html) | [![GitHub](https://img.shields.io/github/stars/Reyhanehne/ICF-SRSR_PyTorch?style=flat)](https://github.com/Reyhanehne/ICF-SRSR_PyTorch) | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/WACV2024/papers/Neshatavar_ICF-SRSR_Invertible_Scale-Conditional_Function_for_Self-Supervised_Real-World_Single_Image_Super-Resolution_WACV_2024_paper.pdf) <br /> [![arXiv](https://img.shields.io/badge/arXiv-2307.12751-b31b1b.svg)](http://arxiv.org/abs/2307.12751) | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/watch?v=Pget2ZDz9BY) |
| [A Neural Height-Map Approach for the Binocular Photometric Stereo Problem](https://openaccess.thecvf.com/content/WACV2024/html/Logothetis_A_Neural_Height-Map_Approach_for_the_Binocular_Photometric_Stereo_Problem_WACV_2024_paper.html) | :heavy_minus_sign: | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/WACV2024/papers/Logothetis_A_Neural_Height-Map_Approach_for_the_Binocular_Photometric_Stereo_Problem_WACV_2024_paper.pdf) <br /> [![arXiv](https://img.shields.io/badge/arXiv-2311.05958-b31b1b.svg)](http://arxiv.org/abs/2311.05958) | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/watch?v=SKnfBjK1Vfc) |
| [Estimating Fog Parameters from an Image Sequence using Non-Linear Optimisation](https://openaccess.thecvf.com/content/WACV2024/html/Ding_Estimating_Fog_Parameters_From_an_Image_Sequence_Using_Non-Linear_Optimisation_WACV_2024_paper.html) | :heavy_minus_sign: | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/WACV2024/papers/Ding_Estimating_Fog_Parameters_From_an_Image_Sequence_Using_Non-Linear_Optimisation_WACV_2024_paper.pdf) | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/watch?v=LCLbhgW1KGE) |
| [4K-Resolution Photo Exposure Correction at 125 FPS with ∼8K Parameters](https://openaccess.thecvf.com/content/WACV2024/html/Zhou_4K-Resolution_Photo_Exposure_Correction_at_125_FPS_With_8K_Parameters_WACV_2024_paper.html) | [![GitHub](https://img.shields.io/github/stars/Zhou-Yijie/MSLTNet?style=flat)](https://github.com/Zhou-Yijie/MSLTNet) | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/WACV2024/papers/Zhou_4K-Resolution_Photo_Exposure_Correction_at_125_FPS_With_8K_Parameters_WACV_2024_paper.pdf) <br /> [![arXiv](https://img.shields.io/badge/arXiv-2311.08759-b31b1b.svg)](http://arxiv.org/abs/2311.08759) | :heavy_minus_sign: |
| [UGPNet: Universal Generative Prior for Image Restoration](https://openaccess.thecvf.com/content/WACV2024/html/Lee_UGPNet_Universal_Generative_Prior_for_Image_Restoration_WACV_2024_paper.html) | :heavy_minus_sign: | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/WACV2024/papers/Lee_UGPNet_Universal_Generative_Prior_for_Image_Restoration_WACV_2024_paper.pdf) <br /> [![arXiv](https://img.shields.io/badge/arXiv-2401.00370-b31b1b.svg)](http://arxiv.org/abs/2401.00370) | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/watch?v=mBep81MnnOQ) |
| [Fully-Automatic Reflection Removal for 360-Degree Images](https://openaccess.thecvf.com/content/WACV2024/html/Park_Fully-Automatic_Reflection_Removal_for_360-Degree_Images_WACV_2024_paper.html) | :heavy_minus_sign: | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/WACV2024/papers/Park_Fully-Automatic_Reflection_Removal_for_360-Degree_Images_WACV_2024_paper.pdf) | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/watch?v=tbnjUfoYkSs) |
| [PETIT-GAN: Physically Enhanced Thermal Image-Translating Generative Adversarial Network](https://openaccess.thecvf.com/content/WACV2024/html/Berman_PETIT-GAN_Physically_Enhanced_Thermal_Image-Translating_Generative_Adversarial_Network_WACV_2024_paper.html) | [![GitHub Page](https://img.shields.io/badge/GitHub-Page-159957.svg)](https://bermanz.github.io/PETIT/) <br /> [![GitHub](https://img.shields.io/github/stars/bermanz/PETIT?style=flat)](https://github.com/bermanz/PETIT) | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/WACV2024/papers/Berman_PETIT-GAN_Physically_Enhanced_Thermal_Image-Translating_Generative_Adversarial_Network_WACV_2024_paper.pdf) | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/watch?v=1YQKJQ_7v6A) |
| [Bridging the Gap Between Multi-Focus and Multi-Modal: A Focused Integration Framework for Multi-Modal Image Fusion](https://openaccess.thecvf.com/content/WACV2024/html/Li_Bridging_the_Gap_Between_Multi-Focus_and_Multi-Modal_A_Focused_Integration_WACV_2024_paper.html) | [![GitHub](https://img.shields.io/github/stars/ixilai/MFIF-MMIF?style=flat)](https://github.com/ixilai/MFIF-MMIF) | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/WACV2024/papers/Li_Bridging_the_Gap_Between_Multi-Focus_and_Multi-Modal_A_Focused_Integration_WACV_2024_paper.pdf) <br /> [![arXiv](https://img.shields.io/badge/arXiv-2311.01886-b31b1b.svg)](http://arxiv.org/abs/2311.01886) | :heavy_minus_sign: |
| [BoostRad: Enhancing Object Detection by Boosting Radar Reflections](https://openaccess.thecvf.com/content/WACV2024/html/Haitman_BoostRad_Enhancing_Object_Detection_by_Boosting_Radar_Reflections_WACV_2024_paper.html) | :heavy_minus_sign: | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/WACV2024/papers/Haitman_BoostRad_Enhancing_Object_Detection_by_Boosting_Radar_Reflections_WACV_2024_paper.pdf) | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/watch?v=MIb-JYwL89Q) |
| [RankDVQA: Deep VQA based on Ranking-Inspired Hybrid Training](https://openaccess.thecvf.com/content/WACV2024/html/Feng_RankDVQA_Deep_VQA_Based_on_Ranking-Inspired_Hybrid_Training_WACV_2024_paper.html) | [![GitHub Page](https://img.shields.io/badge/GitHub-Page-159957.svg)](https://chenfeng-bristol.github.io/RankDVQA/) <br /> [![GitHub](https://img.shields.io/github/stars/ChenFeng-Bristol/RankDVQA_release?style=flat)](https://github.com/ChenFeng-Bristol/RankDVQA_release) | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/WACV2024/papers/Feng_RankDVQA_Deep_VQA_Based_on_Ranking-Inspired_Hybrid_Training_WACV_2024_paper.pdf) <br /> [![arXiv](https://img.shields.io/badge/arXiv-2202.08595-b31b1b.svg)](http://arxiv.org/abs/2202.08595) | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/watch?v=SThToABaxKY) |
| [Reference-based Restoration of Digitized Analog Videotapes](https://openaccess.thecvf.com/content/WACV2024/html/Agnolucci_Reference-Based_Restoration_of_Digitized_Analog_Videotapes_WACV_2024_paper.html) | [![GitHub](https://img.shields.io/github/stars/miccunifi/TAPE?style=flat)](https://github.com/miccunifi/TAPE) | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/WACV2024/papers/Agnolucci_Reference-Based_Restoration_of_Digitized_Analog_Videotapes_WACV_2024_paper.pdf) <br /> [![arXiv](https://img.shields.io/badge/arXiv-2310.14926-b31b1b.svg)](http://arxiv.org/abs/2310.14926) | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/watch?v=jHgWYPMqokU) |
| [Fixed Pattern Noise Removal for Multi-View Single-Sensor Infrared Camera](https://openaccess.thecvf.com/content/WACV2024/html/Barral_Fixed_Pattern_Noise_Removal_for_Multi-View_Single-Sensor_Infrared_Camera_WACV_2024_paper.html) | [![GitHub](https://img.shields.io/github/stars/centreborelli/multiview-fpn?style=flat)](https://github.com/centreborelli/multiview-fpn) | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/WACV2024/papers/Barral_Fixed_Pattern_Noise_Removal_for_Multi-View_Single-Sensor_Infrared_Camera_WACV_2024_paper.pdf) | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/watch?v=5QaIWPcNsT4) |
