# WACV-2024-Papers

<table>
    <tr>
        <td><strong>Application</strong></td>
        <td>
            <a href="https://huggingface.co/spaces/DmitryRyumin/NewEraAI-Papers" style="float:left;">
                <img src="https://img.shields.io/badge/🤗-NewEraAI--Papers-FFD21F.svg" alt="App" />
            </a>
        </td>
    </tr>
</table>

<div align="center">
    <a href="https://github.com/DmitryRyumin/WACV-2024-Papers/blob/main/sections/2024/main/biometrics_face_gesture_body_pose.md">
        <img src="https://cdn.jsdelivr.net/gh/DmitryRyumin/NewEraAI-Papers@main/images/left.svg" width="40" alt="" />
    </a>
    <a href="https://github.com/DmitryRyumin/WACV-2024-Papers/">
        <img src="https://cdn.jsdelivr.net/gh/DmitryRyumin/NewEraAI-Papers@main/images/home.svg" width="40" alt="" />
    </a>
    <a href="https://github.com/DmitryRyumin/WACV-2024-Papers/blob/main/sections/2024/main/datasets_and_evaluation.md">
        <img src="https://cdn.jsdelivr.net/gh/DmitryRyumin/NewEraAI-Papers@main/images/right.svg" width="40" alt="" />
    </a>
</div>

## Computational Photography, Image and Video Synthesis

![Section Papers](https://img.shields.io/badge/Section%20Papers-32-42BA16) ![Preprint Papers](https://img.shields.io/badge/Preprint%20Papers-20-b31b1b) ![Papers with Open Code](https://img.shields.io/badge/Papers%20with%20Open%20Code-16-1D7FBF) ![Papers with Video](https://img.shields.io/badge/Papers%20with%20Video-30-FF0000)

| **Title** | **Repo** | **Paper** | **Video** |
|-----------|:--------:|:---------:|:---------:|
| [Learning Residual Elastic Warps for Image Stitching under Dirichlet Boundary Condition](https://openaccess.thecvf.com/content/WACV2024/html/Kim_Learning_Residual_Elastic_Warps_for_Image_Stitching_Under_Dirichlet_Boundary_WACV_2024_paper.html) | [![GitHub](https://img.shields.io/github/stars/minshu-kim/Recurrent-Elastic-Warp?style=flat)](https://github.com/minshu-kim/Recurrent-Elastic-Warp) | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/WACV2024/papers/Kim_Learning_Residual_Elastic_Warps_for_Image_Stitching_Under_Dirichlet_Boundary_WACV_2024_paper.pdf) <br /> [![arXiv](https://img.shields.io/badge/arXiv-2309.01406-b31b1b.svg)](http://arxiv.org/abs/2309.01406) | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/watch?v=bXMsBUCMTlk) |
| [Exploiting the Signal-Leak Bias in Diffusion Models](https://openaccess.thecvf.com/content/WACV2024/html/Everaert_Exploiting_the_Signal-Leak_Bias_in_Diffusion_Models_WACV_2024_paper.html) | [![GitHub Page](https://img.shields.io/badge/GitHub-Page-159957.svg)](https://ivrl.github.io/signal-leak-bias/) | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/WACV2024/papers/Everaert_Exploiting_the_Signal-Leak_Bias_in_Diffusion_Models_WACV_2024_paper.pdf) <br /> [![arXiv](https://img.shields.io/badge/arXiv-2309.15842-b31b1b.svg)](http://arxiv.org/abs/2309.15842) | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/watch?v=x8MRgyiIYz4) |
| [Synthesizing Anyone, Anywhere, in Any Pose](https://openaccess.thecvf.com/content/WACV2024/html/Hukkelas_Synthesizing_Anyone_Anywhere_in_Any_Pose_WACV_2024_paper.html) | [![WEB Page](https://img.shields.io/badge/WEB-Page-159957.svg)](https://www.hukkelas.no/deep_privacy2) <br /> [![GitHub](https://img.shields.io/github/stars/hukkelas/deep_privacy2?style=flat)](https://github.com/hukkelas/deep_privacy2) | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/WACV2024/papers/Hukkelas_Synthesizing_Anyone_Anywhere_in_Any_Pose_WACV_2024_paper.pdf) <br /> [![arXiv](https://img.shields.io/badge/arXiv-2304.03164-b31b1b.svg)](http://arxiv.org/abs/2304.03164) | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/watch?v=3OnCZA9zOnE) |
| [Specular Object Reconstruction Behind Frosted Glass by Differentiable Rendering](https://openaccess.thecvf.com/content/WACV2024/html/Iwaguchi_Specular_Object_Reconstruction_Behind_Frosted_Glass_by_Differentiable_Rendering_WACV_2024_paper.html) | :heavy_minus_sign: | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/WACV2024/papers/Iwaguchi_Specular_Object_Reconstruction_Behind_Frosted_Glass_by_Differentiable_Rendering_WACV_2024_paper.pdf) | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/watch?v=tnCzuNf-Ee4) |
| [Multi-Level Attention Aggregation for Aesthetic Face Relighting](https://openaccess.thecvf.com/content/WACV2024/html/Pidaparthy_Multi-Level_Attention_Aggregation_for_Aesthetic_Face_Relighting_WACV_2024_paper.html) | :heavy_minus_sign: | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/WACV2024/papers/Pidaparthy_Multi-Level_Attention_Aggregation_for_Aesthetic_Face_Relighting_WACV_2024_paper.pdf) | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/watch?v=pvbsHQ4qSWU) |
| [Deep Image Fingerprint: Towards Low Budget Synthetic Image Detection and Model Lineage Analysis](https://openaccess.thecvf.com/content/WACV2024/html/Sinitsa_Deep_Image_Fingerprint_Towards_Low_Budget_Synthetic_Image_Detection_and_WACV_2024_paper.html) | [![GitHub Page](https://img.shields.io/badge/GitHub-Page-159957.svg)](https://sergo2020.github.io/DIF/) <br /> [![GitHub](https://img.shields.io/github/stars/Sergo2020/DIF_pytorch_official?style=flat)](https://github.com/Sergo2020/DIF_pytorch_official) | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/WACV2024/papers/Sinitsa_Deep_Image_Fingerprint_Towards_Low_Budget_Synthetic_Image_Detection_and_WACV_2024_paper.pdf) <br /> [![arXiv](https://img.shields.io/badge/arXiv-2303.10762-b31b1b.svg)](http://arxiv.org/abs/2303.10762) | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/watch?v=_psmM4X-NbE) |
| [StyleGenes: Discrete and Efficient Latent Distributions for GANs](https://openaccess.thecvf.com/content/WACV2024/html/Ntavelis_StyleGenes_Discrete_and_Efficient_Latent_Distributions_for_GANs_WACV_2024_paper.html) | [![GitHub](https://img.shields.io/github/stars/entavelis/StyleGenes?style=flat)](https://github.com/entavelis/StyleGenes) | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/WACV2024/papers/Ntavelis_StyleGenes_Discrete_and_Efficient_Latent_Distributions_for_GANs_WACV_2024_paper.pdf) <br /> [![arXiv](https://img.shields.io/badge/arXiv-2305.00599-b31b1b.svg)](http://arxiv.org/abs/2305.00599) | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/watch?v=nEMApQws1UM) |
| [Implicit Neural Image Stitching with Enhanced and Blended Feature Reconstruction](https://openaccess.thecvf.com/content/WACV2024/html/Kim_Implicit_Neural_Image_Stitching_With_Enhanced_and_Blended_Feature_Reconstruction_WACV_2024_paper.html) | [![GitHub](https://img.shields.io/github/stars/minshu-kim/Neural-Image-Stitching?style=flat)](https://github.com/minshu-kim/Neural-Image-Stitching) | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/WACV2024/papers/Kim_Implicit_Neural_Image_Stitching_With_Enhanced_and_Blended_Feature_Reconstruction_WACV_2024_paper.pdf) <br /> [![arXiv](https://img.shields.io/badge/arXiv-2309.01409-b31b1b.svg)](http://arxiv.org/abs/2309.01409) | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/watch?v=yFKnmrddxS4) |
| [PDA-RWSR: Pixel-Wise Degradation Adaptive Real-World Super-Resolution](https://openaccess.thecvf.com/content/WACV2024/html/Aakerberg_PDA-RWSR_Pixel-Wise_Degradation_Adaptive_Real-World_Super-Resolution_WACV_2024_paper.html) | [![Zenodo](https://img.shields.io/badge/Zenodo-dataset-FFD1BF.svg)](https://zenodo.org/records/10044260) | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/WACV2024/papers/Aakerberg_PDA-RWSR_Pixel-Wise_Degradation_Adaptive_Real-World_Super-Resolution_WACV_2024_paper.pdf) | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/watch?v=XmwqTJGK9AQ) |
| [Real Time GAZED: Online Shot Selection and Editing of Virtual Cameras from Wide-Angle Monocular Video Recordings](https://openaccess.thecvf.com/content/WACV2024/html/Achary_Real_Time_GAZED_Online_Shot_Selection_and_Editing_of_Virtual_WACV_2024_paper.html) | :heavy_minus_sign: | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/WACV2024/papers/Achary_Real_Time_GAZED_Online_Shot_Selection_and_Editing_of_Virtual_WACV_2024_paper.pdf) <br /> [![arXiv](https://img.shields.io/badge/arXiv-2311.15581-b31b1b.svg)](http://arxiv.org/abs/2311.15581) | :heavy_minus_sign: |
| [Unsupervised Model-based Learning for Simultaneous Video Deflickering and Deblotching](https://openaccess.thecvf.com/content/WACV2024/html/Fulari_Unsupervised_Model-Based_Learning_for_Simultaneous_Video_Deflickering_and_Deblotching_WACV_2024_paper.html) | :heavy_minus_sign: | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/WACV2024/papers/Fulari_Unsupervised_Model-Based_Learning_for_Simultaneous_Video_Deflickering_and_Deblotching_WACV_2024_paper.pdf) | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/watch?v=K5uCcAF7RrA) |
| [Differentiable JPEG: The Devil is in the Details](https://openaccess.thecvf.com/content/WACV2024/html/Reich_Differentiable_JPEG_The_Devil_Is_in_the_Details_WACV_2024_paper.html) | [![GitHub Page](https://img.shields.io/badge/GitHub-Page-159957.svg)](https://christophreich1996.github.io/differentiable_jpeg/) <br /> [![GitHub](https://img.shields.io/github/stars/necla-ml/Diff-JPEG?style=flat)](https://github.com/necla-ml/Diff-JPEG) | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/WACV2024/papers/Reich_Differentiable_JPEG_The_Devil_Is_in_the_Details_WACV_2024_paper.pdf) <br /> [![arXiv](https://img.shields.io/badge/arXiv-2309.06978-b31b1b.svg)](http://arxiv.org/abs/2309.06978) | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/watch?v=K5uCcAF7RrA) |
| [TSA<sup>2</sup>: Temporal Segment Adaptation and Aggregation for Video Harmonization](https://openaccess.thecvf.com/content/WACV2024/html/Xiao_TSA2_Temporal_Segment_Adaptation_and_Aggregation_for_Video_Harmonization_WACV_2024_paper.html) | :heavy_minus_sign: | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/WACV2024/papers/Xiao_TSA2_Temporal_Segment_Adaptation_and_Aggregation_for_Video_Harmonization_WACV_2024_paper.pdf) | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/watch?v=F3mGXYs8wyM) |
| [Diffusion in the Dark: A Diffusion Model for Low-Light Text Recognition](https://openaccess.thecvf.com/content/WACV2024/html/Nguyen_Diffusion_in_the_Dark_A_Diffusion_Model_for_Low-Light_Text_WACV_2024_paper.html) | [![GitHub Page](https://img.shields.io/badge/GitHub-Page-159957.svg)](https://ccnguyen.github.io/diffusion-in-the-dark/) <br /> [![GitHub](https://img.shields.io/github/stars/computational-imaging/diffusion-in-the-dark?style=flat)](https://github.com/computational-imaging/diffusion-in-the-dark/) | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/WACV2024/papers/Nguyen_Diffusion_in_the_Dark_A_Diffusion_Model_for_Low-Light_Text_WACV_2024_paper.pdf) <br /> [![arXiv](https://img.shields.io/badge/arXiv-2303.04291-b31b1b.svg)](http://arxiv.org/abs/2303.04291) | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/watch?v=AKzZ4rwVOp0) |
| [Lightweight Portrait Matting via Regional Attention and Refinement](https://openaccess.thecvf.com/content/WACV2024/html/Zhong_Lightweight_Portrait_Matting_via_Regional_Attention_and_Refinement_WACV_2024_paper.html) | [![GitHub](https://img.shields.io/github/stars/JiauZhang/LiPM?style=flat)](https://github.com/JiauZhang/LiPM/) | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/WACV2024/papers/Zhong_Lightweight_Portrait_Matting_via_Regional_Attention_and_Refinement_WACV_2024_paper.pdf) <br /> [![arXiv](https://img.shields.io/badge/arXiv-2311.03770-b31b1b.svg)](http://arxiv.org/abs/2311.03770) | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/watch?v=BpV2mrIkVvI) |
| [RADIO: Reference-Agnostic Dubbing Video Synthesis](https://openaccess.thecvf.com/content/WACV2024/html/Lee_RADIO_Reference-Agnostic_Dubbing_Video_Synthesis_WACV_2024_paper.html) | :heavy_minus_sign: | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/WACV2024/papers/Lee_RADIO_Reference-Agnostic_Dubbing_Video_Synthesis_WACV_2024_paper.pdf) <br /> [![arXiv](https://img.shields.io/badge/arXiv-2309.01950-b31b1b.svg)](http://arxiv.org/abs/2309.01950) | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/watch?v=ueMXdmykim0) |
| [Unsupervised Event-based Video Reconstruction](https://openaccess.thecvf.com/content/WACV2024/html/Fox_Unsupervised_Event-Based_Video_Reconstruction_WACV_2024_paper.html) | :heavy_minus_sign: | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/WACV2024/papers/Fox_Unsupervised_Event-Based_Video_Reconstruction_WACV_2024_paper.pdf) | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/watch?v=-JLtEfFClRI) |
| [Neural Image Compression using Masked Sparse Visual Representation](https://openaccess.thecvf.com/content/WACV2024/html/Jiang_Neural_Image_Compression_Using_Masked_Sparse_Visual_Representation_WACV_2024_paper.html) | :heavy_minus_sign: | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/WACV2024/papers/Jiang_Neural_Image_Compression_Using_Masked_Sparse_Visual_Representation_WACV_2024_paper.pdf) <br /> [![arXiv](https://img.shields.io/badge/arXiv-2309.11661-b31b1b.svg)](http://arxiv.org/abs/2309.11661) | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/watch?v=pCcb8uQUpIg) |
| [Shape-Guided Diffusion with Inside-Outside Attention](https://openaccess.thecvf.com/content/WACV2024/html/Park_Shape-Guided_Diffusion_With_Inside-Outside_Attention_WACV_2024_paper.html) | [![GitHub Page](https://img.shields.io/badge/GitHub-Page-159957.svg)](https://shape-guided-diffusion.github.io/) <br /> [![GitHub](https://img.shields.io/github/stars/shape-guided-diffusion/shape-guided-diffusion?style=flat)](https://github.com/shape-guided-diffusion/shape-guided-diffusion/) | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/WACV2024/papers/Park_Shape-Guided_Diffusion_With_Inside-Outside_Attention_WACV_2024_paper.pdf) <br /> [![arXiv](https://img.shields.io/badge/arXiv-2212.00210-b31b1b.svg)](http://arxiv.org/abs/2212.00210) | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/watch?v=vwZkL_C9vIM) |
| [Collage Diffusion](https://openaccess.thecvf.com/content/WACV2024/html/Sarukkai_Collage_Diffusion_WACV_2024_paper.html) | [![GitHub](https://img.shields.io/github/stars/linden-li/collage-diffusion-ui?style=flat)](https://github.com/linden-li/collage-diffusion-ui) | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/WACV2024/papers/Sarukkai_Collage_Diffusion_WACV_2024_paper.pdf) <br /> [![arXiv](https://img.shields.io/badge/arXiv-2303.00262-b31b1b.svg)](http://arxiv.org/abs/2303.00262) | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/watch?v=oFs1XVSeX8A) |
| [Learning-based Spotlight Position Optimization for Non-Line-of-Sight Human Localization and Posture Classification](https://openaccess.thecvf.com/content/WACV2024/html/Chandran_Learning-Based_Spotlight_Position_Optimization_for_Non-Line-of-Sight_Human_Localization_and_Posture_WACV_2024_paper.html) | [![GitHub](https://img.shields.io/github/stars/srchandr/Learning-Spotlight-Optimisation-NLOS?style=flat)](https://github.com/srchandr/Learning-Spotlight-Optimisation-NLOS/) | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/WACV2024/papers/Chandran_Learning-Based_Spotlight_Position_Optimization_for_Non-Line-of-Sight_Human_Localization_and_Posture_WACV_2024_paper.pdf) | :heavy_minus_sign: |
| [Scale-Adaptive Feature Aggregation for Efficient Space-Time Video Super-Resolution](https://openaccess.thecvf.com/content/WACV2024/html/Huang_Scale-Adaptive_Feature_Aggregation_for_Efficient_Space-Time_Video_Super-Resolution_WACV_2024_paper.html) | [![GitHub](https://img.shields.io/github/stars/megvii-research/WACV2024-SAFA?style=flat)](https://github.com/megvii-research/WACV2024-SAFA/) | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/WACV2024/papers/Huang_Scale-Adaptive_Feature_Aggregation_for_Efficient_Space-Time_Video_Super-Resolution_WACV_2024_paper.pdf) <br /> [![arXiv](https://img.shields.io/badge/arXiv-2310.17294-b31b1b.svg)](http://arxiv.org/abs/2310.17294) | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/watch?v=Et3WOj9oj0w) |
| [What Decreases Editing Capability? Domain-Specific Hybrid Refinement for Improved GAN Inversion](https://openaccess.thecvf.com/content/WACV2024/html/Cao_What_Decreases_Editing_Capability_Domain-Specific_Hybrid_Refinement_for_Improved_GAN_WACV_2024_paper.html) | [![GitHub](https://img.shields.io/github/stars/caopulan/Domain-Specific_Hybrid_Refinement_Inversion?style=flat)](https://github.com/caopulan/Domain-Specific_Hybrid_Refinement_Inversion/) | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/WACV2024/papers/Cao_What_Decreases_Editing_Capability_Domain-Specific_Hybrid_Refinement_for_Improved_GAN_WACV_2024_paper.pdf) <br /> [![arXiv](https://img.shields.io/badge/arXiv-2301.12141-b31b1b.svg)](http://arxiv.org/abs/2301.12141) | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/watch?v=JcP8L8rp-d4) |
| [Latent-Guided Exemplar-based Image Re-Colorization](https://openaccess.thecvf.com/content/WACV2024/html/Yang_Latent-Guided_Exemplar-Based_Image_Re-Colorization_WACV_2024_paper.html) | [![GitHub](https://img.shields.io/github/stars/13633491388/LG-IR?style=flat)](https://github.com/13633491388/LG-IR/) | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/WACV2024/papers/Yang_Latent-Guided_Exemplar-Based_Image_Re-Colorization_WACV_2024_paper.pdf) | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/watch?v=ddmiHfPp8Ss) |
| [Stereo Conversion with Disparity-Aware Warping, Compositing and Inpainting](https://openaccess.thecvf.com/content/WACV2024/html/Mehl_Stereo_Conversion_With_Disparity-Aware_Warping_Compositing_and_Inpainting_WACV_2024_paper.html) | :heavy_minus_sign: | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/WACV2024/papers/Mehl_Stereo_Conversion_With_Disparity-Aware_Warping_Compositing_and_Inpainting_WACV_2024_paper.pdf) | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/watch?v=YYFUXv-uF7g) |
| [Arbitrary-Resolution and Arbitrary-Scale Face Super-Resolution with Implicit Representation Networks](https://openaccess.thecvf.com/content/WACV2024/html/Tsai_Arbitrary-Resolution_and_Arbitrary-Scale_Face_Super-Resolution_With_Implicit_Representation_Networks_WACV_2024_paper.html) | :heavy_minus_sign: | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/WACV2024/papers/Tsai_Arbitrary-Resolution_and_Arbitrary-Scale_Face_Super-Resolution_With_Implicit_Representation_Networks_WACV_2024_paper.pdf) | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/watch?v=fg--29QDJEY) |
| [Blurry Video Compression: A Trade-Off between Visual Enhancement and Data Compression](https://openaccess.thecvf.com/content/WACV2024/html/Argaw_Blurry_Video_Compression_A_Trade-Off_Between_Visual_Enhancement_and_Data_WACV_2024_paper.html) | :heavy_minus_sign: | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/WACV2024/papers/Argaw_Blurry_Video_Compression_A_Trade-Off_Between_Visual_Enhancement_and_Data_WACV_2024_paper.pdf) <br /> [![arXiv](https://img.shields.io/badge/arXiv-2311.04430-b31b1b.svg)](http://arxiv.org/abs/2311.04430) | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/watch?v=rtShUagDkL4) |
| [ProxEdit: Improving Tuning-Free Real Image Editing with Proximal Guidance](https://openaccess.thecvf.com/content/WACV2024/html/Han_ProxEdit_Improving_Tuning-Free_Real_Image_Editing_With_Proximal_Guidance_WACV_2024_paper.html) | :heavy_minus_sign: | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/WACV2024/papers/Han_ProxEdit_Improving_Tuning-Free_Real_Image_Editing_With_Proximal_Guidance_WACV_2024_paper.pdf) | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/watch?v=aXVG3NRqknw) |
| [VCISR: Blind Single Image Super-Resolution with Video Compression Synthetic Data](https://openaccess.thecvf.com/content/WACV2024/html/Wang_VCISR_Blind_Single_Image_Super-Resolution_With_Video_Compression_Synthetic_Data_WACV_2024_paper.html) | [![GitHub](https://img.shields.io/github/stars/Kiteretsu77/VCISR-official?style=flat)](https://github.com/Kiteretsu77/VCISR-official) | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/WACV2024/papers/Wang_VCISR_Blind_Single_Image_Super-Resolution_With_Video_Compression_Synthetic_Data_WACV_2024_paper.pdf) <br /> [![arXiv](https://img.shields.io/badge/arXiv-2311.00996-b31b1b.svg)](http://arxiv.org/abs/2311.00996) | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/watch?v=pq484CoxxME) |
| [Latent Feature-Guided Diffusion Models for Shadow Removal](https://openaccess.thecvf.com/content/WACV2024/html/Mei_Latent_Feature-Guided_Diffusion_Models_for_Shadow_Removal_WACV_2024_paper.html) | [![WEB Page](https://img.shields.io/badge/WEB-Page-159957.svg)](https://kfmei.page/shadow-diffusion/) <br /> [![GitHub](https://img.shields.io/github/stars/MKFMIKU/Instance-Shadow-Diffusion?style=flat)](https://github.com/MKFMIKU/Instance-Shadow-Diffusion) <br /> [![Hugging Face](https://img.shields.io/badge/🤗-demo-FFD21F.svg)](https://huggingface.co/spaces/MKFMIKU/Instance-Shadow-Removal) | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/WACV2024/papers/Mei_Latent_Feature-Guided_Diffusion_Models_for_Shadow_Removal_WACV_2024_paper.pdf) <br /> [![arXiv](https://img.shields.io/badge/arXiv-2312.02156-b31b1b.svg)](http://arxiv.org/abs/2312.02156) | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/watch?v=SekMYqp2MBA) |
| [MobileNVC: Real-Time 1080p Neural Video Compression on a Mobile Device](https://openaccess.thecvf.com/content/WACV2024/html/van_Rozendaal_MobileNVC_Real-Time_1080p_Neural_Video_Compression_on_a_Mobile_Device_WACV_2024_paper.html) | :heavy_minus_sign: | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/WACV2024/papers/van_Rozendaal_MobileNVC_Real-Time_1080p_Neural_Video_Compression_on_a_Mobile_Device_WACV_2024_paper.pdf) <br /> [![arXiv](https://img.shields.io/badge/arXiv-2310.01258-b31b1b.svg)](http://arxiv.org/abs/2310.01258) | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/watch?v=4vwrScOx2dE) |
| [LatentPaint: Image Inpainting in Latent Space with Diffusion Models](https://openaccess.thecvf.com/content/WACV2024/html/Corneanu_LatentPaint_Image_Inpainting_in_Latent_Space_With_Diffusion_Models_WACV_2024_paper.html) | :heavy_minus_sign: | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/WACV2024/papers/Corneanu_LatentPaint_Image_Inpainting_in_Latent_Space_With_Diffusion_Models_WACV_2024_paper.pdf) | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/watch?v=mhHc34O2H4o) |
