# WACV-2024-Papers

<table>
    <tr>
        <td><strong>Application</strong></td>
        <td>
            <a href="https://huggingface.co/spaces/DmitryRyumin/NewEraAI-Papers" style="float:left;">
                <img src="https://img.shields.io/badge/🤗-NewEraAI--Papers-FFD21F.svg" alt="App" />
            </a>
        </td>
    </tr>
</table>

<div align="center">
    <a href="https://github.com/DmitryRyumin/WACV-2024-Papers/blob/main/sections/2024/main/adversarial_learning_adversarial_attack_defense_methods.md">
        <img src="https://cdn.jsdelivr.net/gh/DmitryRyumin/NewEraAI-Papers@main/images/left.svg" width="40" alt="Previous section" />
    </a>
    <a href="https://github.com/DmitryRyumin/WACV-2024-Papers/">
        <img src="https://cdn.jsdelivr.net/gh/DmitryRyumin/NewEraAI-Papers@main/images/home.svg" width="40" alt="Home" />
    </a>
    <a href="https://github.com/DmitryRyumin/WACV-2024-Papers/blob/main/sections/2024/main/computational_photography_image_and_video_synthesis.md">
        <img src="https://cdn.jsdelivr.net/gh/DmitryRyumin/NewEraAI-Papers@main/images/right.svg" width="40" alt="Next section" />
    </a>
</div>

## Biometrics, Face, Gesture, Body Pose

![Section Papers](https://img.shields.io/badge/Section%20Papers-36-42BA16) ![Preprint Papers](https://img.shields.io/badge/Preprint%20Papers-24-b31b1b) ![Papers with Open Code](https://img.shields.io/badge/Papers%20with%20Open%20Code-15-1D7FBF) ![Papers with Video](https://img.shields.io/badge/Papers%20with%20Video-32-FF0000)

| **Title** | **Repo** | **Paper** | **Video** |
|-----------|:--------:|:---------:|:---------:|
| [Co-Speech Gesture Detection through Multi-Phase Sequence Labeling](https://openaccess.thecvf.com/content/WACV2024/html/Ghaleb_Co-Speech_Gesture_Detection_Through_Multi-Phase_Sequence_Labeling_WACV_2024_paper.html) | [![GitHub](https://img.shields.io/github/stars/EsamGhaleb/Multi-Phase-Gesture-Detection?style=flat)](https://github.com/EsamGhaleb/Multi-Phase-Gesture-Detection) | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/WACV2024/papers/Ghaleb_Co-Speech_Gesture_Detection_Through_Multi-Phase_Sequence_Labeling_WACV_2024_paper.pdf) <br /> [![arXiv](https://img.shields.io/badge/arXiv-2308.10680-b31b1b.svg)](http://arxiv.org/abs/2308.10680) | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/watch?v=31P6G5HM_Zw) |
| [HashReID: Dynamic Network with Binary Codes for Efficient Person Re-Identification](https://openaccess.thecvf.com/content/WACV2024/html/Nikhal_HashReID_Dynamic_Network_With_Binary_Codes_for_Efficient_Person_Re-Identification_WACV_2024_paper.html) | :heavy_minus_sign: | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/WACV2024/papers/Nikhal_HashReID_Dynamic_Network_With_Binary_Codes_for_Efficient_Person_Re-Identification_WACV_2024_paper.pdf) <br /> [![arXiv](https://img.shields.io/badge/arXiv-2308.11900-b31b1b.svg)](http://arxiv.org/abs/2308.11900) | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/watch?v=K_kYlSn_9E8) |
| [RGBT-Dog: A Parametric Model and Pose Prior for Canine Body Analysis Data Creation](https://openaccess.thecvf.com/content/WACV2024/html/Deane_RGBT-Dog_A_Parametric_Model_and_Pose_Prior_for_Canine_Body_WACV_2024_paper.html) | :heavy_minus_sign: | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/WACV2024/papers/Deane_RGBT-Dog_A_Parametric_Model_and_Pose_Prior_for_Canine_Body_WACV_2024_paper.pdf) | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/watch?v=ajA6jJEVqVs) |
| [FPGAN-Control: A Controllable Fingerprint Generator for Training with Synthetic Data](https://openaccess.thecvf.com/content/WACV2024/html/Shoshan_FPGAN-Control_A_Controllable_Fingerprint_Generator_for_Training_With_Synthetic_Data_WACV_2024_paper.html) | [![GitHub Page](https://img.shields.io/badge/GitHub-Page-159957.svg)](https://alonshoshan10.github.io/fpgan_control/) | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/WACV2024/papers/Shoshan_FPGAN-Control_A_Controllable_Fingerprint_Generator_for_Training_With_Synthetic_Data_WACV_2024_paper.pdf) <br /> [![arXiv](https://img.shields.io/badge/arXiv-2310.19024-b31b1b.svg)](http://arxiv.org/abs/2310.19024) | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/watch?v=jUeiT5dOUi8) |
| [Multimodal Channel-Mixing: Channel and Spatial Masked AutoEncoder on Facial Action Unit Detection](https://openaccess.thecvf.com/content/WACV2024/html/Zhang_Multimodal_Channel-Mixing_Channel_and_Spatial_Masked_AutoEncoder_on_Facial_Action_WACV_2024_paper.html) | :heavy_minus_sign: | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/WACV2024/papers/Zhang_Multimodal_Channel-Mixing_Channel_and_Spatial_Masked_AutoEncoder_on_Facial_Action_WACV_2024_paper.pdf) <br /> [![arXiv](https://img.shields.io/badge/arXiv-2209.12244-b31b1b.svg)](http://arxiv.org/abs/2209.12244) | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/watch?v=_og4gaB5S1c) |
| [ProS: Facial Omni-Representation Learning via Prototype-based Self-Distillation](https://openaccess.thecvf.com/content/WACV2024/html/Di_ProS_Facial_Omni-Representation_Learning_via_Prototype-Based_Self-Distillation_WACV_2024_paper.html) | :heavy_minus_sign: | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/WACV2024/papers/Di_ProS_Facial_Omni-Representation_Learning_via_Prototype-Based_Self-Distillation_WACV_2024_paper.pdf) <br /> [![arXiv](https://img.shields.io/badge/arXiv-2311.01929-b31b1b.svg)](http://arxiv.org/abs/2311.01929) | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/watch?v=HZrllcqNMMo) |
| [FG-Net: Facial Action Unit Detection with Generalizable Pyramidal Features](https://openaccess.thecvf.com/content/WACV2024/html/Yin_FG-Net_Facial_Action_Unit_Detection_With_Generalizable_Pyramidal_Features_WACV_2024_paper.html) | [![GitHub](https://img.shields.io/github/stars/ihp-lab/FG-Net?style=flat)](https://github.com/ihp-lab/FG-Net) | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/WACV2024/papers/Yin_FG-Net_Facial_Action_Unit_Detection_With_Generalizable_Pyramidal_Features_WACV_2024_paper.pdf) <br /> [![arXiv](https://img.shields.io/badge/arXiv-2308.12380-b31b1b.svg)](http://arxiv.org/abs/2308.12380) | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/watch?v=VFPx2Hc1l_M) |
| [Watch where You Head: A View-Biased Domain Gap in Gait Recognition and Unsupervised Adaptation](https://openaccess.thecvf.com/content/WACV2024/html/Habib_Watch_Where_You_Head_A_View-Biased_Domain_Gap_in_Gait_WACV_2024_paper.html) | :heavy_minus_sign: | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/WACV2024/papers/Habib_Watch_Where_You_Head_A_View-Biased_Domain_Gap_in_Gait_WACV_2024_paper.pdf) <br /> [![arXiv](https://img.shields.io/badge/arXiv-2307.06751-b31b1b.svg)](http://arxiv.org/abs/2307.06751) | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/watch?v=aDVaJpgODek) |
| [Intrinsic Hand Avatar: Illumination-Aware Hand Appearance and Shape Reconstruction from Monocular RGB Video](https://openaccess.thecvf.com/content/WACV2024/html/Kalshetti_Intrinsic_Hand_Avatar_Illumination-Aware_Hand_Appearance_and_Shape_Reconstruction_From_WACV_2024_paper.html) | [![GitHub](https://img.shields.io/github/stars/pmkalshetti/intrinsic_hand_avatar?style=flat)](https://github.com/pmkalshetti/intrinsic_hand_avatar) | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/WACV2024/papers/Kalshetti_Intrinsic_Hand_Avatar_Illumination-Aware_Hand_Appearance_and_Shape_Reconstruction_From_WACV_2024_paper.pdf) | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/watch?v=IaBEAsFcTH0) |
| [CVTHead: One-Shot Controllable Head Avatar with Vertex-Feature Transformer](https://openaccess.thecvf.com/content/WACV2024/html/Ma_CVTHead_One-Shot_Controllable_Head_Avatar_With_Vertex-Feature_Transformer_WACV_2024_paper.html) | [![GitHub](https://img.shields.io/github/stars/HowieMa/CVTHead?style=flat)](https://github.com/HowieMa/CVTHead) | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/WACV2024/papers/Ma_CVTHead_One-Shot_Controllable_Head_Avatar_With_Vertex-Feature_Transformer_WACV_2024_paper.pdf) <br /> [![arXiv](https://img.shields.io/badge/arXiv-2311.06443-b31b1b.svg)](http://arxiv.org/abs/2311.06443) | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/watch?v=2oK91VFXSi4) |
| [Back to Optimization: Diffusion-based Zero-Shot 3D Human Pose Estimation](https://openaccess.thecvf.com/content/WACV2024/html/Jiang_Back_to_Optimization_Diffusion-Based_Zero-Shot_3D_Human_Pose_Estimation_WACV_2024_paper.html) | [![GitHub Page](https://img.shields.io/badge/GitHub-Page-159957.svg)](https://zhyjiang.github.io/ZeDO-proj/) <br /> [![GitHub](https://img.shields.io/github/stars/ipl-uw/ZeDO-Release?style=flat)](https://github.com/ipl-uw/ZeDO-Release) | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/WACV2024/papers/Jiang_Back_to_Optimization_Diffusion-Based_Zero-Shot_3D_Human_Pose_Estimation_WACV_2024_paper.pdf) <br /> [![arXiv](https://img.shields.io/badge/arXiv-2307.03833-b31b1b.svg)](http://arxiv.org/abs/2307.03833) | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/watch?v=JYbeJTuMI5E) |
| [POISE: Pose Guided Human Silhouette Extraction under Occlusions](https://openaccess.thecvf.com/content/WACV2024/html/Dutta_POISE_Pose_Guided_Human_Silhouette_Extraction_Under_Occlusions_WACV_2024_paper.html) | [![GitHub](https://img.shields.io/github/stars/take2rohit/poise?style=flat)](https://github.com/take2rohit/poise) | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/WACV2024/papers/Dutta_POISE_Pose_Guided_Human_Silhouette_Extraction_Under_Occlusions_WACV_2024_paper.pdf) <br /> [![arXiv](https://img.shields.io/badge/arXiv-2311.05077-b31b1b.svg)](http://arxiv.org/abs/2311.05077) | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/watch?v=oAExunvoJe0) |
| [Incorporating Physics Principles for Precise Human Motion Prediction](https://openaccess.thecvf.com/content/WACV2024/html/Zhang_Incorporating_Physics_Principles_for_Precise_Human_Motion_Prediction_WACV_2024_paper.html) | [![GitHub](https://img.shields.io/github/stars/zhangy76/PhysMoP?style=flat)](https://github.com/zhangy76/PhysMoP) | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/WACV2024/papers/Zhang_Incorporating_Physics_Principles_for_Precise_Human_Motion_Prediction_WACV_2024_paper.pdf) | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/watch?v=CcmDufhBXkM) |
| [Fingervein Verification using Convolutional Multi-Head Attention Network](https://openaccess.thecvf.com/content/WACV2024/html/Ramachandra_Fingervein_Verification_Using_Convolutional_Multi-Head_Attention_Network_WACV_2024_paper.html) | :heavy_minus_sign: | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/WACV2024/papers/Ramachandra_Fingervein_Verification_Using_Convolutional_Multi-Head_Attention_Network_WACV_2024_paper.pdf) <br /> [![arXiv](https://img.shields.io/badge/arXiv-2310.16808-b31b1b.svg)](http://arxiv.org/abs/2310.16808) | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/watch?v=UfRCw4Wo294) |
| [Multispectral Imaging for Differential Face Morphing Attack Detection: A Preliminary Study](https://openaccess.thecvf.com/content/WACV2024/html/Ramachandra_Multispectral_Imaging_for_Differential_Face_Morphing_Attack_Detection_A_Preliminary_WACV_2024_paper.html) | [![WEB Page](https://img.shields.io/badge/WEB-Page-159957.svg)](https://sites.google.com/view/narayanvetrekar/database/spectral-face-gender) | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/WACV2024/papers/Ramachandra_Multispectral_Imaging_for_Differential_Face_Morphing_Attack_Detection_A_Preliminary_WACV_2024_paper.pdf) <br /> [![arXiv](https://img.shields.io/badge/arXiv-2304.03510-b31b1b.svg)](http://arxiv.org/abs/2304.03510) | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/watch?v=GXLkn1zu9hE) |
| [Controlling Character Motions without Observable Driving Source](https://openaccess.thecvf.com/content/WACV2024/html/Li_Controlling_Character_Motions_Without_Observable_Driving_Source_WACV_2024_paper.html) | :heavy_minus_sign: | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/WACV2024/papers/Li_Controlling_Character_Motions_Without_Observable_Driving_Source_WACV_2024_paper.pdf) <br /> [![arXiv](https://img.shields.io/badge/arXiv-2308.06025-b31b1b.svg)](http://arxiv.org/abs/2308.06025) | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/watch?v=ceXoycaYw2Y) |
| [DR<sup>2</sup>: Disentangled Recurrent Representation Learning for Data-Efficient Speech Video Synthesis](https://openaccess.thecvf.com/content/WACV2024/html/Zhang_DR2_Disentangled_Recurrent_Representation_Learning_for_Data-Efficient_Speech_Video_Synthesis_WACV_2024_paper.html) | :heavy_minus_sign: | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/WACV2024/papers/Zhang_DR2_Disentangled_Recurrent_Representation_Learning_for_Data-Efficient_Speech_Video_Synthesis_WACV_2024_paper.pdf) | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/watch?v=LDdDm86Ve_4) |
| [Bias and Diversity in Synthetic-based Face Recognition](https://openaccess.thecvf.com/content/WACV2024/html/Huber_Bias_and_Diversity_in_Synthetic-Based_Face_Recognition_WACV_2024_paper.html) | :heavy_minus_sign: | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/WACV2024/papers/Huber_Bias_and_Diversity_in_Synthetic-Based_Face_Recognition_WACV_2024_paper.pdf) <br /> [![arXiv](https://img.shields.io/badge/arXiv-2311.03970-b31b1b.svg)](http://arxiv.org/abs/2311.03970) | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/watch?v=qASXsVeGVBE) |
| [FarSight: A Physics-Driven Whole-Body Biometric System at Large Distance and Altitude](https://openaccess.thecvf.com/content/WACV2024/html/Liu_FarSight_A_Physics-Driven_Whole-Body_Biometric_System_at_Large_Distance_and_WACV_2024_paper.html) | :heavy_minus_sign: | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/WACV2024/papers/Liu_FarSight_A_Physics-Driven_Whole-Body_Biometric_System_at_Large_Distance_and_WACV_2024_paper.pdf) <br /> [![arXiv](https://img.shields.io/badge/arXiv-2306.17206-b31b1b.svg)](http://arxiv.org/abs/2306.17206) | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/watch?v=HLIHgAR1s-w) |
| [AU-Aware Dynamic 3D Face Reconstruction from Videos with Transformer](https://openaccess.thecvf.com/content/WACV2024/html/Kuang_AU-Aware_Dynamic_3D_Face_Reconstruction_From_Videos_With_Transformer_WACV_2024_paper.html) | [![GitHub](https://img.shields.io/github/stars/kuangcy1998/AU-D3DFace?style=flat)](https://github.com/kuangcy1998/AU-D3DFace) | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/WACV2024/papers/Kuang_AU-Aware_Dynamic_3D_Face_Reconstruction_From_Videos_With_Transformer_WACV_2024_paper.pdf) | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/watch?v=PYW2TS_NnYE) |
| [Handformer2T: A Lightweight Regression-based Model for Interacting Hands Pose Estimation from a Single RGB Image](https://openaccess.thecvf.com/content/WACV2024/html/Zhang_Handformer2T_A_Lightweight_Regression-Based_Model_for_Interacting_Hands_Pose_Estimation_WACV_2024_paper.html) | :heavy_minus_sign: | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/WACV2024/papers/Zhang_Handformer2T_A_Lightweight_Regression-Based_Model_for_Interacting_Hands_Pose_Estimation_WACV_2024_paper.pdf) | :heavy_minus_sign: |
| [Weakly-Supervised Deepfake Localization in Diffusion-Generated Images](https://openaccess.thecvf.com/content/WACV2024/html/Tantaru_Weakly-Supervised_Deepfake_Localization_in_Diffusion-Generated_Images_WACV_2024_paper.html) | [![GitHub](https://img.shields.io/github/stars/bit-ml/dolos?style=flat)](https://github.com/bit-ml/dolos) | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/WACV2024/papers/Tantaru_Weakly-Supervised_Deepfake_Localization_in_Diffusion-Generated_Images_WACV_2024_paper.pdf) <br /> [![arXiv](https://img.shields.io/badge/arXiv-2311.04584-b31b1b.svg)](http://arxiv.org/abs/2311.04584) | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/watch?v=sHDBLo8D4dU) |
| [Face Presentation Attack Detection by Excavating Causal Clues and Adapting Embedding Statistics](https://openaccess.thecvf.com/content/WACV2024/html/Fang_Face_Presentation_Attack_Detection_by_Excavating_Causal_Clues_and_Adapting_WACV_2024_paper.html) | [![GitHub](https://img.shields.io/github/stars/meilfang/CF-PAD?style=flat)](https://github.com/meilfang/CF-PAD) | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/WACV2024/papers/Fang_Face_Presentation_Attack_Detection_by_Excavating_Causal_Clues_and_Adapting_WACV_2024_paper.pdf) <br /> [![arXiv](https://img.shields.io/badge/arXiv-2308.14551-b31b1b.svg)](http://arxiv.org/abs/2308.14551) | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/watch?v=nRG15htGQpE) |
| [Denoising and Selecting Pseudo-Heatmaps for Semi-Supervised Human Pose Estimation](https://openaccess.thecvf.com/content/WACV2024/html/Yu_Denoising_and_Selecting_Pseudo-Heatmaps_for_Semi-Supervised_Human_Pose_Estimation_WACV_2024_paper.html) | :heavy_minus_sign: | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/WACV2024/papers/Yu_Denoising_and_Selecting_Pseudo-Heatmaps_for_Semi-Supervised_Human_Pose_Estimation_WACV_2024_paper.pdf) <br /> [![arXiv](https://img.shields.io/badge/arXiv-2310.00099-b31b1b.svg)](http://arxiv.org/abs/2310.00099) | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/watch?v=7b0D8fdnE4Y) |
| [ShARc: Shape and Appearance Recognition for Person Identification In-the-Wild](https://openaccess.thecvf.com/content/WACV2024/html/Zhu_ShARc_Shape_and_Appearance_Recognition_for_Person_Identification_In-the-Wild_WACV_2024_paper.html) | :heavy_minus_sign: | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/WACV2024/papers/Zhu_ShARc_Shape_and_Appearance_Recognition_for_Person_Identification_In-the-Wild_WACV_2024_paper.pdf) <br /> [![arXiv](https://img.shields.io/badge/arXiv-2310.15946-b31b1b.svg)](http://arxiv.org/abs/2310.15946) | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/watch?v=CuuwWsOqlqU) |
| [Fast and Interpretable Face Identification for Out-of-Distribution Data using Vision Transformers](https://openaccess.thecvf.com/content/WACV2024/html/Phan_Fast_and_Interpretable_Face_Identification_for_Out-of-Distribution_Data_Using_Vision_WACV_2024_paper.html) | [![GitHub](https://img.shields.io/github/stars/anguyen8/face-vit?style=flat)](https://github.com/anguyen8/face-vit) | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/WACV2024/papers/Phan_Fast_and_Interpretable_Face_Identification_for_Out-of-Distribution_Data_Using_Vision_WACV_2024_paper.pdf) <br /> [![arXiv](https://img.shields.io/badge/arXiv-2311.02803-b31b1b.svg)](http://arxiv.org/abs/2311.02803) | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/watch?v=M9VXxayidHY) |
| [SigmML: Metric Meta-Learning for Writer Independent Offline Signature Verification in the Space of SPD Matrices](https://openaccess.thecvf.com/content/WACV2024/html/Giazitzis_SigmML_Metric_Meta-Learning_for_Writer_Independent_Offline_Signature_Verification_in_WACV_2024_paper.html) | [![Bitbucket](https://img.shields.io/badge/bitbucket-%230047B3.svg?style=for-the-badge&logo=bitbucket&logoColor=white)](https://bitbucket.org/agiaz/sigmml) | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/WACV2024/papers/Giazitzis_SigmML_Metric_Meta-Learning_for_Writer_Independent_Offline_Signature_Verification_in_WACV_2024_paper.pdf) | :heavy_minus_sign: |
| [Progressive Hypothesis Transformer for 3D Human Mesh Recovery](https://openaccess.thecvf.com/content/WACV2024/html/Liao_Progressive_Hypothesis_Transformer_for_3D_Human_Mesh_Recovery_WACV_2024_paper.html) | :heavy_minus_sign: | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/WACV2024/papers/Liao_Progressive_Hypothesis_Transformer_for_3D_Human_Mesh_Recovery_WACV_2024_paper.pdf) | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/watch?v=UmlQZDNYlYU) |
| [DiffBody: Diffusion-based Pose and Shape Editing of Human Images](https://openaccess.thecvf.com/content/WACV2024/html/Okuyama_DiffBody_Diffusion-Based_Pose_and_Shape_Editing_of_Human_Images_WACV_2024_paper.html) | [![WEB Page](https://img.shields.io/badge/WEB-Page-159957.svg)](https://www.cgg.cs.tsukuba.ac.jp/~okuyama/pub/diffbody/) <br /> [![GitHub](https://img.shields.io/github/stars/yutaokuyama/DiffBody?style=flat)](https://github.com/yutaokuyama/DiffBody) | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/WACV2024/papers/Okuyama_DiffBody_Diffusion-Based_Pose_and_Shape_Editing_of_Human_Images_WACV_2024_paper.pdf) <br /> [![arXiv](https://img.shields.io/badge/arXiv-2401.02804-b31b1b.svg)](http://arxiv.org/abs/2401.02804) | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/watch?v=U461Smu1cbc) |
| [Diffuse and Restore: A Region-Adaptive Diffusion Model for Identity-Preserving Blind Face Restoration](https://openaccess.thecvf.com/content/WACV2024/html/Suin_Diffuse_and_Restore_A_Region-Adaptive_Diffusion_Model_for_Identity-Preserving_Blind_WACV_2024_paper.html) | :heavy_minus_sign: | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/WACV2024/papers/Suin_Diffuse_and_Restore_A_Region-Adaptive_Diffusion_Model_for_Identity-Preserving_Blind_WACV_2024_paper.pdf) | :heavy_minus_sign: |
| [HMP: Hand Motion Priors for Pose and Shape Estimation from Video](https://openaccess.thecvf.com/content/WACV2024/html/Duran_HMP_Hand_Motion_Priors_for_Pose_and_Shape_Estimation_From_WACV_2024_paper.html) | [![WEB Page](https://img.shields.io/badge/WEB-Page-159957.svg)](https://hmp.is.tue.mpg.de/) <br /> [![GitHub](https://img.shields.io/github/stars/enesduran/HMP?style=flat)](https://github.com/enesduran/HMP) | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/WACV2024/papers/Duran_HMP_Hand_Motion_Priors_for_Pose_and_Shape_Estimation_From_WACV_2024_paper.pdf) <br /> [![arXiv](https://img.shields.io/badge/arXiv-2312.16737-b31b1b.svg)](http://arxiv.org/abs/2312.16737) | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/watch?v=tUEvueGjJy0) |
| [Approximating Intersections and Differences between Linear Statistical Shape Models using Markov Chain Monte Carlo](https://openaccess.thecvf.com/content/WACV2024/html/Weiherer_Approximating_Intersections_and_Differences_Between_Linear_Statistical_Shape_Models_Using_WACV_2024_paper.html) | :heavy_minus_sign: | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/WACV2024/papers/Weiherer_Approximating_Intersections_and_Differences_Between_Linear_Statistical_Shape_Models_Using_WACV_2024_paper.pdf) <br /> [![arXiv](https://img.shields.io/badge/arXiv-2211.16314-b31b1b.svg)](http://arxiv.org/abs/2211.16314) | :heavy_minus_sign: |
| [Robust Eye Blink Detection using Dual Embedding Video Vision Transformer](https://openaccess.thecvf.com/content/WACV2024/html/Hong_Robust_Eye_Blink_Detection_Using_Dual_Embedding_Video_Vision_Transformer_WACV_2024_paper.html) | :heavy_minus_sign: | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/WACV2024/papers/Hong_Robust_Eye_Blink_Detection_Using_Dual_Embedding_Video_Vision_Transformer_WACV_2024_paper.pdf) | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/watch?v=i2CWdyRcgWQ) |
| [EmoStyle: One-Shot Facial Expression Editing using Continuous Emotion Parameters](https://openaccess.thecvf.com/content/WACV2024/html/Azari_EmoStyle_One-Shot_Facial_Expression_Editing_Using_Continuous_Emotion_Parameters_WACV_2024_paper.html) | [![GitHub Page](https://img.shields.io/badge/GitHub-Page-159957.svg)](https://bihamta.github.io/emostyle/) | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/WACV2024/papers/Azari_EmoStyle_One-Shot_Facial_Expression_Editing_Using_Continuous_Emotion_Parameters_WACV_2024_paper.pdf) | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/watch?v=zHOyddKyb9o) |
| [Vikriti-ID: A Novel Approach for Real Looking Fingerprint Data-Set Generation](https://openaccess.thecvf.com/content/WACV2024/html/Shukla_Vikriti-ID_A_Novel_Approach_for_Real_Looking_Fingerprint_Data-Set_Generation_WACV_2024_paper.html) | :heavy_minus_sign: | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/WACV2024/papers/Shukla_Vikriti-ID_A_Novel_Approach_for_Real_Looking_Fingerprint_Data-Set_Generation_WACV_2024_paper.pdf) | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/watch?v=H5jCMamS42Q) |
| [LaughTalk: Expressive 3D Talking Head Generation with Laughter](https://openaccess.thecvf.com/content/WACV2024/html/Sung-Bin_LaughTalk_Expressive_3D_Talking_Head_Generation_With_Laughter_WACV_2024_paper.html) | [![GitHub Page](https://img.shields.io/badge/GitHub-Page-159957.svg)](https://laughtalk.github.io/) <br /> [![GitHub](https://img.shields.io/github/stars/postech-ami/LaughTalk?style=flat)](https://github.com/postech-ami/LaughTalk) | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/WACV2024/papers/Sung-Bin_LaughTalk_Expressive_3D_Talking_Head_Generation_With_Laughter_WACV_2024_paper.pdf) <br /> [![arXiv](https://img.shields.io/badge/arXiv-2311.00994-b31b1b.svg)](http://arxiv.org/abs/2311.00994) | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/watch?v=58hSZAs2on0) |
