---
license: apache-2.0
---

Pretrained models and datasets of our method **SemiUHPE**

Title: *Semi-Supervised Unconstrained Head Pose Estimation in the Wild*
Paper link: https://arxiv.org/abs/2404.02544
Code link: https://github.com/hnuzhy/SemiUHPE
Project link: https://hnuzhy.github.io/projects/SemiUHPE/

**Pretrained models**

* Labeled DAD-3DHeads + Unlabeled **COCOHead** (based on **ResNet50**): [DAD-COCOHead-ResNet50-best.pth](./DAD-COCOHead-ResNet50-best.pth)
* Labeled DAD-3DHeads + Unlabeled **COCOHead** (based on **RepVGG**): [DAD-COCOHead-RepVGG-best.pth](./DAD-COCOHead-RepVGG-best.pth)
* Labeled DAD-3DHeads + Unlabeled **COCOHead** (based on **EffNetV2-S**): [DAD-COCOHead-EffNetV2-S-best.pth](./DAD-COCOHead-EffNetV2-S-best.pth)
* Labeled DAD-3DHeads + Unlabeled **WildHead** (based on **EffNetV2-S**): [DAD-WildHead-EffNetV2-S-best.pth](./DAD-WildHead-EffNetV2-S-best.pth)

**Processed datasets**

* Unlabeled WildHead (the combination of **COCOHead**, **CrowdHuman**, and **OpenImageV6**, with about 403K heads in total):
  * You can download our ready-made zip file [WildHead_30.zip](./WildHead_30.zip).
* Unlabeled COCOHead (heads extracted from **COCO** train2017 and val2017, about 74K in total):
  * All head images are extracted from [COCO](https://cocodataset.org/), and head bboxes are from [HumanParts](https://github.com/soeaver/Hier-R-CNN#dataset). Please see [dataset_COCOHead.py](https://github.com/hnuzhy/SemiUHPE/tree/main/src/datasets/dataset_COCOHead.py#L194) for more details.
* Unlabeled CrowdHuman (heads extracted from the **CrowdHuman** train and val sets, about 163K in total):
  * All head images are extracted from [CrowdHuman](https://www.crowdhuman.org/), and head bboxes are from [BFJDet](https://github.com/AibeeDetect/BFJDet#1-crowdhuman). Please see [dataset_CrowdHuman.py](https://github.com/hnuzhy/SemiUHPE/tree/main/src/datasets/dataset_CrowdHuman.py#L194) for more details.
* Unlabeled OpenImageV6 (heads extracted from the **OpenImageV6** train, val, and test sets, about 166K in total):
  * All head images are extracted from [OpenImageV6](https://storage.googleapis.com/openimages/web/index.html). You can download all head-related images with [FiftyOne](https://storage.googleapis.com/openimages/web/download.html#download_using_fiftyone), then process them with [headcrop_OpenImageV6.py](https://github.com/hnuzhy/SemiUHPE/tree/main/tools/headcrop_OpenImageV6.py).
* Labeled 300W-LP and AFLW2000 (about 120K synthetic heads in 300W-LP, and 2000 real in-the-wild heads in AFLW2000):
  * You can download them from the [3DDFA homepage](http://www.cbsr.ia.ac.cn/users/xiangyuzhu/projects/3DDFA/main.htm). Please see [dataset_300WLP.py](https://github.com/hnuzhy/SemiUHPE/tree/main/src/datasets/dataset_300WLP.py) and [dataset_AFLW2000.py](https://github.com/hnuzhy/SemiUHPE/tree/main/src/datasets/dataset_AFLW2000.py) for more details.
* Labeled DAD-3DHeads (37840 images in the train set, 4312 in the val set, and 2746 in the test set):
  * You can download them from [DAD-3DHeads](https://github.com/PinataFarms/DAD-3DHeads). Please see [dataset_DAD3DHeads.py](https://github.com/hnuzhy/SemiUHPE/tree/main/src/datasets/dataset_DAD3DHeads.py#L257) for more details.
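All of the unlabeled datasets above are built by cropping head regions out of full images using bbox annotations (HumanParts for COCO, BFJDet for CrowdHuman). The sketch below illustrates the general idea of producing a padded head crop from a `[x, y, w, h]` box; the helper name and the 1.2x enlargement ratio are illustrative assumptions, not the exact values used in the SemiUHPE scripts — see the linked `dataset_*.py` and `headcrop_*.py` files for the actual processing.

```python
# Hypothetical sketch: enlarge a [x, y, w, h] head box about its center,
# then clip the result to the image bounds, yielding (x0, y0, x1, y1)
# corner coordinates suitable for cropping with PIL or NumPy slicing.
# The ratio=1.2 default is an assumption for illustration only.

def enlarge_and_clip_box(box, img_w, img_h, ratio=1.2):
    x, y, w, h = box
    cx, cy = x + w / 2.0, y + h / 2.0          # box center
    new_w, new_h = w * ratio, h * ratio        # enlarged size
    x0 = max(0, int(round(cx - new_w / 2.0)))  # clip to image bounds
    y0 = max(0, int(round(cy - new_h / 2.0)))
    x1 = min(img_w, int(round(cx + new_w / 2.0)))
    y1 = min(img_h, int(round(cy + new_h / 2.0)))
    return x0, y0, x1, y1

# Example: a 100x100 head box near the top-left image border
print(enlarge_and_clip_box([10, 20, 100, 100], img_w=640, img_h=480))
# -> (0, 10, 120, 130)
```

The crop could then be taken with, e.g., `image.crop((x0, y0, x1, y1))` in PIL and resized to the network's input resolution.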