---
license: apache-2.0
datasets:
- ILSVRC/imagenet-1k
language:
- en
pipeline_tag: image-classification
tags:
- Robust SSL
- DINO
- SimCLR
- Perspective Distortion
- MPD
- ImageNet-PD
- Self-supervised Learning
---

**Self-Supervised Pretrained Models with MPD Integration**

Publication: [*Möbius Transform for Mitigating Perspective Distortions in Representation Learning*, European Conference on Computer Vision (ECCV 2024)](https://huggingface.co/papers/2405.02296)

**Model Description**

This release includes two self-supervised pretrained models that integrate the Mitigating Perspective Distortion (MPD) method:

1. ResNet50 pretrained with SimCLR: https://huggingface.co/prakashchhipa/MPD_SSL/blob/main/SimCLR_resnet50_with_MPD.pth.tar
2. ViT-small pretrained with DINO: https://huggingface.co/prakashchhipa/MPD_SSL/blob/main/DINO_vit-small_with_MPD.pth

Both models were trained with a *batch size of 512* for *100 epochs*. MPD simulates real-world perspective distortions during pretraining, which makes the learned representations more robust across various computer vision tasks.

**Training Details**

1. Algorithms: SimCLR for ResNet50, DINO for ViT-small
2. Batch size: 512
3. Epochs: 100
4. Method: Mitigating Perspective Distortion (MPD)

**Performance**

Integrating MPD into both the SimCLR and DINO frameworks significantly improves performance on tasks affected by perspective distortion. The models can be used directly for downstream tasks or further fine-tuned for specific applications. Refer to the MPD paper for detailed results.

**Source Code**

A two-minute summary of MPD, together with links to the source code repository and the ImageNet-PD benchmark, is available at https://prakashchhipa.github.io/projects/mpd/

**Citation**

Chhipa, P. C., Chippa, M. S., De, K., Saini, R., Liwicki, M., & Shah, M. (2024). Möbius Transform for Mitigating Perspective Distortions in Representation Learning. arXiv preprint arXiv:2405.02296.
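
**Loading the Checkpoints (Illustrative Sketch)**

The checkpoints linked above can be loaded with standard PyTorch tooling. The sketch below is a minimal example, not the official loading code: it assumes the checkpoints follow common SSL conventions (weights possibly nested under keys such as `state_dict` or `teacher`, with wrapper prefixes such as `module.`, `backbone.`, or `encoder.`), and it assumes the ViT-small backbone matches timm's `vit_small_patch16_224`. The repository id `prakashchhipa/MPD_SSL` and the filenames are taken from the download links above; inspect the downloaded files and adapt the key handling to their actual layout.

```python
# Illustrative loading sketch. The nested-key names ("state_dict", "teacher")
# and the wrapper prefixes stripped below are assumptions about common SSL
# checkpoint layouts, not details confirmed by this model card.
import torch
import timm
from huggingface_hub import hf_hub_download
from torchvision.models import resnet50


def strip_prefixes(state_dict, prefixes=("module.", "backbone.", "encoder.")):
    """Remove common wrapper prefixes so keys match a plain backbone."""
    cleaned = {}
    for k, v in state_dict.items():
        for p in prefixes:
            if k.startswith(p):
                k = k[len(p):]
        cleaned[k] = v
    return cleaned


# --- ResNet50 pretrained with SimCLR + MPD ---
resnet_ckpt = hf_hub_download(
    repo_id="prakashchhipa/MPD_SSL",
    filename="SimCLR_resnet50_with_MPD.pth.tar",
)
ckpt = torch.load(resnet_ckpt, map_location="cpu")
# Many SSL checkpoints nest the backbone weights under "state_dict".
state = ckpt.get("state_dict", ckpt) if isinstance(ckpt, dict) else ckpt

backbone = resnet50(weights=None)  # random init; SimCLR projection head is typically discarded
missing, unexpected = backbone.load_state_dict(strip_prefixes(state), strict=False)
print(f"ResNet50 -> missing: {len(missing)}, unexpected: {len(unexpected)}")

# --- ViT-small pretrained with DINO + MPD ---
vit_ckpt = hf_hub_download(
    repo_id="prakashchhipa/MPD_SSL",
    filename="DINO_vit-small_with_MPD.pth",
)
vit_state = torch.load(vit_ckpt, map_location="cpu")
if isinstance(vit_state, dict) and "teacher" in vit_state:  # typical DINO layout (assumed)
    vit_state = vit_state["teacher"]

vit = timm.create_model("vit_small_patch16_224", pretrained=False, num_classes=0)
missing, unexpected = vit.load_state_dict(strip_prefixes(vit_state), strict=False)
print(f"ViT-small -> missing: {len(missing)}, unexpected: {len(unexpected)}")
```

Loading with `strict=False` and printing the missing/unexpected keys is a simple way to verify how well the checkpoint keys match the chosen backbone before using the features for downstream tasks or fine-tuning.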