COSMOS Model
Authors: Sanghwan Kim, Rui Xiao, Mariana-Iuliana Georgescu, Stephan Alaniz, Zeynep Akata
COSMOS is introduced in the paper COSMOS: Cross-Modality Self-Distillation for Vision Language Pre-training. COSMOS is trained in a self-supervised learning framework that combines multi-modal augmentation with a cross-attention module. It outperforms CLIP-based models trained on larger datasets in visual perception and contextual understanding tasks, and achieves strong performance on downstream tasks including zero-shot image-text retrieval, classification, and semantic segmentation.
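At a high level, cross-modality self-distillation means a student network is trained to match soft targets produced by an (EMA) teacher across modalities. The sketch below is only a schematic illustration of such a distillation loss; the function name, temperatures, and choice of views are assumptions for illustration, not the paper's exact formulation, which is defined in the paper and the GitHub repo.

```python
import torch
import torch.nn.functional as F

# Schematic illustration only, not the authors' implementation: a generic
# cross-modality self-distillation loss in which a student projection of one
# modality is trained to match soft targets produced by an (EMA) teacher
# projection of the other modality. All names and temperatures are assumptions.
def cross_modal_self_distillation(student_logits: torch.Tensor,
                                  teacher_logits: torch.Tensor,
                                  student_temp: float = 0.1,
                                  teacher_temp: float = 0.04) -> torch.Tensor:
    # Sharpened soft targets from the teacher; no gradients flow to the teacher.
    targets = F.softmax(teacher_logits.detach() / teacher_temp, dim=-1)
    log_preds = F.log_softmax(student_logits / student_temp, dim=-1)
    # Cross-entropy between teacher and student distributions, averaged over the batch.
    return -(targets * log_preds).sum(dim=-1).mean()


# Example: image-branch student outputs distilled toward text-branch teacher outputs.
student = torch.randn(8, 512)   # e.g. student projections of augmented image views
teacher = torch.randn(8, 512)   # e.g. EMA teacher projections of the paired text
loss = cross_modal_self_distillation(student, teacher)
```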
Usage
Please refer to our GitHub repo for detailed usage instructions; a minimal sketch is shown below.
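The snippet below sketches how a CLIP-style checkpoint such as COSMOS could be used for zero-shot image-text matching. It assumes an OpenCLIP-compatible ViT-B/16 checkpoint and a hypothetical local file name (`cosmos_vitb16.pt`); the actual architectures, checkpoint names, and loading utilities are documented in the GitHub repo.

```python
import torch
import open_clip
from PIL import Image

# Hypothetical loading sketch: assumes the released COSMOS weights can be
# loaded as an OpenCLIP ViT-B/16 checkpoint. Checkpoint path and model name
# are assumptions; see the GitHub repo for the supported configurations.
model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-16", pretrained="cosmos_vitb16.pt"  # hypothetical local checkpoint
)
tokenizer = open_clip.get_tokenizer("ViT-B-16")
model.eval()

image = preprocess(Image.open("example.jpg")).unsqueeze(0)
text = tokenizer(["a photo of a dog", "a photo of a cat"])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    # Normalize embeddings and compare them with a cosine-similarity softmax.
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print(probs)  # zero-shot similarity over the candidate captions
```

The same encoded features can be reused for image-text retrieval or, with class-name prompts, for zero-shot classification.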
Citation
If you find our work useful, please consider citing:
@article{kim2024cosmos,
title={COSMOS: Cross-Modality Self-Distillation for Vision Language Pre-training},
author={Kim, Sanghwan and Xiao, Rui and Georgescu, Mariana-Iuliana and Alaniz, Stephan and Akata, Zeynep},
journal={arXiv preprint arXiv:2412.01814},
year={2024}
}