---
license: mit
tags:
  - vision
  - vision-language-model
  - contrastive learning
  - self-supervised learning
pipeline_tag: image-text-to-text
library_name: transformers
---

# [CVPR 2025] COSMOS Model

**Authors:** Sanghwan Kim, Rui Xiao, Mariana-Iuliana Georgescu, Stephan Alaniz, Zeynep Akata

COSMOS is introduced in the paper *COSMOS: Cross-Modality Self-Distillation for Vision Language Pre-training*. COSMOS is trained in a self-supervised learning framework with multi-modal augmentation and a cross-attention module. It outperforms CLIP-based models trained on larger datasets on visual perception and contextual understanding tasks, and also achieves strong performance on downstream tasks including zero-shot image-text retrieval, classification, and semantic segmentation.
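To give a feel for how contrastive vision-language models like COSMOS are used for zero-shot retrieval, here is a minimal, self-contained sketch: the image and each candidate caption are embedded into a shared space, and the caption whose embedding has the highest cosine similarity to the image embedding is retrieved. The embeddings below are toy numbers, not real COSMOS features, and this is not the model's actual API.

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy embeddings standing in for encoder outputs (not real COSMOS features).
image_embedding = [0.9, 0.1, 0.2]
caption_embeddings = {
    "a photo of a dog": [0.8, 0.2, 0.1],
    "a photo of a car": [0.1, 0.9, 0.3],
}

# Zero-shot retrieval: pick the caption closest to the image in embedding space.
best_caption = max(
    caption_embeddings,
    key=lambda c: cosine_similarity(image_embedding, caption_embeddings[c]),
)
print(best_caption)  # -> "a photo of a dog"
```

In practice the encoders produce high-dimensional, L2-normalized features, so the dot product alone serves as the similarity score; the ranking logic is the same.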

## Usage

Please refer to our GitHub repo for detailed usage.

## Citation

If you find our work useful, please consider citing:

```bibtex
@article{kim2024cosmos,
  title={COSMOS: Cross-Modality Self-Distillation for Vision Language Pre-training},
  author={Kim, Sanghwan and Xiao, Rui and Georgescu, Mariana-Iuliana and Alaniz, Stephan and Akata, Zeynep},
  journal={arXiv preprint arXiv:2412.01814},
  year={2024}
}
```