Calibrated Multimodal Representation Learning with Missing Modalities

License: MIT

Multimodal representation learning under partial-modality settings

✨ Overview

*(Figure: anchor shift)*

CalMRL is a multimodal representation learning framework that calibrates cross-modal alignment when some modalities are missing. It combines two complementary goals, illustrated with a short sketch after the list:

  • Cross-modal alignment for robust shared representations
  • Missing-modality calibration through posterior inference and learned generative parameters
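As a rough illustration of how the two goals can share one objective, the following PyTorch sketch pairs a symmetric InfoNCE alignment term with a diagonal-Gaussian calibration term. The function names, tensor shapes, and the `lambda_calib` weight are assumptions made for this example, not the repository's actual API.

```python
# Minimal sketch of a CalMRL-style objective (illustrative; not the repo's API).
import torch
import torch.nn.functional as F

def alignment_loss(z_a, z_b, temperature=0.07):
    """Symmetric InfoNCE between two modality embeddings of shape (batch, dim)."""
    z_a = F.normalize(z_a, dim=-1)
    z_b = F.normalize(z_b, dim=-1)
    logits = z_a @ z_b.t() / temperature
    targets = torch.arange(z_a.size(0), device=z_a.device)
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

def gaussian_nll(x, mu, log_sigma):
    """Negative log-likelihood of x under a diagonal Gaussian N(mu, sigma^2)."""
    return (log_sigma + 0.5 * ((x - mu) / log_sigma.exp()) ** 2).sum(-1).mean()

# Toy batch: text embeddings observed, video embeddings partly completed.
B, D = 8, 256
z_text, z_video = torch.randn(B, D), torch.randn(B, D)
# In practice mu / log_sigma would come from the model's posterior head, not zeros.
mu, log_sigma = torch.zeros(B, D), torch.zeros(B, D)

lambda_calib = 0.1  # assumed weighting between the two goals
loss = alignment_loss(z_text, z_video) + lambda_calib * gaussian_nll(z_video, mu, log_sigma)
```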

🎯 Key Features

🔄 Partial-Modality Learning

  • Handles missing video, audio, text, or subtitle signals
  • Supports posterior-based feature completion with learned modality-specific parameters (see the completion sketch below)
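To make the completion idea concrete, here is a minimal sketch in which a missing modality's feature is drawn from a diagonal Gaussian whose mean is a learned linear map of an observed modality's feature. The class name, parameter names (`W`, `mu`, `log_sigma`), and dimensions are illustrative assumptions, not the repository's code.

```python
import torch
import torch.nn as nn

class PosteriorCompletion(nn.Module):
    """Completes a missing modality feature from an observed one (illustrative sketch)."""
    def __init__(self, obs_dim, miss_dim):
        super().__init__()
        self.W = nn.Linear(obs_dim, miss_dim, bias=False)     # learned cross-modal map
        self.mu = nn.Parameter(torch.zeros(miss_dim))         # learned mean offset
        self.log_sigma = nn.Parameter(torch.zeros(miss_dim))  # learned log std

    def forward(self, z_obs, sample=True):
        mean = self.W(z_obs) + self.mu
        if not sample:
            return mean  # MAP-style completion
        eps = torch.randn_like(mean)
        return mean + eps * self.log_sigma.exp()  # reparameterized draw

# Example: complete a missing audio feature from an observed video feature.
completer = PosteriorCompletion(obs_dim=512, miss_dim=256)
z_video = torch.randn(4, 512)
z_audio_hat = completer(z_video)  # shape (4, 256)
```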

🎯 Multimodal Retrieval

  • Joint training over text-video, text-audio, text-video-audio, and subtitle-aware setups
  • Config-driven recipes for pretraining, finetuning, and evaluation (a hypothetical recipe is sketched below)
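The exact config schema isn't reproduced here, so the recipe below is hypothetical; it only illustrates the kinds of fields a config-driven setup like this typically exposes (task, modalities, stage, checkpoint).

```python
# Hypothetical retrieval recipe (all field names are assumptions, not the repo's schema).
recipe = {
    "task": "retrieval",
    "modalities": ["text", "video", "audio"],  # text-video-audio setup
    "stage": "finetune",                       # pretrain | finetune | eval
    "checkpoint": "path/to/pretrained.pt",
    "missing_modality": {"audio": 0.5},        # drop audio for 50% of samples
}
```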

🧠 Feature Calibration

  • Uses latent posterior inference for modality completion
  • Includes a warmup pipeline to estimate `W`, `mu`, and `log_sigma` (a plausible estimation sketch follows)
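The README doesn't spell out how the warmup estimates are computed, so the sketch below shows one plausible approach under that caveat: fit `W` by least squares on paired features collected during warmup, then take `mu` and `log_sigma` from the residual statistics.

```python
import torch

def warmup_estimate(z_obs, z_miss):
    """Estimate (W, mu, log_sigma) from paired features gathered during warmup.

    z_obs:  (N, obs_dim)  features of the observed modality
    z_miss: (N, miss_dim) features of the modality to be completed later
    """
    # Least-squares fit: z_miss ~= z_obs @ W
    W = torch.linalg.lstsq(z_obs, z_miss).solution        # (obs_dim, miss_dim)
    residual = z_miss - z_obs @ W
    mu = residual.mean(0)                                 # (miss_dim,)
    log_sigma = residual.std(0).clamp_min(1e-6).log()     # (miss_dim,)
    return W, mu, log_sigma

# Toy usage with random paired features.
W, mu, log_sigma = warmup_estimate(torch.randn(1000, 512), torch.randn(1000, 256))
```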

🏗️ Architecture

The current codebase is organized around three main stages (an illustrative end-to-end sketch follows the list):

  1. 🔧 Multimodal Encoding: Video, audio, text, and subtitle features are extracted with VAST-style encoders.
  2. 🧮 Representation Calibration: Shared embeddings are aligned while latent posterior inference estimates missing information.
  3. 🔄 Downstream Evaluation: Retrieval and other tasks are executed through a unified config-driven pipeline.
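Read together, the three stages chain roughly as in the sketch below; every module here is a stand-in placeholder (linear layers instead of real encoders), included only to show the data flow.

```python
import torch
import torch.nn as nn

# Placeholder modules standing in for real encoders and the completion head.
encoders = nn.ModuleDict({
    "text":  nn.Linear(300, 256),  # stands in for a VAST-style text encoder
    "video": nn.Linear(512, 256),  # stands in for a VAST-style video encoder
})
completer = nn.Linear(256, 256)    # stands in for posterior-based audio completion

def run_pipeline(batch):
    # 1. Multimodal encoding of whatever modalities are present.
    feats = {m: encoders[m](x) for m, x in batch.items() if m in encoders}
    # 2. Calibration: impute the missing audio embedding from video.
    feats["audio"] = completer(feats["video"])
    # 3. Downstream retrieval: similarity scores in the shared space.
    sim = nn.functional.cosine_similarity(feats["text"], feats["video"], dim=-1)
    return feats, sim

feats, sim = run_pipeline({"text": torch.randn(2, 300), "video": torch.randn(2, 512)})
```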

Citation

If this project is useful for your research, please cite:

```bibtex
@article{liu2025calibrated,
  title={Calibrated Multimodal Representation Learning with Missing Modalities},
  author={Liu, Xiaohao and Xia, Xiaobo and Wei, Jiaheng and Yang, Shuo and Su, Xiu and Ng, See-Kiong and Chua, Tat-Seng},
  journal={arXiv preprint arXiv:2511.12034},
  year={2025}
}
```