---
license: other
license_name: sample-code-license
license_link: LICENSE
library_name: ml-4m
---

# 4M: Massively Multimodal Masked Modeling

*David Mizrahi\*, Roman Bachmann\*, Oğuzhan Fatih Kar, Teresa Yeo, Mingfei Gao, Afshin Dehghan, Amir Zamir*

Official implementation and pre-trained models for "4M: Massively Multimodal Masked Modeling" (NeurIPS 2023).

[`Website`](https://4m.epfl.ch) | [`Paper`](https://arxiv.org/abs/2312.06647) | [`GitHub`](https://github.com/apple/ml-4m)

4M is a framework for training "any-to-any" foundation models, using tokenization and masking to scale to many diverse modalities. Models trained using 4M can perform a wide range of vision tasks, transfer well to unseen tasks and modalities, and are flexible and steerable multimodal generative models.

## Installation

For install instructions, please see https://github.com/apple/ml-4m.

## Usage

The CLIP-B/16 tokenizer can be loaded from Hugging Face Hub as follows:

```python
from fourm.vq.vqvae import VQVAE
tok_clip = VQVAE.from_pretrained('EPFL-VILAB/4M_tokenizers_CLIP-B16_8k_224-448')
```

Please see https://github.com/apple/ml-4m/README_TOKENIZATION.md for more detailed instructions and https://github.com/apple/ml-4m for other tokenizer and 4M model checkpoints. A slightly fuller usage sketch is provided at the end of this card.

## Citation

If you find this repository helpful, please consider citing our work:

```
@inproceedings{mizrahi20234m,
    title={{4M}: Massively Multimodal Masked Modeling},
    author={David Mizrahi and Roman Bachmann and O{\u{g}}uzhan Fatih Kar and Teresa Yeo and Mingfei Gao and Afshin Dehghan and Amir Zamir},
    booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
    year={2023},
}
```

## License

The model weights in this repository are released under the Sample Code license as found in the [LICENSE](LICENSE) file.
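
## Appendix: usage sketch

Complementing the loading snippet in the Usage section above, the sketch below illustrates how this tokenizer might be applied to a dense CLIP-B/16 feature map. It is a minimal, hedged sketch only: the `tokenize` / `decode_tokens` method names, the input tensor shape, and the device handling are assumptions rather than documented API, so please consult the tokenization README in the ml-4m repository for the authoritative usage.

```python
import torch
from fourm.vq.vqvae import VQVAE

# Load the CLIP-B/16 feature tokenizer (8k vocabulary, trained for 224-448 px inputs).
tok_clip = VQVAE.from_pretrained('EPFL-VILAB/4M_tokenizers_CLIP-B16_8k_224-448')
tok_clip = tok_clip.eval().to('cuda' if torch.cuda.is_available() else 'cpu')
device = next(tok_clip.parameters()).device

# Dummy tensor standing in for a dense CLIP-B/16 feature map of a 224x224 image.
# NOTE: the channel dimension (512) and the 14x14 spatial grid are assumptions; check the
# ml-4m tokenization README for the exact expected input format.
clip_features = torch.randn(1, 512, 14, 14, device=device)

with torch.no_grad():
    # ASSUMED interface: `tokenize` maps a feature map to discrete token indices and
    # `decode_tokens` reconstructs a feature map from those indices.
    tokens = tok_clip.tokenize(clip_features)
    reconstruction = tok_clip.decode_tokens(tokens)

print(tokens.shape, reconstruction.shape)
```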