This demo uses the [CLIP-mBART50 model checkpoint](https://huggingface.co/flax-community/clip-vit-base-patch32_mbart-large-50) to predict captions for a given image in four languages (English, French, German, Spanish). The model pairs an image encoder (CLIP-ViT) with a text decoder (mBART50) and was trained on approximately 5 million image-text pairs taken from the [Conceptual 12M dataset](https://github.com/google-research-datasets/conceptual-12m), translated with [MarianMT](https://huggingface.co/transformers/model_doc/marian.html). For more details, click on `Usage` 🤗 above.
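
Below is a minimal sketch of how an encoder-decoder captioner like this could be queried in Python. It assumes (not verified) that the checkpoint loads through transformers' generic `VisionEncoderDecoderModel` API; the demo itself ships a custom Flax model class in the checkpoint repo, so treat this as illustrative rather than the demo's exact code. The sample image URL is a placeholder.

```python
# Hedged sketch: multilingual captioning with a CLIP encoder and an
# mBART-50 decoder. Assumption: the checkpoint is loadable via the
# generic VisionEncoderDecoderModel loader.
from PIL import Image
import requests
from transformers import (
    VisionEncoderDecoderModel,
    CLIPImageProcessor,
    MBart50TokenizerFast,
)

CKPT = "flax-community/clip-vit-base-patch32_mbart-large-50"

model = VisionEncoderDecoderModel.from_pretrained(CKPT)  # assumption: generic loader works here
image_processor = CLIPImageProcessor.from_pretrained("openai/clip-vit-base-patch32")
tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50")

# Placeholder image for illustration.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
pixel_values = image_processor(images=image, return_tensors="pt").pixel_values

# mBART-50 selects the output language by forcing the target language
# code as the first generated token; the demo covers en/fr/de/es.
for lang in ["en_XX", "fr_XX", "de_DE", "es_XX"]:
    output_ids = model.generate(
        pixel_values,
        forced_bos_token_id=tokenizer.lang_code_to_id[lang],
        max_length=32,
        num_beams=4,
    )
    caption = tokenizer.decode(output_ids[0], skip_special_tokens=True)
    print(f"{lang}: {caption}")
```

The `forced_bos_token_id` trick is standard mBART-50 practice: the same decoder weights produce different languages depending on which language-code token starts the sequence, which is how one checkpoint serves all four caption languages.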