This demo uses a [CLIP-Vision-Bert model checkpoint](https://huggingface.co/flax-community/clip-vision-bert-cc12m-70k) pre-trained with a text-only masked language modeling objective (only text tokens are masked) on approximately 10 million image-text pairs from the [Conceptual 12M dataset](https://github.com/google-research-datasets/conceptual-12m), whose captions were translated using [MBart](https://huggingface.co/transformers/model_doc/mbart.html). The captions span four languages: English, French, German, and Spanish, giving roughly 2.5 million examples per language. The model can be used for mask-filling, as shown in this demo, and the caption may be written in any of these four languages. For more details, click on `Usage` above or `Article` in the sidebar. 🤗
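
For reference, mask-filling with this checkpoint might look roughly like the sketch below. Note that `FlaxCLIPVisionBertForMaskedLM` is a custom class defined in the project's own repository, not in the `transformers` library, so the import path, the tokenizer and image-processor checkpoints, and the position of the `[MASK]` token in the fused sequence are all assumptions here, not a definitive recipe.

```python
# A minimal sketch of mask-filling with the demo checkpoint.
# Assumption: FlaxCLIPVisionBertForMaskedLM comes from the project's own
# repository (hypothetical import path below), NOT from `transformers`.
import jax.numpy as jnp
from PIL import Image
from transformers import BertTokenizerFast, CLIPProcessor

from models.flax_clip_vision_bert.modeling_clip_vision_bert import (  # assumed path
    FlaxCLIPVisionBertForMaskedLM,
)

# Assumed backbones: multilingual BERT for text, CLIP ViT-B/32 for images.
tokenizer = BertTokenizerFast.from_pretrained("bert-base-multilingual-uncased")
image_processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
model = FlaxCLIPVisionBertForMaskedLM.from_pretrained(
    "flax-community/clip-vision-bert-cc12m-70k"
)

# A caption in any of the four languages, with one token masked out.
text = "a photo of a [MASK] sitting on a bench"
image = Image.open("example.jpg")  # hypothetical local image

text_inputs = tokenizer(text, return_tensors="np")
pixel_values = image_processor(images=image, return_tensors="np").pixel_values

outputs = model(
    input_ids=text_inputs.input_ids,
    attention_mask=text_inputs.attention_mask,
    pixel_values=pixel_values,
)

# Pick the highest-scoring token at the [MASK] position
# (assumes text tokens occupy the leading positions of the fused sequence).
mask_index = int(jnp.argmax(text_inputs.input_ids[0] == tokenizer.mask_token_id))
predicted_id = int(jnp.argmax(outputs.logits[0, mask_index]))
print(tokenizer.decode([predicted_id]))
```

Because the text side is multilingual BERT, the same masked caption could equally be given in French, German, or Spanish without changing any of the code above.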