### Pretraining

We follow an encoder-decoder approach for image captioning, where the image encoder is the CLIP Vision model (a ViT transformer). The pre-training task is image-to-text generation. To create the decoder inputs, we shift the input tokens one position to the right, prepending the decoder start token, while the original input tokens become the labels (a sketch of this shift is given below). The model is trained on the dataset in an end-to-end fashion.

**Dataset**

The dataset we use for pre-training is a cleaned version of Conceptual 12M. After downloading the dataset and removing broken images, we are left with about 10M images. To save time, we use 2.5M of these image-text pairs. We then use the Marian `Helsinki-NLP/opus-mt-en-es` checkpoint to translate the captions into Spanish.

**Model**

The model is shown in the image above. We create a custom model in Flax which integrates the CLIP Vision model as an encoder inside the Marian model. We also use custom configs and modules to accommodate these changes and to allow loading from Marian and CLIP Vision checkpoints. The image is fed to the CLIP Vision encoder and the shifted token ids are fed to the Marian decoder. We use the `Helsinki-NLP/opus-mt-en-es` and `openai/clip-vit-base-patch32` checkpoints for the Marian and CLIP Vision models, respectively.

All our code is available on [GitHub](https://github.com/gchhablani/spanish-image-captioning).
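
As a rough illustration of the right-shift described above, the snippet below builds decoder inputs from the label token ids by prepending the decoder start token and dropping the last position. It is a minimal sketch, not the repo's exact preprocessing code; `transformers` ships a similar `shift_tokens_right` helper for its Flax seq2seq models, and the token ids used here are toy values.

```python
import jax.numpy as jnp

def shift_tokens_right(input_ids: jnp.ndarray, pad_token_id: int, decoder_start_token_id: int) -> jnp.ndarray:
    """Build decoder inputs: prepend the decoder start token and drop the
    last label token, so position t is trained to predict label t."""
    shifted = jnp.zeros_like(input_ids)
    shifted = shifted.at[:, 1:].set(input_ids[:, :-1])
    shifted = shifted.at[:, 0].set(decoder_start_token_id)
    # Labels sometimes mark ignored (padding) positions with -100; map those
    # back to the real pad token id in the decoder inputs.
    return jnp.where(shifted == -100, pad_token_id, shifted)

# Toy example with made-up ids; real values come from the Marian tokenizer/config.
labels = jnp.array([[17, 42, 9, 2]])
print(shift_tokens_right(labels, pad_token_id=0, decoder_start_token_id=0))  # [[ 0 17 42  9]]
```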
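
The caption-translation step can be reproduced with off-the-shelf `transformers` tooling. The snippet below is one straightforward way to do it with the translation pipeline; the example captions are made up, and the project's actual preprocessing script may batch and checkpoint this differently given the 2.5M captions involved.

```python
from transformers import pipeline

# English -> Spanish translation with the same Marian checkpoint used for the decoder.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-es")

captions = [
    "A dog running on the beach.",
    "Two children playing chess in the park.",
]
translations = translator(captions, max_length=64)
spanish_captions = [t["translation_text"] for t in translations]
print(spanish_captions)
```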
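
To make the encoder-decoder wiring concrete, here is a conceptual sketch using the stock Flax classes from `transformers` rather than the repo's custom modules: the CLIP Vision encoder produces patch embeddings that the Marian decoder cross-attends to while predicting Spanish tokens. The randomly initialized projection bridging CLIP's 768-dimensional patch embeddings and Marian's 512-dimensional decoder is an illustrative stand-in for the custom Flax modules in the repo, which integrate the two models into a single trainable network.

```python
import jax
import jax.numpy as jnp
from transformers import FlaxCLIPVisionModel, FlaxMarianMTModel, MarianTokenizer

# Load the two pretrained checkpoints (add `from_pt=True` if a checkpoint
# only ships PyTorch weights).
vision_encoder = FlaxCLIPVisionModel.from_pretrained("openai/clip-vit-base-patch32")
text_model = FlaxMarianMTModel.from_pretrained("Helsinki-NLP/opus-mt-en-es")
tokenizer = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-es")

# Dummy batch: one 224x224 RGB image and one Spanish caption.
pixel_values = jnp.zeros((1, 3, 224, 224), dtype=jnp.float32)
labels = jnp.asarray(tokenizer(["Un perro corre por la playa."], return_tensors="np").input_ids)

# Right-shift the labels to get decoder inputs (same shift as sketched earlier).
start_id = text_model.config.decoder_start_token_id
decoder_input_ids = jnp.concatenate(
    [jnp.full((labels.shape[0], 1), start_id, dtype=labels.dtype), labels[:, :-1]], axis=-1
)

# 1) Encode the image: (1, 50, 768) patch embeddings for ViT-B/32.
image_states = vision_encoder(pixel_values).last_hidden_state

# 2) Project to Marian's hidden size (512) so the decoder's cross-attention
#    shapes line up. In the actual model this bridging is learned end-to-end.
key = jax.random.PRNGKey(0)
proj = 0.02 * jax.random.normal(key, (image_states.shape[-1], text_model.config.d_model))
encoder_hidden_states = image_states @ proj

# 3) Decode: the Marian decoder cross-attends to the image features and
#    produces next-token logits over the Spanish vocabulary.
outputs = text_model.decode(decoder_input_ids, encoder_outputs=(encoder_hidden_states,))
print(outputs.logits.shape)  # (1, caption_length, vocab_size)
```

Training then amounts to minimizing the cross-entropy between these logits and the label tokens, with gradients flowing through both the decoder and the image encoder.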