Pretraining
We follow an encoder-decoder approach for image captioning, where the image encoder is the CLIP Vision model (a ViT transformer). The pre-training task is image-to-text generation. We take the input tokens and shift them one position to the right using an <eos> token to create the decoder inputs, while the original input tokens become the labels. The model is trained on the dataset in an end-to-end fashion.
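As a rough illustration, the shifting step could look like the minimal sketch below (the function name and the example token ids are ours, not the project's; in practice the shift may follow Hugging Face's mBART convention more closely):

```python
import numpy as np

def shift_tokens_right(labels: np.ndarray, eos_token_id: int) -> np.ndarray:
    """Build decoder inputs by shifting the label ids one position to the right.

    The first position is filled with the <eos> token, which acts as the
    decoder start token; the original, unshifted ids serve as the labels.
    """
    decoder_input_ids = np.zeros_like(labels)
    decoder_input_ids[:, 1:] = labels[:, :-1]
    decoder_input_ids[:, 0] = eos_token_id
    return decoder_input_ids

# Illustrative ids for one tokenized caption.
labels = np.array([[250004, 47, 1098, 21, 2]])
decoder_input_ids = shift_tokens_right(labels, eos_token_id=2)
# decoder_input_ids -> [[2, 250004, 47, 1098, 21]]
```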
Dataset
The dataset we use for pre-training is a cleaned version of Conceptual 12M. After downloading the dataset and removing broken images, we are left with about 10M images. To save time, we use 5M of these image-text pairs. We then use the mBART-50 mbart-large-50-one-to-many-mmt
checkpoint to translate the dataset into four different languages - English, French, German, and Spanish - keeping approximately 1.25 million examples of each language.
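The translation step can be sketched with the standard Hugging Face API for this checkpoint; the example caption below is made up, and the actual preprocessing script may batch and parallelize this differently:

```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

checkpoint = "facebook/mbart-large-50-one-to-many-mmt"
model = MBartForConditionalGeneration.from_pretrained(checkpoint)
tokenizer = MBart50TokenizerFast.from_pretrained(checkpoint, src_lang="en_XX")

caption = "A dog playing with a ball in the park."  # illustrative caption, not from the dataset

# Translate the English caption into French by forcing the target language token.
inputs = tokenizer(caption, return_tensors="pt")
generated = model.generate(**inputs, forced_bos_token_id=tokenizer.lang_code_to_id["fr_XX"])
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```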
Model
The model is shown in the image above. We create a custom model in Flax which integrates the CLIP Vision model as an encoder inside the mBART model. We also use custom configs and modules to accommodate these changes and to allow loading from mBART and CLIP Vision checkpoints. The image is fed to the CLIP Vision encoder and the shifted token ids are fed to the mBART decoder. We use the facebook/mbart-large-50 and openai/clip-vit-base-patch32 checkpoints for the mBART and CLIP Vision models, respectively. All our code is available on GitHub.
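As a rough illustration of how the two pretrained halves can be wired together, consider the sketch below. This is not the project's actual custom module: in particular, the Dense projection bridging CLIP's 768-dimensional outputs to mBART's 1024-dimensional decoder, and the dummy inputs, are assumptions made for the example.

```python
import jax
import jax.numpy as jnp
import flax.linen as nn
from transformers import FlaxCLIPVisionModel, FlaxMBartForConditionalGeneration

# Load the two pretrained halves named above
# (add from_pt=True if only PyTorch weights are available for a checkpoint).
vision_encoder = FlaxCLIPVisionModel.from_pretrained("openai/clip-vit-base-patch32")
text_model = FlaxMBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50")

# Dummy inputs: one 224x224 image and a short sequence of shifted token ids.
pixel_values = jnp.ones((1, 3, 224, 224))
decoder_input_ids = jnp.array([[2, 250004, 47, 1098]])

# Encode the image; last_hidden_state has shape (1, 50, 768) for ViT-B/32.
image_features = vision_encoder(pixel_values=pixel_values).last_hidden_state

# Assumed projection from CLIP's hidden size (768) to mBART's (1024).
proj = nn.Dense(features=1024)
proj_params = proj.init(jax.random.PRNGKey(0), image_features)
encoder_hidden_states = proj.apply(proj_params, image_features)

# Feed the projected image features to the mBART decoder as encoder outputs
# and read off next-token logits over the mBART vocabulary.
outputs = text_model.decode(
    decoder_input_ids=decoder_input_ids,
    encoder_outputs=(encoder_hidden_states,),
)
print(outputs.logits.shape)  # (1, 4, vocab_size)
```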