### Pretraining
We follow an encoder-decoder approach for image captioning, where the image encoder is the CLIP Vision model (a ViT transformer). The pre-training task is image-to-text generation. To create the decoder inputs, we shift the caption tokens one position to the right, prepending an `<eos>` token, while the original tokens serve as the labels. The model is trained on the dataset in an end-to-end fashion.
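
As a rough illustration of this shift-right step, the NumPy sketch below prepends `<eos>` and drops the last token; the token ids and `eos_token_id` value are made up for the example and are not values from the project.

```python
# A minimal NumPy sketch of the shift-right step described above.
# The toy token ids and eos_token_id are illustrative assumptions.
import numpy as np

def shift_tokens_right(labels: np.ndarray, eos_token_id: int) -> np.ndarray:
    """Prepend <eos> and drop the last token, so position t predicts labels[t]."""
    decoder_input_ids = np.zeros_like(labels)
    decoder_input_ids[:, 0] = eos_token_id
    decoder_input_ids[:, 1:] = labels[:, :-1]
    return decoder_input_ids

labels = np.array([[101, 102, 103, 2]])        # caption token ids, ending in </s>
decoder_input_ids = shift_tokens_right(labels, eos_token_id=2)
# decoder_input_ids -> [[2, 101, 102, 103]]; labels stay unchanged as targets.
```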
**Dataset**
The dataset we use for pre-training is a cleaned version of Conceptual 12M. After downloading the dataset and removing broken images, about 10M images remain; to save time, we use 2.5M of these image-text pairs. We then use the MarianMT `Helsinki-NLP/opus-mt-{src}-{tgt}` checkpoints to translate the captions into four different languages - English, French, German, and Spanish - keeping approximately 2.5M examples of each language.
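
For illustration, here is a minimal sketch of translating captions with one such MarianMT checkpoint (English to German). The batching and exact translation pipeline used for the 2.5M pairs are not described here, so treat this as an assumed setup for a single language pair.

```python
# Translate a batch of English captions to German with a MarianMT checkpoint.
# The caption text and batch size are placeholders for illustration.
from transformers import MarianMTModel, MarianTokenizer

checkpoint = "Helsinki-NLP/opus-mt-en-de"   # one instance of opus-mt-{src}-{tgt}
tokenizer = MarianTokenizer.from_pretrained(checkpoint)
model = MarianMTModel.from_pretrained(checkpoint)

captions = ["A dog runs across a grassy field."]
batch = tokenizer(captions, return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```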
**Model**
The model is shown in the image above. We create a custom model in Flax which integrates the CLIP Vision model as an encoder inside the mBART model. We also use custom configs and modules to accommodate these changes and to allow loading from mBART and CLIP Vision checkpoints. The image is fed to the CLIP Vision encoder and the shifted token ids are fed to the mBART decoder. We use the `facebook/mbart-large-50` and `openai/clip-vit-base-patch32` checkpoints for the mBART and CLIP Vision models, respectively. All our code is available on [GitHub](https://github.com/gchhablani/multilingual-image-captioning).
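
The sketch below gives a simplified view of this wiring using the two public checkpoints: the image goes through the CLIP Vision encoder, its hidden states are bridged to mBART's hidden size with a hypothetical linear projection, and the shifted token ids are decoded against them. The actual integration lives in the repository's custom Flax modules, so the projection layer and the `decode` call here are assumptions for illustration only.

```python
# Conceptual sketch of the CLIP-Vision -> mBART data flow; not the repo's code.
import jax
import jax.numpy as jnp
import flax.linen as nn
from transformers import FlaxCLIPVisionModel, FlaxMBartForConditionalGeneration

clip = FlaxCLIPVisionModel.from_pretrained("openai/clip-vit-base-patch32")
# (add from_pt=True if only PyTorch weights are published for this checkpoint)
mbart = FlaxMBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50")

pixel_values = jnp.zeros((1, 3, 224, 224))           # preprocessed image (NCHW)
decoder_input_ids = jnp.array([[2, 101, 102, 103]])  # shifted caption token ids

# Encode the image: (1, 50, 768) patch embeddings from the ViT encoder.
image_hidden = clip(pixel_values=pixel_values).last_hidden_state

# Hypothetical bridge: project to mBART's hidden size so cross-attention fits.
proj = nn.Dense(mbart.config.d_model)
proj_params = proj.init(jax.random.PRNGKey(0), image_hidden)
encoder_hidden = proj.apply(proj_params, image_hidden)

# Decode captions conditioned on the image features.
outputs = mbart.decode(decoder_input_ids, encoder_outputs=(encoder_hidden,))
logits = outputs.logits                              # (1, seq_len, vocab_size)
```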
Our model reached an **eval loss of ~2.6** at around 70K steps. Here are the BLEU scores (on a 0-1 scale) for the different languages:
|Language |BLEU-1 |BLEU-2 |BLEU-3 |BLEU-4 |
|---------|-------|-------|-------|-------|
|English  |0.13083|0.08887|0.06681|0.04899|
|Spanish  |0.15981|0.09858|0.06918|0.04776|
|German   |0.14234|0.09817|0.07405|0.0515 |
|French   |0.13021|0.08862|0.06598|0.04647|
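
Cumulative BLEU-1 through BLEU-4 on a 0-1 scale can be computed as sketched below with NLTK; the tokenization and evaluation script actually used are in the GitHub repository, so this is only an assumed setup with placeholder captions.

```python
# Compute cumulative BLEU-1..4 (0-1 scale) over tokenized captions with NLTK.
# The reference and hypothesis sentences here are placeholders.
from nltk.translate.bleu_score import corpus_bleu

references = [[["a", "dog", "runs", "across", "the", "field"]]]  # refs per sample
hypotheses = [["a", "dog", "runs", "in", "the", "field"]]        # generated captions

weights = {
    "BLEU-1": (1.0, 0, 0, 0),
    "BLEU-2": (0.5, 0.5, 0, 0),
    "BLEU-3": (1/3, 1/3, 1/3, 0),
    "BLEU-4": (0.25, 0.25, 0.25, 0.25),
}
for name, w in weights.items():
    print(name, corpus_bleu(references, hypotheses, weights=w))
```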