
This demo uses a CLIP-Vision-Marian model checkpoint to predict a Spanish caption for a given image. The model pairs a CLIP-Vision image encoder with a Marian text decoder and was trained on approximately 2.5 million image-text pairs from the Conceptual 12M dataset, with the captions translated into Spanish using Marian.
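
To illustrate how such an encoder-decoder captioning model might be invoked, here is a minimal sketch using the `transformers` library. It assumes the checkpoint is exported in `VisionEncoderDecoderModel` format; the checkpoint name is hypothetical and stands in for wherever the actual weights are hosted.

```python
# Minimal sketch of generating a Spanish caption with a CLIP-Vision encoder
# and Marian decoder. The checkpoint name is hypothetical, and the sketch
# assumes the weights are available in VisionEncoderDecoderModel format.
from PIL import Image
from transformers import AutoImageProcessor, AutoTokenizer, VisionEncoderDecoderModel

CHECKPOINT = "flax-community/clip-vision-marian"  # hypothetical checkpoint name

model = VisionEncoderDecoderModel.from_pretrained(CHECKPOINT)
image_processor = AutoImageProcessor.from_pretrained(CHECKPOINT)
tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)

# Preprocess the input image into pixel values for the CLIP-Vision encoder.
image = Image.open("example.jpg").convert("RGB")
pixel_values = image_processor(images=image, return_tensors="pt").pixel_values

# Beam search tends to produce more fluent captions than greedy decoding.
output_ids = model.generate(pixel_values, max_length=64, num_beams=4)
caption = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(caption)
```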

For more details, click on Usage or Article 🤗 below.