
ViT-GPT2: an image captioning model built by combining a ViT image encoder with a French GPT2 decoder.

Part of the Hugging Face JAX/Flax community event.

The GPT2 model source code is modified so that it can accept an encoder's output. The pretrained weights of both models are loaded, together with a set of randomly initialized cross-attention weights. The model is trained on 65000 images from the COCO dataset for about 1500 steps, with the original English captions translated to French for training.
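The bridge between the two pretrained models is the cross-attention layer described above: the GPT2 decoder's hidden states act as queries over the ViT encoder's patch embeddings. The following is a minimal single-head sketch in numpy (not the actual modified GPT2 code); the shapes, dimension `d`, and weight initialization are illustrative assumptions.

```python
import numpy as np

def cross_attention(decoder_states, encoder_outputs, Wq, Wk, Wv):
    """Single-head cross-attention: decoder queries attend over encoder outputs."""
    q = decoder_states @ Wq           # (T_dec, d) - queries from GPT2 hidden states
    k = encoder_outputs @ Wk          # (T_enc, d) - keys from ViT patch embeddings
    v = encoder_outputs @ Wv          # (T_enc, d) - values from ViT patch embeddings
    scores = q @ k.T / np.sqrt(q.shape[-1])          # scaled dot-product, (T_dec, T_enc)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ v                # (T_dec, d) - image-conditioned decoder states

rng = np.random.default_rng(0)
d = 8  # toy hidden size for illustration
# Randomly initialized cross-attention weights, as in the model card
Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
enc = rng.standard_normal((196, d))   # e.g. one embedding per 16x16 patch of a 224x224 image
dec = rng.standard_normal((5, d))     # GPT2 hidden states for a 5-token caption prefix
out = cross_attention(dec, enc, Wq, Wk, Wv)
print(out.shape)  # (5, 8)
```

Because only these projection matrices start from random initialization while the ViT and GPT2 weights are pretrained, the 1500 training steps mainly have to learn this image-to-text connection.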