---
license: apache-2.0
datasets:
  - coco
  - conceptual-caption
  - sbu
  - flickr30k
  - vqa
  - gqa
  - vg-qa
  - open-images
library_name: pytorch
tags:
  - pytorch
  - image-to-text
---

# Model Card: VinVL for Captioning 🖼️

Microsoft's VinVL base model fine-tuned for the image captioning downstream task.

## COCO Test set metrics 📈

Table from the authors (Table 7, cross-entropy optimization):

| Bleu-4 | METEOR | CIDEr | SPICE |
|--------|--------|-------|-------|
| 0.38   | 0.30   | 1.29  | 0.23  |

## Usage and Installation

More info about how to install and use this model can be found here: michelecafagna26/VinVL

## Feature extraction ⛏️

This model relies on a separate visual backbone to extract the region features used as input for caption generation. More details on the feature extraction can be found in the repository linked above; a rough sketch of the flow is shown below.
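As a hypothetical illustration of how the two stages fit together, the sketch below shows a feature-extraction step feeding the captioning model. The `VinVLVisualBackbone` wrapper name and the keys of its output dict are assumptions, not a confirmed API; check the linked repository for the actual interface.

```python
# Hypothetical sketch: the wrapper name `VinVLVisualBackbone` and the output
# keys below are assumptions; refer to the linked repository for the real API.
from scene_graph_benchmark.wrappers import VinVLVisualBackbone

detector = VinVLVisualBackbone()

# Run the detector on an image. The output is assumed to be a dict holding
# per-box region features, bounding boxes, and predicted object tags.
dets = detector("path/to/image.jpg")

feat_obj = dets["features"]  # (num_boxes, feat_size) region features
labels = [dets["classes"]]   # object tags consumed by the captioning model
```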

## Quick start 🚀

```python
import torch

from transformers.pytorch_transformers import BertConfig, BertTokenizer
from oscar.modeling.modeling_bert import BertForImageCaptioning
from oscar.wrappers import OscarTensorizer

ckpt = "path/to/the/checkpoint"
device = "cuda" if torch.cuda.is_available() else "cpu"

# original code
config = BertConfig.from_pretrained(ckpt)
tokenizer = BertTokenizer.from_pretrained(ckpt)
model = BertForImageCaptioning.from_pretrained(ckpt, config=config).to(device)

# This takes care of the preprocessing
tensorizer = OscarTensorizer(tokenizer=tokenizer, device=device)

# feat_obj is a numpy array with shape (num_boxes, feat_size);
# feat_size is 2054 by default in VinVL. unsqueeze(0) adds the
# batch dimension, giving shape (1, num_boxes, feat_size).
visual_features = torch.from_numpy(feat_obj).to(device).unsqueeze(0)

# labels (object tags) are usually produced by the feature extractor
labels = [['boat', 'boat', 'boat', 'bottom', 'bush', 'coat', 'deck', 'deck', 'deck', 'dock', 'hair', 'jacket']]

inputs = tensorizer.encode(visual_features, labels=labels)
outputs = model(**inputs)

pred = tensorizer.decode(outputs)

# the output looks like this:
# pred = {0: [{'caption': 'a red and white boat traveling down a river next to a small boat.', 'conf': 0.7070220112800598}]}
```
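VinVL's default `feat_size` of 2054 is conventionally the 2048-dimensional region embedding concatenated with 6 box-geometry values. If your extractor returns raw 2048-d features and pixel-coordinate boxes, a minimal sketch of assembling `feat_obj` could look like the following; the function and variable names are illustrative, and the exact spatial-feature layout is an assumption:

```python
import numpy as np

def build_feat_obj(region_feats, boxes, im_w, im_h):
    """Concatenate 2048-d region features with 6 spatial features per box.

    region_feats: (num_boxes, 2048) array from the visual backbone
    boxes: (num_boxes, 4) array of (x1, y1, x2, y2) pixel coordinates
    """
    x1, y1, x2, y2 = boxes[:, 0], boxes[:, 1], boxes[:, 2], boxes[:, 3]
    # Normalized corners plus normalized width/height (assumed layout).
    spatial = np.stack(
        [x1 / im_w, y1 / im_h, x2 / im_w, y2 / im_h,
         (x2 - x1) / im_w, (y2 - y1) / im_h],
        axis=1,
    )
    # Result: (num_boxes, 2054), ready for torch.from_numpy(...) above.
    return np.concatenate([region_feats, spatial], axis=1).astype(np.float32)
```

Normalizing the geometry by the image size keeps the spatial features resolution-independent.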

## Citations 🧾

Please consider citing the original project and the VinVL paper:


```bibtex
@misc{han2021image,
      title={Image Scene Graph Generation (SGG) Benchmark},
      author={Xiaotian Han and Jianwei Yang and Houdong Hu and Lei Zhang and Jianfeng Gao and Pengchuan Zhang},
      year={2021},
      eprint={2107.12604},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

@inproceedings{zhang2021vinvl,
  title={VinVL: Revisiting visual representations in vision-language models},
  author={Zhang, Pengchuan and Li, Xiujun and Hu, Xiaowei and Yang, Jianwei and Zhang, Lei and Wang, Lijuan and Choi, Yejin and Gao, Jianfeng},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={5579--5588},
  year={2021}
}
```