---
|
license: apache-2.0 |
|
datasets: |
|
- coco |
|
- conceptual-caption |
|
- sbu |
|
- flickr30k |
|
- vqa |
|
- gqa |
|
- vg-qa |
|
- open-images |
|
|
|
library_name: pytorch |
|
tags: |
|
- pytorch |
|
- image-to-text |
|
--- |
|
|
|
# Model Card: VinVL for Captioning 🖼️
|
|
|
[Microsoft's VinVL](https://github.com/microsoft/Oscar) base model, fine-tuned for the **image caption generation** downstream task.
|
|
|
|
|
# COCO Test set metrics 📈
|
|
|
Scores reported by the authors (Table 7, cross-entropy optimization):
|
|
|
| BLEU-4 | METEOR | CIDEr | SPICE |
|
|--------|--------|-------|-------| |
|
| 0.38 | 0.30 | 1.29 | 0.23 | |
|
|
|
|
|
# Usage and Installation: |
|
|
|
More info about how to install and use this model can be found here: [michelecafagna26/VinVL](https://github.com/michelecafagna26/VinVL)
|
|
|
# Feature extraction ⚙️
|
|
|
This model relies on a separate visual backbone to extract image features; a sketch of the expected feature format follows the links below.
|
|
|
More info about: |
|
- the model: [michelecafagna26/vinvl_vg_x152c4](https://huggingface.co/michelecafagna26/vinvl_vg_x152c4) |
|
- the usage and installation: [michelecafagna26/vinvl-visualbackbone](https://github.com/michelecafagna26/vinvl-visualbackbone)
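
The Quick start below assumes region features have already been extracted with that backbone. As a purely illustrative sketch (random values standing in for real extractor output, and assuming the default VinVL feature size of 2054, i.e. a 2048-dim region descriptor concatenated with 6 box-geometry values), this is the array layout the captioning model expects:

```python
import numpy as np

# Illustrative stand-in for the backbone output: one image's region features.
# Real features come from the vinvl-visualbackbone extractor linked above.
num_boxes, feat_size = 10, 2054  # feat_size assumed: 2048 visual dims + 6 box-geometry dims
feat_obj = np.random.rand(num_boxes, feat_size).astype(np.float32)
```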
|
|
|
# Quick start: 🚀
|
|
|
```python |
|
import torch

from transformers.pytorch_transformers import BertConfig, BertTokenizer
|
from oscar.modeling.modeling_bert import BertForImageCaptioning |
|
from oscar.wrappers import OscarTensorizer |
|
|
|
ckpt = "path/to/the/checkpoint" |
|
device = "cuda" if torch.cuda.is_available() else "cpu" |
|
|
|
# original code |
|
config = BertConfig.from_pretrained(ckpt) |
|
tokenizer = BertTokenizer.from_pretrained(ckpt) |
|
model = BertForImageCaptioning.from_pretrained(ckpt, config=config).to(device) |
|
|
|
# This takes care of the preprocessing |
|
tensorizer = OscarTensorizer(tokenizer=tokenizer, device=device) |
|
|
|
# feat_obj: numpy array of shape (num_boxes, feat_size) from the feature extractor

# feat_size is 2054 by default in VinVL; unsqueeze(0) adds the batch dimension

visual_features = torch.from_numpy(feat_obj).to(device).unsqueeze(0)
|
|
|
# labels are the object tags, usually produced by the feature extractor alongside the features
|
labels = [['boat', 'boat', 'boat', 'bottom', 'bush', 'coat', 'deck', 'deck', 'deck', 'dock', 'hair', 'jacket']] |
|
|
|
inputs = tensorizer.encode(visual_features, labels=labels) |
|
outputs = model(**inputs) |
|
|
|
pred = tensorizer.decode(outputs) |
|
|
|
# the output looks like this: |
|
# pred = {0: [{'caption': 'a red and white boat traveling down a river next to a small boat.', 'conf': 0.7070220112800598}]}
|
``` |
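
Given the dictionary layout shown in the comment above (image index mapped to a list of caption candidates), the top caption can be read out as follows; a small usage sketch, assuming a single image in the batch:

```python
# Top-ranked caption for the first (and only) image, per the layout above
best = pred[0][0]
print(best["caption"])  # e.g. "a red and white boat traveling down a river next to a small boat."
print(best["conf"])     # confidence score attached to the generated caption
```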
|
|
|
# Citations 🧾
|
|
|
Please consider citing the original project and the VinVL paper.
|
|
|
```BibTeX |
|
|
|
@misc{han2021image, |
|
title={Image Scene Graph Generation (SGG) Benchmark}, |
|
author={Xiaotian Han and Jianwei Yang and Houdong Hu and Lei Zhang and Jianfeng Gao and Pengchuan Zhang}, |
|
year={2021}, |
|
eprint={2107.12604}, |
|
archivePrefix={arXiv}, |
|
primaryClass={cs.CV} |
|
} |
|
``` |
|
```BibTeX |
|
@inproceedings{zhang2021vinvl, |
|
title={{VinVL}: Revisiting visual representations in vision-language models},
|
author={Zhang, Pengchuan and Li, Xiujun and Hu, Xiaowei and Yang, Jianwei and Zhang, Lei and Wang, Lijuan and Choi, Yejin and Gao, Jianfeng}, |
|
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition}, |
|
pages={5579--5588}, |
|
year={2021} |
|
} |
|
``` |
|
|