|
---
license: apache-2.0
datasets:
- michelecafagna26/hl
language:
- en
metrics:
- sacrebleu
- rouge
- meteor
- spice
- cider
library_name: pytorch
tags:
- pytorch
- image-to-text
---
|
|
|
# Model Card: VinVL for Captioning 🖼️
|
|
|
[Microsoft's VinVL](https://github.com/microsoft/Oscar) base fine-tuned on the [HL dataset](https://arxiv.org/abs/2302.12189?context=cs.CL) for the **scene description generation** downstream task.
|
|
|
# Model fine-tuning 🏋️
|
|
|
The model has been fine-tuned for 10 epochs on the scene captions of the [HL dataset](https://arxiv.org/abs/2302.12189?context=cs.CL) (available on the 🤗 Hub: [michelecafagna26/hl](https://huggingface.co/datasets/michelecafagna26/hl)).
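
For reference, the scene captions used for fine-tuning can be inspected directly from the Hub. A minimal sketch, assuming the `datasets` library is installed; the `captions`/`scene` field names are taken from the dataset card and should be treated as assumptions:

```python
from datasets import load_dataset

# load the HL dataset from the Hugging Face Hub
hl = load_dataset("michelecafagna26/hl", split="train")

# the "captions" field with a "scene" axis is an assumption based on the
# dataset card -- check the actual schema before relying on it
sample = hl[0]
print(sample["file_name"])
print(sample["captions"]["scene"])
```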
|
|
|
# Test set metrics 📊
|
|
|
Obtained with beam size 5 and max length 20:
|
|
|
| Bleu-1 | Bleu-2 | Bleu-3 | Bleu-4 | METEOR | ROUGE-L | CIDEr | SPICE |
|--------|--------|--------|--------|--------|---------|-------|-------|
| 0.68   | 0.55   | 0.45   | 0.36   | 0.36   | 0.63    | 1.42  | 0.40  |
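
To compute the same metrics on your own predictions, the `pycocoevalcap` package implements the standard COCO caption metrics, including CIDEr and SPICE. A minimal sketch for CIDEr, assuming `pycocoevalcap` is installed (its `PTBTokenizer` also requires Java); the example captions are placeholders:

```python
from pycocoevalcap.cider.cider import Cider
from pycocoevalcap.tokenizer.ptbtokenizer import PTBTokenizer

# references and predictions keyed by image id, in COCO-eval format
gts = {0: [{"caption": "a group of people on a boat at the dock"}]}
res = {0: [{"caption": "a boat docked in a harbor"}]}

# PTB tokenization, as used by the official COCO caption evaluation
tokenizer = PTBTokenizer()
gts, res = tokenizer.tokenize(gts), tokenizer.tokenize(res)

# corpus-level score plus one score per image
score, per_image = Cider().compute_score(gts, res)
print(f"CIDEr: {score:.2f}")
```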
|
|
|
|
|
# Usage and Installation
|
|
|
More info about how to install and use this model can be found here: [michelecafagna26/VinVL](https://github.com/michelecafagna26/VinVL)
|
|
|
# Feature extraction ⚙️
|
|
|
This model relies on a separate visual backbone to extract the region features and object tags it takes as input (see the loading sketch after this list). More info about:

- the model: [michelecafagna26/vinvl_vg_x152c4](https://huggingface.co/michelecafagna26/vinvl_vg_x152c4)
- the usage: [michelecafagna26/vinvl-visualbackbone](https://github.com/michelecafagna26/vinvl-visualbackbone)
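
The features and tags produced by the backbone can then be passed to the captioning model as in the quick start below. A minimal sketch, assuming the extraction step saved its output to a NumPy `.npz` file; the file name and keys (`features`, `labels`) are hypothetical:

```python
import numpy as np

# hypothetical dump written by the feature extraction step:
# "features" -> (num_boxes, 2054) region features, "labels" -> object tags
extraction = np.load("image_features.npz", allow_pickle=True)

feat_obj = extraction["features"]      # used as visual_features below
labels = [list(extraction["labels"])]  # one list of tags per image
```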
|
|
|
# Quick start 🚀
|
|
|
```python
import torch
from transformers.pytorch_transformers import BertConfig, BertTokenizer
from oscar.modeling.modeling_bert import BertForImageCaptioning
from oscar.wrappers import OscarTensorizer

ckpt = "path/to/the/checkpoint"
device = "cuda" if torch.cuda.is_available() else "cpu"

# load config, tokenizer, and model from the checkpoint (original Oscar code)
config = BertConfig.from_pretrained(ckpt)
tokenizer = BertTokenizer.from_pretrained(ckpt)
model = BertForImageCaptioning.from_pretrained(ckpt, config=config).to(device)

# this takes care of the preprocessing
tensorizer = OscarTensorizer(tokenizer=tokenizer, device=device)

# feat_obj is a numpy array of shape (num_boxes, feat_size) produced by the
# feature extractor; feat_size is 2054 by default in VinVL.
# unsqueeze(0) adds the batch dimension: (1, num_boxes, feat_size)
visual_features = torch.from_numpy(feat_obj).to(device).unsqueeze(0)

# object tags (labels) are usually produced by the feature extractor as well
labels = [['boat', 'boat', 'boat', 'bottom', 'bush', 'coat', 'deck', 'deck', 'deck', 'dock', 'hair', 'jacket']]

inputs = tensorizer.encode(visual_features, labels=labels)
outputs = model(**inputs)

pred = tensorizer.decode(outputs)

# the output looks like this:
# pred = {0: [{'caption': 'in a library', 'conf': 0.7070220112800598}]}
```
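
The decoder returns a dictionary mapping each image index in the batch to its candidate captions, so the top-scoring caption can be read off directly:

```python
# best-scoring caption for the first (and only) image in the batch
best = pred[0][0]
print(best["caption"], best["conf"])
```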
|
|
|
# Citations 🧾
|
|
|
VinVL model fine-tuned on scene descriptions:
|
|
|
```BibTeX
@inproceedings{cafagna-etal-2022-understanding,
    title = "Understanding Cross-modal Interactions in {V}{\&}{L} Models that Generate Scene Descriptions",
    author = "Cafagna, Michele and
      Deemter, Kees van and
      Gatt, Albert",
    booktitle = "Proceedings of the Workshop on Unimodal and Multimodal Induction of Linguistic Structures (UM-IoS)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.umios-1.6",
    pages = "56--72",
    abstract = "Image captioning models tend to describe images in an object-centric way, emphasising visible objects. But image descriptions can also abstract away from objects and describe the type of scene depicted. In this paper, we explore the potential of a state of the art Vision and Language model, VinVL, to caption images at the scene level using (1) a novel dataset which pairs images with both object-centric and scene descriptions. Through (2) an in-depth analysis of the effect of the fine-tuning, we show (3) that a small amount of curated data suffices to generate scene descriptions without losing the capability to identify object-level concepts in the scene; the model acquires a more holistic view of the image compared to when object-centric descriptions are generated. We discuss the parallels between these results and insights from computational and cognitive science research on scene perception.",
}
```
|
|
|
HL Dataset paper: |
|
|
|
```BibTeX
@inproceedings{cafagna2023hl,
    title = {{HL} {D}ataset: {V}isually-grounded {D}escription of {S}cenes, {A}ctions and {R}ationales},
    author = {Cafagna, Michele and van Deemter, Kees and Gatt, Albert},
    booktitle = {Proceedings of the 16th International Natural Language Generation Conference (INLG'23)},
    address = {Prague, Czech Republic},
    year = {2023}
}
```
|
|
|
Please also consider citing the original project and the VinVL paper:
|
|
|
```BibTeX
@misc{han2021image,
    title = {Image Scene Graph Generation (SGG) Benchmark},
    author = {Xiaotian Han and Jianwei Yang and Houdong Hu and Lei Zhang and Jianfeng Gao and Pengchuan Zhang},
    year = {2021},
    eprint = {2107.12604},
    archivePrefix = {arXiv},
    primaryClass = {cs.CV}
}

@inproceedings{zhang2021vinvl,
    title = {Vinvl: Revisiting visual representations in vision-language models},
    author = {Zhang, Pengchuan and Li, Xiujun and Hu, Xiaowei and Yang, Jianwei and Zhang, Lei and Wang, Lijuan and Choi, Yejin and Gao, Jianfeng},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
    pages = {5579--5588},
    year = {2021}
}
```