---
license: apache-2.0
datasets:
- michelecafagna26/hl
language:
- en
metrics:
- sacrebleu
- rouge
- meteor
- spice
- cider

library_name: pytorch
tags:
- pytorch
- image-to-text
---

# Model Card: VinVL for Captioning 🖼️

[Microsoft's VinVL](https://github.com/microsoft/Oscar) base fine-tuned on the [HL dataset](https://arxiv.org/abs/2302.12189?context=cs.CL) for the **scene description generation** downstream task.

# Model fine-tuning 🏋️

The model has been fine-tuned for 10 epochs on the scene captions of the [HL dataset](https://arxiv.org/abs/2302.12189?context=cs.CL) (available on the 🤗 Hub: [michelecafagna26/hl](https://huggingface.co/datasets/michelecafagna26/hl)).
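
The training data can be pulled straight from the Hub to inspect it. A minimal sketch using the `datasets` library; the exact column layout (e.g. a `captions` field with a scene axis) is an assumption, so check the features on your end:

```python
from datasets import load_dataset

# Download the HL dataset from the Hugging Face Hub
ds = load_dataset("michelecafagna26/hl")

# Peek at the schema and one training example before relying on
# any particular column names
print(ds["train"].features)
print(ds["train"][0])
```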

# Test set metrics 📈

Obtained with beam size 5 and max length 20.

| Bleu-1 | Bleu-2 | Bleu-3 | Bleu-4 | METEOR | ROUGE-L | CIDEr | SPICE |
|--------|--------|--------|--------|--------|---------|-------|-------|
|  0.68  |  0.55  |  0.45  |  0.36  |  0.36  |  0.63   |  1.42 |  0.40 |
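
For orientation, BLEU figures like those above can be reproduced with `sacrebleu` on the generated captions. The snippet below is a minimal sketch with placeholder captions, not the exact evaluation pipeline behind this table (which also involves COCO-style scorers for METEOR, ROUGE-L, CIDEr, and SPICE):

```python
import sacrebleu

# Placeholder model outputs and references, aligned by index
hypotheses = ["a harbour full of boats"]
references = [["a harbour with boats docked at the pier"]]  # one reference stream

# corpus_bleu takes a list of hypotheses and a list of reference
# streams, each stream the same length as the hypothesis list
bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(bleu.score)  # corpus-level BLEU on a 0-100 scale
```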


# Usage and Installation

More info about how to install and use this model can be found here: [michelecafagna26/VinVL](https://github.com/michelecafagna26/VinVL)

# Feature extraction ⛏️

This model relies on a separate visual backbone to extract image features.
More info about:
- the model: [michelecafagna26/vinvl_vg_x152c4](https://huggingface.co/michelecafagna26/vinvl_vg_x152c4)
- the usage: [michelecafagna26/vinvl-visualbackbone](https://github.com/michelecafagna26/vinvl-visualbackbone)
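
The quick-start snippet below consumes the extractor's output: per-region features plus the detected object tags. As a rough sketch of the expected shapes (the variable names are placeholders, not the backbone's actual API):

```python
import numpy as np

num_boxes = 12     # number of detected regions in the image
feat_size = 2054   # 2048-d region feature + 6 box-geometry values (VinVL default)

# Stand-in for the features the visual backbone would return
feat_obj = np.random.rand(num_boxes, feat_size).astype(np.float32)

# Object tags detected by the backbone, one list per image
labels = [["boat", "dock", "deck"]]
```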

# Quick start 🚀

```python
import torch

from transformers.pytorch_transformers import BertConfig, BertTokenizer
from oscar.modeling.modeling_bert import BertForImageCaptioning
from oscar.wrappers import OscarTensorizer

ckpt = "path/to/the/checkpoint"
device = "cuda" if torch.cuda.is_available() else "cpu"

# original code
config = BertConfig.from_pretrained(ckpt)
tokenizer = BertTokenizer.from_pretrained(ckpt)
model = BertForImageCaptioning.from_pretrained(ckpt, config=config).to(device)

# This takes care of the preprocessing
tensorizer = OscarTensorizer(tokenizer=tokenizer, device=device)

# feat_obj is the numpy array of region features returned by the visual
# backbone, with shape (num_boxes, feat_size); feat_size is 2054 by default
# in VinVL. unsqueeze(0) adds the batch dimension.
visual_features = torch.from_numpy(feat_obj).to(device).unsqueeze(0)

# labels (object tags) are usually produced by the feature extractor
labels = [['boat', 'boat', 'boat', 'bottom', 'bush', 'coat', 'deck', 'deck', 'deck', 'dock', 'hair', 'jacket']]

inputs = tensorizer.encode(visual_features, labels=labels)
outputs = model(**inputs)

pred = tensorizer.decode(outputs)

# the output looks like this:
# pred = {0: [{'caption': 'in a library', 'conf': 0.7070220112800598}]}
```
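
Given the output structure shown above, the top caption and its confidence can be read out directly:

```python
# pred maps image index -> ranked list of candidate captions
caption = pred[0][0]["caption"]     # 'in a library'
confidence = pred[0][0]["conf"]     # 0.707...
```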

# Citations 🧾

VinVL model fine-tuned on scene descriptions:

```BibTeX
@inproceedings{cafagna-etal-2022-understanding,
    title = "Understanding Cross-modal Interactions in {V}{\&}{L} Models that Generate Scene Descriptions",
    author = "Cafagna, Michele  and
      Deemter, Kees van  and
      Gatt, Albert",
    booktitle = "Proceedings of the Workshop on Unimodal and Multimodal Induction of Linguistic Structures (UM-IoS)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.umios-1.6",
    pages = "56--72",
    abstract = "Image captioning models tend to describe images in an object-centric way, emphasising visible objects. But image descriptions can also abstract away from objects and describe the type of scene depicted. In this paper, we explore the potential of a state of the art Vision and Language model, VinVL, to caption images at the scene level using (1) a novel dataset which pairs images with both object-centric and scene descriptions. Through (2) an in-depth analysis of the effect of the fine-tuning, we show (3) that a small amount of curated data suffices to generate scene descriptions without losing the capability to identify object-level concepts in the scene; the model acquires a more holistic view of the image compared to when object-centric descriptions are generated. We discuss the parallels between these results and insights from computational and cognitive science research on scene perception.",
}
```

HL Dataset paper:

```BibTeX
@inproceedings{cafagna2023hl,
  title={{HL} {D}ataset: {V}isually-grounded {D}escription of {S}cenes, {A}ctions and {R}ationales},
  author={Cafagna, Michele and van Deemter, Kees and Gatt, Albert},
  booktitle={Proceedings of the 16th International Natural Language Generation Conference (INLG'23)},
  address={Prague, Czech Republic},
  year={2023}
}
```

Please consider citing the original project and the VinVL paper:

```BibTeX
@misc{han2021image,
      title={Image Scene Graph Generation (SGG) Benchmark}, 
      author={Xiaotian Han and Jianwei Yang and Houdong Hu and Lei Zhang and Jianfeng Gao and Pengchuan Zhang},
      year={2021},
      eprint={2107.12604},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

@inproceedings{zhang2021vinvl,
  title={Vinvl: Revisiting visual representations in vision-language models},
  author={Zhang, Pengchuan and Li, Xiujun and Hu, Xiaowei and Yang, Jianwei and Zhang, Lei and Wang, Lijuan and Choi, Yejin and Gao, Jianfeng},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={5579--5588},
  year={2021}
}
```