---
task_categories:
  - image-to-text
  - image-classification
  - visual-question-answering
  - sentence-similarity
language:
  - en
tags:
  - image captioning
  - language grounding
  - visual semantic
  - semantic similarity
pretty_name: 'image captioning language grounding visual semantic'
---

Update: Oct 2023

Added v2 with the recent SoTA SwinV2 classifier, for both soft- and hard-label visual_caption_cosine_score_v2, with the person label (thresholds 0.2, 0.3, and 0.4).

## Introduction

Modern image captioning relies heavily on extracting knowledge from images, such as objects, to capture the concept of a static story in the image. In this paper, we propose a textual visual context dataset for captioning, in which the publicly available COCO Captions dataset (Lin et al., 2014) has been extended with information about the scene (such as objects in the image). Since this information has a textual form, it can be used to leverage any NLP task, such as text similarity or semantic relation methods, in captioning systems, either as an end-to-end training strategy or as a post-processing approach.
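
As a hedged illustration of the post-processing idea (not the method used in the paper's experiments), the sketch below re-ranks candidate captions by their cosine similarity to the textual visual context with an off-the-shelf sentence-similarity model; the model name and the candidate strings are placeholders, not part of the released dataset.

```python
from sentence_transformers import SentenceTransformer, util

# Placeholder sentence-similarity model; any sentence-transformers checkpoint works.
model = SentenceTransformer("all-MiniLM-L6-v2")

def rerank_by_visual_context(candidates, visual_context):
    """Sort candidate captions by cosine similarity to the textual visual context."""
    cand_emb = model.encode(candidates, convert_to_tensor=True)
    ctx_emb = model.encode(visual_context, convert_to_tensor=True)
    scores = util.cos_sim(cand_emb, ctx_emb).squeeze(-1).tolist()
    return sorted(zip(candidates, scores), key=lambda pair: pair[1], reverse=True)

# Illustrative candidates from a hypothetical caption generator.
print(rerank_by_visual_context(
    ["a man riding a horse", "a man riding a surfboard on a wave"],
    "surfboard",
))
```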

Please refer to the project page and GitHub repository for more information.

For a quick start, please have a look at this demo and the pre-trained model with thresholds (th) 0.2, 0.3, and 0.4.

## Overview

We enrich COCO Captions with textual visual context information. We use ResNet152, CLIP, and Faster R-CNN to extract object information for each image. We apply three filtering approaches to ensure the quality of the dataset: (1) threshold, to filter out predictions for which the object classifier is not confident enough; (2) semantic alignment, using semantic similarity to remove duplicated objects; and (3) a semantic relatedness score as a soft label, to guarantee that the visual context and the caption are strongly related. In particular, we use Sentence-RoBERTa-sts via cosine similarity to produce a soft score, and then apply a threshold to annotate the final label (1 if the score ≥ th for th = 0.2, 0.3, 0.4, and 0 otherwise). Finally, to take advantage of the overlap between the caption and the visual context, and to extract global information, we use BERT followed by a shallow 1D-CNN (Kim, 2014) to estimate the visual relatedness score.
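
As a rough sketch (not the released pipeline code), the snippet below shows how a Sentence-RoBERTa bi-encoder from the sentence-transformers library could compute the cosine soft score between a caption and its visual context and threshold it into the hard 0/1 label described above; the checkpoint name, example strings, and default threshold are assumptions for illustration.

```python
from sentence_transformers import SentenceTransformer, util

# Assumed checkpoint: an STS-tuned Sentence-RoBERTa model; swap in the exact
# model used for the release if it differs.
model = SentenceTransformer("stsb-roberta-large")

def relatedness_label(caption: str, visual_context: str, th: float = 0.2):
    """Return (soft cosine score, hard 0/1 label) for one caption/context pair."""
    emb = model.encode([caption, visual_context], convert_to_tensor=True)
    soft = util.cos_sim(emb[0], emb[1]).item()
    hard = 1 if soft >= th else 0
    return soft, hard

# Illustrative example (not from the dataset files).
print(relatedness_label("two dogs playing in the snow", "dog", th=0.3))
```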

## Download

  1. Download raw data with IDs and visual context -> the original dataset with the related caption IDs from train2014
  2. Download data with cosine scores -> soft cosine labels with thresholds 0.2, 0.3, 0.4, and 0.5, and hard labels [0,1]
  3. Download overlapping visual context with captions -> overlap between the visual context and the human-annotated caption
  4. Download the dataset (TSV file) 0.0 -> raw data with hard labels, without cosine similarity, and with threshold cosine-similarity degrees of relation between the visual context and the caption = 0.2, 0.3, 0.4 (a minimal loading sketch follows this list)
  5. Download the GenderBias dataset -> man/woman replaced with the person class label
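
A minimal loading sketch for the TSV releases above, assuming a tab-separated file downloaded locally; the file name and the cosine-score column name are placeholders, so adjust them to match the file you downloaded.

```python
import pandas as pd

# Placeholder file name; use the actual TSV you downloaded from the links above.
df = pd.read_csv("visual_caption_cosine_score.tsv", sep="\t")
print(df.columns.tolist())

# Assumption: the file exposes a cosine-score column; keep only strongly
# related caption/visual-context pairs at threshold 0.3.
strong = df[df["cosine_score"] >= 0.3]
print(len(strong), "pairs with cosine score >= 0.3")
```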

## Citation

The details of this repo are described in the following paper. If you find this repo useful, please kindly cite it:

@article{sabir2023visual,
  title={Visual Semantic Relatedness Dataset for Image Captioning},
  author={Sabir, Ahmed and Moreno-Noguer, Francesc and Padr{\'o}, Llu{\'\i}s},
  journal={arXiv preprint arXiv:2301.08784},
  year={2023}
}