
Overview

We enrich the COCO-Captions dataset with textual visual context information. For each COCO-Captions image, we extract object information with ResNet152, CLIP, and Faster R-CNN. We apply three filtering steps to ensure the quality of the dataset: (1) a confidence threshold, to filter out predictions where the object classifier is not confident enough; (2) semantic alignment, which uses semantic similarity to remove duplicate objects; and (3) a semantic relatedness score used as a soft label, to guarantee that the visual context and the caption are strongly related. For this soft label we use Sentence-RoBERTa (SBERT, which uses a siamese network to derive meaningful sentence embeddings that can be compared via cosine similarity) and annotate the final label by thresholding the cosine similarity (label 1 if the score exceeds a threshold of 0.2, 0.3, or 0.4, else 0). Finally, to take advantage of the overlap between the visual context and the caption, and to extract global information from each visual, we use BERT followed by a shallow CNN (Kim, 2014). A Colab notebook is available.
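The soft-label rule described above can be sketched in a few lines. This is an illustrative approximation, not the authors' exact script: it assumes the SBERT cosine similarity between a caption and its visual context has already been computed, and the function name is hypothetical.

```python
def soft_label(cosine_score: float, threshold: float = 0.2) -> int:
    """Binarize a caption/visual-context SBERT cosine similarity.

    Returns 1 when the similarity exceeds the chosen threshold
    (0.2, 0.3, or 0.4 in this dataset), else 0.
    """
    return 1 if cosine_score > threshold else 0


# Example: a pair with cosine similarity 0.35 is a positive at th=0.3
# but a negative at th=0.4.
print(soft_label(0.35, threshold=0.3))
print(soft_label(0.35, threshold=0.4))
```

Releasing several threshold variants lets users trade precision against recall when choosing which labeled split to train on.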

Dataset

Sample

| VC1          | VC2          | VC3     | Human-annotated caption                          |
| ------------ | ------------ | ------- | ------------------------------------------------ |
| cheeseburger | plate        | hotdog  | a plate with a hamburger fries and tomatoes      |
| bakery       | dining table | website | a table having tea and a cake on it              |
| gown         | groom        | apron   | its time to cut the cake at this couples wedding |

Download

  1. Download raw data with ID and visual context -> original dataset with related caption IDs from train2014
  2. Download data with cosine score -> soft cosine label with thresholds 0.2, 0.3, 0.4, and 0.5
  3. Download overlapping visual context with caption -> overlap between the visual context and the human-annotated caption
  4. Download dataset (tsv file) 0.0 -> raw data with hard labels, without cosine similarity, and with cosine-similarity thresholds (degree of relation between the visual context and the caption) of 0.2, 0.3, and 0.4
  5. Download dataset GenderBias -> man/woman replaced with the person class label
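The "overlapping visual context" file above keeps, for each caption, the visual-context labels that literally appear in it. A rough approximation of that overlap check (illustrative only; the released data may use a more sophisticated match) could look like:

```python
def visual_overlap(visual_contexts, caption):
    """Return the visual-context labels whose words all occur in the caption.

    A label such as "dining table" matches only if every one of its
    words appears in the (lowercased, whitespace-tokenized) caption.
    """
    caption_words = set(caption.lower().split())
    return [
        vc for vc in visual_contexts
        if all(word in caption_words for word in vc.lower().split())
    ]


# Using the first sample row from the table above:
print(visual_overlap(
    ["cheeseburger", "plate", "hotdog"],
    "a plate with a hamburger fries and tomatoes",
))
```

Note that "cheeseburger" does not overlap with "hamburger" under this literal match, which is exactly the gap the SBERT relatedness score is meant to cover.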

For unsupervised learning

  1. Download CC -> caption dataset from Conceptual Captions (CC), 2M (2255927 captions)
  2. Download CC+wiki -> CC + 1M wiki, 3M (3255928)
  3. Download CC+wiki+COCO -> CC + wiki + COCO-Captions, 3.5M (366984)
  4. Download COCO-caption+wiki -> COCO-Captions + wiki, 1.4M (1413915)
  5. Download COCO-caption+wiki+CC+8Mwiki -> COCO-Captions + wiki + CC + 8M wiki, 11M (11541667)