AhmedSSabir committed
Commit
8db3cda
1 Parent(s): ebc576d

Update README.md

Files changed (1):
  1. README.md +4 -3
README.md CHANGED

@@ -11,6 +11,8 @@ Please refer to [project page](https://sabirdvd.github.io/project_page/Dataset_
 
 
 
+
+
 # Overview
 
 We enrich COCO-Caption with textual Visual Context information. We use ResNet152, CLIP,
@@ -23,9 +25,8 @@ Please refer to [project page](https://sabirdvd.github.io/project_page/Dataset_
 of the visual overlap between caption and visual context, and to extract global information, we use BERT followed by a shallow CNN (<a href="https://arxiv.org/abs/1408.5882">Kim, 2014</a>)
 to estimate the visual relatedness score.
 
-For quick start please have a look this [demo](https://github.com/ahmedssabir/Textual-Visual-Semantic-Dataset/blob/main/BERT_CNN_Visual_re_ranker_demo.ipynb)
-
-
+For quick start please have a look this [demo](https://github.com/ahmedssabir/Textual-Visual-Semantic-Dataset/blob/main/BERT_CNN_Visual_re_ranker_demo.ipynb) and [pre-trained model with th 0.2, 0.3, 0.4](https://huggingface.co/AhmedSSabir/BERT-CNN-Visual-Semantic)
+
 <!--
 ## Dataset
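The README text in this diff describes scoring caption/visual-context relatedness with BERT followed by a shallow CNN in the style of Kim (2014): convolutions of several widths over the token embeddings, max-over-time pooling, and a sigmoid output. The sketch below is a minimal numpy illustration of that scoring head only — the random weights, the stand-in embedding matrix, and the `kim_cnn_score` function name are all placeholders, not the released BERT-CNN model:

```python
import numpy as np

def kim_cnn_score(token_embeddings, filters, widths, w_out, b_out):
    """Kim (2014)-style shallow CNN over a sequence of token embeddings.

    token_embeddings: (seq_len, dim) array, standing in for BERT output
    over a caption + visual-context input (hypothetical input format).
    filters: one (n_filters, width * dim) weight matrix per filter width.
    Returns a relatedness score in (0, 1).
    """
    seq_len, dim = token_embeddings.shape
    pooled = []
    for W, w in zip(filters, widths):
        # Slide each width-w filter over all windows of w consecutive tokens.
        windows = np.stack([token_embeddings[i:i + w].ravel()
                            for i in range(seq_len - w + 1)])
        feature_maps = np.maximum(windows @ W.T, 0.0)  # ReLU activation
        pooled.append(feature_maps.max(axis=0))        # max-over-time pooling
    z = np.concatenate(pooled)
    # Sigmoid head turns pooled features into a visual relatedness score.
    return 1.0 / (1.0 + np.exp(-(z @ w_out + b_out)))

rng = np.random.default_rng(0)
dim, widths, n_f = 8, [2, 3], 4
emb = rng.normal(size=(10, dim))                        # stand-in for BERT embeddings
filters = [rng.normal(size=(n_f, w * dim)) * 0.1 for w in widths]
w_out, b_out = rng.normal(size=len(widths) * n_f) * 0.1, 0.0
score = kim_cnn_score(emb, filters, widths, w_out, b_out)
```

A score like this could then be compared against a threshold (the linked checkpoint mentions th 0.2, 0.3, 0.4) to decide whether a visual context is related enough to keep when re-ranking captions.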