AhmedSSabir committed on
Commit 5c6ec93
1 Parent(s): 8db3cda

Update README.md

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -9,7 +9,7 @@ such as text similarity or semantic relation methods, into captioning systems, e
 
 Please refer to [project page](https://sabirdvd.github.io/project_page/Dataset_2022/index.html) and [Github](https://github.com/ahmedssabir/Visual-Semantic-Relatedness-Dataset-for-Image-Captioning) for more information. [![arXiv](https://img.shields.io/badge/arXiv-2301.08784-b31b1b.svg)](https://arxiv.org/abs/2301.08784) [![Website shields.io](https://img.shields.io/website-up-down-green-red/http/shields.io.svg)](https://ahmed.jp/project_page/Dataset_2022/index.html)
 
-
+For quick start please have a look this [demo](https://github.com/ahmedssabir/Textual-Visual-Semantic-Dataset/blob/main/BERT_CNN_Visual_re_ranker_demo.ipynb) and [pre-trained model with th 0.2, 0.3, 0.4](https://huggingface.co/AhmedSSabir/BERT-CNN-Visual-Semantic)
 
 
 
@@ -25,7 +25,7 @@ Please refer to [project page](https://sabirdvd.github.io/project_page/Dataset_
 of the visual overlap between caption and visual context, and to extract global information, we use BERT followed by a shallow CNN (<a href="https://arxiv.org/abs/1408.5882">Kim, 2014</a>)
 to estimate the visual relatedness score.
 
-For quick start please have a look this [demo](https://github.com/ahmedssabir/Textual-Visual-Semantic-Dataset/blob/main/BERT_CNN_Visual_re_ranker_demo.ipynb) and [pre-trained model with th 0.2, 0.3, 0.4](https://huggingface.co/AhmedSSabir/BERT-CNN-Visual-Semantic)
+
 
 <!--
 ## Dataset
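
The commit above only moves the quick-start pointer, but the README text it touches describes the scoring model: BERT followed by a shallow CNN in the style of Kim (2014), producing a visual relatedness score between a caption and its visual context. The sketch below is a minimal illustration of that architecture under stated assumptions, not the authors' released implementation: the class name `BertCNNRelatednessScorer`, the kernel sizes, the filter count, and the choice of `bert-base-uncased` are all assumptions; the actual weights and the thresholded variants (th 0.2, 0.3, 0.4) are in the linked AhmedSSabir/BERT-CNN-Visual-Semantic repository and demo notebook.

```python
# Minimal sketch of a BERT + shallow CNN (Kim, 2014) relatedness scorer,
# as described in the README. Names, kernel sizes, and "bert-base-uncased"
# are illustrative assumptions; the released checkpoints live at
# https://huggingface.co/AhmedSSabir/BERT-CNN-Visual-Semantic.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer


class BertCNNRelatednessScorer(nn.Module):
    def __init__(self, model_name="bert-base-uncased",
                 kernel_sizes=(3, 4, 5), num_filters=100):
        super().__init__()
        self.bert = AutoModel.from_pretrained(model_name)
        hidden = self.bert.config.hidden_size
        # Shallow CNN over BERT token embeddings (Kim, 2014 style).
        self.convs = nn.ModuleList(
            nn.Conv1d(hidden, num_filters, k) for k in kernel_sizes
        )
        # Single logit: how related the caption is to the visual context.
        self.classifier = nn.Linear(num_filters * len(kernel_sizes), 1)

    def forward(self, input_ids, attention_mask):
        # (batch, seq_len, hidden) -> (batch, hidden, seq_len) for Conv1d.
        tokens = self.bert(input_ids=input_ids,
                           attention_mask=attention_mask).last_hidden_state
        x = tokens.transpose(1, 2)
        # Max-pool each feature map over time, then concatenate.
        pooled = [torch.relu(conv(x)).max(dim=2).values for conv in self.convs]
        features = torch.cat(pooled, dim=1)
        return torch.sigmoid(self.classifier(features)).squeeze(-1)


if __name__ == "__main__":
    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    scorer = BertCNNRelatednessScorer()
    # Caption paired with a detected visual context (here an object label).
    enc = tokenizer("a man riding a horse", "horse", return_tensors="pt")
    with torch.no_grad():
        score = scorer(enc["input_ids"], enc["attention_mask"])
    print(f"visual relatedness score: {score.item():.3f}")
```

The sketch only mirrors the components named in the README; for trained weights, the exact input formatting, and how the score is used to re-rank candidate captions, follow the demo notebook and model card linked in the diff.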