AhmedSSabir committed
Commit 9de9db2 · 1 Parent(s): 864c482
Update README.md
README.md CHANGED
@@ -21,8 +21,7 @@ Please refer to [project page](https://sabirdvd.github.io/project_page/Dataset_
 (3) semantic relatedness score as soft-label: to guarantee the visual context and caption have a strong
 relation. In particular, we use Sentence-RoBERTa via cosine similarity to give a soft score, and then
 we use a threshold to annotate the final label (if th > 0.2, 0.3, 0.4 then 1,0). Finally, to take advantage
-of the visual overlap between caption and visual context,
-and to extract global information, we use BERT followed by a shallow CNN (<a href="https://arxiv.org/abs/1408.5882">Kim, 2014</a>)
+of the visual overlap between caption and visual context, and to extract global information, we use BERT followed by a shallow CNN (<a href="https://arxiv.org/abs/1408.5882">Kim, 2014</a>)
 to estimate the visual relatedness score.
 
 For quick start please have a look this [demo](https://github.com/ahmedssabir/Textual-Visual-Semantic-Dataset/blob/main/BERT_CNN_Visual_re_ranker_demo.ipynb)
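
As a rough illustration of the soft-label step in the changed paragraph, the sketch below scores a caption against its visual context with Sentence-BERT cosine similarity and applies the 0.2/0.3/0.4 thresholds. The checkpoint name `stsb-roberta-large` and the `soft_label` helper are illustrative assumptions, not the repository's code; the linked demo notebook contains the actual pipeline.

```python
# Minimal sketch of the soft-label step, not the repository's actual code.
# Assumption: "stsb-roberta-large" stands in for the Sentence-RoBERTa model
# mentioned in the README; soft_label() is a hypothetical helper.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("stsb-roberta-large")

def soft_label(caption: str, visual_context: str, threshold: float):
    """Return the cosine-similarity soft score and the thresholded hard label."""
    emb = model.encode([caption, visual_context], convert_to_tensor=True)
    score = util.cos_sim(emb[0], emb[1]).item()
    return score, int(score > threshold)

caption = "a man riding a wave on top of a surfboard"
visual_context = "surfboard"
for th in (0.2, 0.3, 0.4):
    score, label = soft_label(caption, visual_context, th)
    print(f"th={th}: soft score={score:.3f}, label={label}")
```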
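
The BERT-followed-by-a-shallow-CNN scorer could look roughly like the Kim (2014)-style sketch below; the class name, layer sizes, and the joint `caption [SEP] visual context` input are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of a "BERT + shallow CNN" visual-relatedness scorer
# in the spirit of Kim (2014). Layer sizes and input format are assumptions.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class BertCNNScorer(nn.Module):
    def __init__(self, bert_name="bert-base-uncased", kernel_sizes=(3, 4, 5), n_filters=100):
        super().__init__()
        self.bert = AutoModel.from_pretrained(bert_name)
        hidden = self.bert.config.hidden_size
        # One 1-D convolution per kernel size over the BERT token embeddings.
        self.convs = nn.ModuleList([nn.Conv1d(hidden, n_filters, k) for k in kernel_sizes])
        self.out = nn.Linear(n_filters * len(kernel_sizes), 1)

    def forward(self, input_ids, attention_mask):
        # (batch, seq, hidden) -> (batch, hidden, seq) for Conv1d.
        h = self.bert(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state
        h = h.transpose(1, 2)
        # Convolve, max-pool over time, concatenate (Kim, 2014), then score in [0, 1].
        pooled = [conv(h).relu().max(dim=2).values for conv in self.convs]
        return torch.sigmoid(self.out(torch.cat(pooled, dim=1)))

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
scorer = BertCNNScorer()
batch = tokenizer(["a man riding a wave on a surfboard [SEP] surfboard"],
                  return_tensors="pt", padding=True)
print(scorer(batch["input_ids"], batch["attention_mask"]))  # visual relatedness score
```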