michelecafagna26 committed on
Commit bcada79
1 Parent(s): 018238f

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -188,7 +188,7 @@ From the paper:
 
 From the paper:
 
->In this work, we tackle the issue of grounding high-level linguistic concepts in the visual modality, proposing the High-Level (HL) Dataset: a
+>In this work, we tackle the issue of **grounding high-level linguistic concepts in the visual modality**, proposing the High-Level (HL) Dataset: a
 V\&L resource aligning existing object-centric captions with human-collected high-level descriptions of images along three different axes: _scenes_, _actions_ and _rationales_.
 The high-level captions capture the human interpretation of the scene, providing abstract linguistic concepts complementary to object-centric captions
 >used in current V\&L datasets, e.g. in COCO. We take a step further, and we collect _confidence scores_ to distinguish commonsense assumptions