
HNC: Leveraging Hard Negative Captions towards Models with Fine-Grained Visual-Linguistic Comprehension Capabilities

This repository contains the dataset introduced in the paper HNC: Leveraging Hard Negative Captions towards Models with Fine-Grained Visual-Linguistic Comprehension Capabilities, published at CoNLL 2023 by Esra Dönmez*, Pascal Tilli*, Hsiu-Yu Yang*, Ngoc Thang Vu and Carina Silberer.

Paper link: https://aclanthology.org/2023.conll-1.24.pdf

GitHub

Link to the official implementation: https://github.com/DigitalPhonetics/hard-negative-captions

Data

Download the automatically generated train and validation set as well as the human-annotated test set from DaRUS: https://doi.org/10.18419/darus-4341
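Once the archives from DaRUS are downloaded and extracted, the splits can be read with standard tooling. The sketch below is a minimal, non-authoritative example: the directory name, file names, and JSON layout are assumptions made for illustration, so please check the DaRUS record for the actual archive structure before adapting it.

```python
# Minimal sketch for loading HNC splits after downloading them from DaRUS.
# NOTE: the file names and JSON layout are assumptions for illustration only;
# see https://doi.org/10.18419/darus-4341 for the actual archive contents.
import json
from pathlib import Path

DATA_DIR = Path("hnc_data")  # directory where the DaRUS download was extracted


def load_split(name: str):
    """Load one split (e.g. 'train', 'val', 'test') from a JSON file."""
    path = DATA_DIR / f"{name}.json"  # hypothetical file name
    with path.open(encoding="utf-8") as f:
        return json.load(f)


if __name__ == "__main__":
    train = load_split("train")
    print(f"Loaded {len(train)} training examples")
```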

Abstract

Image-Text-Matching (ITM) is one of the de facto methods of learning generalized representations from a large corpus in Vision and Language (VL). However, due to the weak association between web-collected image–text pairs, models fail to show fine-grained understanding of the combined semantics of these modalities. To this end, we propose Hard Negative Captions (HNC): an automatically created dataset containing foiled hard negative captions for ITM training towards achieving fine-grained cross-modal comprehension in VL. Additionally, we provide a challenging manually-created test set for benchmarking models on a fine-grained cross-modal mismatch with varying levels of compositional complexity. Our results show the effectiveness of training on HNC by improving the models' zero-shot capabilities in detecting mismatches on diagnostic tasks and performing robustly under noisy visual input scenarios. Also, we demonstrate that HNC models yield a comparable or better initialization for fine-tuning. Our code and data are publicly available.

Citation

@inproceedings{hnc,
    title = "{HNC}: Leveraging Hard Negative Captions towards Models with Fine-Grained Visual-Linguistic Comprehension Capabilities",
    author = {D{\"o}nmez, Esra  and
      Tilli, Pascal  and
      Yang, Hsiu-Yu  and
      Vu, Ngoc Thang  and
      Silberer, Carina},
    booktitle = "Proceedings of the 27th Conference on Computational Natural Language Learning (CoNLL)",
    year = "2023",
    address = "Singapore",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2023.conll-1.24",
    doi = "10.18653/v1/2023.conll-1.24",
    pages = "364--388",
}