Dataset: jaagli/en-cldi
Tasks: Image Classification
Modalities: Image
Formats: parquet
Languages: English
Size: 10K - 100K
ArXiv: 2302.06555
Dataset viewer preview (truncated): two columns, "image" (32 px width, 8.55k rows) and "label" (class label; the viewer lists 298 classes). Example classes shown in the preview include "abandon", "abdomen", and "ability".
Dataset Description
"EN-CLDI" (en-cldi) contains 1690 classes, which contains images paired with verbs and adjectives. Each word within this set is unique and paired with at least 22 images.
It is the English subset of CLDI (cross-lingual dictionary induction) dataset from (Hartmann and Søgaard, 2018).
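These statistics can be checked directly against the loaded data. The snippet below is an illustrative sketch, not part of the official card: it assumes the "train" split and the "label" column used in the loading example in the next section and shown in the preview above.

from collections import Counter
from datasets import load_dataset

# Illustrative check (assumes the "train" split and a "label" column).
common_words = load_dataset("jaagli/en-cldi", split="train")
counts = Counter(common_words["label"])
print("distinct classes:", len(counts))              # the description states 1690 classes
print("smallest class size:", min(counts.values()))  # the description states at least 22 images per word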
How to Use
from datasets import load_dataset
# Load the dataset
common_words = load_dataset("jaagli/en-cldi", split="train")
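Each example exposes the two columns shown in the preview. The lines below are a minimal, hedged illustration; they assume the column names "image" and "label" and that "label" is a ClassLabel feature.

# Inspect the first example: an image plus an integer class id.
example = common_words[0]
print(example["image"], example["label"])

# Map the integer id back to its word via the ClassLabel feature.
print(common_words.features["label"].int2str(example["label"]))  # e.g. "abandon"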
Citation
@misc{li2024visionlanguagemodelsshare,
title={Do Vision and Language Models Share Concepts? A Vector Space Alignment Study},
author={Jiaang Li and Yova Kementchedjhieva and Constanza Fierro and Anders Søgaard},
year={2024},
eprint={2302.06555},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2302.06555},
}
@inproceedings{hartmann-sogaard-2018-limitations,
title = "Limitations of Cross-Lingual Learning from Image Search",
author = "Hartmann, Mareike and
S{\o}gaard, Anders",
editor = "Augenstein, Isabelle and
Cao, Kris and
He, He and
Hill, Felix and
Gella, Spandana and
Kiros, Jamie and
Mei, Hongyuan and
Misra, Dipendra",
booktitle = "Proceedings of the Third Workshop on Representation Learning for {NLP}",
month = jul,
year = "2018",
address = "Melbourne, Australia",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/W18-3021",
doi = "10.18653/v1/W18-3021",
pages = "159--163",
abstract = "Cross-lingual representation learning is an important step in making NLP scale to all the world{'}s languages. Previous work on bilingual lexicon induction suggests that it is possible to learn cross-lingual representations of words based on similarities between images associated with these words. However, that work focused (almost exclusively) on the translation of nouns only. Here, we investigate whether the meaning of other parts-of-speech (POS), in particular adjectives and verbs, can be learned in the same way. Our experiments across five language pairs indicate that previous work does not scale to the problem of learning cross-lingual representations beyond simple nouns.",
}