---
title: Vocabulary-free Image Classification
emoji: 🌍
colorFrom: green
colorTo: yellow
sdk: gradio
sdk_version: 3.33.1
python_version: '3.9'
app_file: app.py
pinned: false
---
# Vocabulary-free Image Classification
<div align="center">

[Alessandro Conti](https://scholar.google.com/citations?user=EPImyCcAAAAJ), [Enrico Fini](https://scholar.google.com/citations?user=OQMtSKIAAAAJ), [Massimiliano Mancini](https://scholar.google.com/citations?user=bqTPA8kAAAAJ), [Paolo Rota](https://scholar.google.com/citations?user=K1goGQ4AAAAJ), [Yiming Wang](https://scholar.google.com/citations?user=KBZ3zrEAAAAJ), [Elisa Ricci](https://scholar.google.com/citations?user=xf1T870AAAAJ)

</div>
Recent advances in large vision-language models have revolutionized the image classification paradigm. Despite showing impressive zero-shot capabilities, these models assume a pre-defined set of categories, a.k.a. the vocabulary, at test time for composing the textual prompts. However, such an assumption can be impractical when the semantic context is unknown and evolving. We thus formalize a novel task, termed Vocabulary-free Image Classification (VIC), where we aim to assign to an input image a class that resides in an unconstrained language-induced semantic space, without the prerequisite of a known vocabulary. VIC is challenging because the semantic space is extremely large, containing millions of concepts, including hard-to-discriminate fine-grained categories.
<div align="center">
| <img src="https://altndrr.github.io/vic/assets/images/task_left.png"> | <img src="https://altndrr.github.io/vic/assets/images/task_right.png"> |
| :-------------------------------------------------------------------: | :--------------------------------------------------------------------: |
| Vision Language Model (VLM)-based classification | Vocabulary-free Image Classification |
</div>
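To make the contrast concrete, conventional VLM-based classification requires the vocabulary to be supplied at test time. The following is an illustrative numpy sketch, not the paper's code: `text_encoder` and the embeddings are toy stand-ins for a real vision-language model such as CLIP.

```python
import numpy as np


def zero_shot_classify(image_emb, vocabulary, text_encoder):
    """Standard VLM zero-shot classification: the category names
    must be known in advance to build the textual prompts."""
    prompts = [f"a photo of a {c}" for c in vocabulary]
    text_embs = text_encoder(prompts)
    # Cosine similarity between the image and each prompt embedding.
    text_embs = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    image_emb = image_emb / np.linalg.norm(image_emb)
    return vocabulary[int(np.argmax(text_embs @ image_emb))]
```

In VIC, the `vocabulary` argument is exactly what is *not* available: the class must come from the open, language-induced semantic space instead.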
In this work, we first empirically verify that representing this semantic space by means of an external vision-language database is the most effective way to obtain semantically relevant content for classifying the image. We then propose Category Search from External Databases (CaSED), a method that exploits a pre-trained vision-language model and an external vision-language database to address VIC in a training-free manner. CaSED first extracts a set of candidate categories from captions retrieved from the database based on their semantic similarity to the image, and then assigns to the image the best-matching candidate category according to the same vision-language model. Experiments on benchmark datasets validate that CaSED outperforms other, more complex vision-language frameworks, while being efficient with far fewer parameters, paving the way for future research in this direction.
<div align="center">
| <img src="https://altndrr.github.io/vic/assets/images/method.png"> |
| :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: |
| Overview of CaSED. Given an input image, CaSED retrieves the most relevant captions from an external database and filters them to extract candidate categories. We classify in both image-to-text and text-to-text fashion, using the centroid of the retrieved captions as the textual counterpart of the input image. |
</div>
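The pipeline above can be sketched in a few lines. This is a heavily simplified illustration under stated assumptions: embeddings are plain numpy vectors standing in for VLM features, and candidate extraction is a naive word split rather than the text filtering used in the paper.

```python
import numpy as np


def cosine_sim(a, b):
    """Row-wise cosine similarity between two matrices."""
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return a @ b.T


def cased_classify(image_emb, caption_embs, captions, text_encoder, k=3):
    # 1. Retrieve the top-k captions most similar to the image.
    sims = cosine_sim(image_emb[None], caption_embs)[0]
    top = np.argsort(-sims)[:k]
    retrieved = [captions[i] for i in top]
    # 2. Extract candidate categories from the retrieved captions
    #    (naive word split here; the paper applies proper filtering).
    candidates = sorted({w for c in retrieved for w in c.lower().split()})
    # 3. Use the retrieved captions' centroid as the textual
    #    counterpart of the input image.
    centroid = caption_embs[top].mean(axis=0)
    # 4. Score candidates image-to-text and text-to-text, then pick the best.
    cand_embs = text_encoder(candidates)
    scores = (cosine_sim(image_emb[None], cand_embs)[0]
              + cosine_sim(centroid[None], cand_embs)[0])
    return candidates[int(np.argmax(scores))]
```

Note that the method is training-free: both retrieval and scoring reuse the frozen vision-language encoders, so the only external dependency is the caption database.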
## Citation
If you find this work useful, please consider citing:
```latex
@misc{conti2023vocabularyfree,
  title={Vocabulary-free Image Classification},
  author={Alessandro Conti and Enrico Fini and Massimiliano Mancini and Paolo Rota and Yiming Wang and Elisa Ricci},
  year={2023},
  eprint={2306.00917},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}
```