Zero-shot Learning

Following the introductory chapter, we will next explain zero-shot learning (ZSL) in more detail. This chapter covers the difference between zero-shot learning and generalized zero-shot learning, inductive versus transductive ZSL, an example of ZSL based on semantic embeddings, a comparison to CLIP, and commonly used ZSL evaluation datasets.

Zero-shot Learning vs. Generalized Zero-shot Learning

Zero-shot learning and generalized zero-shot learning (GZSL) are types of machine learning problems in which an image classification model must classify labels that were not included in training. ZSL and GZSL are very similar; the main difference is how the model is evaluated [2].

For ZSL, the model is evaluated purely on its ability to classify images of unseen classes: only observations of unseen classes are included in the ZSL test set. For GZSL, the model is evaluated on both seen and unseen classes, which is considered closer to real-world use cases. Overall, GZSL is more challenging because the model needs to determine whether an observation belongs to a novel class or a known class.
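To make this evaluation difference concrete, the following is a minimal Python sketch. The label arrays are hypothetical stand-ins, the accuracy is computed per sample for simplicity (standard benchmarks usually average per-class accuracy), and the harmonic mean at the end is a commonly reported GZSL summary metric rather than part of the definition above.

```python
import numpy as np

def accuracy(y_true, y_pred, classes):
    """Accuracy restricted to test samples whose true label is in `classes`."""
    mask = np.isin(y_true, list(classes))
    return float((y_pred[mask] == y_true[mask]).mean())

# Hypothetical labels: classes 0-2 were seen during training, 3-4 were not.
seen, unseen = {0, 1, 2}, {3, 4}
y_true = np.array([0, 1, 2, 3, 4, 3, 1])
y_pred = np.array([0, 1, 1, 3, 3, 3, 1])   # made-up model predictions

# ZSL: only unseen-class samples are evaluated.
zsl_acc = accuracy(y_true, y_pred, unseen)

# GZSL: both seen and unseen samples are evaluated, so the model must also
# decide whether a sample belongs to a seen or an unseen class.
acc_seen = accuracy(y_true, y_pred, seen)
acc_unseen = accuracy(y_true, y_pred, unseen)
harmonic_mean = 2 * acc_seen * acc_unseen / (acc_seen + acc_unseen)

print(zsl_acc, acc_seen, acc_unseen, harmonic_mean)
```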

Inductive Zero-shot Learning vs. Transductive Zero-shot Learning

Based on the type of training data, there are two kinds of zero-shot learning:

In inductive ZSL, the model is trained exclusively on datasets containing only seen classes, without access to any data from the unseen classes. The learning process focuses on extracting and generalizing patterns from the training data, which are then applied to classify instances of unseen classes. The approach assumes a clear separation between seen and unseen data during training, emphasizing the model’s ability to generalize from the training data to the unseen classes.

Transductive ZSL differs by allowing the model to have access to some information about the unseen classes during training, typically the attributes or unlabeled examples of the unseen classes, but not the labels. This approach leverages additional information about the structure of the unseen data to train a more generalizable model.
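The distinction is ultimately about which data the model may access during training. The following sketch illustrates the two setups; all arrays, shapes, and variable names are hypothetical stand-ins for pre-extracted image features and class attribute vectors.

```python
import numpy as np

# Hypothetical data: pre-extracted image features and class attribute vectors.
X_seen = np.random.randn(1000, 2048)             # labeled images of seen classes
y_seen = np.random.randint(0, 40, 1000)          # labels for the 40 seen classes
attr_seen = np.random.randn(40, 85)              # attribute vectors of seen classes
attr_unseen = np.random.randn(10, 85)            # attribute vectors of unseen classes
X_unseen_unlabeled = np.random.randn(200, 2048)  # unlabeled images of unseen classes

# Inductive ZSL: training uses only seen-class data; the semantic descriptions
# of unseen classes are consulted only at test time.
inductive_training_data = (X_seen, y_seen, attr_seen)

# Transductive ZSL: training may additionally use information about the unseen
# classes, e.g. their attribute vectors and/or unlabeled images (never labels).
transductive_training_data = (X_seen, y_seen, attr_seen,
                              attr_unseen, X_unseen_unlabeled)
```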

In the next section, we will follow the main concept of a classic research paper from Google [1] and give an example of inductive ZSL.

Zero-shot Learning Example with Semantic Embeddings

As described in the previous chapter, developing a successful ZSL model requires more than just images and class labels; it is nearly impossible to classify unseen classes based on images alone. ZSL utilizes auxiliary information, such as semantic attributes or embeddings, to help classify images from unseen classes. Before diving into the details, the following is a short introduction to semantic embeddings for readers unfamiliar with the term.

What are Semantic Embeddings?

Semantic embeddings are vector representations of semantic information, i.e., information that carries the meaning and interpretation of data. For example, the information conveyed by spoken language is a type of semantic information. Semantic information includes not only the direct meanings of words or sentences, but also their contextual and cultural implications.

Embedding refers to the process of mapping semantic information into vectors of real numbers. Semantic embeddings are often learned with unsupervised machine learning models, such as Word2Vec [3] or GloVe [4]. All types of textual information, such as words, phrases, or sentences, can be transformed into numerical vectors with these learned mappings. Semantic embeddings describe words in a high-dimensional space where the distance and direction between vectors reflect their semantic relationships. This enables machines to capture the usage, synonyms, and context of each word through mathematical operations on the word embeddings.
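As a small illustration of such operations, the sketch below compares toy word vectors with cosine similarity; the four-dimensional vectors are made-up values rather than real Word2Vec or GloVe embeddings, which typically have hundreds of dimensions.

```python
import numpy as np

# Toy 4-dimensional embeddings (hypothetical values for illustration only).
embeddings = {
    "cat":   np.array([0.9, 0.1, 0.3, 0.0]),
    "tiger": np.array([0.8, 0.2, 0.4, 0.1]),
    "car":   np.array([0.1, 0.9, 0.0, 0.4]),
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Semantically related words should end up closer in the embedding space.
print(cosine_similarity(embeddings["cat"], embeddings["tiger"]))  # high
print(cosine_similarity(embeddings["cat"], embeddings["car"]))    # lower
```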

Enable Zero-shot Learning with Semantic Embeddings

During training, a ZSL model learns to associate the visual features of images from seen classes with their corresponding semantic embeddings. The objective is to minimize the distance between the projected visual features of an image and the semantic embedding of its class. This process helps the model to learn the correspondence between images and semantic information.
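The following is a minimal PyTorch sketch of this training objective, assuming pre-extracted image features and per-class semantic embeddings are available; the random tensors are stand-ins, and the linear projection with a cosine-distance loss is one common modeling choice rather than the exact method of [1].

```python
import torch
import torch.nn as nn

# Hypothetical dimensions: 2048-d image features (e.g., from a CNN backbone)
# projected into a 300-d semantic embedding space (e.g., word vectors).
feat_dim, emb_dim, num_seen = 2048, 300, 40

projection = nn.Linear(feat_dim, emb_dim)            # learned mapping into semantic space
optimizer = torch.optim.Adam(projection.parameters(), lr=1e-3)

# Random stand-ins for a mini-batch of seen-class data.
image_features = torch.randn(32, feat_dim)           # visual features
class_embeddings = torch.randn(num_seen, emb_dim)    # one embedding per seen class
labels = torch.randint(0, num_seen, (32,))           # seen-class labels

for step in range(100):
    projected = projection(image_features)           # project visual features
    target = class_embeddings[labels]                # embedding of each image's class
    # Minimizing the distance = maximizing cosine similarity between the two.
    loss = (1 - nn.functional.cosine_similarity(projected, target)).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```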

Since the model has learned to project image features onto the semantic space, it can attempt to classify images of unseen classes by projecting their features into the same space and comparing them to the embeddings of unseen classes. For an image of an unseen class, the model calculates its projected embedding and then searches for the nearest semantic embedding among the unseen classes. The unseen class with the nearest embedding is the predicted label for the image.
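Continuing the sketch above, inference on an unseen class can be illustrated as a nearest-neighbor search in the semantic space; again, the tensors and class names below are hypothetical stand-ins.

```python
import torch
import torch.nn as nn

emb_dim, num_unseen = 300, 10
# In practice this would be the projection trained in the previous sketch;
# here a fresh layer stands in for it.
projection = nn.Linear(2048, emb_dim)

# Random stand-ins: features of one test image and embeddings of unseen classes.
test_feature = torch.randn(1, 2048)
unseen_embeddings = torch.randn(num_unseen, emb_dim)
unseen_names = [f"unseen_class_{i}" for i in range(num_unseen)]  # hypothetical names

with torch.no_grad():
    projected = projection(test_feature)                                   # project into semantic space
    sims = nn.functional.cosine_similarity(projected, unseen_embeddings)   # compare to every unseen class
    predicted = unseen_names[int(sims.argmax())]                           # nearest embedding wins

print(predicted)
```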

In summary, semantic embeddings are central to ZSL, enabling models to extend their classification capabilities beyond the classes seen during training. This approach allows for a more flexible and scalable way to classify the vast number of real-world categories without needing labeled training data for every class.

Comparison to CLIP

The relationship between ZSL and CLIP (Contrastive Language–Image Pre-training) [5] stems from their shared goal of enabling models to recognize and classify images of categories that were not present in the training data. However, CLIP represents a significant advancement and a broader application of the principles underlying ZSL, leveraging a novel approach to learning and generalization.

The relationship between CLIP and ZSL can be described as follows: classic ZSL methods rely on predefined semantic information, such as attribute vectors or word embeddings, for a fixed set of classes, whereas CLIP learns a joint image-text embedding space from large collections of image-caption pairs. At inference time, CLIP classifies an image by comparing its embedding with the embeddings of arbitrary textual class descriptions, which makes it a large-scale, flexible realization of the same zero-shot idea.
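As an illustration, the following is a minimal sketch of zero-shot classification with the open-source CLIP package released with [5], assuming it and PyTorch are installed; the image path example.jpg and the candidate text prompts are placeholders.

```python
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)   # pretrained CLIP model

# "example.jpg" is a placeholder path; the candidate labels are arbitrary examples.
image = preprocess(Image.open("example.jpg")).unsqueeze(0).to(device)
texts = clip.tokenize(["a photo of a dog",
                       "a photo of a cat",
                       "a photo of a car"]).to(device)

with torch.no_grad():
    image_features = model.encode_image(image)    # image embedding
    text_features = model.encode_text(texts)      # text embeddings of candidate classes
    # Cosine similarity between image and each prompt, softmaxed into probabilities.
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print(probs)  # one probability per candidate label, with no training on these classes
```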

Zero-shot Learning Evaluation Datasets

New ZSL methods are proposed every year, making it challenging to identify a superior approach due to varying evaluation methodologies. Standardized evaluation frameworks and datasets are preferred to assess different ZSL methods. A comparison study of classic ZSL methods is presented in [6]. Commonly used datasets for ZSL evaluation include:

Animals with Attributes (AwA): a dataset to benchmark transfer-learning algorithms, in particular attribute-based classification [7]. It consists of 30475 images of 50 animal classes, with six feature representations provided for each image.

Caltech-UCSD Birds-200-2011 (CUB): a dataset for fine-grained visual categorization. It contains 11788 images of 200 bird subcategories. Each image is annotated with 1 subcategory label, 15 part locations, 312 binary attributes, and 1 bounding box. In addition, ten sentence descriptions were collected for each image through Amazon Mechanical Turk; the descriptions were carefully constructed so as not to contain any subcategory information.

SUN (Scene UNderstanding): the first large-scale scene attribute database. The dataset consists of 130519 images across 899 categories and can be used for high-level scene understanding and fine-grained scene recognition.

aPascal & aYahoo (aPY): a coarse-grained dataset composed of 15339 images from 3 broad categories (animals, objects, and vehicles), further divided into a total of 32 subcategories.

ImageNet: the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) evaluates algorithms for object detection and image classification at large scale [8].

References