Now that we have trained our model like a student cramming all epochs for a test, the real test begins! We hope the knowledge the model acquired during training translates beyond the specific pictures it learned from (cat pictures, for example), allowing it to recognize unseen cats like Alice’s and Ted’s furry friends. Think of it as the model learning the essence of catness, not just the specific furry faces it saw during training. This ability to apply knowledge to new situations is called generalization, and it’s what separates a good cat model from a mere cat picture memorizer. Can you imagine an alternate universe without generalization? It’s pretty simple, actually: you’d only have to train your model on ALL the cat images in the world (assuming cats exist only on Earth), including Alice’s and Ted’s, then find a way to prevent the current cat population from breeding. So, yeah, no big deal.
Actually, we weren’t quite right when we said the model is expected to generalize to all cat pictures it hasn’t seen. It is expected to generalize to cat pictures from the same distribution as the image data it was trained on. Simply put, if you trained your model on cat selfies and then presented it with a cartoon picture of a cat, it probably won’t be able to recognize it: the two pictures come from totally different distributions, or domains. Making your cat-selfie model recognize cartoon cats is referred to as domain adaptation (we’ll briefly talk about it later). It’s like taking all the knowledge your model learned about real cats and teaching it to recognize their animated cousins.
So, we’ve gone from generalization (recognizing the unseen pictures of Alice’s and Ted’s cats) to domain adaptation (recognizing animated cat pictures). But we’re much greedier than that. You don’t want your model to recognize only your cat pictures, or Alice’s and Ted’s, or even cartoon cats. Having trained a model on cat pictures, you also want it to recognize pictures of llamas and falcons.
Well, now we’re on the turf of zero-shot learning (also known as ZSL).
Let’s warm up with a definition
Zero-shot learning is a setup in which, at test time, the model is presented with images belonging only to classes it was not exposed to during training. In other words, the training and testing class sets are disjoint.
Just a heads-up: in the classic ZSL setup, the test set contains only pictures of classes the model hasn’t seen before, not a single one from its training days. This may seem a little unrealistic; it’s like asking a student to ace an exam on material they’ve never studied.
Luckily, there’s a more pragmatic version of ZSL that doesn’t have this strict rule and is called generalized zero-shot learning, or GZSL. This more flexible approach allows the test set to include both seen and unseen classes. It’s a more realistic scenario, reflecting how things work in the real world.
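To make the distinction concrete, here is a minimal sketch in plain Python of how the two evaluation setups differ; the class names are made up for illustration.

```python
# Hypothetical classes for illustration only.
all_classes = {"cat", "dog", "horse", "llama", "falcon", "zebra"}
seen_classes = {"cat", "dog", "horse"}        # classes that appear in the training set
unseen_classes = all_classes - seen_classes   # classes the model never sees during training

def allowed_test_classes(setting):
    """Which classes may appear in the test set under each setup."""
    if setting == "ZSL":    # classic ZSL: test images come only from unseen classes
        return unseen_classes
    if setting == "GZSL":   # generalized ZSL: test images may come from any class
        return seen_classes | unseen_classes
    raise ValueError(f"unknown setting: {setting}")

print(allowed_test_classes("ZSL"))   # {'llama', 'falcon', 'zebra'}
print(allowed_test_classes("GZSL"))  # all six classes
```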
The question of whether we can make models perform well on tasks they were not explicitly trained on came soon after deep learning started to seem feasible[^1]. In 2008, the seeds of ZSL were sown with two independent papers presented at the Association for the Advancement of Artificial Intelligence (AAAI) conference. The first, titled Dataless Classification, explored the concept of zero-shot learning in the context of natural language processing (NLP). The second, titled Zero-Data Learning, focused on applying ZSL to computer vision tasks. The term zero-shot learning itself first appeared in 2009 at NeurIPS, in a paper co-authored by, wait for it… Geoff Hinton.
Let’s lay out the most important moments in ZSL:
| Year | Milestone |
|------|-----------|
| 2008 | The first sparks of (the question of) zero-shot learning |
| 2009 | The term zero-shot learning was coined |
| 2013 | The concept of Generalized ZSL was introduced[^2] |
| 2017 | The first application of the encoder-decoder paradigm to ZSL[^3] |
| 2020 | OpenAI’s GPT-3 achieves an impressive performance on zero-shot NLP[^4] |
| 2021 | OpenAI’s CLIP takes zero-shot computer vision to a whole new level[^5] |
The impact of CLIP has been profound, ushering in a new era of zero-shot computer vision research. Its versatility and performance have opened up exciting new possibilities. You could say the history of zero-shot computer vision divides into two eras: pre-CLIP and post-CLIP.
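As a teaser of the post-CLIP era, here is a minimal sketch of zero-shot image classification with CLIP via the Hugging Face `transformers` library; the image path and candidate labels are placeholders you would swap for your own.

```python
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("my_cat_photo.jpg")  # placeholder path
labels = ["a photo of a cat", "a photo of a llama", "a photo of a falcon"]

# CLIP scores the image against each text prompt; no label-specific training needed.
inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
logits = model(**inputs).logits_per_image
probs = logits.softmax(dim=-1)
print(dict(zip(labels, probs[0].tolist())))
```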
Now that we know what zero-shot learning is, it would be nice to know how it is applied to computer vision, right? This part will be addressed in more detail in the coming chapters, but let’s paint a broad picture here to break the ice.
In NLP, zero-shot learning is (although it wasn’t always the case) pretty straightforward. Many language models are trained on massive datasets of text, learning to predict the next word in a given sequence. This ability to capture the underlying patterns and semantic relationships within language allows these models to perform surprisingly well on tasks they weren’t explicitly trained on. A nice example is GPT-2 being able to summarize a passage when TL;DR is appended to the prompt. Zero-shot computer vision is another story.
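Before we get to vision: if you want to try the TL;DR trick yourself, here is a minimal sketch using the `transformers` text-generation pipeline. The passage is just a placeholder, and the summaries GPT-2 produces this way are hit-or-miss.

```python
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

passage = (
    "Zero-shot learning asks a model to recognize classes it never saw during training "
    "by relying on auxiliary information such as attributes or text descriptions."
)
prompt = passage + "\nTL;DR:"  # the zero-shot summarization cue

out = generator(prompt, max_new_tokens=30, do_sample=False)
print(out[0]["generated_text"][len(prompt):])  # whatever GPT-2 appends after "TL;DR:"
```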
Let’s begin by answering a simple question: *How can we humans do it? How can we recognize objects that we have not seen before?*
Yes, you’re absolutely right! We need some other information about that object. We can easily identify a tiger, even if we haven’t seen one before, given that we know what a lion looks like: a tiger is a lion with stripes, minus the mane. A zebra is a black-and-white striped version of a horse. Deadpool is a red-and-black version of Spiderman.
Because of this other information, zero-shot computer vision is essentially multi-modal. If generalization is like learning a language by reading books and talking to people, zero-shot computer vision is like learning a language by reading a dictionary and listening to someone describe what it sounds like.
For zero-shot computer vision to work, we need to provide the model with information other than the visual features during training. This other information is called Semantic or Auxiliary Information. It provides a semantic bridge between the visual features and the unseen classes. By incorporating this multi-modal information (text and image), zero-shot computer vision models can learn to recognize objects they have never seen visually. In other words, semantic information embeds both seen and unseen classes in high-dimensional vectors, and it comes in many different forms, such as manually annotated attributes, word embeddings of the class names, or free-form textual descriptions.
Using this semantic information, you train a model to learn a mapping between the image features and the semantic features. At inference time, the model predicts the class label by looking for the closest class embedding in the semantic space, using, for example, k-nearest neighbors. We can say that we are transferring knowledge obtained from the seen classes with the help of semantic information.
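Here is a minimal NumPy sketch of that inference step. The projection matrix stands in for a learned image-to-semantic mapping and the class vectors stand in for real word embeddings, so everything below is illustrative rather than a specific published method.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(300, 2048))        # stand-in for a learned image -> semantic projection
class_embeddings = {                    # stand-ins for word vectors of the class names
    "cat": rng.normal(size=300),
    "llama": rng.normal(size=300),
    "falcon": rng.normal(size=300),
}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def predict(image_feature):
    """Project the image into the semantic space and return the closest class (1-nearest neighbor)."""
    z = W @ image_feature
    return max(class_embeddings, key=lambda name: cosine(z, class_embeddings[name]))

print(predict(rng.normal(size=2048)))   # picks whichever class embedding is nearest
```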
Different zero-shot computer vision methods differ in what semantic information they use and which embedding space they utilize at inference.
You might be wondering where ZSL sits among other learning paradigms. Good question! Zero-shot learning (ZSL) falls within the broader category of transfer learning, specifically under the umbrella of heterogeneous transfer learning. This means that ZSL relies on transferring knowledge gained from a source domain (seen classes) to a distinct target domain (unseen classes).
Due to the disparity between the source and target domains, ZSL faces specific transfer-learning challenges. These include overcoming domain shift, where the data distributions differ significantly, and handling the lack of labeled data in the target domain. We’re going to talk about these challenges in detail later, so don’t worry about them now. For now, let’s talk a little about the different methods of zero-shot computer vision.
The landscape of zero-shot computer vision methods is vast and diverse, with numerous proposed approaches and multiple frameworks for categorizing them. One framework that I find appealing broadly divides these methods into embedding-based methods and generative-based methods. It offers a useful lens through which to understand and compare different zero-shot computer vision approaches.
The choice between embedding-based and generative-based approaches, like every other choice that you have to make in machine learning, depends on the specifics of the task at hand and the available resources. Embedding-based methods are often preferred for their efficiency and scalability, while generative-based methods offer greater flexibility and potential for handling complex data.
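To make the generative-based idea a bit more tangible, here is a rough PyTorch sketch of a conditional feature generator; the dimensions, architecture, and training details are all placeholders, not a specific published model.

```python
import torch
import torch.nn as nn

ATTR_DIM, FEAT_DIM, NOISE_DIM = 312, 2048, 64   # illustrative sizes

# A tiny conditional generator: noise + class attributes -> synthetic visual feature.
generator = nn.Sequential(
    nn.Linear(ATTR_DIM + NOISE_DIM, 1024),
    nn.ReLU(),
    nn.Linear(1024, FEAT_DIM),
)

def synthesize_features(class_attributes, n_per_class=100):
    """Generate fake visual features for (unseen) classes from their attribute vectors."""
    feats, labels = [], []
    for label, attr in enumerate(class_attributes):
        noise = torch.randn(n_per_class, NOISE_DIM)
        cond = attr.expand(n_per_class, -1)
        feats.append(generator(torch.cat([noise, cond], dim=1)))
        labels.append(torch.full((n_per_class,), label))
    return torch.cat(feats), torch.cat(labels)

# Once the generator is trained (typically adversarially, against real seen-class features),
# synthetic features for unseen classes let us train an ordinary classifier, turning
# zero-shot learning into a standard supervised problem.
unseen_attrs = [torch.randn(ATTR_DIM) for _ in range(2)]  # stand-ins for real attribute vectors
feats, labels = synthesize_features(unseen_attrs)
print(feats.shape, labels.shape)  # torch.Size([200, 2048]) torch.Size([200])
```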
But anyway, in this unit we will address only embedding-based methods, including attention-based ones. Zero-shot learning is a big topic, and comprehensive coverage would require a standalone course. Interested readers can check the additional readings provided and have fun.
We’ve come a long way! We now have a decent idea of zero-shot learning, its history, its application in computer vision, and how it generally works. To round out this introduction, we are going to compare ZSL to other methods that may initially appear similar.
[^1]: A Fast Learning Algorithm for Deep Belief Nets, Link
[^2]: Zero-Shot Learning Through Cross-Modal Transfer, Link
[^3]: Semantic Autoencoder for Zero-Shot Learning, Link
[^4]: Language Models are Few-Shot Learners, Link
[^5]: Learning Transferable Visual Models From Natural Language Supervision, Link
[^6]: Learning Structured Output Representation using Deep Conditional Generative Models, Link