# Language Models Can See: Plugging Visual Controls in Text Generation

Anonymous authors
Paper under double-blind review

## Abstract

Generative language models (LMs) such as GPT-2/3 can be prompted to generate text with remarkable quality. While they are designed for text-prompted generation, it remains an open question how the generation process could be guided by modalities beyond text, such as images. In this work, we propose a training-free framework, called MAGIC (iMAge-Guided text generatIon with CLIP), for plugging visual controls into the generation process and enabling LMs to perform multimodal tasks (e.g., image captioning) in a zero-shot manner. MAGIC is a simple yet efficient plug-and-play framework, which directly combines an off-the-shelf LM (i.e., GPT-2) and an image-text matching model (i.e., CLIP) for image-grounded text generation. During decoding, MAGIC influences the generation of the LM by introducing a CLIP-induced score, namely the *magic score*, which regularizes the generated result to be semantically related to a given image while remaining coherent with the previously generated context. Notably, the proposed decoding scheme does not involve any gradient update operation and is therefore computationally efficient. On the challenging task of zero-shot image captioning, MAGIC outperforms the state-of-the-art method by notable margins with a nearly 27 times decoding speedup. MAGIC is a flexible framework and is, in principle, compatible with any text generation task that incorporates image grounding. In the experiments, we showcase that it is also capable of performing visually grounded story generation given both an image and a text prompt.

## 1 Introduction

Since the introduction of GPT-2 (Radford et al., 2019), generative language models (LMs), which are pre-trained on an enormous amount of unstructured text, have produced unmatched performance on a wide range of NLP tasks (Brown et al., 2020; Chowdhery et al., 2022). Given a textual prompt, LMs generate a continuation via the next-token prediction decoding scheme. Although controlling the outputs of LMs has become possible by inserting textual prompts, it is still unknown how the decoding process could be guided by information beyond text, such as images. Recently, multimodal representation learning of text and images has been rejuvenated by pre-trained image-text joint embedding models, such as CLIP (Radford et al., 2021) and ALIGN (Jia et al., 2021). These models leverage large-scale noisy image-text pairs with weak correspondence for contrastive embedding learning, and the learned joint models achieve impressive zero-shot performance competitive with supervised models on tasks such as image classification and image-text retrieval. However, they remain under-explored for image-grounded text generation.¹

¹Note that while such noisy, weakly corresponding image-text pairs are sufficient for learning embeddings, they cannot be directly used to train an image captioning model due to the data's extreme level of noise (Tewel et al., 2021).

How can we combine the best of both pre-trained LMs and image-text embedding models for visually grounded text generation? Existing supervised methods combine multimodal encoders by further training them on human-annotated paired image-text data (Mokady et al., 2021; Chen et al., 2021a). In contrast, weakly supervised approaches (Anderson et al., 2018a; Feng et al., 2019; Laina et al., 2019) rely on pre-trained object detectors to identify visual concepts and create pseudo image-text pairs.
Instead of training on annotated image-text pairs, they directly train on the pseudo data. However, such methods are usually limited by object detectors that are trained with a fixed set of labels. The closest work to our proposal is ZeroCap (Tewel et al., 2021), an unsupervised image captioning method that combines frozen CLIP and GPT-2 models. One of the advantages of ZeroCap is that it operates *ex post facto* in the activation space without re-training or fine-tuning the CLIP and GPT-2 models. However, ZeroCap relies on gradient updates and optimization over the context cache, which significantly slows down inference and hinders its use in real-world scenarios.

In this paper, we approach this challenging task from a completely new perspective by designing a novel text decoding scheme, called MAGIC (iMAge-Guided text generatIon with CLIP). During inference, MAGIC does not rely on any additional training or parameters; it utilizes explicit "control knobs" to select desired outputs following the guidance of both the GPT-2 and CLIP models. Different from the standard decoding process of GPT-2, we insert a CLIP-induced term, namely the *magic score*, into the next-token search to encourage the predicted result to convey information that is close to the given image; a simplified form of this decoding rule is sketched at the end of this section. Our experiments show that such a framework enables zero-shot image captioning as well as visually grounded story generation under a simple plug-and-play principle.

To verify the qualitative and quantitative performance of the proposed approach, we conduct comprehensive experiments on two commonly used benchmarks (Section §4): MS-COCO (Lin et al., 2014) and Flickr30k (Plummer et al., 2015). To our surprise, MAGIC achieves state-of-the-art (SOTA) performance across different evaluation metrics, outperforming all unsupervised and weakly supervised baselines by notable margins. Moreover, since MAGIC involves no gradient update, it accelerates inference over the previous SOTA on zero-shot image captioning by around 27 times. Beyond image captioning, we also test our approach on visually grounded story generation (Section §5). In this task, given an image and a text prompt, MAGIC can generate high-quality stories that outperform strong baseline methods in both human and automatic evaluations. In summary, we make the following contributions:

- To the best of our knowledge, we are the first to propose a zero-shot method, i.e., MAGIC, that utilizes explicit "control knobs" to efficiently select desired outputs following the guidance of both the pre-trained GPT-2 and CLIP models.
- We empirically show that MAGIC is highly effective on zero-shot image captioning, achieving SOTA results across different benchmarks.
- We demonstrate that MAGIC can be used in creative ways: it can perform complex multimodal generation tasks such as visually grounded story generation and reaches near-human performance on a wide range of evaluation metrics.
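To make the idea concrete, one simplified, illustrative form of such an image-guided next-token selection rule is shown below. The notation here (the top-$k$ candidate set $\mathcal{V}^{(k)}$ proposed by the LM, the image $\mathcal{I}$, the CLIP image-text matching score $\mathrm{CLIP}(\cdot,\cdot)$, and the weight $\beta$) is our own shorthand, and the full formulation in Section 3 may include additional terms (e.g., for discouraging degenerate repetition):

$$x_{t} = \underset{v \in \mathcal{V}^{(k)}}{\arg\max} \Big\{ p_{\theta}\big(v \mid \mathbf{x}_{<t}\big) + \beta \cdot \mathrm{CLIP}\big(\mathcal{I}, [\mathbf{x}_{<t}; v]\big) \Big\},$$

where the first term keeps the continuation fluent under the LM and the second term steers the selected token towards content that matches the image.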
## 2 Background

In this section, we briefly introduce previous work related to our research.

## 2.1 Image Captioning

Our work is closely related to the literature on image captioning, where the goal is to describe images with meaningful and syntactically correct sentences. Although this topic has been extensively explored in the past few years, it is still far from being a solved task. Based on the training strategy (e.g., the type of training data), we can roughly classify previous methods into two categories: (1) supervised and (2) weakly-/un-supervised approaches. The former heavily depends on manually labelled image-text datasets. In contrast, the latter tries to create pseudo image-text pairs (i.e., weakly supervised) or even avoids using any paired image-text data (i.e., unsupervised).

Supervised Approaches. With the development of deep learning, most existing models use a CNN to encode the input image and an RNN to generate the corresponding sentence describing the image (Mao et al., 2014; Vinyals et al., 2015). These models are trained to maximize the probability of generating the ground-truth captions conditioned on the input image. Subsequent methods mainly focus on modelling the interaction between visual and textual cues via attention mechanisms to obtain more faithful and richer captions (Xu et al., 2015; Lu et al., 2017; Anderson et al., 2018b; Zhang et al., 2021b; Huang et al., 2021). Meanwhile, some controllable image captioning methods (Mathews et al., 2016; Gan et al., 2017; Chen et al., 2018; Shuster et al., 2019; Chen et al., 2021b) propose to generate diverse descriptions by feeding different control signals (e.g., labels and text), which requires additional annotations for training. Beyond describing the whole image scene, dense captioning methods (Johnson et al., 2016; Chatterjee & Schwing, 2018; Kim et al., 2019; Yin et al., 2019; Zeng et al., 2020) aim to describe the visual objects in a sub-region of the input image. Recently, vision-language pre-training methods (Zhou et al., 2020; Li et al., 2020; Mokady et al., 2021; Hu et al., 2021), which benefit from the rich visual-textual representations learned by models pre-trained on large-scale datasets, have become the prevailing approach to vision-language generation by re-training or fine-tuning the model parameters on downstream tasks. Although these methods have achieved impressive results, a certain amount of paired image-text data is indispensable during training.

Weakly-/Un-Supervised Approaches. To date, there have been several attempts to reduce the reliance on paired image-text data for training image captioning models. In weakly supervised approaches, employing *pseudo-captions*, i.e., sentences that contain the object labels detected in the given images, has been a common choice (Anderson et al., 2018a; Feng et al., 2019; Laina et al., 2019). However, a weakly supervised cross-modal alignment between images and text is still needed. Moreover, *pseudo-captions* tend to contain words irrelevant to the given images (Honda et al., 2021), so carefully designed constraints or learning schemes are required to alleviate the noise. Furthermore, such methods require a pre-trained object detector bound to a fixed set of labels to provide visual concepts, and are thus ineffective for out-of-domain concepts and scenes. How can we avoid creating *pseudo-captions* altogether and perform image captioning in a truly unsupervised manner? Recently, CLIP (Radford et al., 2021) has emerged as a successful vision-language alignment model trained on 400M noisy web-collected image-sentence pairs. It has shown impressive zero-shot capabilities on various vision-language tasks and opens new avenues for answering this question.
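As a concrete reference point for how such an image-text alignment model can be queried, the snippet below is a minimal sketch (not code from this paper) of scoring a handful of candidate sentences against an image with an off-the-shelf CLIP checkpoint from the HuggingFace `transformers` library; the checkpoint name, image path, and candidate sentences are illustrative placeholders.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Off-the-shelf CLIP checkpoint (illustrative choice).
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("example.jpg")  # placeholder image path
candidates = [
    "a dog running on the beach",
    "a plate of food on a wooden table",
]

inputs = processor(text=candidates, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image[i, j]: similarity between image i and candidate sentence j.
scores = outputs.logits_per_image.softmax(dim=-1)
print(dict(zip(candidates, scores[0].tolist())))
```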
ZeroCap (Tewel et al., 2021) is the work most related to ours. It is built on a pre-trained CLIP model together with the GPT-2 language model (Radford et al., 2019). Different from previous work, ZeroCap is truly zero-shot: the optimization is performed "*ex post facto*" in the activation space without re-training or fine-tuning the model parameters. In ZeroCap, the whole context cache (i.e., all the keys (K) and values (V) in the self-attention modules (Vaswani et al., 2017; Dosovitskiy et al., 2021)) is updated with the guidance of CLIP and GPT-2 at every prediction step. As a result, the computational overhead of such optimization steps increases drastically as the size of the language model grows. One key difference of our proposal with respect to ZeroCap is that MAGIC utilizes explicit "control knobs" to select desired outputs corresponding to the given image. Since our procedure does not involve any gradient updates or optimization, it speeds up the decoding process by around 27 times (Section §4.1).

## 2.2 Plug and Play Generative Models

Large-scale pre-trained generative models have revolutionized the fields of natural language processing (Radford et al., 2019; Brown et al., 2020) and computer vision (Radford et al., 2021; Ramesh et al., 2021; 2022; Karras et al., 2019; 2020; 2021) in the past few years. Various previous works (Nguyen et al., 2016; 2017; Dathathri et al., 2020; Shen et al., 2020) have revealed that the features learned by such models carry rich, meaningful semantics. This suggests a promising pathway to synthesize desired outputs by reusing existing generative models in a "*plug and play*" manner: we can directly enjoy the powerful capabilities of these *off-the-shelf* large models (without any re-training or fine-tuning) and focus on lightweight, task-specific optimization. For instance, in the image generation field, DGN-AM (Nguyen et al., 2016) can generate images conditioned on a class by finding a hidden code that strongly activates a neuron in a separate classifier. PPGN (Nguyen et al., 2017) then improves the diversity and quality of the synthesized images by incorporating an approximate Metropolis-adjusted Langevin algorithm (MALA) (Roberts & Tweedie, 1996; Roberts & Rosenthal, 1998). Shen et al. (2020) propose to directly traverse the latent space of pre-trained unconditional GANs to manipulate the attributes of the input image. Patashnik et al. (2021) use CLIP to connect text prompts and images, searching the latent codes of StyleGAN via gradient descent optimization to manipulate the visual attributes of the input image. Similarly, in the text generation field, PPLM (Dathathri et al., 2020) extends PPGN to text generation tasks (i.e., editing topic and sentiment), where the image generative model is replaced with a GPT-2 language model. Most recently, ZeroCap (Tewel et al., 2021) shows that the image captioning task can be tackled by directly combining the existing CLIP and GPT-2 models. In general, most of these "plug and play" methods require iteratively shifting the hidden code (or latent code, or context cache) with gradient descent optimization. Different from previous work, our proposed approach extends the "plug and play" paradigm by optimizing the decoding strategy of generative LMs. Since MAGIC does not involve any gradient update at inference time, it is computationally efficient (e.g., ∼27 times faster than ZeroCap).
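To illustrate what optimizing the decoding strategy (rather than the activations) looks like in practice, the following is a minimal sketch of a single image-guided decoding step in the spirit of MAGIC: the LM proposes its top-k candidate tokens and a CLIP-style image-text score re-ranks them. The helper `clip_image_text_score`, the weight `beta`, the value of `k`, and the simplified scoring rule (which omits any additional regularization the full method may use) are illustrative assumptions rather than the paper's exact implementation.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()


def clip_image_text_score(image, texts):
    """Placeholder: return a tensor with one image-text matching score per text,
    e.g., computed with a CLIP model as sketched in Section 2.1."""
    raise NotImplementedError


@torch.no_grad()
def image_guided_step(prefix_ids, image, k=45, beta=2.0):
    """One simplified image-guided next-token selection step."""
    logits = lm(prefix_ids).logits[0, -1]                 # next-token logits from the LM
    top_probs, top_ids = torch.softmax(logits, -1).topk(k)

    # Form the k candidate continuations and score each against the image.
    prefix_text = tokenizer.decode(prefix_ids[0])
    candidates = [prefix_text + tokenizer.decode([int(t)]) for t in top_ids]
    image_scores = clip_image_text_score(image, candidates)

    # Combine LM confidence with image relevance; keep the best candidate.
    combined = top_probs + beta * image_scores
    return top_ids[combined.argmax()].view(1, 1)


# Usage sketch: greedily decode a fixed number of tokens for a given image.
# prefix_ids = tokenizer("A photo of", return_tensors="pt").input_ids
# for _ in range(16):
#     prefix_ids = torch.cat([prefix_ids, image_guided_step(prefix_ids, image)], dim=1)
```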
Notably, although GPT-2 can generate synthetic text samples of unprecedented quality, its outputs may not sound natural in certain task-specific textual domains (Mokady et al., 2021; Shen et al., 2021; Zhang et al., 2021a). Following this observation, we further fine-tune the GPT-2 model on the task-specific text corpus in an unsupervised manner to adapt it to the textual domain of the end task (Laina et al., 2019; Honda et al., 2021). The computational cost of such adaptation is negligible (e.g., less than 2 hours on a single NVIDIA 1080Ti for MS-COCO). During decoding, the fine-tuned GPT-2 and the CLIP model are kept fixed.

## 3 Methodology

## 3.1 Unsupervised Language Modelling

Following previous studies (Laina et al., 2019; Honda et al., 2021), we first learn an unsupervised language model on the text corpus of the end task to adapt to its textual domain. Typically, given a text sequence $\mathbf{x}$, the maximum likelihood estimation (MLE) objective is used to train the language model $\theta$ as

$${\cal L}_{\rm MLE}=-\frac{1}{|\mathbf{x}|}\sum_{i=1}^{|\mathbf{x}|}\log p_{\theta}(x_{i}|\mathbf{x}_{<i}).$$
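For concreteness, the snippet below sketches this unsupervised adaptation step: fine-tuning GPT-2 with the MLE objective above on a plain-text corpus of task-specific sentences (e.g., the MS-COCO captions, without images), using the HuggingFace `transformers` library. The corpus file name, batch size, learning rate, and number of epochs are illustrative placeholders rather than the exact recipe used in the paper.

```python
import torch
from torch.utils.data import DataLoader
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
model = GPT2LMHeadModel.from_pretrained("gpt2").train()
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

# Placeholder corpus: one task-specific sentence per line (no images needed).
with open("captions.txt") as f:
    texts = [line.strip() for line in f if line.strip()]

def collate(batch):
    enc = tokenizer(batch, return_tensors="pt", padding=True,
                    truncation=True, max_length=64)
    # With labels == input_ids, the model shifts the targets internally and
    # returns the token-level cross-entropy, i.e., the MLE loss defined above.
    enc["labels"] = enc["input_ids"].clone()
    enc["labels"][enc["attention_mask"] == 0] = -100  # ignore padding positions
    return enc

loader = DataLoader(texts, batch_size=32, shuffle=True, collate_fn=collate)

for epoch in range(3):  # placeholder number of epochs
    for batch in loader:
        loss = model(**batch).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```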