{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T01:09:06.933607Z" }, "title": "Transferring Knowledge from Vision to Language: How to Achieve it and how to Measure it?", "authors": [ { "first": "Tobias", "middle": [], "last": "Norlund", "suffix": "", "affiliation": { "laboratory": "", "institution": "Chalmers University of Technology \u2020 Recorded Future", "location": {} }, "email": "tobiasno@chalmers.se" }, { "first": "Lovisa", "middle": [], "last": "Hagstr\u00f6m", "suffix": "", "affiliation": { "laboratory": "", "institution": "Chalmers University of Technology \u2020 Recorded Future", "location": {} }, "email": "" }, { "first": "Richard", "middle": [], "last": "Johansson", "suffix": "", "affiliation": { "laboratory": "", "institution": "Chalmers University of Technology \u2020 Recorded Future", "location": {} }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Large language models are known to suffer from the hallucination problem in that they are prone to output statements that are false or inconsistent, indicating a lack of knowledge. A proposed solution to this is to provide the model with additional data modalities that complements the knowledge obtained through text. We investigate the use of visual data to complement the knowledge of large language models by proposing a method for evaluating visual knowledge transfer to text for uni-or multimodal language models. The method is based on two steps, 1) a novel task querying for knowledge of memory colors, i.e. typical colors of well-known objects, and 2) filtering of model training data to clearly separate knowledge contributions. Additionally, we introduce a model architecture that involves a visual imagination step and evaluate it with our proposed method. We find that our method can successfully be used to measure visual knowledge transfer capabilities in models and that our novel model architecture shows promising results for leveraging multimodal knowledge in a unimodal setting.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Large language models are known to suffer from the hallucination problem in that they are prone to output statements that are false or inconsistent, indicating a lack of knowledge. A proposed solution to this is to provide the model with additional data modalities that complements the knowledge obtained through text. We investigate the use of visual data to complement the knowledge of large language models by proposing a method for evaluating visual knowledge transfer to text for uni-or multimodal language models. The method is based on two steps, 1) a novel task querying for knowledge of memory colors, i.e. typical colors of well-known objects, and 2) filtering of model training data to clearly separate knowledge contributions. Additionally, we introduce a model architecture that involves a visual imagination step and evaluate it with our proposed method. We find that our method can successfully be used to measure visual knowledge transfer capabilities in models and that our novel model architecture shows promising results for leveraging multimodal knowledge in a unimodal setting.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Large language models have proved performant across a diverse set of tasks in NLP, and most recently even as unsupervised multitask learners (Radford et al., 2019; Brown et al., 2020) . 
An important contributing factor to this is the capability of the models to hold large amounts of linguistic as well as factual knowledge in their parameters.", "cite_spans": [ { "start": 141, "end": 163, "text": "(Radford et al., 2019;", "ref_id": "BIBREF27" }, { "start": 164, "end": 183, "text": "Brown et al., 2020)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "While impressive, without strong task-specific fine-tuning these models are prone to outputting false or inconsistent statements, often also referred to as hallucination (Logan et al., 2019) . This has been particularly studied for generative tasks such as abstractive text summarization (Maynez et al., * Equal contribution. 2020) and dialog systems (Roller et al., 2021; Li et al., 2020a) , but the problem is also apparent for models applied to cloze-style fill-in-the-blank tasks (Petroni et al., 2019; Jiang et al., 2020) . Having truthful NLP systems is a core requirement for most applications, which is why this is an important problem to address.", "cite_spans": [ { "start": 170, "end": 190, "text": "(Logan et al., 2019)", "ref_id": "BIBREF18" }, { "start": 288, "end": 305, "text": "(Maynez et al., *", "ref_id": null }, { "start": 351, "end": 372, "text": "(Roller et al., 2021;", "ref_id": "BIBREF29" }, { "start": 373, "end": 390, "text": "Li et al., 2020a)", "ref_id": "BIBREF15" }, { "start": 484, "end": 506, "text": "(Petroni et al., 2019;", "ref_id": "BIBREF24" }, { "start": 507, "end": 526, "text": "Jiang et al., 2020)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Grounding has been proposed as a potential way to mitigate this problem, e.g by providing broader world information from for example multimodal perception (Bisk et al., 2020; Bender and Koller, 2020) .", "cite_spans": [ { "start": 155, "end": 174, "text": "(Bisk et al., 2020;", "ref_id": "BIBREF1" }, { "start": 175, "end": 199, "text": "Bender and Koller, 2020)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Information from multimodal perception may actually provide a significant amount of additional world information to an NLP model, since text data suffers from the problem of reporting bias. That is, humans generally communicate novel information rather than trivial, leading to a discrepancy between reality and what gets described in text (Gordon and Van Durme, 2013) . 
Consequently, perceptual information may contain complementing world knowledge that cannot be found in text data, and has the potential to mitigate the aforementioned problem of hallucinating NLP models.", "cite_spans": [ { "start": 340, "end": 368, "text": "(Gordon and Van Durme, 2013)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Previous works have evaluated how grounded language representations impact performance on common NLP benchmarks (Sileo, 2021; Kiela et al., 2018; Elliott and K\u00e1d\u00e1r, 2017) , but little has been done on investigating grounding specifically as an additional source of knowledge.", "cite_spans": [ { "start": 112, "end": 125, "text": "(Sileo, 2021;", "ref_id": "BIBREF31" }, { "start": 126, "end": 145, "text": "Kiela et al., 2018;", "ref_id": "BIBREF13" }, { "start": 146, "end": 170, "text": "Elliott and K\u00e1d\u00e1r, 2017)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this work, we take a focused look at how data from a visual modality can augment the knowledge a language model expresses. We design an experimental setup to enable the development of strategies for maximizing visual-to-textual knowledge transfer. In the setup, we create a small knowledgecentric cloze-style task in English named Memory Colors that is tailored to test for visual knowledge by querying for the typical colors of well-known items. We also build a large vision-and-language dataset in the English language, where we carefully control for the modality from which the necessary visual knowledge can be learnt. Finally, we use this data to train self-supervised multimodal models, and compare strategies to query for the visual knowledge.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Based on intuitions of how humans are able to store and retrieve such knowledge, we also propose a querying strategy that involves \"imagining\" a visual representation from which the answer then can be decoded.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To summarize, our contribution is twofold:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "1. We provide an experimental setup for evaluating visual knowledge transfer in English multimodal language models, including a novel task we denote Memory Colors. 2. We propose a language model querying strategy involving a visual imagination step and show that it can provide an efficient means of visual knowledge transfer compared to standard querying.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Humans have the ability to learn knowledge from non-linguistic modalities (such as visual perception) and express this in language, making them able to e.g. textually reason about what an elephant looks like because they have previously seen said animal in an image. Many models that integrate the textual and visual modalities exist, but the majority of them have been created with the purpose of reasoning about properties of individual images provided to the system: for instance, to ask about an elephant, you need to simultaneously provide the model with an image of an elephant. 
We hypothesize that the capability to incorporate knowledge from different modalities and expressing it textually could improve on the common sense as well as in-domain knowledge that language models possess. To this end, we wish to create an experimental setup in which we can measure how well a model can acquire visual knowledge and then express it in text.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "An experimental setup evaluating visual knowledge transfer", "sec_num": "2" }, { "text": "A simple way to evaluate a model for its capability to transfer visual knowledge into text is to query it about typical colors of certain objects -memory colors -while making sure that the model cannot acquire this knowledge through a text signal, i.e. it has not previously been told what the colors should be.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "An experimental setup evaluating visual knowledge transfer", "sec_num": "2" }, { "text": "Consequently, we create a zero-shot cloze-style task of predicting memory colors of common objects, described in Section 2.1. We also collect a large vision-and-language dataset for model training in which we carefully control for whether the knowledge necessary for solving the memory color task is available strictly in the visual, textual, or both modalities, described in Section 2.2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "An experimental setup evaluating visual knowledge transfer", "sec_num": "2" }, { "text": "When human observers can agree on the typical color, or canonical color, of a certain object type through their experiences with instances of said object type, the color of that object is generally referred to as its memory color (P\u00e9rez-Carpinell et al., 1998) . For example, a banana can be green or brown, but it is usually remembered as being yellow, such that yellow is the memory color of a banana. As explained by Newhall et al. (1957) \"... color memory is a selective resultant of the relative impressiveness during perception of the various aspects of stimulation. More dominant, characteristic, and attractive aspects tend to be more impressive, and less dominant aspects tend to be less impressive. The more impressive aspects are more prone to survival in subsequent memory while other aspects are not.\"", "cite_spans": [ { "start": 230, "end": 260, "text": "(P\u00e9rez-Carpinell et al., 1998)", "ref_id": "BIBREF23" }, { "start": 420, "end": 441, "text": "Newhall et al. (1957)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Memory Colors dataset", "sec_num": "2.1" }, { "text": "As such, memory colors of typical objects are remembered by humans and a human can answer questions about what the typical color of such an object is despite not having the object in front of them when answering. Consequently, memory colors express visual knowledge and we can use them for a simple zero-shot evaluation of whether a model can display the same capability as a human of transferring a visual signal into memory and, later on, text. For our visual knowledge transfer evaluation task we create a novel Memory Colors dataset in the English language, consisting of 109 object types paired with their memory color, an illustrating picture and a descriptor. 
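To make this format concrete, the sketch below shows what a single Memory Colors record and a cloze question built from it might look like. It is a minimal illustration assuming a plain dictionary representation; the field names and the template wording are examples rather than the released schema.

```python
# One illustrative Memory Colors record: an item, its descriptor, its memory-color
# label (the majority vote of 11 annotators), and a link to the illustrating picture.
record = {
    "item": "lemon",
    "descriptor": "a",          # keeps the question grammatical and unambiguous
    "memory_color": "yellow",   # majority-vote label
    "image": "lemon.jpg",       # illustrating picture chosen by the authors
}

# A cloze question built from one of the predefined query templates (wording illustrative).
template = "Q: What is the color of {descriptor} {item}? A: [answer]"
question = template.format(descriptor=record["descriptor"], item=record["item"])
# -> "Q: What is the color of a lemon? A: [answer]"   (gold label: "yellow")
```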
Figure 1 shows an example, and the supplementary material includes the full dataset with additional statistics.", "cite_spans": [], "ref_spans": [ { "start": 667, "end": 675, "text": "Figure 1", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Memory Colors dataset", "sec_num": "2.1" }, { "text": "The Memory Colors dataset and a corresponding human baseline are obtained by annotating a set of randomly shuffled cloze questions based on well-known entities with typical colors. Examples of such entities are items, materials, animals, ingredients or plants that are observable in the real world, such as tomato, elephant, cocoa and grass. These entities were sourced from the web, including Wikidata 1 and ConceptNet, 2 as well as from the commonsense knowledge of the authors of this article.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Memory Colors dataset", "sec_num": "2.1" }, { "text": "The cloze questions of the Memory Colors dataset are created with the help of a predefined query template; see an example question in Table 2 . The predefined query template is assigned to each annotator from a set of seven different templates to create differently formatted questions querying for the same visual knowledge.", "cite_spans": [], "ref_spans": [ { "start": 134, "end": 141, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Memory Colors dataset", "sec_num": "2.1" }, { "text": "The memory colors used for the items in the dataset are black, blue, brown, green, grey, orange, pink, purple, red, white and yellow. The annotators are asked to pick their answer from one of these 11 colors for each question. They are also asked to answer the questions to the best of their ability, without consulting other information sources. 3 Memory color label The color label for each item is given by the majority vote of 11 annotators, and only items with a minimum of 8 annotators agreeing on a memory color are included in the dataset, resulting in the Memory Colors dataset consisting of 109 items and corresponding memory colors, with a majority vote distribution as indicated in Table 1 . Arguably, our use of the term memory color may be somewhat less strict than that of the optical science field, in which very few memory colors are admitted due to high requirements on agreement between humans for a color to be classified as a memory color. However, for the sake of obtaining a dataset of a sufficient size, we decide to also include items and colors for which there is a majority, while not a perfect one. ", "cite_spans": [ { "start": 347, "end": 348, "text": "3", "ref_id": null } ], "ref_spans": [ { "start": 694, "end": 701, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Memory Colors dataset", "sec_num": "2.1" }, { "text": "Example question (Table 2): Q: What is the color of a lemon? A: [answer], with gold label yellow. Descriptor The descriptor for each item in the dataset is manually added to make the cloze questions grammatically correct and to resolve potential item reference ambiguities. For example, determiners such as \"a\", \"an\" or \" the\" are added as a descriptor for countable nouns and the addition \"the animal\" might be added for the item seal to clarify that we refer to the animal and not e.g. 
a letter seal.", "cite_spans": [ { "start": 50, "end": 58, "text": "[answer]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Question", "sec_num": null }, { "text": "Illustrating picture A picture of each item in the dataset is manually added by the authors by picking an image from the Internet that is deemed to correspond well to the item, and that to the authors' best ability reflects the labeled color.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Question", "sec_num": null }, { "text": "Human baseline The human baseline for the task is taken as the mean of the accuracy scores of 11 annotators, where each accuracy score is calculated by comparing the annotator answers with the majority vote labels. The annotators achieved a mean accuracy score of 0.937 with a standard deviation of 0.051 on the Memory Colors dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Question", "sec_num": null }, { "text": "We hypothesize that a perfect accuracy score is not reached due to different perceptions of colors, varying knowledge of what the Memory Colors items refer to and that it perhaps is unavoidable that some disagreements exist for this fairly large dataset that has an apparent dependence on cultural background. Hypothetically, the phrasing of the question may also be a factor that explains some of the variation (Kalton and Schuman, 1982) , although this seems unlikely in this case since the questions concern concrete physical objects.", "cite_spans": [ { "start": 412, "end": 438, "text": "(Kalton and Schuman, 1982)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Question", "sec_num": null }, { "text": "Annotator agreement To verify our Memory Colors dataset and its human baseline we also evaluate the annotator agreement between the 11 annotators using Fleiss' kappa score (Fleiss, 1971 ). The kappa score for the agreement between the annotators is found to be 0.863, indicating that the annotators agree fairly well.", "cite_spans": [ { "start": 172, "end": 185, "text": "(Fleiss, 1971", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Question", "sec_num": null }, { "text": "We combine four public image+text datasets to be used for self-supervised training in our experiments: MS COCO (Lin et al., 2014) , SBU Captions (Ordonez et al., 2011) , Visual Genome QA (Krishna et al., 2017) and Conceptual Captions (Sharma et al., 2018) . In total it comprises 4.7M captions paired with 2.9M unique images. At the core of this work is a method to measure visual knowledge transfer by means of the memory colors task described in Section 2.1. To this end, we construct a version of this vision-andlanguage dataset where we remove training examples in which a memory color is revealed in the caption. This way we can, with high confidence, attribute correct model predictions to originate from the visual modality rather than the captions. In the filtered version, an example is excluded if its caption matches either of two conditions:", "cite_spans": [ { "start": 111, "end": 129, "text": "(Lin et al., 2014)", "ref_id": "BIBREF17" }, { "start": 145, "end": 167, "text": "(Ordonez et al., 2011)", "ref_id": "BIBREF22" }, { "start": 187, "end": 209, "text": "(Krishna et al., 2017)", "ref_id": "BIBREF14" }, { "start": 234, "end": 255, "text": "(Sharma et al., 2018)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Vision-and-language dataset", "sec_num": "2.2" }, { "text": "1. 
It contains any object word and any color word from the memory colors dataset, by exact string match.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Vision-and-language dataset", "sec_num": "2.2" }, { "text": "2. When tokenized and stemmed, it contains any stemmed object word and any stemmed color word from the memory colors dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Vision-and-language dataset", "sec_num": "2.2" }, { "text": "The above filter matches about 6% of the captions in the training set. A summary of the statistics for the full and filtered versions of the training dataset is given in Table 3 . Complete statistics for the dataset can be found in the supplementary material. Figure 2 : Training of the multimodal CLIP-BERT model using MLM. An image represented by CLIP is appended to the transformer input.", "cite_spans": [], "ref_spans": [ { "start": 174, "end": 181, "text": "Table 3", "ref_id": "TABREF2" }, { "start": 264, "end": 272, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Vision-and-language dataset", "sec_num": "2.2" }, { "text": "When a human is asked a question like What is the color of your house?, answering it typically requires retrieving a mental picture from memory of what the house looks like. Based on this mental picture, the answer can then easily be inferred. The mental picture provides an efficient means to store knowledge about the appearance of the house, as other questions like How many floors does it have? or Does it have a garden? can be inferred just as easily. We hypothesize that this idea of visual imagination could also provide an efficient means of visual knowledge transfer, and propose a model for performing the \"imagination\" explicitly. We take inspiration from recent works in vision-and-language modeling, where the transformer architecture (Vaswani et al., 2017) has become the de facto standard (Lu et al., 2019; Tan and Bansal, 2019; Qi et al., 2020; Li et al., 2020b) . In a typical setup, an image representation is fed to the transformer encoder jointly with the text tokens, and the encoder is then pre-trained using various denoising and contrastive objectives.", "cite_spans": [ { "start": 731, "end": 753, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF34" }, { "start": 787, "end": 804, "text": "(Lu et al., 2019;", "ref_id": "BIBREF19" }, { "start": 805, "end": 826, "text": "Tan and Bansal, 2019;", "ref_id": "BIBREF33" }, { "start": 827, "end": 843, "text": "Qi et al., 2020;", "ref_id": "BIBREF25" }, { "start": 844, "end": 861, "text": "Li et al., 2020b)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Models", "sec_num": "3" }, { "text": "In this work, we perform experiments on a simple yet novel variant to accommodate the visual imagination. While common practice is to use visual features from an object detector (Ren et al., 2015), we extract visual representations using the image encoder of a pretrained CLIP model (Radford et al., 2021) instead. CLIP consists of two networks for encoding an image and a text sentence respectively, and is trained to align these representations in a joint space using a contrastive training objective. 
The resulting visual encoder is shown to ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Models", "sec_num": "3" }, { "text": "[Figure 3 graphic: two transformer stacks, labeled a) and b), processing the input sequence [CLS] the color of blood is [MASK] and predicting the masked token red.]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Models", "sec_num": "3" }, { "text": "Figure 3: a) Inference from CLIP-BERT using the implicit transfer strategy, directly querying for knowledge through a masked token prediction. b) Inference from CLIP-BERT using the explicit transfer strategy, involving the prediction of visual latent features (visual imagination) as a preceding step.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Models", "sec_num": "3" }, { "text": "have great discriminatory performance, for example when applied to zero-shot image classification. However, the main benefit of using CLIP to extract visual features is the joint feature space between its visual and textual encoder, enabling us to generate \"visual\" features from text.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Models", "sec_num": "3" }, { "text": "In our experiments, we start from the popular pretrained BERT base model 4 , and continue training on our visual-and-language dataset from Section 2.2, using only the Masked Language Modelling (MLM) objective with a 15% random dynamic masking ratio. Specifically, we train two models on the filtered and unfiltered versions respectively:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Models", "sec_num": "3" }, { "text": "BERT-base We continue training of BERT base using MLM only on the captions part of the visual-and-language dataset. This provides a baseline of the amount of color knowledge that can be picked up from text alone.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Models", "sec_num": "3" }, { "text": "We continue training of BERT base using MLM but on both the captions and the images of the visual-and-language dataset. The image representation is transformed through a projection layer and appended to the transformer input without adding any positional or segment embeddings. The MLM objective is only applied on the textual positions. An illustration of the training of CLIP-BERT is shown in Figure 2 .", "cite_spans": [], "ref_spans": [ { "start": 395, "end": 403, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "CLIP-BERT", "sec_num": null }, { "text": "All models were trained for 16 hours using 32 T4 GPUs (16 GB each) with a total batch size of 16,384. During this time between 44k and 58k gradient steps were taken, and all validation losses had converged. 4 bert-base-uncased in Huggingface Transformers.", "cite_spans": [ { "start": 199, "end": 200, "text": "4", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "3.1" }, { "text": "We used the Adam optimizer with a constant learning rate of 5e-5, and applied mixed-precision training for increased performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "3.1" }, { "text": "The canonical way to query BERT-like models for knowledge in a zero-shot setting is to construct textual templates containing a [MASK] token to be predicted by the model in a cloze-style fashion (Petroni et al., 2019) . Similarly, we manually construct templates to query for the color of objects in Memory Colors. 
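As a concrete illustration of this querying setup, the sketch below scores a single template with a pretrained masked language model and restricts the prediction to the eleven valid color words. It is a minimal example assuming the Hugging Face Transformers API and an off-the-shelf bert-base-uncased checkpoint; the template wording is one possible phrasing, whereas the evaluation reported here averages over all templates.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

COLORS = ["black", "blue", "brown", "green", "grey", "orange",
          "pink", "purple", "red", "white", "yellow"]

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased").eval()

def query_color(descriptor: str, item: str) -> str:
    # Fill a cloze template with the model's mask token (template wording illustrative).
    text = f"Q: What is the color of {descriptor} {item}? A: {tokenizer.mask_token}."
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits                     # (1, seq_len, vocab_size)
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero()[0, 0]
    # Only the eleven valid color words are ranked, as in the evaluation described above.
    color_ids = tokenizer.convert_tokens_to_ids(COLORS)     # each color is a single wordpiece
    scores = logits[0, mask_pos, color_ids]
    return COLORS[int(scores.argmax())]

print(query_color("a", "lemon"))   # ideally prints "yellow"
```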
Since it has been shown that language models can be sensitive to the exact phrasing of such templates (Jiang et al., 2020), we construct a set of 13 distinct alternatives paraphrasing each other. The templates provided to the human annotators (described in Section 2.1) are included in these alternatives, while the model templates are complemented with versions that also contain model-specific tokens, such as [SEP] . All model templates are listed in the supplementary material. We report the mean top-1 accuracy and standard deviation of each model over all templates, and we only consider the eleven valid color words from the full vocabulary of model predictions.", "cite_spans": [ { "start": 195, "end": 217, "text": "(Petroni et al., 2019)", "ref_id": "BIBREF24" }, { "start": 727, "end": 732, "text": "[SEP]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Querying strategies", "sec_num": "3.2" }, { "text": "Since the goal of our work is to investigate how visual knowledge can be transferred into language models, we consider two mechanisms of knowledge transfer, denoted implicit and explicit transfer respectively. These mechanisms are investigated using two different querying strategies.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Querying strategies", "sec_num": "3.2" }, { "text": "By implicit transfer, we refer to the effect of multimodal training on the word representations of a language model. To measure the implicit transfer capabilities of a model, we use a multimodal signal at training time, but at test time we query the model as described above using the method proposed by Petroni et al. (2019) . We use the term implicit, as the visual knowledge (e.g. that blood typically has a red hue) needs to be memorized in the model weights as a part of MLM training, and later retrieved textually (the correct masked token should be \"red\").", "cite_spans": [ { "start": 304, "end": 325, "text": "Petroni et al. (2019)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Implicit transfer", "sec_num": "3.2.1" }, { "text": "As an alternative to implicit transfer, we propose a more explicit transfer strategy where we, as a preceding step, predict visual features of an imaginary image based on the text. 5 These predicted features are then appended to the transformer input, which thus contains both the textual and visual features, as seen during training. For this visual prediction, we use the textual encoder of CLIP as it is explicitly trained to align its representations with the visual counterpart.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Explicit transfer by visual imagination", "sec_num": "3.2.2" }, { "text": "To evaluate the quality of the predicted representations on the Memory Colors dataset, we also generate \"true\" visual representations with the visual encoder of CLIP using a ground truth image of each object, and evaluate each model using these as well. This setting more closely resembles visual question answering, and should be considered as an upper bound for what performance can be expected from the predicted features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Explicit transfer by visual imagination", "sec_num": "3.2.2" }, { "text": "We evaluate the transfer capabilities of our aforementioned models both to assess the functionality of our measurement method and to investigate the effect of implicit and explicit visual knowledge transfer. 
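Before turning to the numbers, the sketch below recaps how the explicit-transfer (visual imagination) query path from Section 3.2.2 can be wired. It assumes the publicly released CLIP package with the ViT-B/32 checkpoint purely for illustration; the clip_bert model and its visual_proj projection layer named in the comments are hypothetical stand-ins for the components described above, not the exact implementation.

```python
import torch
import clip  # https://github.com/openai/CLIP

device = "cpu"
clip_model, _ = clip.load("ViT-B/32", device=device)

query = "Q: What is the color of blood? A: [MASK]."   # template wording illustrative

with torch.no_grad():
    # 1) "Imagine" a visual feature for the raw text query with CLIP's text encoder,
    #    which is trained to share a feature space with CLIP's image encoder.
    imagined = clip_model.encode_text(clip.tokenize([query]).to(device))   # (1, 512)

# 2) Project the imagined feature and append it to the transformer input, just as a real
#    CLIP image feature is appended during CLIP-BERT training (hypothetical names):
#       visual_token = clip_bert.visual_proj(imagined.float())
# 3) Run the masked-LM head on the text plus the appended feature and pick the
#    highest-scoring color word at the [MASK] position, as in the implicit setup.
```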
The results on Memory Colors for the different experiments are displayed in Table 4 . We structure the analysis in this section around a set of interesting questions.", "cite_spans": [], "ref_spans": [ { "start": 284, "end": 291, "text": "Table 4", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Results and analysis", "sec_num": "4" }, { "text": "Are humans the top performers on the Memory Colors dataset? We can conclude that the human baseline results are better than those of any model, training procedure and querying strategy evaluated. This baseline is expected to be high because the task is inherently based on the notions of color according to the majority of the humans that were evaluated. Furthermore, as language models lack much knowledge compared to humans, we expect them 5 We refer to the raw text, including the [MASK] part. to perform worse than humans on this task. Not even the CLIP-BERT model provided with gold standard images and unfiltered textual information on colors matches the performance of the human annotators. There may be several reasons for this, for instance that the capacity of the multimodal models is not sufficient, or that humans are privy to additional information that helps them solve the Memory Colors task better.", "cite_spans": [ { "start": 442, "end": 443, "text": "5", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Results and analysis", "sec_num": "4" }, { "text": "Is the filtering of the training data necessary for our experimental setup of evaluating visual knowledge transfer? We see that the BERTbase model without any further training has a bad performance on the Memory Colors dataset, only slightly better than the majority baseline. On the other hand, the model shows significant performance improvement if it is trained on our unfiltered visual-and-language data. This suggests that the unfiltered training dataset contains much information about the objects' color textually. This is perhaps not surprising, as it is common that captions describe what colors the objects in the image have. However, for our purposes it is problematic as we wish to constrain this information to be learnt from the visual modality solely. Based on this, we conclude that the filtering of the training data is necessary in our experimental setup for evaluating visual knowledge transfer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and analysis", "sec_num": "4" }, { "text": "Does the filtering of the training data work as intended according to the Memory Colors dataset? If we filter the training data of the BERT-base model, the performance drops from 0.724 to 0.460, indicating that a large portion of the necessary information has been removed. However, the model performance does not drop to that of the original BERT-base model, so seemingly some color information still reaches the model through the text despite the data filtering. This leakage is undesirable from the perspective of evaluating visual knowledge transfer, since the model should not be able to perform well on Memory Colors without visual knowledge transfer capabilities.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and analysis", "sec_num": "4" }, { "text": "Does the implicit transfer strategy improve performance on Memory Colors? 
The CLIP-BERT model using implicit transfer displays significantly better performance than the corresponding BERT-base baseline in the filtered case, while the performance difference is negligible in the unfiltered case. This indicates that the implicit strategy does work to some extent, at least when corresponding textual information is lacking.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and analysis", "sec_num": "4" }, { "text": "The CLIP-BERT model using explicit transfer displays significantly better performance than the baseline and the implicit transfer model for both the unfiltered and filtered training methods. This suggests that the model has a strong visual knowledge transfer capability that enables it to improve the performance on Memory Colors, beyond the knowledge provided textually. However, we can observe that the rise from the baseline is larger for the filtered case than the unfiltered case, with increases of 27 and 15 percentage points respectively. This is expected, as the models need to rely more on their visual transfer capabilities to perform well in the filtered training case.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Does the explicit transfer strategy improve performance on Memory Colors?", "sec_num": null }, { "text": "Is the explicit strategy better than the implicit? The fact that the performance is much improved over the text-only baseline even in the unfiltered case indicates that the explicit strategy indeed extends the textual knowledge in a complementary manner. Since we do not see a similar performance gain for the implicit strategy, we have reason to believe that the explicit strategy is more effective.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Does the explicit transfer strategy improve performance on Memory Colors?", "sec_num": null }, { "text": "What is the quality of the predicted visual features compared to those of the gold standard visuals? Lastly, we have the results of the CLIP-BERT model that bases its predictions on the ground truth image of each object it is being queried on. As expected, this model acts as an upper bound for the model performance in both the unfiltered and filtered training cases, while the rise in performance is more significant in the filtered training case. This also agrees with the previously mentioned hypothesis on the performance difference between the filtered and unfiltered training case. It also implies that the predicted features of the CLIP-BERT-explicit model are not as good as if they were generated from the actual item pictures.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Does the explicit transfer strategy improve performance on Memory Colors?", "sec_num": null }, { "text": "Are the models sensitive to the phrasing of the query templates? The standard deviation figures presented in Table 4 show the variation in the accuracy scores for the different query templates. We can observe that all of the models evaluated display a standard deviation between 5 and 11%, and that none are lower than the standard deviation of the human baseline. 
Consequently, the models are sensitive to the phrasing of the query templates, as already mentioned in Section 3.2.", "cite_spans": [], "ref_spans": [ { "start": 109, "end": 116, "text": "Table 4", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Does the explicit transfer strategy improve performance on Memory Colors?", "sec_num": null }, { "text": "There are multiple perspectives on how our contributions relate to previous work, and we elaborate on this in the subsequent sections.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "5" }, { "text": "A body of previous work exists on the topic of visual grounding for improving performance on language tasks. For example, Chrupa\u0142a et al. (2015) ground the language representations by adding an auxiliary visual feature prediction loss during training, and evaluate the learned representations on word and sentence similarity tasks. Similarly, Kiela et al. (2018) align language and corresponding visual representations through a contrastive ranking loss, and evaluate the learned representations on a suite of common NLP classification tasks. Visual grounding has also been explored for machine translation; for instance, Elliott and K\u00e1d\u00e1r (2017) add an auxiliary visual prediction loss in addition to the regular seq2seq objective which is shown to improve performance. More recently, Sileo (2021) investigates the extent to which visual-linguistic pretraining of multimodal transformers can improve performance on a set of text-only tasks. While these approaches suggest that visual grounding can be helpful for language tasks, our work more explicitly targets the question of how the additional modality can complement the textual signal. We do this through a narrow focus on visual knowledge, in contrast to tasks requiring broader language understanding.", "cite_spans": [ { "start": 122, "end": 144, "text": "Chrupa\u0142a et al. (2015)", "ref_id": "BIBREF3" }, { "start": 343, "end": 362, "text": "Kiela et al. (2018)", "ref_id": "BIBREF13" }, { "start": 622, "end": 646, "text": "Elliott and K\u00e1d\u00e1r (2017)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Visual grounding for improved NLP", "sec_num": "5.1" }, { "text": "Our work is not the first to implement the generation of imaginary features based on text for a unimodal text task. There is previous work investigating the potential of leveraging multimodal information during training to enable a model to generate or retrieve additional multimodal information at inference time for a pure text input. Sileo (2021) uses the term associative grounding, which can be based on synthesis or retrieval. The main difference between our work and Sileo's is that he develops a model based on retrieval, while we use feature synthesis. Earlier work has used latent visual features to augment the input for improving word embeddings (Goyal et al., 2017) .", "cite_spans": [ { "start": 658, "end": 678, "text": "(Goyal et al., 2017)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Augmenting input using feature prediction", "sec_num": "5.2" }, { "text": "The idea has been explored for non-visual information as well. For example in open-domain question answering, retrieving relevant source documents as a preliminary step prior to knowledge extraction has proved highly effective (Guu et al., 2020) . Recently, Zellers et al. 
(2021) also proposed a similar explicit decoupling but for augmenting a language model with knowledge about physical dynamics. Also here, our work differs in that we augment the input with a visual signal and that we use it for a task focused on evaluating the capacity of visual knowledge transfer of a model.", "cite_spans": [ { "start": 227, "end": 245, "text": "(Guu et al., 2020)", "ref_id": "BIBREF9" }, { "start": 258, "end": 279, "text": "Zellers et al. (2021)", "ref_id": "BIBREF35" } ], "ref_spans": [], "eq_spans": [], "section": "Augmenting input using feature prediction", "sec_num": "5.2" }, { "text": "Much recent work on vision-and-text models focuses on developing models for multimodal tasks. Here, the model is queried with both textual and visual input on tasks such as VQA, GQA and NLVR2 (Goyal et al., 2017; Hudson and Manning, 2019; Suhr et al., 2019) . Recently developed models that can or could be found on the leaderboards of these tasks without using ensembling are e.g. ViLBERT, LXMERT, ImageBERT and OSCAR (Lu et al., 2019; Tan and Bansal, 2019; Qi et al., 2020; Li et al., 2020b) . These models are typically based on the BERT transformer model architecture (Devlin et al., 2019) and they often extract features from the visual input using a Faster R-CNN model (Ren et al., 2015) . Similarly to this previous work, we also base our model design on the BERT model architecture and extract features from the visual input using a pre-trained visual processing model. However, we differ from the previous work in that we utilize the CLIP model to extract visual features, which also enables us to predict visual features from a pure textual input using the shared feature space for textual and visual representations of CLIP. We also differ in that we aim to study the visual knowledge transfer capabilities of a model by evaluating it with a method that measures visual knowledge for a unimodal textual task.", "cite_spans": [ { "start": 192, "end": 212, "text": "(Goyal et al., 2017;", "ref_id": "BIBREF8" }, { "start": 213, "end": 238, "text": "Hudson and Manning, 2019;", "ref_id": "BIBREF10" }, { "start": 239, "end": 257, "text": "Suhr et al., 2019)", "ref_id": "BIBREF32" }, { "start": 419, "end": 436, "text": "(Lu et al., 2019;", "ref_id": "BIBREF19" }, { "start": 437, "end": 458, "text": "Tan and Bansal, 2019;", "ref_id": "BIBREF33" }, { "start": 459, "end": 475, "text": "Qi et al., 2020;", "ref_id": "BIBREF25" }, { "start": 476, "end": 493, "text": "Li et al., 2020b)", "ref_id": "BIBREF16" }, { "start": 572, "end": 593, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF4" }, { "start": 675, "end": 693, "text": "(Ren et al., 2015)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Visual-linguistic tasks", "sec_num": "5.3" }, { "text": "We have introduced a methodology for measuring visual knowledge transfer in multimodal language models. The centerpiece is a new benchmark Memory Colors designed to test how well such models incorporate knowledge about colors of common objects. We find that careful filtering of the underlying training data can provide an effective means to attribute the acquired knowledge to the individual source modalities.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and future work", "sec_num": "6" }, { "text": "Our results based on this methodology also showcase that vision-and-language pre-trained language models are able to textually express knowledge obtained from a separate (e.g. 
visual) modality.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and future work", "sec_num": "6" }, { "text": "We also found that there is some information leakage in our filtering method, as the performance of filtered BERT-base improves over the BERT-base baseline. This improvement in model performance cannot be explained based on the method and results of this work. Thus, it remains to be investigated what kind of information leakage takes place in spite of the filtering. Potential explanations are that the model learns the color of an item through second-order effects, e.g. by learning the color of a synonymous item that we have not filtered for, or that the original BERT-base model already contains textual knowledge relevant to Memory Colors but needs further training on a visual-language dataset to access that knowledge. Future work should ensure that the experimental setup for evaluating visual knowledge works as intended.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and future work", "sec_num": "6" }, { "text": "Additionally, it is worth investigating what other evaluation alternatives we have for measuring the cross-modal capabilities of NLP models. Can we create an evaluation methodology that is not task-based, can we find some other task to evaluate on, or can we improve on the statistical robustness of our evaluation methodology?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and future work", "sec_num": "6" }, { "text": "We observed that a model with implicit transfer performs better on our evaluation task than a unimodal language model, while a model with explicit transfer through prediction performs even better on the task. This implies that both implicit and explicit knowledge transfer are promising directions for efficient visual knowledge transfer to text, although explicit transfer may be the more promising of the two.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and future work", "sec_num": "6" }, { "text": "While we here only investigate knowledge transfer from a visual modality, it is likely that this model design can also be successfully applied to other modalities.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and future work", "sec_num": "6" }, { "text": "The experimental setup proposed in this work helped us discover and validate the potential of explicit transfer. We can conclude that more work on understanding how multimodal training of language models affects their predictions is an interesting direction towards more robust and trustworthy NLP systems. Table: Total number of image and caption samples, in each respective source dataset. In Visual Genome QA, the \"caption\" is the concatenation of the question and answer strings. Since some image links in SBU Captions and Conceptual Captions have become broken, the total number of samples doesn't match what was originally reported. There are more captions than images in the dataset since several different captions may refer to the same image. 
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and future work", "sec_num": "6" }, { "text": "www.wikidata.org 2 www.conceptnet.io 3 All the query templates and annotator instructions are provided in the supplementary material.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work was partially supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation.Additionally, the computations were enabled by resources provided by the Swedish National Infrastructure for Computing (SNIC), partially funded by the Swedish Research Council through grant agreement no. 2018-05973.Lastly, we would like to thank the 11 individuals who helped us annotate our Memory Colors dataset. Their work was imperative for the creation of this article. We also thank the anonymous reviewers for their valuable feedback and knowledge sharing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null }, { "text": "The supplementary material of this work includes the instructions provided to the human annotators in Figure 4 , the Memory Colors data in Table 5 , the color label distribution of the data in Figure 5 , the different query templates for human annotators and models in Table 6 and the full statistics for the Vision-and-Language dataset in Table 7 .Annotator Instructions Thank you for helping us out by solving this task! You will be presented with 121 fill-in-the-gap color questions that are to be answered with one answer, where you can pick between the following answer alternatives: yellow blue green white red orange black pink brown grey purple You should fill in your answer for the gap [fill-in-this-word] under the column Fill-in-word. Answer with the alternative that first comes to your mind. The cell you are to fill in will turn green after you have specified one of the possible answers in it. Make sure that all cells in the column are green and not red before you submit your answers. Do not leave any cells empty, just guess on the alterative you find most likely even if you don't know the answer.It is important that you solve this task by yourself, such that you do not discuss the questions or the answers with anyone else before you have submitted your answers. Also, you should not Google or look up anything while answering the questions.Thank you again! 
Figure 4 : The instructions provided to the human annotators before they annotated the predecessor to the Memory Colors dataset.", "cite_spans": [], "ref_spans": [ { "start": 102, "end": 110, "text": "Figure 4", "ref_id": null }, { "start": 139, "end": 146, "text": "Table 5", "ref_id": null }, { "start": 193, "end": 201, "text": "Figure 5", "ref_id": null }, { "start": 269, "end": 276, "text": "Table 6", "ref_id": null }, { "start": 340, "end": 347, "text": "Table 7", "ref_id": null }, { "start": 1381, "end": 1389, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "A Supplementary material", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Climbing towards NLU: On meaning, form, and understanding in the age of data", "authors": [ { "first": "Emily", "middle": [ "M" ], "last": "Bender", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Koller", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "5185--5198", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.463" ] }, "num": null, "urls": [], "raw_text": "Emily M. Bender and Alexander Koller. 2020. Climb- ing towards NLU: On meaning, form, and under- standing in the age of data. In Proceedings of the 58th Annual Meeting of the Association for Compu- tational Linguistics, pages 5185-5198, Online. As- sociation for Computational Linguistics.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Experience grounds language", "authors": [ { "first": "Yonatan", "middle": [], "last": "Bisk", "suffix": "" }, { "first": "Ari", "middle": [], "last": "Holtzman", "suffix": "" }, { "first": "Jesse", "middle": [], "last": "Thomason", "suffix": "" }, { "first": "Jacob", "middle": [], "last": "Andreas", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "Joyce", "middle": [], "last": "Chai", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" }, { "first": "Angeliki", "middle": [], "last": "Lazaridou", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "May", "suffix": "" }, { "first": "Aleksandr", "middle": [], "last": "Nisnevich", "suffix": "" }, { "first": "Nicolas", "middle": [], "last": "Pinto", "suffix": "" }, { "first": "Joseph", "middle": [], "last": "Turian", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "8718--8735", "other_ids": { "DOI": [ "10.18653/v1/2020.emnlp-main.703" ] }, "num": null, "urls": [], "raw_text": "Yonatan Bisk, Ari Holtzman, Jesse Thomason, Jacob Andreas, Yoshua Bengio, Joyce Chai, Mirella Lap- ata, Angeliki Lazaridou, Jonathan May, Aleksandr Nisnevich, Nicolas Pinto, and Joseph Turian. 2020. Experience grounds language. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 8718-8735, Online. Association for Computational Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Ilya Sutskever, and Dario Amodei. 2020. 
Language models are few-shot learners", "authors": [ { "first": "Tom", "middle": [], "last": "Brown", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Mann", "suffix": "" }, { "first": "Nick", "middle": [], "last": "Ryder", "suffix": "" }, { "first": "Melanie", "middle": [], "last": "Subbiah", "suffix": "" }, { "first": "Jared", "middle": [ "D" ], "last": "Kaplan", "suffix": "" }, { "first": "Prafulla", "middle": [], "last": "Dhariwal", "suffix": "" }, { "first": "Arvind", "middle": [], "last": "Neelakantan", "suffix": "" }, { "first": "Pranav", "middle": [], "last": "Shyam", "suffix": "" }, { "first": "Girish", "middle": [], "last": "Sastry", "suffix": "" }, { "first": "Amanda", "middle": [], "last": "Askell", "suffix": "" }, { "first": "Sandhini", "middle": [], "last": "Agarwal", "suffix": "" }, { "first": "Ariel", "middle": [], "last": "Herbert-Voss", "suffix": "" }, { "first": "Gretchen", "middle": [], "last": "Krueger", "suffix": "" }, { "first": "Tom", "middle": [], "last": "Henighan", "suffix": "" }, { "first": "Rewon", "middle": [], "last": "Child", "suffix": "" }, { "first": "Aditya", "middle": [], "last": "Ramesh", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Ziegler", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Clemens", "middle": [], "last": "Winter", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Hesse", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Sigler", "suffix": "" }, { "first": "Mateusz", "middle": [], "last": "Litwin", "suffix": "" } ], "year": null, "venue": "Advances in Neural Information Processing Systems", "volume": "33", "issue": "", "pages": "1877--1901", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert- Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems, volume 33, pages 1877-1901. Curran Associates, Inc.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Learning language through pictures", "authors": [ { "first": "Grzegorz", "middle": [], "last": "Chrupa\u0142a", "suffix": "" }, { "first": "\u00c1kos", "middle": [], "last": "K\u00e1d\u00e1r", "suffix": "" }, { "first": "Afra", "middle": [], "last": "Alishahi", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", "volume": "2", "issue": "", "pages": "112--118", "other_ids": { "DOI": [ "10.3115/v1/P15-2019" ] }, "num": null, "urls": [], "raw_text": "Grzegorz Chrupa\u0142a,\u00c1kos K\u00e1d\u00e1r, and Afra Alishahi. 2015. Learning language through pictures. In Pro- ceedings of the 53rd Annual Meeting of the Associ- ation for Computational Linguistics and the 7th In- ternational Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 112- 118, Beijing, China. 
Association for Computational Linguistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": { "DOI": [ "10.18653/v1/N19-1423" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Imagination improves multimodal translation", "authors": [ { "first": "Desmond", "middle": [], "last": "Elliott", "suffix": "" }, { "first": "", "middle": [], "last": "K\u00e1d\u00e1r", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the Eighth International Joint Conference on Natural Language Processing", "volume": "1", "issue": "", "pages": "130--141", "other_ids": {}, "num": null, "urls": [], "raw_text": "Desmond Elliott and\u00c1kos K\u00e1d\u00e1r. 2017. Imagination improves multimodal translation. In Proceedings of the Eighth International Joint Conference on Natu- ral Language Processing (Volume 1: Long Papers), pages 130-141, Taipei, Taiwan. Asian Federation of Natural Language Processing.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Measuring nominal scale agreement among many raters", "authors": [ { "first": "L", "middle": [], "last": "Joseph", "suffix": "" }, { "first": "", "middle": [], "last": "Fleiss", "suffix": "" } ], "year": 1971, "venue": "Psychological bulletin", "volume": "76", "issue": "5", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joseph L Fleiss. 1971. Measuring nominal scale agree- ment among many raters. Psychological bulletin, 76(5):378.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Reporting bias and knowledge acquisition", "authors": [ { "first": "Jonathan", "middle": [], "last": "Gordon", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Van Durme", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 2013 workshop on Automated knowledge base construction", "volume": "", "issue": "", "pages": "25--30", "other_ids": { "DOI": [ "10.1145/2509558.2509563" ] }, "num": null, "urls": [], "raw_text": "Jonathan Gordon and Benjamin Van Durme. 2013. Re- porting bias and knowledge acquisition. 
In Proceed- ings of the 2013 workshop on Automated knowledge base construction, pages 25-30.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Making the V in VQA matter: Elevating the role of image understanding in Visual Question Answering", "authors": [ { "first": "Yash", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Tejas", "middle": [], "last": "Khot", "suffix": "" }, { "first": "Douglas", "middle": [], "last": "Summers-Stay", "suffix": "" }, { "first": "Dhruv", "middle": [], "last": "Batra", "suffix": "" }, { "first": "Devi", "middle": [], "last": "Parikh", "suffix": "" } ], "year": 2017, "venue": "Conference on Computer Vision and Pattern Recognition (CVPR)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. 2017. Making the V in VQA matter: Elevating the role of image under- standing in Visual Question Answering. In Confer- ence on Computer Vision and Pattern Recognition (CVPR).", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "REALM: retrievalaugmented language model pre-training", "authors": [ { "first": "Kelvin", "middle": [], "last": "Guu", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Zora", "middle": [], "last": "Tung", "suffix": "" }, { "first": "Panupong", "middle": [], "last": "Pasupat", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasu- pat, and Ming-Wei Chang. 2020. REALM: retrieval- augmented language model pre-training. CoRR, abs/2002.08909.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "GQA: A new dataset for real-world visual reasoning and compositional question answering", "authors": [ { "first": "A", "middle": [], "last": "Drew", "suffix": "" }, { "first": "Christopher D", "middle": [], "last": "Hudson", "suffix": "" }, { "first": "", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2019, "venue": "Conference on Computer Vision and Pattern Recognition (CVPR)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Drew A Hudson and Christopher D Manning. 2019. GQA: A new dataset for real-world visual reason- ing and compositional question answering. Confer- ence on Computer Vision and Pattern Recognition (CVPR).", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "How can we know what language models know?", "authors": [ { "first": "Zhengbao", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Frank", "middle": [ "F" ], "last": "Xu", "suffix": "" } ], "year": 2020, "venue": "Transactions of the Association for Computational Linguistics", "volume": "8", "issue": "", "pages": "423--438", "other_ids": { "DOI": [ "10.1162/tacl_a_00324" ] }, "num": null, "urls": [], "raw_text": "Zhengbao Jiang, Frank F. Xu, Jun Araki, and Graham Neubig. 2020. How can we know what language models know? Transactions of the Association for Computational Linguistics, 8:423-438.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "The effect of the question on survey responses: A review", "authors": [ { "first": "Graham", "middle": [], "last": "Kalton", "suffix": "" }, { "first": "Howard", "middle": [], "last": "Schuman", "suffix": "" } ], "year": 1982, "venue": "J. R. Statist. 
Soc", "volume": "145", "issue": "", "pages": "42--73", "other_ids": {}, "num": null, "urls": [], "raw_text": "Graham Kalton and Howard Schuman. 1982. The ef- fect of the question on survey responses: A review. J. R. Statist. Soc., 145:42-73.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Learning visually grounded sentence representations", "authors": [ { "first": "Douwe", "middle": [], "last": "Kiela", "suffix": "" }, { "first": "Alexis", "middle": [], "last": "Conneau", "suffix": "" }, { "first": "Allan", "middle": [], "last": "Jabri", "suffix": "" }, { "first": "Maximilian", "middle": [], "last": "Nickel", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "408--418", "other_ids": { "DOI": [ "10.18653/v1/N18-1038" ] }, "num": null, "urls": [], "raw_text": "Douwe Kiela, Alexis Conneau, Allan Jabri, and Max- imilian Nickel. 2018. Learning visually grounded sentence representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Hu- man Language Technologies, Volume 1 (Long Pa- pers), pages 408-418, New Orleans, Louisiana. As- sociation for Computational Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Visual genome: Connecting language and vision using crowdsourced dense image annotations", "authors": [ { "first": "Ranjay", "middle": [], "last": "Krishna", "suffix": "" }, { "first": "Yuke", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Oliver", "middle": [], "last": "Groth", "suffix": "" }, { "first": "Justin", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "Kenji", "middle": [], "last": "Hata", "suffix": "" }, { "first": "Joshua", "middle": [], "last": "Kravitz", "suffix": "" }, { "first": "Stephanie", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Yannis", "middle": [], "last": "Kalantidis", "suffix": "" }, { "first": "Li-Jia", "middle": [], "last": "Li", "suffix": "" }, { "first": "David", "middle": [ "A" ], "last": "Shamma", "suffix": "" }, { "first": "Michael", "middle": [ "S" ], "last": "Bernstein", "suffix": "" }, { "first": "Li", "middle": [], "last": "Fei-Fei", "suffix": "" } ], "year": 2017, "venue": "Int. J. Comput. Vision", "volume": "123", "issue": "1", "pages": "32--73", "other_ids": { "DOI": [ "10.1007/s11263-016-0981-7" ] }, "num": null, "urls": [], "raw_text": "Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin John- son, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A. Shamma, Michael S. Bernstein, and Li Fei-Fei. 2017. Vi- sual genome: Connecting language and vision using crowdsourced dense image annotations. Int. J. Com- put. Vision, 123(1):32-73.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Don't say that! 
making inconsistent dialogue unlikely with unlikelihood training", "authors": [ { "first": "Margaret", "middle": [], "last": "Li", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Roller", "suffix": "" }, { "first": "Ilia", "middle": [], "last": "Kulikov", "suffix": "" }, { "first": "Sean", "middle": [], "last": "Welleck", "suffix": "" }, { "first": "Y-Lan", "middle": [], "last": "Boureau", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "4715--4728", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.428" ] }, "num": null, "urls": [], "raw_text": "Margaret Li, Stephen Roller, Ilia Kulikov, Sean Welleck, Y-Lan Boureau, Kyunghyun Cho, and Ja- son Weston. 2020a. Don't say that! making in- consistent dialogue unlikely with unlikelihood train- ing. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4715-4728, Online. Association for Compu- tational Linguistics.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Oscar: Objectsemantics aligned pre-training for vision-language tasks", "authors": [ { "first": "Xiujun", "middle": [], "last": "Li", "suffix": "" }, { "first": "Xi", "middle": [], "last": "Yin", "suffix": "" }, { "first": "Chunyuan", "middle": [], "last": "Li", "suffix": "" }, { "first": "Pengchuan", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Xiaowei", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Lei", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Lijuan", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Houdong", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Li", "middle": [], "last": "Dong", "suffix": "" }, { "first": "Furu", "middle": [], "last": "Wei", "suffix": "" } ], "year": null, "venue": "European Conference on Computer Vision", "volume": "", "issue": "", "pages": "121--137", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiujun Li, Xi Yin, Chunyuan Li, Pengchuan Zhang, Xi- aowei Hu, Lei Zhang, Lijuan Wang, Houdong Hu, Li Dong, Furu Wei, et al. 2020b. Oscar: Object- semantics aligned pre-training for vision-language tasks. In European Conference on Computer Vision, pages 121-137. Springer.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Microsoft COCO: Common objects in context", "authors": [ { "first": "Tsung-Yi", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Maire", "suffix": "" }, { "first": "Serge", "middle": [], "last": "Belongie", "suffix": "" }, { "first": "James", "middle": [], "last": "Hays", "suffix": "" }, { "first": "Pietro", "middle": [], "last": "Perona", "suffix": "" }, { "first": "Deva", "middle": [], "last": "Ramanan", "suffix": "" }, { "first": "Piotr", "middle": [], "last": "Doll\u00e1r", "suffix": "" }, { "first": "C", "middle": [ "Lawrence" ], "last": "Zitnick", "suffix": "" } ], "year": 2014, "venue": "Computer Vision -ECCV 2014", "volume": "", "issue": "", "pages": "740--755", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Doll\u00e1r, and C. Lawrence Zitnick. 2014. Microsoft COCO: Common objects in context. In Computer Vision - ECCV 2014, pages 740-755, Cham. 
Springer Inter- national Publishing.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Barack's wife Hillary: Using knowledge graphs for fact-aware language modeling", "authors": [ { "first": "Robert", "middle": [], "last": "Logan", "suffix": "" }, { "first": "Nelson", "middle": [ "F" ], "last": "Liu", "suffix": "" }, { "first": "Matthew", "middle": [ "E" ], "last": "Peters", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Gardner", "suffix": "" }, { "first": "Sameer", "middle": [], "last": "Singh", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "5962--5971", "other_ids": { "DOI": [ "10.18653/v1/P19-1598" ] }, "num": null, "urls": [], "raw_text": "Robert Logan, Nelson F. Liu, Matthew E. Peters, Matt Gardner, and Sameer Singh. 2019. Barack's wife Hillary: Using knowledge graphs for fact-aware lan- guage modeling. In Proceedings of the 57th Annual Meeting of the Association for Computational Lin- guistics, pages 5962-5971, Florence, Italy. Associa- tion for Computational Linguistics.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "ViLBERT: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks", "authors": [ { "first": "Jiasen", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Dhruv", "middle": [], "last": "Batra", "suffix": "" }, { "first": "Devi", "middle": [], "last": "Parikh", "suffix": "" }, { "first": "Stefan", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2019, "venue": "Advances in Neural Information Processing Systems", "volume": "32", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee. 2019. ViLBERT: Pretraining task-agnostic visi- olinguistic representations for vision-and-language tasks. In Advances in Neural Information Process- ing Systems, volume 32. Curran Associates, Inc.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "On faithfulness and factuality in abstractive summarization", "authors": [ { "first": "Joshua", "middle": [], "last": "Maynez", "suffix": "" }, { "first": "Shashi", "middle": [], "last": "Narayan", "suffix": "" }, { "first": "Bernd", "middle": [], "last": "Bohnet", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Mcdonald", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1906--1919", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.173" ] }, "num": null, "urls": [], "raw_text": "Joshua Maynez, Shashi Narayan, Bernd Bohnet, and Ryan McDonald. 2020. On faithfulness and factu- ality in abstractive summarization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1906-1919, On- line. Association for Computational Linguistics.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Comparison of successive with simultaneous color matching", "authors": [ { "first": "", "middle": [], "last": "Sm Newhall", "suffix": "" }, { "first": "Joyce R", "middle": [], "last": "Burnham", "suffix": "" }, { "first": "", "middle": [], "last": "Clark", "suffix": "" } ], "year": 1957, "venue": "JOSA", "volume": "47", "issue": "1", "pages": "43--56", "other_ids": {}, "num": null, "urls": [], "raw_text": "SM Newhall, RW Burnham, and Joyce R Clark. 1957. Comparison of successive with simultaneous color matching. 
JOSA, 47(1):43-56.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Im2Text: Describing images using 1 million captioned photographs", "authors": [ { "first": "Vicente", "middle": [], "last": "Ordonez", "suffix": "" }, { "first": "Girish", "middle": [], "last": "Kulkarni", "suffix": "" }, { "first": "Tamara", "middle": [], "last": "Berg", "suffix": "" } ], "year": 2011, "venue": "Advances in Neural Information Processing Systems", "volume": "24", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vicente Ordonez, Girish Kulkarni, and Tamara Berg. 2011. Im2Text: Describing images using 1 million captioned photographs. In Advances in Neural Infor- mation Processing Systems, volume 24. Curran As- sociates, Inc.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Color Research & Application: Endorsed by Inter-Society Color Council, The Colour Group (Great Britain), Canadian Society for Color, Color Science Association of Japan, Dutch Society for the Study of Color, The Swedish Colour Centre Foundation", "authors": [ { "first": "Joaqu\u00edn", "middle": [], "last": "P\u00e9rez-Carpinell", "suffix": "" }, { "first": "Rosa", "middle": [], "last": "Md De Fez", "suffix": "" }, { "first": "Juan", "middle": [ "Carlos" ], "last": "Baldov\u00ed", "suffix": "" }, { "first": "", "middle": [], "last": "Soriano", "suffix": "" } ], "year": 1998, "venue": "", "volume": "23", "issue": "", "pages": "416--427", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joaqu\u00edn P\u00e9rez-Carpinell, MD De Fez, Rosa Baldov\u00ed, and Juan Carlos Soriano. 1998. Familiar objects and memory color. Color Research & Application: En- dorsed by Inter-Society Color Council, The Colour Group (Great Britain), Canadian Society for Color, Color Science Association of Japan, Dutch Society for the Study of Color, The Swedish Colour Cen- tre Foundation, Colour Society of Australia, Centre Fran\u00e7ais de la Couleur, 23(6):416-427.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Association for Computational Linguistics", "authors": [ { "first": "Fabio", "middle": [], "last": "Petroni", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Rockt\u00e4schel", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Riedel", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Anton", "middle": [], "last": "Bakhtin", "suffix": "" }, { "first": "Yuxiang", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Miller", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "2463--2473", "other_ids": { "DOI": [ "10.18653/v1/D19-1250" ] }, "num": null, "urls": [], "raw_text": "Fabio Petroni, Tim Rockt\u00e4schel, Sebastian Riedel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander Miller. 2019. Language models as knowl- edge bases? In Proceedings of the 2019 Confer- ence on Empirical Methods in Natural Language Processing and the 9th International Joint Confer- ence on Natural Language Processing (EMNLP- IJCNLP), pages 2463-2473, Hong Kong, China. 
As- sociation for Computational Linguistics.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "ImageBERT: Crossmodal pre-training with large-scale weak-supervised image-text data", "authors": [ { "first": "Di", "middle": [], "last": "Qi", "suffix": "" }, { "first": "Lin", "middle": [], "last": "Su", "suffix": "" }, { "first": "Jia", "middle": [], "last": "Song", "suffix": "" }, { "first": "Edward", "middle": [], "last": "Cui", "suffix": "" }, { "first": "Taroon", "middle": [], "last": "Bharti", "suffix": "" }, { "first": "Arun", "middle": [], "last": "Sacheti", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2001.07966" ] }, "num": null, "urls": [], "raw_text": "Di Qi, Lin Su, Jia Song, Edward Cui, Taroon Bharti, and Arun Sacheti. 2020. ImageBERT: Cross- modal pre-training with large-scale weak-supervised image-text data. arXiv preprint arXiv:2001.07966.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Gretchen Krueger, and Ilya Sutskever. 2021. Learning transferable visual models from natural language supervision", "authors": [ { "first": "Alec", "middle": [], "last": "Radford", "suffix": "" }, { "first": "Jong", "middle": [ "Wook" ], "last": "Kim", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Hallacy", "suffix": "" }, { "first": "Aditya", "middle": [], "last": "Ramesh", "suffix": "" }, { "first": "Gabriel", "middle": [], "last": "Goh", "suffix": "" }, { "first": "Sandhini", "middle": [], "last": "Agarwal", "suffix": "" }, { "first": "Girish", "middle": [], "last": "Sastry", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. 2021. Learn- ing transferable visual models from natural language supervision.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Language models are unsupervised multitask learners", "authors": [ { "first": "Alec", "middle": [], "last": "Radford", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Rewon", "middle": [], "last": "Child", "suffix": "" }, { "first": "David", "middle": [], "last": "Luan", "suffix": "" }, { "first": "Dario", "middle": [], "last": "Amodei", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" } ], "year": 2019, "venue": "OpenAI blog", "volume": "1", "issue": "8", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Lan- guage models are unsupervised multitask learners. OpenAI blog, 1(8):9.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Faster R-CNN: Towards real-time object detection with region proposal networks", "authors": [ { "first": "Kaiming", "middle": [], "last": "Shaoqing Ren", "suffix": "" }, { "first": "Ross", "middle": [], "last": "He", "suffix": "" }, { "first": "Jian", "middle": [], "last": "Girshick", "suffix": "" }, { "first": "", "middle": [], "last": "Sun", "suffix": "" } ], "year": 2015, "venue": "Advances in Neural Information Processing Systems", "volume": "28", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. 2015. 
Faster R-CNN: Towards real-time ob- ject detection with region proposal networks. In Ad- vances in Neural Information Processing Systems, volume 28. Curran Associates, Inc.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Recipes for building an open-domain chatbot", "authors": [ { "first": "Stephen", "middle": [], "last": "Roller", "suffix": "" }, { "first": "Emily", "middle": [], "last": "Dinan", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Da", "middle": [], "last": "Ju", "suffix": "" }, { "first": "Mary", "middle": [], "last": "Williamson", "suffix": "" }, { "first": "Yinhan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Jing", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Eric", "middle": [ "Michael" ], "last": "Smith", "suffix": "" }, { "first": "Y-Lan", "middle": [], "last": "Boureau", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume", "volume": "", "issue": "", "pages": "300--325", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Eric Michael Smith, Y-Lan Boureau, and Jason We- ston. 2021. Recipes for building an open-domain chatbot. In Proceedings of the 16th Conference of the European Chapter of the Association for Compu- tational Linguistics: Main Volume, pages 300-325, Online. Association for Computational Linguistics.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Conceptual captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning", "authors": [ { "first": "Piyush", "middle": [], "last": "Sharma", "suffix": "" }, { "first": "Nan", "middle": [], "last": "Ding", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Goodman", "suffix": "" }, { "first": "Radu", "middle": [], "last": "Soricut", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "2556--2565", "other_ids": { "DOI": [ "10.18653/v1/P18-1238" ] }, "num": null, "urls": [], "raw_text": "Piyush Sharma, Nan Ding, Sebastian Goodman, and Radu Soricut. 2018. Conceptual captions: A cleaned, hypernymed, image alt-text dataset for au- tomatic image captioning. In Proceedings of the 56th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 2556-2565, Melbourne, Australia. Association for Computational Linguistics.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Visual grounding strategies for text-only natural language processing", "authors": [ { "first": "Damien", "middle": [], "last": "Sileo", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2103.13942" ] }, "num": null, "urls": [], "raw_text": "Damien Sileo. 2021. Visual grounding strategies for text-only natural language processing. 
arXiv preprint arXiv:2103.13942.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "A corpus for reasoning about natural language grounded in photographs", "authors": [ { "first": "Alane", "middle": [], "last": "Suhr", "suffix": "" }, { "first": "Stephanie", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Ally", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Iris", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Huajun", "middle": [], "last": "Bai", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Artzi", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "6418--6428", "other_ids": { "DOI": [ "10.18653/v1/P19-1644" ] }, "num": null, "urls": [], "raw_text": "Alane Suhr, Stephanie Zhou, Ally Zhang, Iris Zhang, Huajun Bai, and Yoav Artzi. 2019. A corpus for reasoning about natural language grounded in pho- tographs. In Proceedings of the 57th Annual Meet- ing of the Association for Computational Linguistics, pages 6418-6428, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "LXMERT: Learning cross-modality encoder representations from transformers", "authors": [ { "first": "Hao", "middle": [], "last": "Tan", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Bansal", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "5100--5111", "other_ids": { "DOI": [ "10.18653/v1/D19-1514" ] }, "num": null, "urls": [], "raw_text": "Hao Tan and Mohit Bansal. 2019. LXMERT: Learning cross-modality encoder representations from trans- formers. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 5100-5111, Hong Kong, China. Association for Computational Linguistics.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Attention is all you need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "\u0141ukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Advances in Neural Information Processing Systems", "volume": "30", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems, volume 30. 
Curran Associates, Inc.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "PIGLeT: Language grounding through neuro-symbolic interaction in a 3D world", "authors": [ { "first": "Rowan", "middle": [], "last": "Zellers", "suffix": "" }, { "first": "Ari", "middle": [], "last": "Holtzman", "suffix": "" }, { "first": "Matthew", "middle": [], "last": "Peters", "suffix": "" }, { "first": "Roozbeh", "middle": [], "last": "Mottaghi", "suffix": "" }, { "first": "Aniruddha", "middle": [], "last": "Kembhavi", "suffix": "" }, { "first": "Ali", "middle": [], "last": "Farhadi", "suffix": "" }, { "first": "Yejin", "middle": [], "last": "Choi", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rowan Zellers, Ari Holtzman, Matthew Peters, Roozbeh Mottaghi, Aniruddha Kembhavi, Ali Farhadi, and Yejin Choi. 2021. PIGLeT: Language grounding through neuro-symbolic interaction in a 3D world. In Proceedings of the 59th Annual Meet- ing of the Association for Computational Linguis- tics.", "links": null } }, "ref_entries": { "FIGREF1": { "uris": null, "text": "One entry in the Memory Colors dataset.", "type_str": "figure", "num": null }, "FIGREF2": { "uris": null, "text": "The color distribution of the 109 items in the Memory Colors dataset. The most frequent color in the dataset is white with a count of 25. The colors with the lowest frequency are pink and purple, which only occur for 3 items each.", "type_str": "figure", "num": null }, "TABREF0": { "content": "
", "num": null, "text": "", "type_str": "table", "html": null }, "TABREF1": { "content": "
An example of a cloze question provided to a human annotator, given by the query template Q: What is the color of [DESCRIPTOR] [ITEM]? A: [MASK].
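For illustration only (this is not the authors' released code), a query built from this template can be posed to an off-the-shelf masked language model. The build_query helper, the choice of bert-base-uncased, and the use of the fill-mask pipeline's targets argument are assumptions made for this sketch; the candidate colors are the eleven color labels of the Memory Colors dataset.

```python
# Hedged sketch: pose one memory-color cloze query to a masked LM and
# restrict the predictions to the color vocabulary of the dataset.
from transformers import pipeline

COLORS = ["white", "blue", "green", "yellow", "red", "orange",
          "black", "brown", "grey", "pink", "purple"]

def build_query(descriptor: str, item: str) -> str:
    # Hypothetical helper: fills the query template
    # "Q: What is the color of [DESCRIPTOR] [ITEM]? A: [MASK]."
    prefix = f"{descriptor} " if descriptor else ""
    return f"Q: What is the color of {prefix}{item}? A: [MASK]."

unmasker = pipeline("fill-mask", model="bert-base-uncased")

query = build_query("a", "lemon")
# Score only the 11 memory colors and report the best one.
predictions = unmasker(query, targets=COLORS)
print(query)
print(predictions[0]["token_str"], round(predictions[0]["score"], 3))
```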
", "num": null, "text": "", "type_str": "table", "html": null }, "TABREF2": { "content": "
Split                 Total captions   Total images
Validation            58,937           38,234
Training-unfiltered   4,720,971        2,911,438
Training-filtered     4,429,671        2,749,612
", "num": null, "text": "Total number of image and caption samples for the full and filtered versions of the training dataset.", "type_str": "table", "html": null }, "TABREF5": { "content": "
Training     Model                  Accuracy
-            Random baseline        0.091 \u00b1 0.026
-            Majority baseline      0.229 \u00b1 0.000
-            Human baseline         0.937 \u00b1 0.051
None         BERT-base              0.252 \u00b1 0.102
Unfiltered   BERT-base              0.724 \u00b1 0.112
Unfiltered   CLIP-BERT (implicit)   0.744 \u00b1 0.080
Unfiltered   CLIP-BERT (explicit)   0.870 \u00b1 0.086
Unfiltered   CLIP-BERT (images)     0.876 \u00b1 0.063
Filtered     BERT-base              0.460 \u00b1 0.083
Filtered     CLIP-BERT (implicit)   0.541 \u00b1 0.060
Filtered     CLIP-BERT (explicit)   0.733 \u00b1 0.098
Filtered     CLIP-BERT (images)     0.785 \u00b1 0.055
", "num": null, "text": "The mean and standard deviation of the accuracy scores of the models on the Memory Colors dataset for different query templates. CLIP-BERTimages is the only model that is given the pictures from the dataset during evaluation.", "type_str": "table", "html": null }, "TABREF6": { "content": "
Index | Descriptor | Item | Color || Index | Descriptor | Item | Color
1 | a | sunflower | yellow || 56 |  | plants | green
2 | the | ocean | blue || 57 | a | suit | black
3 |  | grass | green || 58 |  | cocoa | brown
4 |  | butter | yellow || 59 |  | chocolate | brown
5 |  | bone | white || 60 |  | concrete | grey
6 |  | ivory | white || 61 |  | aluminium foil | grey
7 | the | sky | blue || 62 | a | pea | green
8 | the inside of a | pineapple | yellow || 63 | a | rainforest | green
9 | a | tomato | red || 64 |  | rice | white
10 | a | strawberry | red || 65 |  | pasta | yellow
11 | a | rose | red || 66 |  | spinach | green
12 |  | blood | red || 67 |  | broccoli | green
13 | a | heart | red || 68 | a | lime | green
14 | a | pumpkin | orange || 69 |  | guacamole | green
15 | a | carrot | orange || 70 |  | salmon meat | pink
16 |  | cheese | yellow || 71 |  | yoghurt | white
17 | the | sun | yellow || 72 |  | cottage cheese | white
18 | a | lemon | yellow || 73 |  | feta cheese | white
19 |  | corn | yellow || 74 |  | matcha | green
20 | a | frog | green || 75 |  | seaweed | green
21 | a | leaf | green || 76 |  | garlic | white
22 | a | blueberry | blue || 77 | an | aubergine | purple
23 |  | jeans | blue || 78 |  | ivy | green
24 | the animal | bat | black || 79 | a | ruby | red
25 | a | crow | black || 80 |  | flour | white
26 | a | raven | black || 81 |  | baking soda | white
27 |  | coal | black || 82 | a | snowman | white
28 |  | paper | white || 83 |  | gravel | grey
29 |  | sugar | white || 84 | an | egg yolk | yellow
30 |  | milk | white || 85 | an | egg | white
31 |  | snow | white || 86 |  | moss | green
32 |  | sheep | white || 87 |  | cinnamon | brown
33 | a | flamingo | pink || 88 | the outside of a | coconut | brown
34 |  | cherry blossoms | pink || 89 |  | scrambled eggs | yellow
35 |  | soil | brown || 90 | a | cucumber | green
36 |  | stone | grey || 91 | a | fire extinguisher | red
37 | an | elephant | grey || 92 | a | duckling | yellow
38 | the animal | seal | grey || 93 | a | panther | black
39 | a | plum | purple || 94 | a | pine tree | green
40 |  | lavender | purple || 95 | a | tooth | white
41 | a | polar bear | white || 96 |  | feces | brown
42 | the inside of a | watermelon | red || 97 |  | urine | yellow
43 |  | honey | yellow || 98 | an | iceberg | white
44 | a | banana | yellow || 99 | a | school bus | yellow
45 | an | orange | orange || 100 | a | chick | yellow
46 | a | pear | green || 101 |  | sails | white
47 | the fruit | mandarin | orange || 102 |  | wood | brown
48 | a | cherry | red || 103 | a | lady bug | red
49 |  | salt | white || 104 | a | daffodil | yellow
50 | a | swan | white || 105 | a | dandelion | yellow
51 | a | snow leopard | white || 106 |  | cardboard | brown
52 | an | arctic fox | white || 107 | a | blackboard | black
53 |  | steel | grey || 108 |  | basil | green
54 |  | clouds | white || 109 |  | parsley | green
55 |  | rain clouds | grey
", "num": null, "text": "The 109 entries in the Memory Colors dataset.", "type_str": "table", "html": null }, "TABREF7": { "content": "
(a) Index  Template
1   Q: What is the color of [DESCRIPTOR] [ITEM]? A: It is [MASK].
2   What is the color of [DESCRIPTOR] [ITEM]? [MASK].
3   The color of [DESCRIPTOR] [ITEM] is [MASK].
4   The usual color of [DESCRIPTOR] [ITEM] is [MASK].
5   [DESCRIPTOR] [ITEM] usually has the color of [MASK].
6   What is the usual color of [DESCRIPTOR] [ITEM]? [MASK].
7   What is the typical color of [DESCRIPTOR] [ITEM]? [MASK].
(b) Index  Template
1   Q: What is the color of [DESCRIPTOR] [ITEM]? A: It is [MASK].
2   Q: What is the color of [DESCRIPTOR] [ITEM]? [SEP] A: It is [MASK].
3   Q: What is the colour of [DESCRIPTOR] [ITEM]? A: It is [MASK].
4   What is the color of [DESCRIPTOR] [ITEM]? [MASK].
5   What is the color of [DESCRIPTOR] [ITEM]? [SEP] [MASK].
6   What is the colour of [DESCRIPTOR] [ITEM]? [MASK].
7   The color of [DESCRIPTOR] [ITEM] is [MASK].
8   The usual color of [DESCRIPTOR] [ITEM] is [MASK].
9[DESCRIPTOR] [
", "num": null, "text": "The query templates used to query both human annotators and models on the Memory Colors task.(a) The question templates used to query the human annotators on the object-colors evaluation task. The question templates used to query the models on the object-colors evaluation task. ITEM] usually has the color of[MASK]. 10 What is the usual color of [DESCRIPTOR] [ITEM]? [MASK]. 11 What is the usual color of [DESCRIPTOR] [ITEM]? [SEP] [MASK]. 12 What is the typical color of [DESCRIPTOR] [ITEM]? [MASK]. 13 What is the typical color of [DESCRIPTOR] [ITEM]? [SEP] [MASK].", "type_str": "table", "html": null }, "TABREF8": { "content": "", "num": null, "text": "", "type_str": "table", "html": null } } } }