{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T14:46:43.587157Z" }, "title": "Reasoning Over History: Context Aware Visual Dialog", "authors": [ { "first": "Muhammad", "middle": [ "A" ], "last": "Shah", "suffix": "", "affiliation": { "laboratory": "", "institution": "Language Technologies Institute Carnegie Mellon University Pittsburgh", "location": { "region": "PA" } }, "email": "mshah1@cmu.edu" }, { "first": "Shikib", "middle": [], "last": "Mehri", "suffix": "", "affiliation": { "laboratory": "", "institution": "Language Technologies Institute Carnegie Mellon University Pittsburgh", "location": { "region": "PA" } }, "email": "amehri@cmu.edu" }, { "first": "Tejas", "middle": [], "last": "Srinivasan", "suffix": "", "affiliation": { "laboratory": "", "institution": "Language Technologies Institute Carnegie Mellon University Pittsburgh", "location": { "region": "PA" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "While neural models have been shown to exhibit strong performance on single-turn visual question answering (VQA) tasks, extending VQA to a multi-turn, conversational setting remains a challenge. One way to address this challenge is to augment existing strong neural VQA models with the mechanisms that allow them to retain information from previous dialog turns. One strong VQA model is the MAC network, which decomposes a task into a series of attention-based reasoning steps. However, since the MAC network is designed for single-turn question answering, it is not capable of referring to past dialog turns. More specifically, it struggles with tasks that require reasoning over the dialog history, particularly coreference resolution. We extend the MAC network architecture with Context-aware Attention and Memory (CAM), which attends over control states in past dialog turns to determine the necessary reasoning operations for the current question. MAC nets with CAM achieve up to 98.25% accuracy on the CLEVR-Dialog dataset, beating the existing state-ofthe-art by 30% (absolute). Our error analysis indicates that with CAM, the model's performance particularly improved on questions that required coreference resolution.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "While neural models have been shown to exhibit strong performance on single-turn visual question answering (VQA) tasks, extending VQA to a multi-turn, conversational setting remains a challenge. One way to address this challenge is to augment existing strong neural VQA models with the mechanisms that allow them to retain information from previous dialog turns. One strong VQA model is the MAC network, which decomposes a task into a series of attention-based reasoning steps. However, since the MAC network is designed for single-turn question answering, it is not capable of referring to past dialog turns. More specifically, it struggles with tasks that require reasoning over the dialog history, particularly coreference resolution. We extend the MAC network architecture with Context-aware Attention and Memory (CAM), which attends over control states in past dialog turns to determine the necessary reasoning operations for the current question. MAC nets with CAM achieve up to 98.25% accuracy on the CLEVR-Dialog dataset, beating the existing state-ofthe-art by 30% (absolute). 
Our error analysis indicates that with CAM, the model's performance particularly improved on questions that required coreference resolution.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Visual dialog is the task of answering a sequence of questions about a given image such that responding to any one question in the dialog requires context from the previous dialog history. The task of visual dialog (Das et al., 2017b; Kottur et al., 2019) brings together several fundamental building blocks of intelligent systems: visual understanding, natural language understanding and complex reasoning. The multimodal nature of visual dialog requires approaches that jointly model and reason over both modalities. Furthermore, visual dialog necessitates the ability to resolve visual coreferences, which arise when two phrases in the dialog refer to the same object in the image. Visual coreference resolution requires both an ability to reason over coreferences in the dialog and an ability to ground the entities from the language modality in the visual one.", "cite_spans": [ { "start": 215, "end": 234, "text": "(Das et al., 2017b;", "ref_id": "BIBREF4" }, { "start": 235, "end": 255, "text": "Kottur et al., 2019)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In contrast to large-scale realistic datasets for visual dialog, such as VisDial (Das et al., 2017b) , Kottur et al. (2019) introduce CLEVR-Dialog as a diagnostic dataset for visual dialog. Unlike other visual dialog datasets, CLEVR-Dialog is synthetically generated -this allows it to be both large-scale and structured in nature. This diagnostic dataset allows for improved fine-grained analysis, using the structured nature of the images and language. Such fine-grained analysis allows researchers to study the different components in isolation and identify bottlenecks in end-to-end systems for visual dialog.", "cite_spans": [ { "start": 81, "end": 100, "text": "(Das et al., 2017b)", "ref_id": "BIBREF4" }, { "start": 103, "end": 123, "text": "Kottur et al. (2019)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Highly structured models have performed well on visual question answering and visual dialog (Andreas et al., 2016b,a; Kottur et al., 2018) by leveraging explicit program modules to perform compositional reasoning. CorefNMN Kottur et al. (2018) , which leverages explicit program modules for coreference resolution, was the previous state-of-the-art model on the CLEVR-Dialog dataset. However, the explicit definition of program modules requires handcrafting and limits generalizability. As such, we explore mechanisms for relaxing these structural constraints by using MAC Networks (Hudson and Manning, 2018) and adapting them to the task of dialog. Specifically, we introduce Context-aware Attention and Memory (CAM) to serve as an inductive bias that allows MAC networks to explicitly capture the necessary context from the dialog history.", "cite_spans": [ { "start": 92, "end": 117, "text": "(Andreas et al., 2016b,a;", "ref_id": null }, { "start": 118, "end": 138, "text": "Kottur et al., 2018)", "ref_id": "BIBREF9" }, { "start": 212, "end": 241, "text": "CorefNMN Kottur et al. 
(2018)", "ref_id": null }, { "start": 575, "end": 601, "text": "(Hudson and Manning, 2018)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "CAM consists of a context-aware attention mechanism and a multi-turn memory state. The contextaware attention mechanism attends over the control states of past dialog turns, to determine the control states for the current dialog turn. Since control states in a MAC networks are analogous to program modules, the attention effectively leverages past reasoning operations to inform current reasoning operations. For example, if the MAC network had to locate the \"the red ball\", a future turn which refers to \"the object to the left of the previous red object\" can attend to the control state responsible for locating the red ball. Meanwhile the multi-turn memory remembers information extracted to answer previous questions in the dialog. Similar to the explicit programs of CorefNMN, CAM serves to model properties of dialog (e.g., coreference resolution, history dependent reasoning). However, unlike CorefNMN, CAM does not require explicit handcrafting, can be trained end-to-end and is capable of generalization.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our methods attain state-of-the-art performance on CLEVR-Dialog, with a 30% improvement over the prior work. Further, CAM provides strong performance gains over MAC networks across several different experimental setups. Analysis shows that CAM's attention weights are meaningful, and particularly useful for questions that require coreference resolution across dialog turns.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Visual Question Answering (VQA) requires models to reason about an image, conditioned on a complex natural language question. Solving VQA requires the ability to reason over images and grounding language entities in the visual modality. There have been several datasets proposed for this task, such as the open-ended VQA (Antol et al., 2015) and diagnostic CLEVR (Johnson et al., 2017) datasets, and several models proposed to solve this task (Yu et al., 2015; Malinowski and Fritz, 2014; Gao et al., 2015; Ren et al., 2015; Liu et al., 2019) .", "cite_spans": [ { "start": 321, "end": 341, "text": "(Antol et al., 2015)", "ref_id": "BIBREF2" }, { "start": 363, "end": 385, "text": "(Johnson et al., 2017)", "ref_id": "BIBREF8" }, { "start": 443, "end": 460, "text": "(Yu et al., 2015;", "ref_id": "BIBREF19" }, { "start": 461, "end": 488, "text": "Malinowski and Fritz, 2014;", "ref_id": "BIBREF13" }, { "start": 489, "end": 506, "text": "Gao et al., 2015;", "ref_id": "BIBREF5" }, { "start": 507, "end": 524, "text": "Ren et al., 2015;", "ref_id": "BIBREF14" }, { "start": 525, "end": 542, "text": "Liu et al., 2019)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Visual Question Answering", "sec_num": "2.1" }, { "text": "State-of-the-art modeling approaches to VQA can largely be broken into two categories: modular networks (Yi et al., 2018; Andreas et al., 2016b; Hu et al., 2017) , and end-to-end differentiable networks (Hudson and Manning, 2018) . Neural Module Networks (NMNs) consist of specialized neural modules and can be composed into programs. Since the program construction is not differentiable, training module networks involves complex reinforcement learning training techniques. 
Moreover, the strong structural constraints, along with the need to handcraft modules, limit the generalizability of these models. We believe that relaxing some structural constraints, such as those introduced by handcrafted modules, while retaining others, specifically those that allow for compositional reasoning, would yield powerful yet flexible models.", "cite_spans": [ { "start": 104, "end": 121, "text": "(Yi et al., 2018;", "ref_id": "BIBREF18" }, { "start": 122, "end": 144, "text": "Andreas et al., 2016b;", "ref_id": "BIBREF1" }, { "start": 145, "end": 161, "text": "Hu et al., 2017)", "ref_id": "BIBREF6" }, { "start": 203, "end": 229, "text": "(Hudson and Manning, 2018)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Visual Question Answering", "sec_num": "2.1" }, { "text": "As a step in this direction, Hudson and Manning (2018) have proposed MAC Networks (Memory, Attention and Composition Networks), which simulate a p-step compositional reasoning process by decomposing the question into a series of attention-based reasoning steps. Unlike NMNs, MAC networks do not have specialized program modules; instead, they use a control unit to predict a continuous-valued vector representation of the reasoning operation to be performed at each step.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Visual Question Answering", "sec_num": "2.1" }, { "text": "As models achieve human-level performance on VQA, Das et al. (2017a) and Kottur et al. (2019) proposed extending the task to a conversational setting. Concretely, visual dialog is a multi-turn conversation grounded in an image. In addition to the challenges of VQA, visual dialog requires reasoning over multiple turns of dialog, in which questions can refer to information introduced in previous dialog turns.", "cite_spans": [ { "start": 50, "end": 68, "text": "Das et al. (2017a)", "ref_id": "BIBREF3" }, { "start": 73, "end": 93, "text": "Kottur et al. (2019)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Visual Dialog", "sec_num": "2.2" }, { "text": "Several datasets have been introduced to study the problem of visual dialog, such as the large-scale VisDial dataset (Das et al., 2017a) and the diagnostic CLEVR-Dialog dataset (Kottur et al., 2019) . CLEVR-Dialog is a programmatically constructed dataset with complex images and conversations reasoning about the objects in a given image. Similar to the CLEVR dataset, CLEVR-Dialog comprises queries and responses about entities in a static image. However, in this multi-turn dataset, queries make references to entities mentioned in previous turns of the dialog, and can thus not be treated as single-turn queries. The main challenge in CLEVR-Dialog is thus visual coreference resolution -resolving multiple references across dialog turns to the same entity in the image.", "cite_spans": [ { "start": 117, "end": 136, "text": "(Das et al., 2017a)", "ref_id": "BIBREF3" }, { "start": 177, "end": 198, "text": "(Kottur et al., 2019)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Visual Dialog", "sec_num": "2.2" }, { "text": "Several recently proposed methods use reinforcement learning techniques to solve this problem (Strub et al., 2017; Das et al., 2017b) . Strub et al. (2017) propose a policy gradient based method for visually grounded task-oriented dialogues. On the other hand, Das et al. 
(2017b) utilise goal-driven training for visual question answering and dialog agents via a cooperative game between two agents (a questioner and an answerer) and learn the policies of these agents using deep reinforcement learning.", "cite_spans": [ { "start": 94, "end": 114, "text": "(Strub et al., 2017;", "ref_id": "BIBREF17" }, { "start": 115, "end": 133, "text": "Das et al., 2017b)", "ref_id": "BIBREF4" }, { "start": 136, "end": 155, "text": "Strub et al. (2017)", "ref_id": "BIBREF17" }, { "start": 251, "end": 269, "text": "Das et al. (2017b)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Visual Dialog", "sec_num": "2.2" }, { "text": "Other approaches transfer knowledge from a discriminatively trained model to a generative dialog model (Lu et al., 2016) or use differentiable memory to resolve visual coreferences (Seo et al., 2017) . More specifically, Seo et al. (2017) utilise an associative attention memory for retrieving the previous attentions that are most useful for answering the current question. The retrieved attention is then combined with a tentative one via dynamic parameter prediction in order to answer the current question. Kottur et al. (2018) adapted the NMNs used in (Andreas et al., 2016b) by adding two modules (Refer and Exclude) specifically meant for handling coreference resolution. These two modules perform explicit coreference resolution at word-level granularity. The module 'Refer' grounds coreferences in the conversation history while 'Exclude' handles contextual shifts. The resulting Coreference Neural Module Network (Coref-NMN) was applied to CLEVR-Dialog (Kottur et al., 2019) and achieved the best accuracy on the dataset.", "cite_spans": [ { "start": 115, "end": 132, "text": "(Lu et al., 2016)", "ref_id": "BIBREF12" }, { "start": 197, "end": 215, "text": "(Seo et al., 2017)", "ref_id": "BIBREF15" }, { "start": 523, "end": 543, "text": "Kottur et al. (2018)", "ref_id": "BIBREF9" }, { "start": 976, "end": 997, "text": "(Kottur et al., 2019)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Visual Dialog", "sec_num": "2.2" }, { "text": "Formally, the task we are tackling in this paper is to pick the correct answer, a^*_t \u2208 A, for a question with representation q_t \u2208 Q, based on an image, I \u2208 I, with a caption, C \u2208 C, and a past dialog history,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Statement", "sec_num": "3.1" }, { "text": "H_t = {(q_1, a_1), (q_2, a_2), ..., (q_{t-1}, a_{t-1})},", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Statement", "sec_num": "3.1" }, { "text": "where a_i is the answer to question q_i. In practice, I does not contain the actual image, but rather an embedding of the image computed using a pretrained image recognition model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Statement", "sec_num": "3.1" }, { "text": "Since our approach builds upon the MAC Network architecture (Hudson and Manning, 2018), we will briefly introduce it in this section before presenting our novel extensions to it in the subsequent sections.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MAC Network Architecture", "sec_num": "3.2" }, { "text": "The MAC network has three core components: the input unit, the MAC cell and the output unit. 
The input unit computes an image representation, a single question embedding for the entire question and contextualized word embeddings for each word in the question. The output of the input unit is recurrently passed through the MAC cell p times, where p is a predefined hyper-parameter. Each pass through the MAC cell is meant to simulate one step of a p-step reasoning process. The MAC cell consists of three sub-modules, namely the control, read and write units and a running memory state that accumulates the results of each reasoning step. The control unit computes an embedding for the i th reasoning operation based on the (i \u2212 1) th reasoning operation, and the sentence and word embeddings of the question. The read unit attends on the image representation using the current memory state and the output of the control unit, to extract the information required for current reasoning step. The write unit uses the output of the read unit to update the memory state. After p reasoning steps have been performed, the output unit uses the memory state to predict the answer. It is assumed in the architecture that the answers are categorical. This process is illustrated in Figure 1 .", "cite_spans": [], "ref_spans": [ { "start": 1272, "end": 1280, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "MAC Network Architecture", "sec_num": "3.2" }, { "text": "Since the MAC network is designed to answer single-turn questions, it is not able to answer questions that rely on context established in previous turns of a dialog. In this section we describe our proposed Context-aware Attention and Memory (CAM) mechanism that endows MAC networks with the ability to perform multi-turn reasoning by remembering the reasoning steps it performed, and the information it extracted from the image to answer questions posed in past turns. CAM has two components, namely a memory state that remains persistent across multiple turns and an attention mechanism that encodes contextual information from past turns in the current control state.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extending MAC Network With CAM", "sec_num": "3.3" }, { "text": "The first extension we propose endows the model with a memory that remains persistent across dialog turns. Specifically, we want to allow the model to remember the information it has already extracted from the image in earlier dialog turns, so that it can use this information to answer context-dependent questions in subsequent turns.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-Turn Memory", "sec_num": "3.3.1" }, { "text": "To implement this memory mechanism we leverage the existing memory state of MAC networks, with a slight modification. In the original MAC network architecture the memory state is initialized with a zero vector for each question, and is updated after each of the p reasoning steps before being discarded. 
Formally, the memory state at the k-th reasoning step of the t-th turn in the dialog is computed as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-Turn Memory", "sec_num": "3.3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "m^{(t)}_k = f(I^{(t)}_k, 0) if k = 0; f(I^{(t)}_k, m^{(t)}_{k-1}) if k > 0", "eq_num": "(1)" } ], "section": "Multi-Turn Memory", "sec_num": "3.3.1" }, { "text": "where I^{(t)}_k represents the information extracted from the image and f is a function that computes the updated memory state. Under this scheme, the information accumulated while reasoning about the first question in the dialog is discarded when the model starts reasoning about the second question. In our implementation we initialize the memory once for each dialog, and retain it across all the turns of the dialog. Formally, this leads to the modification of Equation 1 to", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-Turn Memory", "sec_num": "3.3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "m^{(t)}_k = f(I^{(t)}_k, 0) if t = 0 and k = 0; f(I^{(t)}_k, m^{(t-1)}_k) if t > 0 and k = 0; f(I^{(t)}_k, m^{(t)}_{k-1}) if k > 0", "eq_num": "(2)" } ], "section": "Multi-Turn Memory", "sec_num": "3.3.1" }, { "text": "For our second extension, we propose to allow the model to recall previous control states when computing the current control state. The intuition behind this extension is that if the current question, q_t, references an entity from a previous question, q_{t-k}, or its answer, the reasoning steps for answering q_t are likely to be similar to those for answering q_{t-k}, at least insofar as they relate to the coreferent entity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context-aware Attention Mechanism", "sec_num": "3.3.2" }, { "text": "Since the coreferent entity was introduced more recently with respect to q_{t-k} than to q_t, it would have been more salient in the model's memory at that point. Therefore, it is likely that the model would have applied the appropriate reasoning processes when answering q_{t-k}. At q_t, the coreferent entity is less salient in the model's memory, which increases the likelihood of the model selecting inappropriate reasoning steps.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context-aware Attention Mechanism", "sec_num": "3.3.2" }, { "text": "To mitigate the aforementioned problem and explicitly incorporate the dialog context into the model, we introduce a transformer-like self-attention mechanism over the previous control states. The resulting architecture is illustrated in Figure 2 . 
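To make this concrete, the following is a minimal PyTorch-style sketch of the attention and fusion steps formalized in Equations 3-5 below; it is illustrative only, and the module and variable names are hypothetical rather than those of our actual implementation:

```python
# Minimal sketch of CAM's attention over past control states (hypothetical names,
# not the released implementation). `controls` stacks every control state produced
# so far in the dialog, ordered from the first reasoning step of the first turn to
# the current step; each row is one d-dimensional control vector.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContextAwareControlAttention(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.phi_key = nn.Linear(dim, dim)   # key projection, Eq. (3)
        self.w_r = nn.Linear(4 * dim, dim)   # W_r of the fusion module
        self.w_g = nn.Linear(4 * dim, dim)   # W_g of the fusion module

    def fusion(self, x, y):
        # Gated fusion of the unattended (x) and attended (y) control states.
        feats = torch.cat([x, y, x * y, x - y], dim=-1)
        x_tilde = F.relu(self.w_r(feats))
        g = torch.sigmoid(self.w_g(feats))
        return g * x_tilde + (1 - g) * x

    def forward(self, controls):             # controls: (n, dim)
        keys = self.phi_key(controls)
        scores = keys @ keys.t()             # Eq. (3)
        # Keep only the lower triangle so each step attends to itself and to
        # earlier steps; masking with -inf (rather than zeroing the upper
        # triangle, as written in Eq. (4)) excludes future steps entirely.
        mask = torch.tril(torch.ones_like(scores)).bool()
        attn = F.softmax(scores.masked_fill(~mask, float('-inf')), dim=-1)  # Eq. (4)
        attended = attn @ controls
        return self.fusion(controls, attended)                              # Eq. (5)
```

Under CAM, only the running memory state (Equation 2) and this growing stack of past control states need to be carried across dialog turns. 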
This mechanism allows the model to explicitly attend to the past outputs of the control unit, both from previous reasoning steps in the current turn and from reasoning steps in previous dialog turns, while computing the control output for the current reasoning step.", "cite_spans": [], "ref_spans": [ { "start": 234, "end": 243, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Context-aware Attention Mechanism", "sec_num": "3.3.2" }, { "text": "Concretely, given the unattended control representations C = [c^{(1)}_1 ... c^{(t)}_{i-1}]^T of all the reasoning steps up to the i-th reasoning step of turn t, the final control output, \u0108, is computed as the fusion (Hu et al., 2017) of the attended control representation, AC, with the unattended control representation, C, as follows:", "cite_spans": [ { "start": 124, "end": 141, "text": "(Hu et al., 2017)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Context-aware Attention Mechanism", "sec_num": "3.3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "E = \u03c6_{key}(C) \u03c6_{key}(C)^T (3); A = softmax(tril(E)) (4); \u0108 = fusion(C, AC)", "eq_num": "(5)" } ], "section": "Context-aware Attention Mechanism", "sec_num": "3.3.2" }, { "text": "where \u03c6_{key} and \u03c6_{value} represent the key and value projections used in the self-attention step, tril(E) denotes the matrix obtained by setting the values in the upper triangle of E to zero, and the fusion module is defined as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context-aware Attention Mechanism", "sec_num": "3.3.2" }, { "text": "fusion(x, y) = g \u2299 x\u0303 + (1 \u2212 g) \u2299 x, with x\u0303 = relu(W_r [x; y; x \u2299 y; x \u2212 y]) and g = sigmoid(W_g [x; y; x \u2299 y; x \u2212 y]),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context-aware Attention Mechanism", "sec_num": "3.3.2" }, { "text": "where \u2299 represents element-wise multiplication and [ ; ] denotes concatenation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context-aware Attention Mechanism", "sec_num": "3.3.2" }, { "text": "The CLEVR-Dialog dataset 1 (Kottur et al., 2019) , pictured in Figure 3 , consists of several modalities: visual images, natural language dialog and structured scene graphs.", "cite_spans": [ { "start": 27, "end": 48, "text": "(Kottur et al., 2019)", "ref_id": "BIBREF10" } ], "ref_spans": [ { "start": 63, "end": 71, "text": "Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Dataset", "sec_num": "4.1" }, { "text": "Each image I and its respective complete scene graph S_a depict a scene containing several objects. 
Each object has four major attributes, enumerated as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "4.1" }, { "text": "\u2022 Color -blue, brown, cyan, gray, green, purple, red, yellow", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "4.1" }, { "text": "\u2022 Shape -cylinder, cube, sphere", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "4.1" }, { "text": "\u2022 Size -large, small", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "4.1" }, { "text": "\u2022 Material -metal, rubber", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "4.1" }, { "text": "Every pair of objects has a spatial relationship which describes their relative spatial position: front, back, right, left.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "4.1" }, { "text": "Each dialog is an interaction between a Questioner and an Answerer. The Answerer, who has the image and the complete scene graph, begins by providing a caption that describes the image. The Questioner, who does not see the image, aims to build up a complete scene graph by repeatedly asking questions. As the Questioner gets more information, they build up a partial scene graph S^t_q. Though the Answerer had access to the complete scene graph during data collection, the scene graph is not to be used at test time in the visual dialog task; the Answerer can only use the image.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "4.1" }, { "text": "Each dialog consists of 10 turns. Questions are generated with the use of 23 question templates, which can be grouped into several categories: Count questions ask for the number of objects that satisfy certain conditions, Existence questions are yes/no questions that query whether certain conditions hold in the image, and Seek questions ask for attributes of certain objects. Seek questions make up 60% of the dataset, followed by count at 23% and exist at 17%. There are 29 unique answers (e.g., 'yes', 'no', 'blue', '1', '2' etc.) , with all answers being single words.", "cite_spans": [ { "start": 500, "end": 535, "text": "'yes', 'no', 'blue', '1', '2' etc.)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "4.1" }, { "text": "A strong motivation of the CLEVR-Dialog dataset is to model dialog in the context of an image. To this end, there are two types of history dependency. The first is coreference, wherein a phrase in a question refers to an earlier referent in the history. The mean coreference distance is 3.2 turns, and distances range from 1 to 10 turns. The second type of history dependency is when the question relies on the entire dialog history, rather than a specific referent. For example: 'How many other objects are there?'", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "4.1" }, { "text": "The dataset has 85k unique images, with 5 dialogs per image for a total of 425k dialogs. Each dialog consists of a caption and ten turns of question-answer pairs, for a total of 4.25M questions and answers. There are 23 unique question templates and 73k unique questions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "4.1" }, { "text": "Since CLEVR-Dialog has 29 unique single-word answers, the metric used is accuracy. 
The structured nature of the dataset allows accuracy to be broken down by coreference distance and question type, as shown by Kottur et al. (2019).", "cite_spans": [ { "start": 205, "end": 225, "text": "Kottur et al. (2019)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "4.1" }, { "text": "All models were implemented in PyTorch 2 , building on an open-source implementation of MAC networks. When training history-agnostic models, each dialog turn was treated as an independent question, and in each iteration we trained the model on 128 random dialog turns. Meanwhile, we trained the context-aware models by providing them one dialog turn at a time, with a batch size of 12 dialogs (120 turns). The learning rate and the number of reasoning steps for the MAC networks were set to 2e-4 and 8, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Implementation Details", "sec_num": "4.2" }, { "text": "Since CLEVR-Dialog consists of only a training set and a development set, the development set was used for evaluation. We remove 1000 images and their respective dialogs from the training set to use for validation, for a total of 5000 dialogs and 50,000 dialog turns. The models were set to train for 25 epochs, with training stopped early if the validation accuracy did not increase for 5 epochs; as a result, some models were trained for 16-17 epochs while others were trained for the full 25. We ran experiments on a cluster with 32-core Intel Xeon processors and Nvidia 1080Ti GPUs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Implementation Details", "sec_num": "4.2" }, { "text": "We use Neural Module Networks (Andreas et al., 2016b ) (NMN) and CorefNMN (Kottur et al., 2018) as our baselines because the dataset paper (Kottur et al., 2019) reports them to have the best performance on CLEVR-Dialog. Neural Module Networks (NMN), proposed by Andreas et al. (2016b) , are a general class of recursive neural networks (Socher et al., 2013) which provide a framework for constructing deep networks with dynamic computational structure. NMNs are history agnostic, making them a weak baseline for this dataset.", "cite_spans": [ { "start": 30, "end": 52, "text": "(Andreas et al., 2016b", "ref_id": "BIBREF1" }, { "start": 74, "end": 95, "text": "(Kottur et al., 2018)", "ref_id": "BIBREF9" }, { "start": 138, "end": 159, "text": "(Kottur et al., 2019)", "ref_id": "BIBREF10" }, { "start": 260, "end": 282, "text": "Andreas et al. (2016b)", "ref_id": "BIBREF1" }, { "start": 332, "end": 353, "text": "(Socher et al., 2013)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "4.3" }, { "text": "CorefNMN (Kottur et al., 2018) adapts NMNs (Andreas et al., 2016b) with the addition of two modules (Refer and Exclude) that perform explicit coreference resolution at word-level granularity. 'Refer' grounds coreferences in the conversation history while 'Exclude' handles contextual shifts.", "cite_spans": [ { "start": 10, "end": 31, "text": "(Kottur et al., 2018)", "ref_id": "BIBREF9" }, { "start": 44, "end": 67, "text": "(Andreas et al., 2016b)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "4.3" }, { "text": "In Table 1 , we examine the performance of our baseline models and the effect of Context-aware Attention and Memory (CAM). 
We experiment with three different combinations of our dialog-specific extensions to the MAC network architecture:", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 10, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "(i) context-aware attention over control states, (ii) multi-turn memory, and (iii) concatenating the dialog history as input to MAC -an obvious but naive and inefficient strategy for incorporating contextual information into a single-turn QA model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "When none of these three extensions are present, we obtain the vanilla MAC network, which does not have the ability to reason over the dialog context. We see that vanilla MAC achieves 10% higher accuracy than NMN, which is also history-agnostic, and is surprisingly close to CorefNMN, which explicitly reasons over the dialog history. The fact that a single-turn model can correctly answer two-thirds of the questions in a very large dataset raises some concerns regarding how representative the dataset is of an actual dialog task. Adding context-aware attention to the MAC network improves the accuracy of the model considerably, to 89.43%. Introducing multi-turn memory to this model yields an accuracy of 97.98% -an improvement of 30% (absolute) over both the vanilla MAC network and the CorefNMN benchmark. These results emphatically demonstrate the efficacy of CAM and establish a new state-of-the-art for CLEVR-Dialog.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "Perhaps most notably, concatenating the dialog history to the current query works remarkably well for MAC networks, achieving 98.08% accuracy with no other augmentations to the MAC network. Introducing context-aware attention further improves accuracy to 98.25%, which yet again evidences the efficacy of the attention mechanism we propose. However, introducing the multi-turn memory results in a slight decrease in performance, indicating that the memory mechanism is not useful when the entire dialog context is present.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "It is important to note that concatenating the dialog history is a naive method, and it becomes computationally inefficient as the dialog history grows and the questions get longer. The context-aware attention mechanism also stores additional data, namely the past control states. However, since the control states are of fixed size, the additional memory and computation required is in O(T p), where T is the maximum number of dialog turns and p is the number of reasoning steps to be performed. On the other hand, the increase in memory and computation requirements for concatenation is in O(|Q_max| T p), where |Q_max| is the length of the longest question. 
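As a rough illustration of this gap, with the settings used in our experiments (T = 10 turns and p = 8 reasoning steps), CAM stores on the order of 10 x 8 = 80 fixed-size control vectors per dialog, whereas, assuming a maximum question length of around 20 tokens (a figure assumed purely for illustration), the corresponding bound for concatenation is about 20 x 10 x 8 = 1,600 word-level representations. 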
Without concatenating the dialog history, incorporating the multi-turn memory greatly improves accuracy (89.43% \u2192 97.98%), while being more computationally efficient.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "These results indicate that relaxing the structure of the model by eliminating hand-crafted modules gives the model much more flexibility in how it processes the input query, allowing it to perform more complex reasoning than the programs assembled by Neural Module Networks. In order to better understand the results presented above, we breakdown the accuracy of the models along dialog turns and question types.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "The heatmap shown in Figure 4 breaks down the accuracies of the models for different question types.", "cite_spans": [], "ref_spans": [ { "start": 21, "end": 29, "text": "Figure 4", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Question Types and Accuracy", "sec_num": "6.1.1" }, { "text": "Different question types require different reasoning about the image and the dialog. For example, count-obj-rel-* requires the models to count the number of objects relative to another entity, often one that was discussed earlier in the dialog. We observe that MacNet-Concat-Attn obtains a 1% gain over MacNet-Concat and MacNet-Attn-Memory for the count-obj-rel-imm2 question type, which requires reasoning about the number of objects relative to one from earlier in the dialog. These question types are follow-ups (e.g., \"how about to it's left\"), meaning that they have both anaphora and ellipsis. As such the performance gains on this question type are indicative of better dialog modelling. It is important to note that the above question types are also the ones that have the lowest performance across all models. This highlights the importance of developing specialized strategies for modelling dialog.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Question Types and Accuracy", "sec_num": "6.1.1" }, { "text": "The heatmap shown in Figure 5 presents the accuracy of the models when answering questions at different dialog turns. MacNet-Attn performs significantly better than MacNet. The fact that MacNet-Attn performs better at later turns suggests that the model is effectively resolving coreferences from the dialog history. Likewise, MacNet-Attn-Memory obtains even stronger performance gains, especially at later dialog turns. In the final turn of dialogs, MacNet-Attn-Memory is 15% more accu- The image has a yellow thing right of a cylinder. How many other things are in the picture?", "cite_spans": [], "ref_spans": [ { "start": 21, "end": 29, "text": "Figure 5", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "Dialog Turn Number and Accuracy", "sec_num": "6.1.2" }, { "text": "What is the size of the previous cylinder? Previous Turns Does the previous yellow thing have things to its behind?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dialog Turn Number and Accuracy", "sec_num": "6.1.2" }, { "text": "If there is a thing behind the previous cylinder, what is its material? Table 2 : Dialog examples where attention over control states of previous dialog turns informs the model of which previous turn is important to attend to when answering the current query (the last question of the dialog). 
Darker shade means higher attention weight.", "cite_spans": [], "ref_spans": [ { "start": 72, "end": 79, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Current Turn", "sec_num": null }, { "text": "rate than MacNet-Attn and 23% over MacNet. MacNet-Concat-Attn obtains a 1% improvement over MacNet-Attn-Memory and MacNet-Concat, at the 9 th and 10 th dialog turns, respectively. This performance gain is relatively smaller, however, since the accuracies are so high, the relative error reduction is still significant. It is important to note that a 1% improvement in accuracy corresponds to answering 7500 more questions correctly.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Current Turn", "sec_num": null }, { "text": "We verify that the context-aware attention over the control states is performing coreference resolution by looking at the attention weights assigned to each past question in the dialog history. Since 8 control states are computed per question, we consider the maximum attention weight between any control state of the current question and any control state in the past question. Table 2 presents examples of turn-level attention weights for two different dialogs (in red and blue, respectively). The first example shows that a higher attention weight is allotted to the immediately preceding dialog turn. We note that this preceding turn contains a reference to the entity which is referred to in the current turn.", "cite_spans": [], "ref_spans": [ { "start": 379, "end": 386, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Attention Analysis", "sec_num": "6.2" }, { "text": "In the second example, we see that a much higher attention is given to the first turn (which includes the image caption). We noticed that lots of dialog turns give a higher attention to the first dialog turn. This could be because a lot of questions start a new line of dialog by making a reference back to the original image caption. For instance, in the current turn, a question is asked about an entity in relation to the cylinder which is mentioned in the caption.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Attention Analysis", "sec_num": "6.2" }, { "text": "These examples illustrate that CAM is able to identify the referent turn in the dialog and appropriately attend to it.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Attention Analysis", "sec_num": "6.2" }, { "text": "We present Context-aware Attention and Memory (CAM), a set of dialog-specific augmentations to MAC networks (Hudson and Manning, 2018) . CAM consists of a context-aware attention mechanism which attends over the MAC control states of past dialog turns and a persistent, multi-turn memory which is accumulated over multiple turns of the dialog. These augmentations serve as an inductive bias that allow the architecture to capture various important properties of dialog, such as coreference and history dependency.", "cite_spans": [ { "start": 108, "end": 134, "text": "(Hudson and Manning, 2018)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "Our methods attain state-of-the-art performance on CLEVR-Dialog, with our best model attaining an accuracy of 98.25%, a 30% improvement over all prior results. Further, CAM attains strong performance gains over vanilla MAC networks, especially for question types that require coreference resolution and later dialog turns. 
Ablation experiments indicate that both components of CAM provide significant improvements in performance. We also verified that the context-aware attention mechanism indeed captures coreferences between dialog turns.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "Our results are indicative of the flexibility of weakly structured models like MAC networks and of their ability to adapt to different problem settings. To adapt MAC networks for visual dialog, we had to devise a mechanism to provide them with contextual information from past turns. Thereafter, the other components of the model were able to adapt and use this information to improve performance on the task, whereas to adapt NMNs to visual dialog, Kottur et al. (2018) had to devise specialized modules to handle specific types of questions.", "cite_spans": [ { "start": 443, "end": 463, "text": "Kottur et al. (2018)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "https://github.com/satwikkottur/clevr-dialog", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/tohinz/pytorch-mac-network", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Learning to compose neural networks for question answering", "authors": [ { "first": "Jacob", "middle": [], "last": "Andreas", "suffix": "" }, { "first": "Marcus", "middle": [], "last": "Rohrbach", "suffix": "" }, { "first": "Trevor", "middle": [], "last": "Darrell", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1601.01705" ] }, "num": null, "urls": [], "raw_text": "Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Dan Klein. 2016a. Learning to compose neural networks for question answering. arXiv preprint arXiv:1601.01705.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Neural module networks", "authors": [ { "first": "Jacob", "middle": [], "last": "Andreas", "suffix": "" }, { "first": "Marcus", "middle": [], "last": "Rohrbach", "suffix": "" }, { "first": "Trevor", "middle": [], "last": "Darrell", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition", "volume": "", "issue": "", "pages": "39--48", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Dan Klein. 2016b. Neural module networks. 
In Pro- ceedings of the IEEE Conference on Computer Vi- sion and Pattern Recognition, pages 39-48.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Vqa: Visual question answering", "authors": [ { "first": "Stanislaw", "middle": [], "last": "Antol", "suffix": "" }, { "first": "Aishwarya", "middle": [], "last": "Agrawal", "suffix": "" }, { "first": "Jiasen", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Margaret", "middle": [], "last": "Mitchell", "suffix": "" }, { "first": "Dhruv", "middle": [], "last": "Batra", "suffix": "" }, { "first": "Lawrence", "middle": [], "last": "Zitnick", "suffix": "" }, { "first": "Devi", "middle": [], "last": "Parikh", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the IEEE international conference on computer vision", "volume": "", "issue": "", "pages": "2425--2433", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Mar- garet Mitchell, Dhruv Batra, C Lawrence Zitnick, and Devi Parikh. 2015. Vqa: Visual question an- swering. In Proceedings of the IEEE international conference on computer vision, pages 2425-2433.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition", "authors": [ { "first": "Abhishek", "middle": [], "last": "Das", "suffix": "" }, { "first": "Satwik", "middle": [], "last": "Kottur", "suffix": "" }, { "first": "Khushi", "middle": [], "last": "Gupta", "suffix": "" }, { "first": "Avi", "middle": [], "last": "Singh", "suffix": "" }, { "first": "Deshraj", "middle": [], "last": "Yadav", "suffix": "" }, { "first": "M", "middle": [ "F" ], "last": "Jos\u00e9", "suffix": "" }, { "first": "Devi", "middle": [], "last": "Moura", "suffix": "" }, { "first": "Dhruv", "middle": [], "last": "Parikh", "suffix": "" }, { "first": "", "middle": [], "last": "Batra", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "326--335", "other_ids": {}, "num": null, "urls": [], "raw_text": "Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, Jos\u00e9 MF Moura, Devi Parikh, and Dhruv Batra. 2017a. Visual dialog. In Proceed- ings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 326-335.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Learning cooperative visual dialog agents with deep reinforcement learning", "authors": [ { "first": "Abhishek", "middle": [], "last": "Das", "suffix": "" }, { "first": "Satwik", "middle": [], "last": "Kottur", "suffix": "" }, { "first": "M", "middle": [ "F" ], "last": "Jos\u00e9", "suffix": "" }, { "first": "Stefan", "middle": [], "last": "Moura", "suffix": "" }, { "first": "Dhruv", "middle": [], "last": "Lee", "suffix": "" }, { "first": "", "middle": [], "last": "Batra", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the IEEE International Conference on Computer Vision", "volume": "", "issue": "", "pages": "2951--2960", "other_ids": {}, "num": null, "urls": [], "raw_text": "Abhishek Das, Satwik Kottur, Jos\u00e9 MF Moura, Stefan Lee, and Dhruv Batra. 2017b. Learning cooperative visual dialog agents with deep reinforcement learn- ing. In Proceedings of the IEEE International Con- ference on Computer Vision, pages 2951-2960.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Are you talking to a machine? 
dataset and methods for multilingual image question", "authors": [ { "first": "Haoyuan", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Junhua", "middle": [], "last": "Mao", "suffix": "" }, { "first": "Jie", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Zhiheng", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Lei", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Xu", "suffix": "" } ], "year": 2015, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "2296--2304", "other_ids": {}, "num": null, "urls": [], "raw_text": "Haoyuan Gao, Junhua Mao, Jie Zhou, Zhiheng Huang, Lei Wang, and Wei Xu. 2015. Are you talking to a machine? dataset and methods for multilingual im- age question. In Advances in neural information pro- cessing systems, pages 2296-2304.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Learning to reason: End-to-end module networks for visual question answering", "authors": [ { "first": "Ronghang", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Jacob", "middle": [], "last": "Andreas", "suffix": "" }, { "first": "Marcus", "middle": [], "last": "Rohrbach", "suffix": "" }, { "first": "Trevor", "middle": [], "last": "Darrell", "suffix": "" }, { "first": "Kate", "middle": [], "last": "Saenko", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the IEEE International Conference on Computer Vision", "volume": "", "issue": "", "pages": "804--813", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ronghang Hu, Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Kate Saenko. 2017. Learning to reason: End-to-end module networks for visual question answering. In Proceedings of the IEEE In- ternational Conference on Computer Vision, pages 804-813.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Compositional attention networks for machine reasoning", "authors": [ { "first": "A", "middle": [], "last": "Drew", "suffix": "" }, { "first": "Christopher D", "middle": [], "last": "Hudson", "suffix": "" }, { "first": "", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1803.03067" ] }, "num": null, "urls": [], "raw_text": "Drew A Hudson and Christopher D Manning. 2018. Compositional attention networks for machine rea- soning. arXiv preprint arXiv:1803.03067.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Clevr: A diagnostic dataset for compositional language and elementary visual reasoning", "authors": [ { "first": "Justin", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "Bharath", "middle": [], "last": "Hariharan", "suffix": "" }, { "first": "Laurens", "middle": [], "last": "Van Der Maaten", "suffix": "" }, { "first": "Li", "middle": [], "last": "Fei-Fei", "suffix": "" }, { "first": "Lawrence", "middle": [], "last": "Zitnick", "suffix": "" }, { "first": "Ross", "middle": [], "last": "Girshick", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition", "volume": "", "issue": "", "pages": "2901--2910", "other_ids": {}, "num": null, "urls": [], "raw_text": "Justin Johnson, Bharath Hariharan, Laurens van der Maaten, Li Fei-Fei, C Lawrence Zitnick, and Ross Girshick. 2017. Clevr: A diagnostic dataset for com- positional language and elementary visual reasoning. 
In Proceedings of the IEEE Conference on Com- puter Vision and Pattern Recognition, pages 2901- 2910.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Visual coreference resolution in visual dialog using neural module networks", "authors": [ { "first": "Satwik", "middle": [], "last": "Kottur", "suffix": "" }, { "first": "M", "middle": [ "F" ], "last": "Jos\u00e9", "suffix": "" }, { "first": "Devi", "middle": [], "last": "Moura", "suffix": "" }, { "first": "", "middle": [], "last": "Parikh", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the European Conference on Computer Vision (ECCV)", "volume": "", "issue": "", "pages": "153--169", "other_ids": {}, "num": null, "urls": [], "raw_text": "Satwik Kottur, Jos\u00e9 MF Moura, Devi Parikh, Dhruv Ba- tra, and Marcus Rohrbach. 2018. Visual coreference resolution in visual dialog using neural module net- works. In Proceedings of the European Conference on Computer Vision (ECCV), pages 153-169.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Clevr-dialog: A diagnostic dataset for multi-round reasoning in visual dialog", "authors": [ { "first": "Satwik", "middle": [], "last": "Kottur", "suffix": "" }, { "first": "M", "middle": [ "F" ], "last": "Jos\u00e9", "suffix": "" }, { "first": "Devi", "middle": [], "last": "Moura", "suffix": "" }, { "first": "", "middle": [], "last": "Parikh", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1903.03166" ] }, "num": null, "urls": [], "raw_text": "Satwik Kottur, Jos\u00e9 MF Moura, Devi Parikh, Dhruv Ba- tra, and Marcus Rohrbach. 2019. Clevr-dialog: A di- agnostic dataset for multi-round reasoning in visual dialog. arXiv preprint arXiv:1903.03166.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Clevr-ref+: Diagnosing visual reasoning with referring expressions", "authors": [ { "first": "Runtao", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Chenxi", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Yutong", "middle": [], "last": "Bai", "suffix": "" }, { "first": "Alan", "middle": [ "L" ], "last": "Yuille", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition", "volume": "", "issue": "", "pages": "4185--4194", "other_ids": {}, "num": null, "urls": [], "raw_text": "Runtao Liu, Chenxi Liu, Yutong Bai, and Alan L Yuille. 2019. Clevr-ref+: Diagnosing visual reasoning with referring expressions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recog- nition, pages 4185-4194.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Hierarchical question-image co-attention for visual question answering", "authors": [ { "first": "Jiasen", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Jianwei", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Dhruv", "middle": [], "last": "Batra", "suffix": "" }, { "first": "Devi", "middle": [], "last": "Parikh", "suffix": "" } ], "year": 2016, "venue": "Advances In Neural Information Processing Systems", "volume": "", "issue": "", "pages": "289--297", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jiasen Lu, Jianwei Yang, Dhruv Batra, and Devi Parikh. 2016. Hierarchical question-image co-attention for visual question answering. 
In Advances In Neural Information Processing Systems, pages 289-297.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "A multiworld approach to question answering about realworld scenes based on uncertain input", "authors": [ { "first": "Mateusz", "middle": [], "last": "Malinowski", "suffix": "" }, { "first": "Mario", "middle": [], "last": "Fritz", "suffix": "" } ], "year": 2014, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "1682--1690", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mateusz Malinowski and Mario Fritz. 2014. A multi- world approach to question answering about real- world scenes based on uncertain input. In Advances in neural information processing systems, pages 1682-1690.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Exploring models and data for image question answering", "authors": [ { "first": "Mengye", "middle": [], "last": "Ren", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Kiros", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Zemel", "suffix": "" } ], "year": 2015, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "2953--2961", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mengye Ren, Ryan Kiros, and Richard Zemel. 2015. Exploring models and data for image question an- swering. In Advances in neural information process- ing systems, pages 2953-2961.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Visual reference resolution using attention memory for visual dialog", "authors": [ { "first": "Andreas", "middle": [], "last": "Paul Hongsuck Seo", "suffix": "" }, { "first": "Bohyung", "middle": [], "last": "Lehrmann", "suffix": "" }, { "first": "Leonid", "middle": [], "last": "Han", "suffix": "" }, { "first": "", "middle": [], "last": "Sigal", "suffix": "" } ], "year": 2017, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "3719--3729", "other_ids": {}, "num": null, "urls": [], "raw_text": "Paul Hongsuck Seo, Andreas Lehrmann, Bohyung Han, and Leonid Sigal. 2017. Visual reference resolu- tion using attention memory for visual dialog. In Advances in neural information processing systems, pages 3719-3729.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Recursive deep models for semantic compositionality over a sentiment treebank", "authors": [ { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Perelygin", "suffix": "" }, { "first": "Jean", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Chuang", "suffix": "" }, { "first": "D", "middle": [], "last": "Christopher", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Manning", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Ng", "suffix": "" }, { "first": "", "middle": [], "last": "Potts", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 2013 conference on empirical methods in natural language processing", "volume": "", "issue": "", "pages": "1631--1642", "other_ids": {}, "num": null, "urls": [], "raw_text": "Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment tree- bank. 
In Proceedings of the 2013 conference on empirical methods in natural language processing, pages 1631-1642.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "End-to-end optimization of goal-driven and visually grounded dialogue systems", "authors": [ { "first": "Florian", "middle": [], "last": "Strub", "suffix": "" }, { "first": "Jeremie", "middle": [], "last": "Harm De Vries", "suffix": "" }, { "first": "Bilal", "middle": [], "last": "Mary", "suffix": "" }, { "first": "Aaron", "middle": [], "last": "Piot", "suffix": "" }, { "first": "Olivier", "middle": [], "last": "Courville", "suffix": "" }, { "first": "", "middle": [], "last": "Pietquin", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1703.05423" ] }, "num": null, "urls": [], "raw_text": "Florian Strub, Harm De Vries, Jeremie Mary, Bilal Piot, Aaron Courville, and Olivier Pietquin. 2017. End-to-end optimization of goal-driven and visu- ally grounded dialogue systems. arXiv preprint arXiv:1703.05423.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Neural-symbolic vqa: Disentangling reasoning from vision and language understanding", "authors": [ { "first": "Kexin", "middle": [], "last": "Yi", "suffix": "" }, { "first": "Jiajun", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Chuang", "middle": [], "last": "Gan", "suffix": "" }, { "first": "Antonio", "middle": [], "last": "Torralba", "suffix": "" }, { "first": "Pushmeet", "middle": [], "last": "Kohli", "suffix": "" }, { "first": "Josh", "middle": [], "last": "Tenenbaum", "suffix": "" } ], "year": 2018, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "1031--1042", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kexin Yi, Jiajun Wu, Chuang Gan, Antonio Tor- ralba, Pushmeet Kohli, and Josh Tenenbaum. 2018. Neural-symbolic vqa: Disentangling reasoning from vision and language understanding. In Advances in Neural Information Processing Systems, pages 1031-1042.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Visual madlibs: Fill in the blank image generation and question answering", "authors": [ { "first": "Licheng", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Eunbyung", "middle": [], "last": "Park", "suffix": "" }, { "first": "C", "middle": [], "last": "Alexander", "suffix": "" }, { "first": "Tamara", "middle": [ "L" ], "last": "Berg", "suffix": "" }, { "first": "", "middle": [], "last": "Berg", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1506.00278" ] }, "num": null, "urls": [], "raw_text": "Licheng Yu, Eunbyung Park, Alexander C Berg, and Tamara L Berg. 2015. Visual madlibs: Fill in the blank image generation and question answering. arXiv preprint arXiv:1506.00278.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "The MAC Network Architecture (image from (Hudson and Manning, 2018))", "num": null, "uris": null, "type_str": "figure" }, "FIGREF1": { "text": "The modified MAC Network architecture which explicitly incorporates dialog context", "num": null, "uris": null, "type_str": "figure" }, "FIGREF2": { "text": "Example from the CLEVR-Dialog dataset, consisting of an image, a dialog. Each dialog begins with a caption describing the image, followed by a round of questions and answers. 
Each question relies on information from previous dialog turns.", "num": null, "uris": null, "type_str": "figure" }, "FIGREF3": { "text": "Breakdown of the accuracies of the models by question type. Different question types require different reasoning, especially pertaining to the dialog history.", "num": null, "uris": null, "type_str": "figure" }, "FIGREF4": { "text": "Breakdown of the accuracies of different models on different turns in the dialog", "num": null, "uris": null, "type_str": "figure" }, "TABREF1": { "html": null, "num": null, "text": "There are 4 small things. What is the number of green things in the view, if present?Are there other things that share its color in the scene? Previous Turns If there is a thing in front of the above green thing, what is its material?Current TurnIf there is a thing to the right of it, what color is it?", "content": "