{
"paper_id": "I17-1047",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:37:26.549842Z"
},
"title": "Image-Grounded Conversations: Multimodal Context for Natural Question and Response Generation",
"authors": [
{
"first": "Nasrin",
"middle": [],
"last": "Mostafazadeh",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "BenevolentAI",
"location": {}
},
"email": "nasrin.m@benevolent.ai"
},
{
"first": "Chris",
"middle": [],
"last": "Brockett",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Microsoft Research",
"location": {}
},
"email": "chrisbkt@microsoft.com"
},
{
"first": "Bill",
"middle": [],
"last": "Dolan",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Microsoft Research",
"location": {}
},
"email": ""
},
{
"first": "Michel",
"middle": [],
"last": "Galley",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Microsoft Research",
"location": {}
},
"email": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Microsoft Research",
"location": {}
},
"email": ""
},
{
"first": "Georgios",
"middle": [
"P"
],
"last": "Spithourakis",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University College London",
"location": {}
},
"email": ""
},
{
"first": "Lucy",
"middle": [],
"last": "Vanderwende",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Microsoft Research",
"location": {}
},
"email": ""
}
],
"year": "2017",
"venue": null,
"identifiers": {},
"abstract": "The popularity of image sharing on social media and the engagement it creates between users reflect the important role that visual context plays in everyday conversations. We present a novel task, Image-Grounded Conversations (IGC), in which natural-sounding conversations are generated about a shared image. To benchmark progress, we introduce a new multiple-reference dataset of crowd-sourced, event-centric conversations on images. IGC falls on the continuum between chitchat and goal-directed conversation models, where visual grounding constrains the topic of conversation to event-driven utterances. Experiments with models trained on social media data show that the combination of visual and textual context enhances the quality of generated conversational turns. In human evaluation, the gap between human performance and that of both neural and retrieval architectures suggests that multi-modal IGC presents an interesting challenge for dialog research.",
"pdf_parse": {
"paper_id": "I17-1047",
"_pdf_hash": "",
"abstract": [
{
"text": "The popularity of image sharing on social media and the engagement it creates between users reflect the important role that visual context plays in everyday conversations. We present a novel task, Image-Grounded Conversations (IGC), in which natural-sounding conversations are generated about a shared image. To benchmark progress, we introduce a new multiple-reference dataset of crowd-sourced, event-centric conversations on images. IGC falls on the continuum between chitchat and goal-directed conversation models, where visual grounding constrains the topic of conversation to event-driven utterances. Experiments with models trained on social media data show that the combination of visual and textual context enhances the quality of generated conversational turns. In human evaluation, the gap between human performance and that of both neural and retrieval architectures suggests that multi-modal IGC presents an interesting challenge for dialog research.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Bringing together vision & language in one intelligent conversational system has been one of the longest running goals in AI (Winograd, 1972). Advances in image captioning (Fang et al., 2014; Chen et al., 2015) have enabled much interdisciplinary research in vision and language, from video transcription (Rohrbach et al., 2012), to answering questions about images (Antol et al., 2015; Malinowski and Fritz, 2014), to storytelling around series of photographs (Huang et al., 2016). Most recent work on vision & language focuses on either describing (captioning) the image or answering questions about its visible content. Observing how people naturally engage with one another around images in social media, it is evident that this engagement often takes the form of conversational threads. On Twitter, for example, uploading a photo with an accompanying tweet has become increasingly popular: in June 2015, 28% of tweets reportedly contained an image (Morris et al., 2016). Moreover, across social media, the conversations around shared images range beyond what is explicitly visible in the image. Figure 1 illustrates such a conversation. As this example shows, the conversation is grounded not only in the visible objects (e.g., the boys, the bikes) but, more importantly, in the events and actions (e.g., the race, winning) implicit in the image and its accompanying textual utterance. To humans, these latter aspects are likely to be the most interesting and most meaningful components of a natural conversation; for systems, inferring such implicit aspects can be the most challenging.",
"cite_spans": [
{
"start": 125,
"end": 141,
"text": "(Winograd, 1972)",
"ref_id": "BIBREF42"
},
{
"start": 173,
"end": 192,
"text": "(Fang et al., 2014;",
"ref_id": "BIBREF8"
},
{
"start": 193,
"end": 211,
"text": "Chen et al., 2015;",
"ref_id": "BIBREF2"
},
{
"start": 306,
"end": 329,
"text": "(Rohrbach et al., 2012;",
"ref_id": null
},
{
"start": 368,
"end": 388,
"text": "(Antol et al., 2015;",
"ref_id": "BIBREF0"
},
{
"start": 389,
"end": 416,
"text": "Malinowski and Fritz, 2014)",
"ref_id": "BIBREF20"
},
{
"start": 464,
"end": 484,
"text": "(Huang et al., 2016)",
"ref_id": null
},
{
"start": 946,
"end": 967,
"text": "(Morris et al., 2016)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 1094,
"end": 1102,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper we shift the focus from image as an artifact (as in the existing vision & language work, to be described in Section 2), to image as the context for interaction: we introduce the task of Image-Grounded Conversation (IGC) in which a system must generate conversational turns to proactively drive the interaction forward. IGC thus falls on a continuum between chit-chat (open-ended) and goal-oriented task-completion dialog systems, where the visual context in IGC naturally serves as a detailed topic for a conversation. As conversational agents gain ground in commercial settings (e.g., Siri and Alexa), they will increasingly need to engage humans in ways that seem intelligent and anticipatory of future needs. For example, a conversational agent might engage in a conversation with a user about a camera-roll image in order to elicit background information from the user (e.g., special celebrations, favorite food, the names of friends and family, etc.).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper draws together two threads of investigation that have hitherto remained largely unrelated: vision & language and data-driven conversation modeling. Its contributions are threefold: (1) we introduce multimodal conversational context for formulating questions and responses around images, and support benchmarking with a publicly-released, high-quality, crowd-sourced dataset of 4,222 multi-turn, multi-reference conversations grounded on event-centric images. We analyze various characteristics of this IGC dataset in Section 3.1. (2) We investigate the application of deep neural generation and retrieval approaches for question and response generation tasks (Section 5), trained on 250K 3-turn naturally-occurring image-grounded conversations found on Twitter.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(3) Our experiments suggest that the combination of visual and textual context improves the quality of generated conversational turns (Section 6-7). We hope that this novel task will spark new interest in multimodal conversation modeling.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Visual features combined with language modeling have shown good performance both in image captioning (Devlin et al., 2015; Fang et al., 2014) and in question answering on images (Antol et al., 2015; Ray et al., 2016; Malinowski and Fritz, 2014), when trained on large datasets, such as the COCO dataset (Lin et al., 2014). In Visual Question Answering (VQA) (Antol et al., 2015), a system is tasked with answering a question about a given image, where the questions are constrained to be answerable directly from the image. In other words, the VQA task primarily serves to evaluate the extent to which the system has recognized the explicit content of the image. Figure 2: Typical crowdsourced conversations in IGC (left) and VisDial (right). Das et al. (2017a) extend the VQA scenario by collecting sequential questions from people who are shown only an automatically generated caption, not the image itself. The utterances in this dataset, called 'Visual Dialog' (VisDial), are best viewed as simple one-sided QA exchanges in which humans ask questions and the system provides answers. Figure 2 contrasts an example IGC conversation with the VisDial dataset. As this example shows, IGC involves natural conversations with the image as the grounding, where the literal objects (e.g., the pumpkins) may not even be mentioned in the conversation at all, whereas VisDial targets explicit image understanding. More recently, Das et al. (2017b) have explored the VisDial dataset with richer models that incorporate deep reinforcement learning. Mostafazadeh et al. (2016b) introduce the task of visual question generation (VQG), in which the system itself outputs questions about a given image. Questions are required to be 'natural and engaging', i.e., a person would find them interesting to answer, but need not be answerable from the image alone. In this work, we introduce multimodal context, recognizing that images commonly come associated with a verbal commentary that can affect their interpretation. This is thus a broader, more complex task that involves implicit commonsense reasoning around both image and text.",
"cite_spans": [
{
"start": 101,
"end": 122,
"text": "(Devlin et al., 2015;",
"ref_id": "BIBREF6"
},
{
"start": 123,
"end": 141,
"text": "Fang et al., 2014;",
"ref_id": "BIBREF8"
},
{
"start": 178,
"end": 198,
"text": "(Antol et al., 2015;",
"ref_id": "BIBREF0"
},
{
"start": 199,
"end": 216,
"text": "Ray et al., 2016;",
"ref_id": "BIBREF30"
},
{
"start": 217,
"end": 244,
"text": "Malinowski and Fritz, 2014)",
"ref_id": "BIBREF20"
},
{
"start": 304,
"end": 322,
"text": "(Lin et al., 2014)",
"ref_id": "BIBREF17"
},
{
"start": 360,
"end": 380,
"text": "(Antol et al., 2015)",
"ref_id": "BIBREF0"
},
{
"start": 747,
"end": 765,
"text": "Das et al. (2017a)",
"ref_id": "BIBREF4"
},
{
"start": 1427,
"end": 1445,
"text": "Das et al. (2017b)",
"ref_id": "BIBREF5"
},
{
"start": 1546,
"end": 1573,
"text": "Mostafazadeh et al. (2016b)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [
{
"start": 666,
"end": 674,
"text": "Figure 2",
"ref_id": null
},
{
"start": 1092,
"end": 1100,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Vision and Language",
"sec_num": "2.1"
},
{
"text": "This work is also closely linked to research on data-driven conversation modeling. Ritter et al. (2011) posed response generation as a machine translation task, learning conversations from parallel message-response pairs found on social media. Their work has been successfully extended with the use of deep neural models (Sordoni et al., 2015; Shang et al., 2015; Serban et al., 2015a; Vinyals and Le, 2015; Li et al., 2016a,b) . Sordoni et al. (2015) introduce a context-sensitive neural language model that selects the most probable response conditioned on the conversation history (i.e., a text-only context). In this paper, we extend the contextual approach with multimodal features to build models that are capable of asking questions on topics of interest to a human that might allow a conversational agent to proactively drive a conversation forward.",
"cite_spans": [
{
"start": 83,
"end": 103,
"text": "Ritter et al. (2011)",
"ref_id": "BIBREF31"
},
{
"start": 321,
"end": 343,
"text": "(Sordoni et al., 2015;",
"ref_id": "BIBREF37"
},
{
"start": 344,
"end": 363,
"text": "Shang et al., 2015;",
"ref_id": "BIBREF35"
},
{
"start": 364,
"end": 385,
"text": "Serban et al., 2015a;",
"ref_id": "BIBREF33"
},
{
"start": 386,
"end": 407,
"text": "Vinyals and Le, 2015;",
"ref_id": "BIBREF40"
},
{
"start": 408,
"end": 427,
"text": "Li et al., 2016a,b)",
"ref_id": null
},
{
"start": 430,
"end": 451,
"text": "Sordoni et al. (2015)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data-Driven Conversational Modeling",
"sec_num": "2.2"
},
{
"text": "3 Image-Grounded Conversations",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data-Driven Conversational Modeling",
"sec_num": "2.2"
},
{
"text": "We define the current scope of IGC as the following two consecutive conversational steps:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Definition",
"sec_num": "3.1"
},
{
"text": "\u2022 Question Generation: Given a visual context I and a textual context T (e.g., the first statement in Figure 1), generate a coherent, natural question Q about the image as the second utterance in the conversation. It has been shown that humans achieve greater consensus on what constitutes a natural question to ask given an image (the task of VQG) than on captioning or asking a visually verifiable question (VQA) (Mostafazadeh et al., 2016b). As seen in Figure 1, the question is not directly answerable from the image. Here we emphasize questions as a way of potentially engaging a human in continuing the conversation.",
"cite_spans": [
{
"start": 416,
"end": 444,
"text": "(Mostafazadeh et al., 2016b)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [
{
"start": 102,
"end": 110,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 458,
"end": 466,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Task Definition",
"sec_num": "3.1"
},
{
"text": "\u2022 Response Generation: Given a visual context I, a textual context T, and a question Q, generate a coherent, natural response R to the question as the third utterance in the conversation. In the interests of feasible multi-reference evaluation, we pose question and response generation as two separate tasks. However, all the models presented in this paper can be fed their own generated questions to generate a response.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Definition",
"sec_num": "3.1"
},
{
"text": "The majority of the available corpora for developing data-driven dialogue systems contain task-oriented and goal-driven conversational data (Serban et al., 2015b). For instance, the Ubuntu dialogue corpus (Lowe et al., 2015) is the largest corpus of dialogues (almost 1 million mainly 3-turn dialogues) for the specific topic of troubleshooting Ubuntu problems. On the other hand, for open-ended conversation modeling (chitchat), now a high-demand application in AI, shared datasets with which to track progress are severely lacking. The IGC task presented here lies in the continuum between the two, where the visual grounding of event-centric images constrains the topic of conversation to contentful utterances.",
"cite_spans": [
{
"start": 139,
"end": 161,
"text": "(Serban et al., 2015b)",
"ref_id": "BIBREF34"
},
{
"start": 206,
"end": 225,
"text": "(Lowe et al., 2015)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The ICG Dataset",
"sec_num": "3.2"
},
{
"text": "To enable benchmarking of progress in the IGC task, we constructed the IGC Crowd dataset for validation and testing purposes. We first sampled eventful images from the VQG dataset (Mostafazadeh et al., 2016b), which was extracted by querying a search engine using event-centric query terms. These were then served in a photo gallery of a crowd-sourcing platform we developed using the Turkserver toolkit (Mao et al., 2012), which enables synchronous and real-time interactions between crowd workers on Amazon Mechanical Turk. Multiple workers wait in a virtual lobby to be paired with a conversation partner. After being paired, one of the workers selects an image from the large photo gallery, after which the two workers enter a chat window in which they conduct a short conversation about the selected image. We prompted the workers to naturally drive the conversation forward without using informal/IM language. To enable multi-reference evaluation (Section 6), we crowd-sourced five additional questions and responses for the IGC Crowd contexts and initial questions. Table 1 shows three full conversations found in the IGC Crowd dataset. These examples show that eventful images lead to conversations that are semantically rich and appear to involve commonsense reasoning. Table 2 summarizes basic dataset statistics. The IGC Crowd dataset has been released as the Microsoft Research Image-Grounded Conversation dataset (https://www.microsoft.com/en-us/download/details.aspx?id=55324&751be11f-ede8).",
"cite_spans": [
{
"start": 180,
"end": 208,
"text": "(Mostafazadeh et al., 2016b)",
"ref_id": "BIBREF26"
},
{
"start": 407,
"end": 425,
"text": "(Mao et al., 2012)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [
{
"start": 1077,
"end": 1084,
"text": "Table 1",
"ref_id": "TABREF2"
},
{
"start": 1288,
"end": 1295,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "The ICG Dataset",
"sec_num": "3.2"
},
{
"text": "In this Section, we analyze the IGC dataset to highlight a range of phenomena specific to this task. (Additional material pertaining to the lexical distributions of this dataset can be found in the Supplementary Material.) ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Characteristics",
"sec_num": "4"
},
{
"text": "The task of IGC emphasizes modeling of not only visual but also textual context. We presented human judges with a random sample of 600 triplets of image, textual context, and question (I, T, Q) from each of the IGC Twitter and IGC Crowd datasets and asked them to rate the effectiveness of the visual and the textual context. We define 'effectiveness' to be \"the degree to which the image or text is required in order for the given question to sound natural\". The workers were prompted to make this judgment based on whether or not the question already makes sense without either the image or the text. As Figure 3 demonstrates, both visual and textual contexts are generally highly effective, and an understanding of both would be required for the question that was asked. By way of comparison, Figure 3 also shows the effectiveness of image and text for a sample taken from the Twitter data presented in Section 4.4. We note that the crowd-sourced dataset is more heavily reliant on understanding the textual context than is the Twitter set.",
"cite_spans": [],
"ref_spans": [
{
"start": 599,
"end": 607,
"text": "Figure 3",
"ref_id": "FIGREF1"
},
{
"start": 786,
"end": 794,
"text": "Figure 3",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "The Effectiveness of Multimodal Context",
"sec_num": "4.1"
},
{
"text": "The grounded conversations starting with questions contain a considerable amount of stereotypical commonsense knowledge. To get a better sense of the richness of our IGC Crowd dataset, we manually annotated a random sample of 330 (I, T, Q) triplets in terms of Minsky's frames. Minsky defines 'frame' as follows: \"When one encounters a new situation, one selects from memory a structure called a Frame\" (Minsky, 1974). A frame is thus a commonsense knowledge representation data structure for representing stereotypical situations, such as a wedding ceremony. Minsky further connects frames to the nature of questions: \"[A Frame] is a collection of questions to be asked about a situation\". These questions can ask about the cause, intention, or side-effects of a presented situation. We annotated 1 the FrameNet (Baker et al., 1998) frame evoked by the image I, to be called I_FN, and by the textual context T, T_FN. Then, for the question asked, we annotated the frame slot (Q_FN-slot) associated with a context frame (Q_FN). Figure 4: An example causal and temporal (CaTeRS) annotation on the conversation presented in Figure 1. The rectangular nodes show the event entities and the edges are the semantic links. For simplicity, we show the 'identity' relation between events using gray nodes. The coreference chain is depicted by the underlined words.",
"cite_spans": [
{
"start": 400,
"end": 414,
"text": "(Minsky, 1974)",
"ref_id": "BIBREF22"
},
{
"start": 811,
"end": 831,
"text": "(Baker et al., 1998)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 975,
"end": 983,
"text": "Figure 4",
"ref_id": null
},
{
"start": 1070,
"end": 1078,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Frame Semantic Analysis of Questions",
"sec_num": "4.2"
},
{
"text": "For 17% of cases, we were unable to identify a corresponding Q_FN-slot in FrameNet. As the example in Table 3 shows, the image in isolation often does not evoke any uniquely contentful frame, whereas the textual context frequently does. In only 14% of cases does I_FN = T_FN, which further supports the complementary effect of our multimodal contexts. Moreover, Q_FN = I_FN for 32% of our annotations, whereas Q_FN = T_FN for 47% of the triplets, again showing the effectiveness of textual context in determining the question.",
"cite_spans": [],
"ref_spans": [
{
"start": 166,
"end": 173,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Frame Semantic Analysis of Questions",
"sec_num": "4.2"
},
{
"text": "To further investigate the representation of events and any stereotypical causal and temporal relations between them in the IGC Crowd dataset, we manually annotated a sample of 20 conversations with their causal and temporal event structures.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Event Analysis of Conversations",
"sec_num": "4.3"
},
{
"text": "Here, we followed the Causal and Temporal Relation Scheme (CaTeRS) (Mostafazadeh et al., 2016a) for event entity and event-event semantic relation annotations. Our analysis shows that the IGC utterances are indeed rich in events. On average, each utterance in IGC has 0.71 event entity mentions, such as 'win' or 'remodel'. The semantic link annotation reflects commonsense relations between event mentions in the context of the ongoing conversation. Figure 4 shows an example CaTeRS annotation. The distribution of semantic links in the annotated sample can be found in Figure 5, which shows the frequency of event-event semantic links in a random sample of 20 IGC conversations. These numbers further suggest that in addition to jointly understanding the visual and textual context (including multimodal anaphora resolution, among other challenges), capturing causal and temporal relations between events will likely be necessary for a system to perform the IGC task.",
"cite_spans": [
{
"start": 67,
"end": 95,
"text": "(Mostafazadeh et al., 2016a)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [
{
"start": 450,
"end": 458,
"text": "Figure 4",
"ref_id": null
},
{
"start": 527,
"end": 535,
"text": "Figure 5",
"ref_id": null
},
{
"start": 672,
"end": 680,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Event Analysis of Conversations",
"sec_num": "4.3"
},
{
"text": "Previous work in neural conversation modeling (Ritter et al., 2011; Sordoni et al., 2015) has successfully used Twitter as the source of natural conversations. As training data, we sampled 250K quadruples of {visual context, textual context, question, response} tweet threads from a larger dataset of 1.4 million, extracted from the Twitter Firehose over a 3-year period beginning in May 2013 and filtered to select just those conversations in which the initial turn was associated with an image and the second turn was a question.",
"cite_spans": [
{
"start": 46,
"end": 67,
"text": "(Ritter et al., 2011;",
"ref_id": "BIBREF31"
},
{
"start": 68,
"end": 89,
"text": "Sordoni et al., 2015)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "IGC Twitter Training Dataset",
"sec_num": "4.4"
},
{
"text": "Regular expressions were used to detect questions. To improve the likelihood that the authors are experienced Twitter conversationalists, we further limited extraction to those exchanges where users had actively engaged in at least 30 conversational exchanges during a 3-month period.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "IGC Twitter Training Dataset",
"sec_num": "4.4"
},
{
"text": "Twitter data is noisy: we performed simple normalizations, and filtered out tweets that contained mid-tweet hashtags, were longer than 80 characters, 2 or contained URLs not linking to the image. (Sample conversations from this dataset can be found in the Supplementary Material.) A random sample of tweets suggests that about 46% of the Twitter conversations are affected by prior history between users, making response generation particularly difficult. In addition, the abundance of screenshots and non-photograph graphics is potentially a major source of noise in extracting features for neural generation, though we did not attempt to exclude these from the training set. Figure 6 overviews our three generation models. Across all the models, we use the VGGNet architecture (Simonyan and Zisserman, 2015) for computing deep convolutional image features. We use the 4096-dimensional output of the last fully connected layer (fc7) as the input to all the models sensitive to visual context.",
"cite_spans": [
{
"start": 777,
"end": 807,
"text": "(Simonyan and Zisserman, 2015)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [
{
"start": 675,
"end": 683,
"text": "Figure 6",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "IGC Twitter Training Dataset",
"sec_num": "4.4"
},
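{
"text": "A minimal sketch (not from the original paper) of the kind of question detection and tweet filtering described in this subsection; the exact regular expressions and heuristics used for IGC Twitter are not reported, so the pattern, the hashtag check, and the helper names below are illustrative assumptions.\n\nimport re\n\n# Illustrative question detector: an interrogative opener or a trailing question mark.\nQUESTION_RE = re.compile(r'(^(who|what|when|where|why|how|is|are|do|does|did|can|could|will|would)\\b)|(\\?\\s*$)', re.IGNORECASE)\n\ndef is_question(utterance):\n    return bool(QUESTION_RE.search(utterance.strip()))\n\ndef keep_turn(utterance):\n    # Filters mentioned in the text: 80-character cap, no mid-tweet hashtags, no extra URLs (simplified: reject any URL).\n    if len(utterance) > 80:\n        return False\n    tokens = utterance.split()\n    if any('#' in tok for tok in tokens[:-1]):  # hashtag before the final token, treated as mid-tweet (assumption)\n        return False\n    if 'http://' in utterance or 'https://' in utterance:\n        return False\n    return True\n\ndef keep_thread(context, question, response):\n    return all(keep_turn(t) for t in (context, question, response)) and is_question(question)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "IGC Twitter Training Dataset",
"sec_num": "4.4"
},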
{
"text": "Visual Context Sensitive Model (V-Gen). Similar to Recurrent Neural Network (RNN) models for image captioning (Devlin et al., 2015), V-Gen transforms the image feature vector to a 500-dimensional vector that serves as the initial recurrent state of a 500-dimensional one-layer Gated Recurrent Unit (GRU), which is the decoder module. The output sentence is generated one word at a time until the <EOS> (end-of-sentence) token is generated. We set the vocabulary size to 6,000, which yielded the best results on the validation set. For this model, we obtained better results with greedy decoding. Unknown words are mapped to an <UNK> token during training, which is not allowed to be generated at decoding time.",
"cite_spans": [
{
"start": 110,
"end": 131,
"text": "(Devlin et al., 2015;",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Generation Models",
"sec_num": "5.1"
},
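{
"text": "A minimal PyTorch-style sketch (not the authors' implementation) of an image-conditioned GRU decoder with greedy decoding, using the stated dimensions (4096-d fc7 input, 500-d hidden state, vocabulary of 6,000); the tanh projection, the maximum length, and the class and argument names are assumptions for illustration.\n\nimport torch\nimport torch.nn as nn\n\nclass VGenDecoder(nn.Module):\n    # Sketch: fc7 image feature -> initial GRU state -> greedy word-by-word decoding.\n    def __init__(self, vocab_size=6000, hidden=500, emb=500):\n        super().__init__()\n        self.img_proj = nn.Linear(4096, hidden)  # fc7 (4096-d) -> 500-d initial recurrent state\n        self.embed = nn.Embedding(vocab_size, emb)\n        self.gru = nn.GRU(emb, hidden, num_layers=1, batch_first=True)\n        self.out = nn.Linear(hidden, vocab_size)\n\n    def greedy_decode(self, fc7, bos_id, eos_id, unk_id, max_len=13):\n        h = torch.tanh(self.img_proj(fc7)).unsqueeze(0)  # fc7: (1, 4096) -> initial state (1, 1, hidden)\n        tok = torch.tensor([[bos_id]])\n        words = []\n        for _ in range(max_len):\n            out, h = self.gru(self.embed(tok), h)\n            logits = self.out(out[:, -1])\n            logits[0, unk_id] = float('-inf')  # <UNK> is never emitted at decoding time\n            tok = logits.argmax(dim=-1, keepdim=True)\n            if tok.item() == eos_id:\n                break\n            words.append(tok.item())\n        return words",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generation Models",
"sec_num": "5.1"
},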
{
"text": "Textual Context Sensitive Model (T-Gen). This is a neural Machine Translation-like model that maps an input sequence to an output sequence (a Seq2Seq model (Cho et al., 2014; Sutskever et al., 2014)) using an encoder and a decoder RNN. The decoder module is the same as in the model described above, but in this case the initial recurrent state is the 500-dimensional encoding of the textual context. For consistency, we use the same vocabulary size and number of layers as in the V-Gen model.",
"cite_spans": [
{
"start": 154,
"end": 172,
"text": "(Cho et al., 2014;",
"ref_id": "BIBREF3"
},
{
"start": 173,
"end": 196,
"text": "Sutskever et al., 2014)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Generation Models",
"sec_num": "5.1"
},
{
"text": "Visual & Textual Context Sensitive Model (V&T-Gen). This model fully leverages both textual and visual contexts. The vision feature is transformed to a 500-dimensional vector, and the textual context is likewise encoded into a 500-dimensional vector. The textual feature vector can be obtained using either a bag-of-words representation (V&T.BOW-Gen) or an RNN (V&T.RNN-Gen), as depicted in Figure 7. The textual feature vector is then concatenated with the vision vector and fed into a fully connected (FC) feed-forward neural network. As a result, we obtain a single 500-dimensional vector encoding both visual and textual context, which then serves as the initial recurrent state of the decoder RNN.",
"cite_spans": [],
"ref_spans": [
{
"start": 391,
"end": 399,
"text": "Figure 7",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Generation Models",
"sec_num": "5.1"
},
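{
"text": "A sketch (not the authors' code) of the V&T-Gen context fusion just described: the projected image vector and the encoded textual context are concatenated and passed through a fully connected layer to form the decoder's initial recurrent state. The tanh nonlinearities and exact layer names are assumptions.\n\nimport torch\nimport torch.nn as nn\n\nclass MultimodalContextEncoder(nn.Module):\n    # 500-d vision vector + 500-d text vector -> concatenation -> FC layer -> 500-d decoder init state.\n    def __init__(self, vocab_size=6000, hidden=500, emb=500):\n        super().__init__()\n        self.img_proj = nn.Linear(4096, hidden)                    # VGGNet fc7 -> 500-d\n        self.embed = nn.Embedding(vocab_size, emb)\n        self.text_encoder = nn.GRU(emb, hidden, batch_first=True)  # RNN variant (V&T.RNN-Gen)\n        self.fuse = nn.Linear(2 * hidden, hidden)                  # FC layer over the concatenated contexts\n\n    def forward(self, fc7, text_ids):\n        v = torch.tanh(self.img_proj(fc7))                    # (batch, 500)\n        _, h_text = self.text_encoder(self.embed(text_ids))   # h_text: (1, batch, 500)\n        fused = torch.tanh(self.fuse(torch.cat([v, h_text[-1]], dim=-1)))\n        return fused.unsqueeze(0)                              # initial hidden state for the decoder GRU",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generation Models",
"sec_num": "5.1"
},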
{
"text": "In order to generate the response (the third utterance in the conversation), we need to represent the conversational turns in the textual context input. There are various ways to represent conversational history, including a bag of words model, or a concatenation of all textual utterances into one sentence (Sordoni et al., 2015) . For response generation, we implement a more complex treatment in which utterances are fed into an RNN one word at a time (Figure 7) following their temporal order in the conversation. An <UTT> marker designates the boundary between successive utterances.",
"cite_spans": [
{
"start": 308,
"end": 330,
"text": "(Sordoni et al., 2015)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [
{
"start": 455,
"end": 465,
"text": "(Figure 7)",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Generation Models",
"sec_num": "5.1"
},
{
"text": "Decoding and Reranking. For all generation models, at decoding time we generate N-best lists using left-to-right beam search with beam size 25. We set the maximum number of tokens to 13 for the generated partial hypotheses. Any partial hypothesis that reaches the <EOS> token becomes a viable full hypothesis for reranking. The first few hypotheses at the top of the N-best lists generated by Seq2Seq models tend to be very generic, 3 disregarding the input context. To address this issue, we rerank the N-best list using the following score function:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generation Models",
"sec_num": "5.1"
},
{
"text": "log p(h|C) + \u03bb idf(h,D) + \u00b5|h| + \u03ba V (h) (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generation Models",
"sec_num": "5.1"
},
{
"text": "where p(h|C) is the probability of the generated hypothesis h given the context C. The function V counts the number of verbs in the hypothesis and |h| denotes the number of tokens in the hypothesis. The function idf is the inverse document frequency, computing how common a hypothesis is across all the generated N-best lists. Here D is the set of all N-best lists and d is a specific N-best list. We define idf(h, D) = log(|D| / |{d \u2208 D : h \u2208 d}|), where we set N = 10 to truncate each N-best list. These parameters were selected following reranking experiments on the validation set. We optimize all the parameters of the scoring function towards maximizing the smoothed-BLEU score (Lin and Och, 2004) using the Pairwise Ranking Optimization algorithm (Hopkins and May, 2011).",
"cite_spans": [
{
"start": 675,
"end": 694,
"text": "(Lin and Och, 2004)",
"ref_id": "BIBREF16"
},
{
"start": 745,
"end": 768,
"text": "(Hopkins and May, 2011)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Generation Models",
"sec_num": "5.1"
},
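{
"text": "A sketch of the reranking score in Equation (1), assuming hypotheses are token lists whose log-probabilities under the model are already available; the weight values and the verb counter passed in are placeholders (the paper tunes the weights with Pairwise Ranking Optimization rather than fixing them).\n\nimport math\n\ndef idf(hyp, nbest_lists):\n    # idf(h, D) = log(|D| / |{d in D : h in d}|), over the collection D of all N-best lists (N = 10).\n    containing = sum(1 for d in nbest_lists if hyp in d)\n    return math.log(len(nbest_lists) / max(containing, 1))\n\ndef rerank_score(log_prob, hyp_tokens, nbest_lists, count_verbs, lam=1.0, mu=0.1, kappa=0.5):\n    # Equation (1): log p(h|C) + lambda * idf(h, D) + mu * |h| + kappa * V(h).\n    hyp = ' '.join(hyp_tokens)\n    return (log_prob\n            + lam * idf(hyp, nbest_lists)\n            + mu * len(hyp_tokens)\n            + kappa * count_verbs(hyp_tokens))\n\n# Usage sketch (hypothetical names):\n# best = max(nbest, key=lambda h: rerank_score(logprobs[h], h.split(), all_nbest_lists, count_verbs))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generation Models",
"sec_num": "5.1"
},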
{
"text": "In addition to generation, we implemented two retrieval models customized for the tasks of question and response generation. Work in vision and language has demonstrated the effectiveness of retrieval models, where one uses the annotation (e.g., caption) of a nearest neighbor in the training image set to annotate a given test image (Mostafazadeh et al., 2016b;",
"cite_spans": [
{
"start": 334,
"end": 362,
"text": "(Mostafazadeh et al., 2016b;",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Retrieval Models",
"sec_num": "5.2"
},
{
"text": "Hodosh et al., 2013; Ordonez et al., 2011; Farhadi et al., 2010). Visual Context Sensitive Model (V-Ret). This model uses only the provided image for retrieval. First, we find a set of K nearest training images for the given test image based on cosine similarity of the fc7 vision feature vectors. Then we retrieve those K annotations as our pool of K candidates. Finally, we compute the textual similarity among the questions in the pool according to a Smoothed-BLEU (Lin and Och, 2004) similarity score, then emit the sentence that has the highest similarity to the rest of the pool.",
"cite_spans": [
{
"start": 69,
"end": 82,
"text": "et al., 2013;",
"ref_id": "BIBREF11"
},
{
"start": 83,
"end": 104,
"text": "Ordonez et al., 2011;",
"ref_id": "BIBREF27"
},
{
"start": 105,
"end": 126,
"text": "Farhadi et al., 2010)",
"ref_id": "BIBREF9"
},
{
"start": 129,
"end": 134,
"text": "-Ret)",
"ref_id": null
},
{
"start": 500,
"end": 518,
"text": "(Lin and Och, 2004",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Retrieval Models",
"sec_num": "5.2"
},
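{
"text": "A sketch of the V-Ret consensus retrieval described above, assuming precomputed fc7 vectors and using NLTK's smoothed sentence-level BLEU as a stand-in for the Smoothed-BLEU similarity; K and the normalization details are illustrative.\n\nimport numpy as np\nfrom nltk.translate.bleu_score import sentence_bleu, SmoothingFunction\n\ndef v_ret(test_fc7, train_fc7, train_questions, k=10):\n    # 1) K nearest training images by cosine similarity of fc7 vectors.\n    train = train_fc7 / np.linalg.norm(train_fc7, axis=1, keepdims=True)\n    query = test_fc7 / np.linalg.norm(test_fc7)\n    nearest = np.argsort(-(train @ query))[:k]\n    pool = [train_questions[i].split() for i in nearest]\n    if len(pool) == 1:\n        return ' '.join(pool[0])\n    # 2) Emit the pooled question most similar (smoothed BLEU) to the rest of the pool.\n    smooth = SmoothingFunction().method1\n    def consensus(cand):\n        refs = [q for q in pool if q is not cand]\n        return sentence_bleu(refs, cand, smoothing_function=smooth)\n    return ' '.join(max(pool, key=consensus))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Retrieval Models",
"sec_num": "5.2"
},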
{
"text": "Visual & Textual Context Sensitive Model (V&T-Ret). This model uses a linear combination of fc7 and word2vec feature vectors for retrieving similar training instances.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Retrieval Models",
"sec_num": "5.2"
},
{
"text": "We provide both human (Table 5 ) and automatic (Table 6 ) evaluations for our question and response generation tasks on the IGC Crowd test set. We crowdsourced our human evaluation on an AMT-like crowdsourcing system, asking seven crowd workers to each rate the quality of candidate questions or responses on a three-point Likert-like scale, ranging from 1 to 3 (the highest). To ensure a calibrated rating, we showed the human judges all system hypotheses for a particular test case at the same time. System outputs were randomly ordered to prevent judges from guessing which systems were which on the basis of position. After collecting judgments, we averaged the scores throughout the test set for each model. We discarded any annotators whose ratings varied from the mean by more than 2 standard deviations.",
"cite_spans": [],
"ref_spans": [
{
"start": 22,
"end": 30,
"text": "(Table 5",
"ref_id": "TABREF10"
},
{
"start": 47,
"end": 55,
"text": "(Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluation Setup",
"sec_num": "6"
},
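{
"text": "A sketch of the judgment aggregation just described (averaging the 1-3 ratings and discarding annotators whose ratings deviate from the mean by more than two standard deviations); the exact deviation statistic is not specified, so the per-annotator-mean reading below is an assumption.\n\nimport numpy as np\n\ndef aggregate_ratings(ratings_by_annotator):\n    # ratings_by_annotator: dict mapping annotator id -> list of 1-3 scores over the test set.\n    per_mean = {a: float(np.mean(r)) for a, r in ratings_by_annotator.items()}\n    means = np.array(list(per_mean.values()))\n    mu, sigma = means.mean(), means.std()\n    # Keep annotators within two standard deviations of the mean annotator score.\n    kept = [a for a, m in per_mean.items() if abs(m - mu) <= 2 * sigma]\n    scores = np.concatenate([np.asarray(ratings_by_annotator[a], dtype=float) for a in kept])\n    return float(scores.mean())",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Setup",
"sec_num": "6"
},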
{
"text": "Although human evaluation is to be preferred, and currently essential in open-domain generation tasks involving intrinsically diverse outputs, it is useful to have an automatic metric for day-to-day evaluation. For ease of replicability, we use the standard Machine Translation metric, BLEU (Papineni et al., 2002) , which captures n-gram overlap between hypotheses and multiple references. Results reported in Table 6 employ BLEU with equal weights up to 4-grams at corpus-level on the multi-reference IGC Crowd test set. Although Liu et al. (2016) suggest that BLEU fails to correlate with human judgment at the sentence level, correlation increases when BLEU is applied at the document or corpus level (Galley et al., 2015; Przybocki et al., 2008) .",
"cite_spans": [
{
"start": 291,
"end": 314,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF28"
},
{
"start": 532,
"end": 549,
"text": "Liu et al. (2016)",
"ref_id": "BIBREF18"
},
{
"start": 705,
"end": 726,
"text": "(Galley et al., 2015;",
"ref_id": "BIBREF37"
},
{
"start": 727,
"end": 750,
"text": "Przybocki et al., 2008)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [
{
"start": 411,
"end": 418,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluation Setup",
"sec_num": "6"
},
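{
"text": "A sketch of corpus-level, multi-reference BLEU with equal weights up to 4-grams, using NLTK rather than the paper's own scoring script; the tokenization and the toy inputs in the usage comment are illustrative.\n\nfrom nltk.translate.bleu_score import corpus_bleu\n\ndef multi_reference_bleu(hypotheses, references):\n    # hypotheses: list of tokenized system outputs.\n    # references: list of lists of tokenized references (the multi-reference IGC Crowd annotations).\n    return corpus_bleu(references, hypotheses, weights=(0.25, 0.25, 0.25, 0.25))\n\n# Usage sketch:\n# hyps = [['did', 'he', 'win', '?']]\n# refs = [[['did', 'he', 'end', 'up', 'winning', '?'], ['who', 'won', '?']]]\n# print(multi_reference_bleu(hyps, refs))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Setup",
"sec_num": "6"
},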
{
"text": "We experimented with all the models presented in Section 5. For question generation, we used the visual & textual context sensitive model with a bag-of-words representation of the textual context (V&T.BOW-Gen), which achieved better results. Earlier vision & language work such as VQA (Antol et al., 2015) has shown that a bag-of-words baseline outperforms LSTM-based models for representing textual input when visual features are available (Zhou et al., 2015). In response generation, which needs to account for textual input consisting of two turns, we used the V&T.RNN-Gen model as the visual & textual context sensitive model for the response rows of Tables 5 and 6. Since generating a response solely from visual context is unlikely to be successful, we did not use the V-Gen model in response generation. All models are trained on the IGC Twitter dataset, except for VQG, which shares the same architecture as the V-Gen model but is trained on 7,500 questions from the VQG dataset (Mostafazadeh et al., 2016b) as a point of reference. We also include the gold human references from the IGC Crowd dataset in the human evaluation to set a bound on human performance. Table 4 presents example generations by our best performing systems. In the human evaluation shown in Table 5, the model that encodes both visual and textual context outperforms the others. We note that human judges preferred the top generation in the n-best list over the reranked best, likely owing to the tradeoff between a safe and generic utterance and a riskier but contentful one. The human gold references are consistently favored throughout the table. We take this as evidence that the IGC Crowd test set provides a robust and challenging test set for benchmarking progress.",
"cite_spans": [
{
"start": 272,
"end": 292,
"text": "(Antol et al., 2015)",
"ref_id": "BIBREF0"
},
{
"start": 428,
"end": 447,
"text": "(Zhou et al., 2015)",
"ref_id": "BIBREF44"
},
{
"start": 971,
"end": 999,
"text": "(Mostafazadeh et al., 2016b)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [
{
"start": 1155,
"end": 1162,
"text": "Table 4",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "7"
},
{
"text": "As shown in Table 6 (results of evaluation using multi-reference BLEU), BLEU scores are low, as is characteristic of language tasks with intrinsically diverse outputs (Li et al., 2016b,a). On BLEU, the multimodal V&T model outperforms all the other models across test sets, except for the VQG model, which does significantly better. We attribute this to two issues: (1) the VQG training dataset contains event-centric images similar to the IGC Crowd test set; (2) training on a high-quality crowd-sourced dataset with controlled parameters can, to a significant extent, produce better results on similarly crowd-sourced test data than training on data found \"in the wild\" such as Twitter. However, crowd-sourcing multi-turn conversations between paired workers at large scale is prohibitively expensive, a factor that favors the use of readily available, naturally-occurring but noisier data. Overall, in both automatic and human evaluation, the question generation models are more successful than response generation. This disparity might be overcome by (1) implementation of more sophisticated systems for richer modeling of long contexts across multiple conversational turns, and (2) training on larger, higher-quality datasets.",
"cite_spans": [
{
"start": 112,
"end": 120,
"text": "(Greedy)",
"ref_id": null
},
{
"start": 228,
"end": 248,
"text": "(Li et al., 2016b,a)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 12,
"end": 19,
"text": "Table 6",
"ref_id": null
},
{
"start": 145,
"end": 152,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "7"
},
{
"text": "We have introduced a new task of multimodal image-grounded conversation, in which, given an image and a natural language text, the system must generate meaningful conversational turns, the second turn being a question. We are releasing to the research community a crowd-sourced dataset of 4,222 high-quality, multiple-turn, multiple-reference conversations about eventful images. Inasmuch as this dataset is not tied to the characteristics of any specific social media resource, e.g., Twitter or Reddit, we expect it to remain stable over time, as it is less susceptible to attrition in the form of deleted posts or accounts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "8"
},
{
"text": "Our experiments provide evidence that capturing multimodal context improves the quality of question and response generation. Nonetheless, the performance gap between our best models and humans opens opportunities for further research in the continuum from casual chit-chat conversation to more topic-oriented dialog. We expect that the addition of other forms of grounding, such as temporal and geolocation information, often embedded in images, will further improve performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "8"
},
{
"text": "In this paper, we illustrated the application of this dataset using simple models trained on conversations on Twitter. In the future, we expect that more complex models and richer datasets will permit emergence of intelligent human-like agent behavior that can engage in implicit commonsense reasoning around images and proactively drive the interaction forward.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "8"
},
{
"text": "These annotations can be accessed through https://goo.gl/MVyGzP",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Pilot studies showed that an 80-character limit more effectively retains one-sentence utterances that are to the point.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "An example generic question is 'Where is this?' and a generic response is 'I don't know.'",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We are grateful to Piali Choudhury, Rebecca Hanson, Ece Kamar, and Andrew Mao for their assistance with crowd-sourcing. We would also like to thank our reviewers for their helpful comments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "VQA: Visual question answering",
"authors": [
{
"first": "Stanislaw",
"middle": [],
"last": "Antol",
"suffix": ""
},
{
"first": "Aishwarya",
"middle": [],
"last": "Agrawal",
"suffix": ""
},
{
"first": "Jiasen",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Margaret",
"middle": [],
"last": "Mitchell",
"suffix": ""
},
{
"first": "Dhruv",
"middle": [],
"last": "Batra",
"suffix": ""
},
{
"first": "C",
"middle": [
"Lawrence"
],
"last": "Zitnick",
"suffix": ""
},
{
"first": "Devi",
"middle": [],
"last": "Parikh",
"suffix": ""
}
],
"year": 2015,
"venue": "Proc. ICCV",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Mar- garet Mitchell, Dhruv Batra, C. Lawrence Zitnick, and Devi Parikh. 2015. VQA: Visual question an- swering. In Proc. ICCV.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "The Berkeley FrameNet Project",
"authors": [
{
"first": "Collin",
"middle": [
"F"
],
"last": "Baker",
"suffix": ""
},
{
"first": "Charles",
"middle": [
"J"
],
"last": "Fillmore",
"suffix": ""
},
{
"first": "John",
"middle": [
"B"
],
"last": "Lowe",
"suffix": ""
}
],
"year": 1998,
"venue": "Proc. COLING",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Collin F. Baker, Charles J. Fillmore, and John B. Lowe. 1998. The Berkeley FrameNet Project. In Proc. COLING.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "D\u00e9j\u00e0 image-captions: A corpus of expressive descriptions in repetition",
"authors": [
{
"first": "Jianfu",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Polina",
"middle": [],
"last": "Kuznetsova",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Warren",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
}
],
"year": 2015,
"venue": "Proc. NAACL-HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jianfu Chen, Polina Kuznetsova, David Warren, and Yejin Choi. 2015. D\u00e9j\u00e0 image-captions: A corpus of expressive descriptions in repetition. In Proc. NAACL-HLT.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Learning phrase representations using rnn encoder-decoder for statistical machine translation",
"authors": [
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Bart",
"middle": [],
"last": "Van Merri\u00ebnboer",
"suffix": ""
},
{
"first": "Caglar",
"middle": [],
"last": "Gulcehre",
"suffix": ""
},
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Fethi",
"middle": [],
"last": "Bougares",
"suffix": ""
},
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "Proc. EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kyunghyun Cho, Bart Van Merri\u00ebnboer, Caglar Gul- cehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. In Proc. EMNLP.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Visual dialog",
"authors": [
{
"first": "Abhishek",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Satwik",
"middle": [],
"last": "Kottur",
"suffix": ""
},
{
"first": "Khushi",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Avi",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Deshraj",
"middle": [],
"last": "Yadav",
"suffix": ""
},
{
"first": "M",
"middle": [
"F"
],
"last": "Jos\u00e9",
"suffix": ""
},
{
"first": "Devi",
"middle": [],
"last": "Moura",
"suffix": ""
},
{
"first": "Dhruv",
"middle": [],
"last": "Parikh",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Batra",
"suffix": ""
}
],
"year": 2017,
"venue": "Proc. CVPR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, Jos\u00e9 M. F. Moura, Devi Parikh, and Dhruv Batra. 2017a. Visual dialog. In Proc. CVPR.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Learning cooperative visual dialog agents with deep reinforcement learning",
"authors": [
{
"first": "Abhishek",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Satwik",
"middle": [],
"last": "Kottur",
"suffix": ""
},
{
"first": "M",
"middle": [
"F"
],
"last": "Jos\u00e9",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Moura",
"suffix": ""
},
{
"first": "Dhruv",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Batra",
"suffix": ""
}
],
"year": 2017,
"venue": "Proc. ICCV",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abhishek Das, Satwik Kottur, Jos\u00e9 M. F. Moura, Stefan Lee, and Dhruv Batra. 2017b. Learning cooperative visual dialog agents with deep reinforcement learn- ing. In Proc. ICVV.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Language models for image captioning: The quirks and what works",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Fang",
"suffix": ""
},
{
"first": "Saurabh",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Zweig",
"suffix": ""
},
{
"first": "Margaret",
"middle": [],
"last": "Mitchell",
"suffix": ""
}
],
"year": 2015,
"venue": "Proc. ACL-IJCNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Hao Cheng, Hao Fang, Saurabh Gupta, Li Deng, Xiaodong He, Geoffrey Zweig, and Mar- garet Mitchell. 2015. Language models for image captioning: The quirks and what works. In Proc. ACL-IJCNLP.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Long-term recurrent convolutional networks for visual recognition and description",
"authors": [
{
"first": "Jeff",
"middle": [],
"last": "Donahue",
"suffix": ""
},
{
"first": "Lisa",
"middle": [
"Anne"
],
"last": "Hendricks",
"suffix": ""
},
{
"first": "Sergio",
"middle": [],
"last": "Guadarrama",
"suffix": ""
},
{
"first": "Marcus",
"middle": [],
"last": "Rohrbach",
"suffix": ""
},
{
"first": "Subhashini",
"middle": [],
"last": "Venugopalan",
"suffix": ""
},
{
"first": "Kate",
"middle": [],
"last": "Saenko",
"suffix": ""
},
{
"first": "Trevor",
"middle": [
"Darrell"
],
"last": "",
"suffix": ""
}
],
"year": 2015,
"venue": "Proc. CVPR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeff Donahue, Lisa Anne Hendricks, Sergio Guadar- rama, Marcus Rohrbach, Subhashini Venugopalan, Kate Saenko, and Trevor Darrell. 2015. Long-term recurrent convolutional networks for visual recogni- tion and description. In Proc. CVPR.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "From captions to visual concepts and back",
"authors": [
{
"first": "Saurabh",
"middle": [],
"last": "Hao Fang",
"suffix": ""
},
{
"first": "Forrest",
"middle": [
"N"
],
"last": "Gupta",
"suffix": ""
},
{
"first": "Rupesh",
"middle": [],
"last": "Iandola",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Srivastava",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Doll\u00e1r",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Margaret",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "John",
"middle": [
"C"
],
"last": "Mitchell",
"suffix": ""
},
{
"first": "C",
"middle": [
"Lawrence"
],
"last": "Platt",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Zitnick",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Zweig",
"suffix": ""
}
],
"year": 2014,
"venue": "Proc. CVPR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hao Fang, Saurabh Gupta, Forrest N. Iandola, Ru- pesh Srivastava, Li Deng, Piotr Doll\u00e1r, Jianfeng Gao, Xiaodong He, Margaret Mitchell, John C. Platt, C. Lawrence Zitnick, and Geoffrey Zweig. 2014. From captions to visual concepts and back. In Proc. CVPR.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Every picture tells a story: Generating sentences from images",
"authors": [
{
"first": "Ali",
"middle": [],
"last": "Farhadi",
"suffix": ""
},
{
"first": "Mohsen",
"middle": [],
"last": "Hejrati",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [
"Amin"
],
"last": "Sadeghi",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Young",
"suffix": ""
},
{
"first": "Cyrus",
"middle": [],
"last": "Rashtchian",
"suffix": ""
},
{
"first": "Julia",
"middle": [],
"last": "Hockenmaier",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Forsyth",
"suffix": ""
}
],
"year": 2010,
"venue": "Proc. ECCV",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ali Farhadi, Mohsen Hejrati, Mohammad Amin Sadeghi, Peter Young, Cyrus Rashtchian, Julia Hockenmaier, and David Forsyth. 2010. Every pic- ture tells a story: Generating sentences from images. In Proc. ECCV.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "deltaBLEU: A discriminative metric for generation tasks with intrinsically diverse targets",
"authors": [
{
"first": "Michel",
"middle": [],
"last": "Galley",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Brockett",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Sordoni",
"suffix": ""
},
{
"first": "Yangfeng",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
}
],
"year": 2015,
"venue": "Proc. ACL-IJCNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michel Galley, Chris Brockett, Alessandro Sordoni, Yangfeng Ji, Michael Auli, Chris Quirk, Mar- garet Mitchell, Jianfeng Gao, and Bill Dolan. 2015. deltaBLEU: A discriminative metric for generation tasks with intrinsically diverse targets. In Proc. ACL-IJCNLP.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Framing image description as a ranking task: Data, models and evaluation metrics",
"authors": [
{
"first": "Micah",
"middle": [],
"last": "Hodosh",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Young",
"suffix": ""
},
{
"first": "Julia",
"middle": [],
"last": "Hockenmaier",
"suffix": ""
}
],
"year": 2013,
"venue": "J. Artif. Int. Res",
"volume": "47",
"issue": "1",
"pages": "853--899",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Micah Hodosh, Peter Young, and Julia Hockenmaier. 2013. Framing image description as a ranking task: Data, models and evaluation metrics. J. Artif. Int. Res., 47(1):853-899.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Tuning as ranking",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Hopkins",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2011,
"venue": "Proc. EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Hopkins and Jonathan May. 2011. Tuning as ranking. In Proc. EMNLP.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Visual storytelling",
"authors": [
{
"first": ";",
"middle": [],
"last": "Ting-Hao",
"suffix": ""
},
{
"first": ")",
"middle": [],
"last": "Kenneth",
"suffix": ""
},
{
"first": "Francis",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Nasrin",
"middle": [],
"last": "Ferraro",
"suffix": ""
},
{
"first": "Ishan",
"middle": [],
"last": "Mostafazadeh",
"suffix": ""
},
{
"first": "Aishwarya",
"middle": [],
"last": "Misra",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Agrawal",
"suffix": ""
},
{
"first": "Ross",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "Girshick",
"suffix": ""
},
{
"first": "Pushmeet",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Dhruv",
"middle": [],
"last": "Kohli",
"suffix": ""
},
{
"first": "C",
"middle": [
"Lawrence"
],
"last": "Batra",
"suffix": ""
},
{
"first": "Devi",
"middle": [],
"last": "Zitnick",
"suffix": ""
},
{
"first": "Lucy",
"middle": [],
"last": "Parikh",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Vanderwende",
"suffix": ""
}
],
"year": 2016,
"venue": "Proc. NAACL-HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ting-Hao (Kenneth) Huang, Francis Ferraro, Nasrin Mostafazadeh, Ishan Misra, Aishwarya Agrawal, Ja- cob Devlin, Ross Girshick, Xiaodong He, Push- meet Kohli, Dhruv Batra, C. Lawrence Zitnick, Devi Parikh, Lucy Vanderwende, Michel Galley, and Margaret Mitchell. 2016. Visual storytelling. In Proc. NAACL-HLT.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "A diversity-promoting objective function for neural conversation models",
"authors": [
{
"first": "Jiwei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Michel",
"middle": [],
"last": "Galley",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Brockett",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Dolan",
"suffix": ""
}
],
"year": 2016,
"venue": "Proc. NAACL-HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016a. A diversity-promoting ob- jective function for neural conversation models. In Proc. NAACL-HLT.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "A persona-based neural conversation model",
"authors": [
{
"first": "Jiwei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Michel",
"middle": [],
"last": "Galley",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Brockett",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Dolan",
"suffix": ""
}
],
"year": 2016,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016b. A persona-based neural con- versation model. In Proc. ACL.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Automatic evaluation of machine translation quality using longest common subsequence and skip-bigram statistics",
"authors": [
{
"first": "Chin-Yew",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Franz Josef",
"middle": [],
"last": "Och",
"suffix": ""
}
],
"year": 2004,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chin-Yew Lin and Franz Josef Och. 2004. Auto- matic evaluation of machine translation quality us- ing longest common subsequence and skip-bigram statistics. In Proc. ACL.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Microsoft coco: Common objects in context",
"authors": [
{
"first": "Tsung-Yi",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Maire",
"suffix": ""
},
{
"first": "Serge",
"middle": [],
"last": "Belongie",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Hays",
"suffix": ""
},
{
"first": "Pietro",
"middle": [],
"last": "Perona",
"suffix": ""
},
{
"first": "Deva",
"middle": [],
"last": "Ramanan",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Dollr",
"suffix": ""
},
{
"first": "C",
"middle": [
"Lawrence"
],
"last": "Zitnick",
"suffix": ""
}
],
"year": 2014,
"venue": "Proc. ECCV",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollr, and C. Lawrence Zitnick. 2014. Microsoft coco: Common objects in context. In Proc. ECCV.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "How NOT to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation",
"authors": [
{
"first": "Chia-Wei",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Lowe",
"suffix": ""
},
{
"first": "Iulian",
"middle": [],
"last": "Serban",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Noseworthy",
"suffix": ""
},
{
"first": "Laurent",
"middle": [],
"last": "Charlin",
"suffix": ""
},
{
"first": "Joelle",
"middle": [],
"last": "Pineau",
"suffix": ""
}
],
"year": 2016,
"venue": "Proc. EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chia-Wei Liu, Ryan Lowe, Iulian Serban, Mike Nose- worthy, Laurent Charlin, and Joelle Pineau. 2016. How NOT to evaluate your dialogue system: An em- pirical study of unsupervised evaluation metrics for dialogue response generation. In Proc. EMNLP.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "The ubuntu dialogue corpus: A large dataset for research in unstructured multi-turn dialogue systems",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Lowe",
"suffix": ""
},
{
"first": "Nissan",
"middle": [],
"last": "Pow",
"suffix": ""
},
{
"first": "Iulian",
"middle": [],
"last": "Serban",
"suffix": ""
},
{
"first": "Joelle",
"middle": [],
"last": "Pineau",
"suffix": ""
}
],
"year": 2015,
"venue": "Proc. SIGDIAL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan Lowe, Nissan Pow, Iulian Serban, and Joelle Pineau. 2015. The ubuntu dialogue corpus: A large dataset for research in unstructured multi-turn dia- logue systems. In Proc. SIGDIAL.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "A multiworld approach to question answering about realworld scenes based on uncertain input",
"authors": [
{
"first": "Mateusz",
"middle": [],
"last": "Malinowski",
"suffix": ""
},
{
"first": "Mario",
"middle": [],
"last": "Fritz",
"suffix": ""
}
],
"year": 2014,
"venue": "NIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mateusz Malinowski and Mario Fritz. 2014. A multi- world approach to question answering about real- world scenes based on uncertain input. In NIPS.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Turkserver: Enabling synchronous and longitudinal online experiments",
"authors": [
{
"first": "Andrew",
"middle": [],
"last": "Mao",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Parkes",
"suffix": ""
},
{
"first": "Yiling",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Ariel",
"middle": [
"D"
],
"last": "Procaccia",
"suffix": ""
},
{
"first": "Krzysztof",
"middle": [
"Z"
],
"last": "Gajos",
"suffix": ""
},
{
"first": "Haoqi",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2012,
"venue": "Workshop on Human Computation (HCOMP)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew Mao, David Parkes, Yiling Chen, Ariel D. Pro- caccia, Krzysztof Z. Gajos, and Haoqi Zhang. 2012. Turkserver: Enabling synchronous and longitudinal online experiments. In Workshop on Human Com- putation (HCOMP).",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "A framework for representing knowledge",
"authors": [
{
"first": "Marvin",
"middle": [],
"last": "Minsky",
"suffix": ""
}
],
"year": 1974,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marvin Minsky. 1974. A framework for represent- ing knowledge. Technical report, Cambridge, MA, USA.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "with most of it being pictures now, I rarely use it\": Understanding twitter's evolving accessibility to blind users",
"authors": [
{
"first": "Kane",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2016,
"venue": "Proc. CHI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kane. 2016. \"with most of it being pictures now, I rarely use it\": Understanding twitter's evolving ac- cessibility to blind users. In Proc. CHI.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Caters: Causal and temporal relation scheme for semantic annotation of event structures",
"authors": [
{
"first": "Nasrin",
"middle": [],
"last": "Mostafazadeh",
"suffix": ""
},
{
"first": "Alyson",
"middle": [],
"last": "Grealish",
"suffix": ""
},
{
"first": "Nathanael",
"middle": [],
"last": "Chambers",
"suffix": ""
},
{
"first": "James",
"middle": [
"F"
],
"last": "Allen",
"suffix": ""
},
{
"first": "Lucy",
"middle": [],
"last": "Vanderwende",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the The 4th Workshop on EVENTS: Definition, Detection, Coreference, and Representation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nasrin Mostafazadeh, Alyson Grealish, Nathanael Chambers, James F. Allen, and Lucy Vanderwende. 2016a. Caters: Causal and temporal relation scheme for semantic annotation of event structures. In Pro- ceedings of the The 4th Workshop on EVENTS: Def- inition, Detection, Coreference, and Representation.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Margaret Mitchell, Xiaodong He, and Lucy Vanderwende",
"authors": [
{
"first": "Nasrin",
"middle": [],
"last": "Mostafazadeh",
"suffix": ""
},
{
"first": "Ishan",
"middle": [],
"last": "Misra",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
}
],
"year": 2016,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nasrin Mostafazadeh, Ishan Misra, Jacob Devlin, Mar- garet Mitchell, Xiaodong He, and Lucy Vander- wende. 2016b. Generating natural questions about an image. In Proc. ACL.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Im2text: Describing images using 1 million captioned photographs",
"authors": [
{
"first": "Vicente",
"middle": [],
"last": "Ordonez",
"suffix": ""
},
{
"first": "Girish",
"middle": [],
"last": "Kulkarni",
"suffix": ""
},
{
"first": "Tamara",
"middle": [
"L"
],
"last": "Berg",
"suffix": ""
}
],
"year": 2011,
"venue": "Proc. NIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vicente Ordonez, Girish Kulkarni, and Tamara L. Berg. 2011. Im2text: Describing images using 1 million captioned photographs. In Proc. NIPS.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "BLEU: A method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. BLEU: A method for automatic evaluation of machine translation. In Proc. ACL.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Official results of the NIST 2008 metrics for machine translation challenge",
"authors": [
{
"first": "M",
"middle": [],
"last": "Przybocki",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Peterson",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Bronsart",
"suffix": ""
}
],
"year": 2008,
"venue": "Proc. MetricsMATR08 workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Przybocki, K. Peterson, and S. Bronsart. 2008. Of- ficial results of the NIST 2008 metrics for machine translation challenge. In Proc. MetricsMATR08 workshop.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Question relevance in VQA: identifying non-visual and false-premise questions",
"authors": [
{
"first": "Arijit",
"middle": [],
"last": "Ray",
"suffix": ""
},
{
"first": "Gordon",
"middle": [],
"last": "Christie",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": ""
},
{
"first": "Dhruv",
"middle": [],
"last": "Batra",
"suffix": ""
},
{
"first": "Devi",
"middle": [],
"last": "Parikh",
"suffix": ""
}
],
"year": 2016,
"venue": "Proc. EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arijit Ray, Gordon Christie, Mohit Bansal, Dhruv Ba- tra, and Devi Parikh. 2016. Question relevance in VQA: identifying non-visual and false-premise questions. In Proc. EMNLP.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Data-driven response generation in social media",
"authors": [
{
"first": "Alan",
"middle": [],
"last": "Ritter",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Cherry",
"suffix": ""
},
{
"first": "William B",
"middle": [],
"last": "Dolan",
"suffix": ""
}
],
"year": 2011,
"venue": "Proc. EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alan Ritter, Colin Cherry, and William B Dolan. 2011. Data-driven response generation in social media. In Proc. EMNLP.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Mykhaylo Andriluka, and Bernt Schiele. 2012. A database for fine grained activity detection of cooking activities",
"authors": [
{
"first": "Marcus",
"middle": [],
"last": "Rohrbach",
"suffix": ""
},
{
"first": "Sikandar",
"middle": [],
"last": "Amin",
"suffix": ""
}
],
"year": null,
"venue": "Proc. CVPR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marcus Rohrbach, Sikandar Amin, Mykhaylo An- driluka, and Bernt Schiele. 2012. A database for fine grained activity detection of cooking activities. In Proc. CVPR.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Hierarchical neural network generative models for movie dialogues",
"authors": [
{
"first": "Iulian",
"middle": [
"V"
],
"last": "Serban",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Sordoni",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Aaron",
"middle": [],
"last": "Courville",
"suffix": ""
},
{
"first": "Joelle",
"middle": [],
"last": "Pineau",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1507.04808"
]
},
"num": null,
"urls": [],
"raw_text": "Iulian V Serban, Alessandro Sordoni, Yoshua Bengio, Aaron Courville, and Joelle Pineau. 2015a. Hierar- chical neural network generative models for movie dialogues. arXiv preprint arXiv:1507.04808.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "A survey of available corpora for building data-driven dialogue systems",
"authors": [
{
"first": "Iulian",
"middle": [],
"last": "Vlad Serban",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Lowe",
"suffix": ""
},
{
"first": "Laurent",
"middle": [],
"last": "Charlin",
"suffix": ""
},
{
"first": "Joelle",
"middle": [],
"last": "Pineau",
"suffix": ""
}
],
"year": 2015,
"venue": "CoRR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Iulian Vlad Serban, Ryan Lowe, Laurent Charlin, and Joelle Pineau. 2015b. A survey of available corpora for building data-driven dialogue systems. CoRR, abs/1512.05742.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Neural responding machine for short-text conversation",
"authors": [
{
"first": "Lifeng",
"middle": [],
"last": "Shang",
"suffix": ""
},
{
"first": "Zhengdong",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Hang",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2015,
"venue": "Proc. ACL-IJCNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lifeng Shang, Zhengdong Lu, and Hang Li. 2015. Neural responding machine for short-text conversa- tion. In Proc. ACL-IJCNLP.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Very deep convolutional networks for large-scale image recognition",
"authors": [
{
"first": "K",
"middle": [],
"last": "Simonyan",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Zisserman",
"suffix": ""
}
],
"year": 2015,
"venue": "Proc. ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Simonyan and A. Zisserman. 2015. Very deep con- volutional networks for large-scale image recogni- tion. In Proc. ICLR.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "A neural network approach to context-sensitive generation of conversational responses",
"authors": [
{
"first": "Alessandro",
"middle": [],
"last": "Sordoni",
"suffix": ""
},
{
"first": "Michel",
"middle": [],
"last": "Galley",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Brockett",
"suffix": ""
},
{
"first": "Yangfeng",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Margaret",
"middle": [],
"last": "Mitchell",
"suffix": ""
},
{
"first": "Jian-Yun",
"middle": [],
"last": "Nie",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Dolan",
"suffix": ""
}
],
"year": 2015,
"venue": "Proc. NAACL-HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, and Bill Dolan. 2015. A neural network approach to context-sensitive generation of conversational responses. In Proc. NAACL-HLT.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Sequence to sequence learning with neural networks",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
}
],
"year": 2014,
"venue": "NIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural net- works. In NIPS.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Translating videos to natural language using deep recurrent neural networks",
"authors": [
{
"first": "Subhashini",
"middle": [],
"last": "Venugopalan",
"suffix": ""
},
{
"first": "Huijuan",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Donahue",
"suffix": ""
},
{
"first": "Marcus",
"middle": [],
"last": "Rohrbach",
"suffix": ""
},
{
"first": "Raymond",
"middle": [],
"last": "Mooney",
"suffix": ""
},
{
"first": "Kate",
"middle": [],
"last": "Saenko",
"suffix": ""
}
],
"year": 2015,
"venue": "Proc. NAACL-HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Subhashini Venugopalan, Huijuan Xu, Jeff Donahue, Marcus Rohrbach, Raymond Mooney, and Kate Saenko. 2015. Translating videos to natural lan- guage using deep recurrent neural networks. In Proc. NAACL-HLT.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "A neural conversational model",
"authors": [
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Quoc",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2015,
"venue": "Proc. Deep Learning Workshop, ICML",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oriol Vinyals and Quoc Le. 2015. A neural conver- sational model. In Proc. Deep Learning Workshop, ICML.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Show and tell: A neural image caption generator",
"authors": [
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Toshev",
"suffix": ""
},
{
"first": "Samy",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Dumitru",
"middle": [],
"last": "Erhan",
"suffix": ""
}
],
"year": 2015,
"venue": "Proc. CVPR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. 2015. Show and tell: A neural im- age caption generator. In Proc. CVPR.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Understanding Natural Language",
"authors": [
{
"first": "Terry",
"middle": [],
"last": "Winograd",
"suffix": ""
}
],
"year": 1972,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Terry Winograd. 1972. Understanding Natural Lan- guage. Academic Press, New York.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Show, attend and tell: Neural image caption generation with visual attention",
"authors": [
{
"first": "Kelvin",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Kiros",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Aaron",
"middle": [
"C"
],
"last": "Courville",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Richard",
"middle": [
"S"
],
"last": "Zemel",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2015,
"venue": "Proc. ICML",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron C. Courville, Ruslan Salakhutdinov, Richard S. Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. In Proc. ICML.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Simple baseline for visual question answering",
"authors": [
{
"first": "Bolei",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Yuandong",
"middle": [],
"last": "Tian",
"suffix": ""
},
{
"first": "Sainbayar",
"middle": [],
"last": "Sukhbaatar",
"suffix": ""
},
{
"first": "Arthur",
"middle": [],
"last": "Szlam",
"suffix": ""
},
{
"first": "Rob",
"middle": [],
"last": "Fergus",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bolei Zhou, Yuandong Tian, Sainbayar Sukhbaatar, Arthur Szlam, and Rob Fergus. 2015. Simple baseline for visual question answering. CoRR, abs/1512.02167.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "A naturally-occurring Image-Grounded Conversation."
},
"FIGREF1": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "The effectiveness of textual and visual context for asking questions."
},
"FIGREF2": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Question generation using the Visual Context Sensitive Model (V-Gen), Textual Context Sensitive Model (T-Gen), and the Visual & Textual Context Sensitive Model (V&T.BOW-Gen), respectively."
},
"FIGREF3": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "The visual & textual context sensitive model with RNN encoding (V&T.RNN-Gen)."
},
"TABREF2": {
"type_str": "table",
"content": "<table><tr><td colspan=\"2\">IGCCrowd (val and test sets, split: 40% and 60%)</td></tr><tr><td># conversations = # images</td><td>4,222</td></tr><tr><td>total # utterances</td><td>25,332</td></tr><tr><td># all workers participated</td><td>308</td></tr><tr><td>Max # conversations by one worker</td><td>20</td></tr><tr><td>Average payment per worker (min)</td><td>1.8 dollars</td></tr><tr><td>Median work time per worker (min)</td><td>10.0</td></tr><tr><td colspan=\"2\">IGCCrowd \u2212multiref (val and test sets, split: 40% and 60%)</td></tr><tr><td># additional references per question/response</td><td>5</td></tr><tr><td>total # multi-reference utterances</td><td>42,220</td></tr></table>",
"text": "Example full conversations in our IGC Crowd dataset. For comparison, we also include VQG questions in which the image is the only context.",
"html": null,
"num": null
},
"TABREF3": {
"type_str": "table",
"content": "<table/>",
"text": "Basic Dataset Statistics.",
"html": null,
"num": null
},
"TABREF5": {
"type_str": "table",
"content": "<table><tr><td/><td/><td>cause</td><td/><td/></tr><tr><td>My son</td><td>is ahead</td><td>and</td><td>surprised</td><td/></tr><tr><td/><td/><td>before</td><td/><td/></tr><tr><td>Did he end up</td><td>winning</td><td>the</td><td>race</td><td/></tr><tr><td>Yes, he</td><td>won</td><td>he</td><td>can't believe</td><td>it</td></tr><tr><td/><td/><td>cause</td><td/><td/></tr></table>",
"text": "FrameNet (FN) annotation of an example.",
"html": null,
"num": null
},
"TABREF8": {
"type_str": "table",
"content": "<table/>",
"text": "Example question and response generations on IGC Crowd test set. All the generation models use beam search with reranking. In the textual context, <UTT> separates different utterances. The generations in bold are acceptable utterances given the underlying context.",
"html": null,
"num": null
},
"TABREF10": {
"type_str": "table",
"content": "<table><tr><td/><td colspan=\"2\">Generation</td><td/><td colspan=\"2\">Retrieval</td></tr><tr><td colspan=\"6\">Textual Visual V &amp; T VQG Visual V &amp; T</td></tr><tr><td>Question 1.71</td><td>3.23</td><td colspan=\"3\">4.41 8.61 0.76</td><td>1.16</td></tr><tr><td>Response 1.34</td><td>-</td><td>1.57</td><td>-</td><td>-</td><td>0.66</td></tr></table>",
"text": "Human judgment results on the IGC Crowd test set. The maximum score is 3. Per model, the human score is computed by averaging across multiple images. The boldfaced numbers show the highest score among the systems. The overall highest scores (underlined) are the human gold standards.",
"html": null,
"num": null
}
}
}
}