{ "paper_id": "D19-1014", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:10:18.097525Z" }, "title": "Building Task-Oriented Visual Dialog Systems Through Alternative Optimization Between Dialog Policy and Language Generation", "authors": [ { "first": "Mingyang", "middle": [], "last": "Zhou", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of California", "location": { "settlement": "Davis" } }, "email": "minzhou@ucdavis.edu" }, { "first": "Josh", "middle": [], "last": "Arnold", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of California", "location": { "settlement": "Davis" } }, "email": "jarnold@ucdavis.edu" }, { "first": "Zhou", "middle": [], "last": "Yu", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of California", "location": { "settlement": "Davis" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Reinforcement learning (RL) is an effective approach to learn an optimal dialog policy for task-oriented visual dialog systems. A common practice is to apply RL on a neural sequence-to-sequence (seq2seq) framework with the action space being the output vocabulary in the decoder. However, it is difficult to design a reward function that can achieve a balance between learning an effective policy and generating a natural dialog response. This paper proposes a novel framework that alternatively trains a RL policy for image guessing and a supervised seq2seq model to improve dialog generation quality. We evaluate our framework on the Guess-Which task and the framework achieves the state-of-the-art performance in both task completion and dialog quality.", "pdf_parse": { "paper_id": "D19-1014", "_pdf_hash": "", "abstract": [ { "text": "Reinforcement learning (RL) is an effective approach to learn an optimal dialog policy for task-oriented visual dialog systems. A common practice is to apply RL on a neural sequence-to-sequence (seq2seq) framework with the action space being the output vocabulary in the decoder. However, it is difficult to design a reward function that can achieve a balance between learning an effective policy and generating a natural dialog response. This paper proposes a novel framework that alternatively trains a RL policy for image guessing and a supervised seq2seq model to improve dialog generation quality. We evaluate our framework on the Guess-Which task and the framework achieves the state-of-the-art performance in both task completion and dialog quality.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Visually-grounded conversational artificial intelligence (AI) is an important field that explores the extent intelligent systems are able to hold meaningful conversations regarding visual content. Visually-grounded conversational AI can be applied to a wide range of real-world tasks, including assisting blind people to navigate their surroundings, online recommendation systems, and analysing mass amounts of visual media through natural language. Current approaches to these tasks involve an end-to-end framework that maps the multi-modal context to a deep vector and in order to decode a natural dialog response. This framework can be trained through supervised learning (SL) with the objective to maximize the distribution of the response given a human-human dialog history. 
Given a large conversational data, the neural end-to-end system can effectively learn to generate coherent and natural language.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "While much success has been achieved by applying neural sequence to sequence models to open visual grounding conversation, the visual dialog system also needs to learn an optimal strategy to efficiently accomplish an external goal through natural conversations. To address this issue, various image guessing tasks such us Guess-Which (Chattopadhyay et al., 2017) and Guess-What (de Vries et al., 2016) are proposed to evaluate a visual-grounded conversational agent on its ability to retrieve visual content via conversing in natural language. To obtain an optimal dialog policy, reinforcement learning (RL) is introduced to enable the neural end-to-end framework to model a more effective action distribution by exploring different dialog strategies. With the application of RL, the visual dialog system can generate more consistent responses and achieve a higher level of engagement in the conversation when compared to a dialog system trained via SL (Das et al., 2017b; Zhang et al., 2017) . A typical way to apply RL on a dialog system is to assign a task-related reward to influence the utterance generation process by treating each output word as the action step. A significant limitation of this approach is that it is difficult to achieve an optimal dialog policy that can both effectively complete the external goal and generate natural utterances (Zhao et al., 2019; Das et al., 2017b) . As there is no ground truth reference during the RL training stage, the dialog system can only leverage the reward signal when generating the response. However, this approach often deviates from natural language as it is challenging to define a comprehensive reward that considers all aspects of the dialog quality, and in addition, assigns appropriate rewards to the large word vocabulary action space.", "cite_spans": [ { "start": 334, "end": 362, "text": "(Chattopadhyay et al., 2017)", "ref_id": "BIBREF0" }, { "start": 367, "end": 401, "text": "Guess-What (de Vries et al., 2016)", "ref_id": null }, { "start": 953, "end": 972, "text": "(Das et al., 2017b;", "ref_id": "BIBREF2" }, { "start": 973, "end": 992, "text": "Zhang et al., 2017)", "ref_id": "BIBREF14" }, { "start": 1357, "end": 1376, "text": "(Zhao et al., 2019;", "ref_id": "BIBREF15" }, { "start": 1377, "end": 1395, "text": "Das et al., 2017b)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper we propose a novel learning curriculum to address the challenge of joint learning between the dialog policy and language generation for task-oriented dialog systems. In our framework, we separate the training of the image re-trieval policy from dialog generation by applying RL, with the goal of achieving an optimal policy for guessing the target image at every turn. In addition, we apply a language model objective function to optimize the utterance generator to mitigate language degeneration. We specifically study this framework in the image guessing task Guess-Which, where a conversational agent attempts to guess a target image by asking a series of questions. 
When compared to state-of-art RL visual dialog systems, our method achieves superior performance in both task-accomplishment and dialog quality.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "2 Related Work", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Visual dialog systems are an emerging area of interdisciplinary research that attracts both the vision and language communities due to the potential applications. Das et al. (2017a) proposed a visual dialog task in which a conversational agent attempts to answer questions regarding an assigned image based on a dialog history. To approach this task, they initially collected data by having two people chat about an image with one person acting as the questioner and the other as the answerer. GuessWhich (Chattopadhyay et al., 2017) extends VisDial with the goal to build an agent that learns how to identify a target image through question and answers. (de Vries et al., 2016) additionally introduced a game in which a series of yes-or-no questions are asked by an agent in order to locate an object in an image. Many researchers approached these tasks via reinforcement learning (RL), with the goal of obtaining an optimal dialog policy. Zhang et al. (2017) , for example, designed three rewards with respect to the goals of task achievement, efficiency, and question informativeness, in order to help the agent to achieve an effective question generation policy for GuessWhat game. Das et al. (2017b) applies reinforcement learning in the GuessWhich task and demonstrates a moderate improvement in accuracy compared to the supervised learning approach. Both methods apply RL on a neural end-to-end pipeline to jointly influence the language generation and dialog policy. Due the challenge of designing an appropriate reward for language generation, these methods generate responses that deviate from human natural language. Zhang et al. (2018) , proposed an approach involving hierarchical reinforcement learning and state-adaptation techniques that enable the agent to learn an optimal and efficient multi-modal policy. The bottleneck of (Zhang et al., 2018) 's method, however, is that the system response is retrieved from a predefined humanwritten or system-generated utterance. The number of predefined responses are limited, therefore, this method does not easily generalize to other tasks in real-world settings. We address these limitations by applying RL on a reduced, yet more relevant action space, while optimizing the dialog generator in a supervised fashion. We alternatively optimize policy learning to language generation to combine the two tasks together.", "cite_spans": [ { "start": 163, "end": 181, "text": "Das et al. (2017a)", "ref_id": "BIBREF1" }, { "start": 505, "end": 533, "text": "(Chattopadhyay et al., 2017)", "ref_id": "BIBREF0" }, { "start": 941, "end": 960, "text": "Zhang et al. (2017)", "ref_id": "BIBREF14" }, { "start": 1186, "end": 1204, "text": "Das et al. (2017b)", "ref_id": "BIBREF2" }, { "start": 1628, "end": 1647, "text": "Zhang et al. (2018)", "ref_id": "BIBREF13" }, { "start": 1843, "end": 1863, "text": "(Zhang et al., 2018)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Visual Dialog System", "sec_num": "2.1" }, { "text": "Various RL-based models have been proposed to train task-oriented dialog systems (Williams and Young, 2007) . 
In order to build a traditional modular-based dialog system, researchers first identify the semantic representation, such as the dialog acts and slots in user utterances. Then they accumulate these semantic representations over time to track the dialog state. Finally they apply RL to learn an optimized dialog policy given the dialog state (Raux et al., 2005; Shi and Yu, 2018) . Such modular-based dialog systems are effective in narrow task domains, such as searching a bus route schedule and reserving a restaurant through natural language, but they fail to generalize to complex settings where the size of the action space increases. Owing to the development of deep learning, RL on neural sequence-to-sequence models has been explored in more complex dialog domains such as open-domain conversation (Li et al., 2016) and negotiation (Lewis et al., 2017) . However, due to the difficulty of assigning appropriate rewards when operating in a large action space, these frameworks cannot generate fluent dialog utterances. Zhao et al. (2019) proposed a novel latent action RL framework to marry the advantage of a module based approach and sequence-to-sequence approach. They learned the optimal dialog policy in a complex task-oriented dialog domain while achieving decent conversation quality. We study the similar issue in a multimodal task-oriented dialog scenario. We propose an iterative approach to optimize dialog policy using RL methods and system response generation via SL. In the GuessWhich problem, we aim to build an agent (Q-Bot) that attempts to guess an image i tgt that another agent (A-Bot) knows by asking it a series of questions. At the beginning of the conversation, the Q-Bot is primed with a short caption c of the target image that is only known by A-Bot. At every round t, the Q-Bot generates a question q t to elicit as much information as possible about the target image and the A-Bot provides an appropriate answer a t with regard to q t and the target image. In the end, the agent guesses the target image among a set of images considering the entire conversation.", "cite_spans": [ { "start": 81, "end": 107, "text": "(Williams and Young, 2007)", "ref_id": "BIBREF12" }, { "start": 451, "end": 470, "text": "(Raux et al., 2005;", "ref_id": "BIBREF6" }, { "start": 471, "end": 488, "text": "Shi and Yu, 2018)", "ref_id": "BIBREF8" }, { "start": 915, "end": 932, "text": "(Li et al., 2016)", "ref_id": "BIBREF5" }, { "start": 949, "end": 969, "text": "(Lewis et al., 2017)", "ref_id": "BIBREF4" }, { "start": 1135, "end": 1153, "text": "Zhao et al. (2019)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "RL on Task-oriented Dialog System", "sec_num": "2.2" }, { "text": "In addition, our dialog system also guesses a candidate image i t out of an image database I = {i k } m k=0 at every turn. This action models the process of sequentially updating the visual belief state on the target image based on the latest dialog history. Conditioned on the current guessed image and the prior dialog contexts, the system will generate an optimal question in order to get the maximum information from A-Bot that can strengthen the system's belief on the target image. 
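A minimal sketch of this interaction loop is given below; the qbot/abot interfaces (reset, ask, answer, observe, guess) are hypothetical placeholders for illustration, not the authors' code, and the per-round guess i t is folded back into the dialog state as described next.

def play_guesswhich_episode(qbot, abot, caption, candidate_images, num_rounds=5):
    # abot is assumed to be initialized with the target image; qbot only ever sees the caption.
    qbot.reset(caption)
    for t in range(num_rounds):
        question = qbot.ask()                  # generate q_t from the current dialog state
        answer = abot.answer(question)         # a_t, grounded in the target image held by abot
        qbot.observe(question, answer)         # fold (q_t, a_t) into the multimodal dialog state
        guess = qbot.guess(candidate_images)   # per-round guess i_t, encoded together with the next exchange
    return qbot.guess(candidate_images)        # final guess over the candidate pool after n rounds
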
At the end of the conversation, our Q-Bot will guess the target image based on the multimodal contexts s n = (q 1:n , a 1:n , i 1:n , c) consisting of the dialog history and the trajectory of guessed images.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "RL on Task-oriented Dialog System", "sec_num": "2.2" }, { "text": "Our Q-Bot is constructed on top of a hierarchical encoder-decoder framework (Serban et al., 2015), which consists of three major components: The Response Encoder, the Question Decoder, and the Image Guesser. We introduce each component as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Architecture", "sec_num": "3.2" }, { "text": "Response Encoder The goal of the response encoder is to append the question q t , the answer a t , and the guessed image i t received at current round to the dialog history and obtain an updated vector representation of the multimodal context s t . The image i t is encoded with a pre-trained convolutional neural network VGG-16 (Simonyan and Zisserman, 2015) followed by a linear embedding layer and the image feature vector denoted as z t . For the question and answer pair at the current round (q t , a t ), we map them to a hidden state vector f t through the LSTM based QA Encoder. We then apply a linear projection on the concatenation of f t and z t in order to obtain the multi-modal context vector h t for the current round. The context vector is then passed through another LSTM encoder: History Encoder generates an updated dialog history representation s t = HistoryEnc(h t , s t\u22121 ). We denote the trainable parameters for Response Encoder as \u03b8 e .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Architecture", "sec_num": "3.2" }, { "text": "The question decoder is a two-layer LSTM network initialized with the most updated dialog history representation vector s t from the response encoder. It will sequentially sample the words to come up with the next question q t . The learned parameters for question decoder are denoted as \u03b8 d .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Question Decoder", "sec_num": null }, { "text": "Image Guesser The Image Guesser attempts to identify the candidate image that best aligns with the dialog history. Given a image database I = {i k } m k=0 where we sample the candidate image, we first extract the image feature representations {z k } m k=0 for all candidate images with the convolutional neural network and image embedding layer defined in response encoder. Then, we can sample a candidate image i k for the current turn based on the euclidean distance d(z k , s t ) between the image feature of the candidate image and the current dialog history vector. The image with the smallest euclidean distance is selected as the guess i t at the current round. The associated parameters for image guesser are defined as \u03b8 g .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Question Decoder", "sec_num": null }, { "text": "We follow a two-stage training fashion as introduced in many previous end-to-end RL dialog systems (Das et al., 2017b; Zhang et al., 2017; Zhao et al., 2019) , where we first pre-train the dialog framework with a supervised objective then apply reinforcement learning to learn an optimal policy to retrieve the target image. 
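Both stages operate on the encoder and guesser components described above. As a concrete reference, the PyTorch-style sketch below shows one round of state updating and image guessing; the class layout, layer sizes (300-d word embeddings, 512-d hidden states, 4096-d VGG-16 features) and function names are our own assumptions rather than the authors' released code, and the question decoder (an LSTM initialized with s t) is omitted.

import torch
import torch.nn as nn

class ResponseEncoder(nn.Module):
    # One update step of the multimodal dialog state s_t (simplified sketch).
    def __init__(self, vocab_size, embed_dim=300, hidden_dim=512, img_feat_dim=4096):
        super().__init__()
        self.word_embed = nn.Embedding(vocab_size, embed_dim)
        self.qa_encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)        # encodes (q_t, a_t) into f_t
        self.img_embed = nn.Linear(img_feat_dim, hidden_dim)                      # embeds the VGG-16 feature of i_t into z_t
        self.fuse = nn.Linear(2 * hidden_dim, hidden_dim)                         # linear projection of [f_t; z_t] into h_t
        self.history_encoder = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)  # tracks the dialog history

    def forward(self, qa_tokens, img_feat, prev_state):
        # qa_tokens: (batch, seq_len) token ids of the concatenated q_t and a_t
        # img_feat:  (batch, img_feat_dim) pre-extracted VGG-16 feature of the guessed image i_t
        # prev_state: (h, c) of the history LSTM, i.e. the dialog state after round t-1
        _, (f_t, _) = self.qa_encoder(self.word_embed(qa_tokens))
        z_t = self.img_embed(img_feat)
        h_t = self.fuse(torch.cat([f_t[-1], z_t], dim=-1))
        _, new_state = self.history_encoder(h_t.unsqueeze(1), prev_state)
        return new_state[0][-1], new_state                                        # s_t and the carried LSTM state

def guess_image(s_t, candidate_feats):
    # Image Guesser: index of the candidate whose embedded feature is closest to s_t.
    # s_t: (hidden_dim,) state of a single dialog; candidate_feats: (num_images, hidden_dim)
    dists = torch.norm(candidate_feats - s_t, dim=-1)                             # Euclidean distances d(z_k, s_t)
    return torch.argmin(dists).item()
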
The Supervised pretraining is a critical step that facilitates an effective policy exploration for RL training, as it is difficult to explore a complex action space with limited prior knowledge. During RL training, we introduce an alternative learning method between dialog policy exploration and natural utterance generation that addresses the issue of language degeneration in previous RL based visual dialog systems (Das et al., 2017b) . We introduce each training method as follows.", "cite_spans": [ { "start": 99, "end": 118, "text": "(Das et al., 2017b;", "ref_id": "BIBREF2" }, { "start": 119, "end": 138, "text": "Zhang et al., 2017;", "ref_id": "BIBREF14" }, { "start": 139, "end": 157, "text": "Zhao et al., 2019)", "ref_id": "BIBREF15" }, { "start": 744, "end": 763, "text": "(Das et al., 2017b)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Training Dialog System", "sec_num": "3.3" }, { "text": "During the supervised pre-training process, we jointly optimize the objective to generate questions and also predict target image features from dialog contexts. The task of question generation is optimized by maximizing the log conditional probability of the next question dependent on a ground truth dialog for every round of the conversation. For the image feature prediction, we minimize the mean square error (MSE) between the target image feature z tgt and the dialog context vector s t at each round. The joint loss function for supervised pre-training is:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Supervised Pre-training", "sec_num": "3.3.1" }, { "text": "L SL (\u03b8 r , \u03b8 d , \u03b8 g ) = \u03b1 n t=0 log p(q t |s t ) + \u03b2 n t=0 MSE(z tgt , s t ) (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Supervised Pre-training", "sec_num": "3.3.1" }, { "text": "Where \u03b1 and \u03b2 are weights assigned to the objective function of each task in the joint objective function. With SL pre-training process, the dialog system is facilitated with the ability to estimate a visual object and emit a natural language sentence given a dialog context.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Supervised Pre-training", "sec_num": "3.3.1" }, { "text": "In our framework, we treat the sequence of image guess through out the conversation as a partially observable markov decision process and train a policy network via RL to obtain an optimal strategy to retrieve the target image. We formally describe state, policy, action, rewards, and the training procedures in our pipeline.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reinforcement Learning on Image Retrieval", "sec_num": "3.3.2" }, { "text": "State The dialog states in our framework consist of a combination of multimodal contexts, including the image caption c, the dialog history with A-Bot [q 1 , a 2 , . . . , q t , a t ], and the image guessing", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reinforcement Learning on Image Retrieval", "sec_num": "3.3.2" }, { "text": "trajectories [i 1 , i 2 , . . . , i t ].", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reinforcement Learning on Image Retrieval", "sec_num": "3.3.2" }, { "text": "Policy The dialog policy \u03c0 \u03b8r,\u03b8g (i t |S t ) is a stochastic policy that samples the candidate image to guess from an image set based on the previous dialog histories. 
The policy is learned from response encoder and image generator which is parameterized via \u03b8 r and \u03b8 g .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reinforcement Learning on Image Retrieval", "sec_num": "3.3.2" }, { "text": "Action The full action space is the number of images in the database that we can sample to guess an image. As the pre-trained process already enables the system to approximate a target image feature z tgt with the dialog history representation vector s t , we reduce the action space to the top-K nearest images, s t , based upon the euclidean distance. The probability to sample an image i j is gained by applying a softmax function over the top-K candidates on their distance to s t : \u03c0(j) = e \u2212d j K k=1 e \u2212d k . d j represents the meansquare-distance between the j-th image and the dialog history state vector s t .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reinforcement Learning on Image Retrieval", "sec_num": "3.3.2" }, { "text": "We use the ranking percentile of the target image with respect to the dialog history vector s t as the reward signal to credit the guess at each turn. The goal is to maximize the expectation value of the discounted return E[ n t=1 \u03b3 t r t ] over the n-round conversation. r t is the ranking percentile of target image at round t and \u03b3 is the discounted factor between (0, 1).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rewards", "sec_num": null }, { "text": "Training Procedure Inspired from the RL training process on the iterative image retrieval framework (Guo et al., 2018) , we apply the policy improvement theory (Sutton and Barto, 1998) to estimate an improved policy \u03c0 * (s t ) from an existing policy \u03c0(s t ) obtained from the pre-trained dialog system. Given a dialog state s t and the action a t derived from the existing policy, the value estimated by the current policy for taking the action", "cite_spans": [ { "start": 100, "end": 118, "text": "(Guo et al., 2018)", "ref_id": "BIBREF3" }, { "start": 160, "end": 184, "text": "(Sutton and Barto, 1998)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Rewards", "sec_num": null }, { "text": "a t is Q \u03c0 (s t , a t ) = E[ n t =t \u03b3 t r t ].", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rewards", "sec_num": null }, { "text": "To improve this, we explore a different action a * t = a t such that a larger policy value Q \u03c0 (s t , a * t ) > Q \u03c0 (s t , a t ) estimated with the current policy is achieved. Then we can adjust the existing policy \u03c0(s t ) to a new policy \u03c0 * (s t ) that executes that optimal action a * t given the current dialog state. The parameters of the policy can be effectively optimized via a cross entropy loss function conditioned on the derived optimal action a * t :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rewards", "sec_num": null }, { "text": "L RL (\u03b8 r , \u03b8 g ) = E[\u2212 n t=1 log(\u03c0 \u03b8r,\u03b8g (a * t |s t ))] (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rewards", "sec_num": null }, { "text": "Compared to the previous RL visual-grounded conversational agent, (Das et al., 2017b) , there are several advantages of conducting policy learning on the action level of guessing the image. 
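Before turning to these advantages, the sketch below illustrates how the top-K guessing policy, the ranking-percentile reward, and the loss in Equation 2 fit together. It is a rough illustration under assumed tensor shapes; how the improved action a*_t is derived is left as an input, following the policy-improvement procedure described above, and none of the names correspond to the authors' implementation.

import torch
import torch.nn.functional as F

def topk_guess_policy(s_t, candidate_feats, k=20):
    # Stochastic policy pi(i_t | s_t): softmax over negative distances, restricted to the K nearest candidates.
    dists = torch.norm(candidate_feats - s_t, dim=-1)          # d_j for every image in the database
    top_dists, top_idx = torch.topk(dists, k, largest=False)   # reduced action space: the top-K nearest images
    probs = F.softmax(-top_dists, dim=-1)                       # pi(j) = exp(-d_j) / sum_k exp(-d_k)
    return probs, top_idx                                       # top_idx maps actions back to database indices

def ranking_percentile_reward(s_t, candidate_feats, target_idx):
    # r_t: fraction of candidate images ranked below the target image under the current dialog state.
    dists = torch.norm(candidate_feats - s_t, dim=-1)
    return (dists > dists[target_idx]).float().mean()

def policy_improvement_loss(per_turn_probs, improved_actions):
    # Eq. (2): cross-entropy toward the improved action a*_t at every turn of an episode.
    # per_turn_probs: list of (K,) tensors from topk_guess_policy; improved_actions: indices of a*_t within the top-K set.
    return -sum(torch.log(p[a_star]) for p, a_star in zip(per_turn_probs, improved_actions))
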
First, the action space of the top-k nearest neighbors are much smaller compared to the vocabulary size of the output words which reduces the difficulty to explore optimal strategies. Second, only the parameters of response encoder and image generator will be optimized during the RL training stage. The question decoder stays intact so that it is less likely for the dialog system to suffer from language deviation.", "cite_spans": [ { "start": 66, "end": 85, "text": "(Das et al., 2017b)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Rewards", "sec_num": null }, { "text": "Although the parameters of the decoder won't be impacted during the RL training stage, the shared response encoder of the dialog context is still optimized with policy learning. The language distribution captured by both the response encoder and question decoder will gradually be differentiated from the original human dialog distribution. To prevent the potential language degeneration behavior, we alternatively optimize the dialog system with a policy learning objective in equation 2 and the language model objective function in equation 1 at every other epoch. It assures the dialog system maintains a good estimation of the human language distribution while also effectively exploring various dialog actions in order to achieve the task of guessing the right image.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Alternating Policy Learning and Language Generation", "sec_num": "3.3.3" }, { "text": "We evaluate the performance of our task-oriented dialog system by playing the image guessing game, GuessWhich with an automatic answer bot.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments 4.1 AI-AI Image Guessing Game", "sec_num": "4" }, { "text": "Our conversational agent's goal is to locate the target image out of the 9,628 test images by interacting with the other player in five conversation exchanges. We evaluate agent on both goal achievement and utterance generation quality using two automatic evaluation metrics Percentile Mean Rank (PMR) and perplexity respectively. PMR estimates how good the agent can rank the target image against other candidates in the test database based on its current dialog state. Perplexity estimates the closeness of the generated response to a reference utterance given a dialog context from the VisDial dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments 4.1 AI-AI Image Guessing Game", "sec_num": "4" }, { "text": "To evaluate the ability of our task-oriented dialog system in a realistic conversational scenario, we also make our agent play the image guessing game with human users. The games are set up as 20image guessing games where the agent attempts to guess a target image outside of a pool of 20 candidate images by asking a human player 5 rounds of questions. The objective of the human player is to play the role of answer bot and answer agent's question with respect to the target image. In this setting, the performance of the agent on task accomplishment is evaluated by the game win rates. The quality of the dialogs are manually rated on four criteria: fluency, comprehension, diversity and relevance. Fluency defines the naturalness and readability of the generated question in English. Comprehension represents the consistency of the generated question with respect to the previous dialog context. Diversity evaluates the uniqueness of the questions generated within one game. 
Relevance presents how well the asked question is related to the target image and the given caption.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Human-AI Image Guessing Game", "sec_num": "4.2" }, { "text": "We compare the performance of our model with state-of-the-art task-oriented visual dialog systems. Meanwhile we also perform an ablation study to evaluate the contribution of different designs in our framework. We introduce each model as follows: SL-Q: The dialog agent from (Das et al., 2017b) , which is trained with a joint supervised learning objective function for language generation and image prediction.", "cite_spans": [ { "start": 275, "end": 294, "text": "(Das et al., 2017b)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Comparative Models", "sec_num": "4.3" }, { "text": "RL-Q: The dialog agent from (Das et al., 2017b) which is fine-tuned on a trained SL-Q by applying RL to the action space of output word vocabulary.", "cite_spans": [ { "start": 28, "end": 47, "text": "(Das et al., 2017b)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Comparative Models", "sec_num": "4.3" }, { "text": "SL-Q-IG: The dialog agent from this framework is build on top of the SL-Q. Compared to SL-Q, SL-Q-IG has an additional image guesser module that makes a guess on target image at every round. SL-Q-IG also has an image encoder which fuses the guessed candidate image into the dialog history tracker. We only train this model with the supervised learning objective introduced equation 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparative Models", "sec_num": "4.3" }, { "text": "RL-Q-IG: We use RL method to fine-tune SL-Q-IG. The RL method used is applied on action space of guessing candidate image. We alternate the model to optimize towards dialog policy learning and language generation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparative Models", "sec_num": "4.3" }, { "text": "RL-Q-IG-NA: We fine-tune SL-Q-IG by applying RL to the action space of guessing candidate image and only optimized with policy learning objective function alone.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparative Models", "sec_num": "4.3" }, { "text": "The dialog agent from our framework, which is fine-tuned on a trained SL-Q-IG by applying reinforcement learning on output word vocabulary. It follows the same training procedures as RL-Q to conduct policy learning.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "RL-Q-IG-W:", "sec_num": null }, { "text": "All the SL dialog agents are trained on the Vis-Dial Dataset with the default setting from (Das et al., 2017b) for 40 epochs. The RL dialog agents are then fine-tuned on their corresponding SL dialog agents for another 20 epochs. We evaluate every model on AI-AI image guessing games with the same answer bot, trained on the Visdial Dataset with the objective of visual question answering. We only evaluate RL-Q, SL-Q-IG and RL-Q-IG in human evaluation.", "cite_spans": [ { "start": 91, "end": 110, "text": "(Das et al., 2017b)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "RL-Q-IG-W:", "sec_num": null }, { "text": "In order to evaluate the effectiveness of the model, we designed three human evaluation tasks. Six college students were recruited to conduct the evaluation. Each student evaluated 100 games using the ground truth captions and 30 games using human generated captions. 
An additional three evaluators each completed 30 rounds of the relevancy experiment.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Human-AI Evaluation Implementation", "sec_num": "4.4" }, { "text": "Ground Truth Captions We generated 100 image guessing games that used the ground truth captions to ensure a consistent amount of information is supplied across all human evaluators. Each game consists of a randomly selected set of 20 images from the VisDial Dataset, with one image randomly chosen as the target. For each game, we test three different models, each twice, resulting in a total of 600 evaluated games from the 100 generated games. We keep the identity of the models anonymous to the evaluator.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Human-AI Evaluation Implementation", "sec_num": "4.4" }, { "text": "During each game, the human evaluator is pre-sented with a target image the agent is trying to guess. Five rounds of Q&A take place in which the agent asks a question to elicit information and the human evaluator responds with a relevant truthful answer. At the end of each game, the evaluator is asked to rate the conversation on four criteria: fluency, relevance, comprehension and diversity. Human Generated Captions In order to distinguish SL-Q-IG and RL-Q-IG in a more natural setting, we generate an additional 30 games, similar to the previous human evaluation task, except when beginning the game, the evaluator is asked to provide the caption for the target image instead of using the ground truth.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Human-AI Evaluation Implementation", "sec_num": "4.4" }, { "text": "Relevance Experiment We noticed that the human evaluators found rating dialogues on the relevance criteria challenging and nuanced. In order to reduce the difficulty of rating dialogues using the relevance criteria, we designed a separate experiment in which, using the conversations obtained from the previous 600 evaluated ground truth games, a human evaluator is presented with three complete conversations side by side at each round. The evaluator then selects the most relevant conversation out of the three that corresponds to the image caption. Each of the three conversations have the same caption, however, correspond to a different model, thus allowing for an effective comparison between the relevancy of each model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Human-AI Evaluation Implementation", "sec_num": "4.4" }, { "text": "Image Retrieval It is clear from Fig 2 that our dialog system significantly outperforms the baseline models from (Das et al., 2017b) in terms of PMR on every round of the dialog. PMR estimates how good the agent can rank the target image against other candidates in the test database. The biggest improvement gap is observed between SL-Q-IG and SL-Q. In comparison to SL-Q, SL-Q-IG tracks the additional context from the previously guessed images which leads to a better estimation of the target image. RL-Q-IG has better performance compared to SL-Q-IG in terms of PMR. This suggests that fine-tuning dialog systems with RL can further improve the success of guessing the correct image. The best image retrieval result is achieved by RL-Q-IG-NA, as the objective function of RL-Q-IG-NA is based solely on policy learning without consideration for the dialog generation quality. 
Although our framework achieved an improved image retrieval accuracy, we observed, however, that there is little improvement gained in PMR after additional rounds of conversation. We suspect this is partially due to the fact that images from MSCOCO are composed of a diverse selection objects and background scenes, thus making images easily distinguishable with a detailed caption. In cases where candidate images are visually similar or the given caption is not informative, additional rounds of dialog are necessary to identify the target image.", "cite_spans": [ { "start": 113, "end": 132, "text": "(Das et al., 2017b)", "ref_id": "BIBREF2" } ], "ref_spans": [ { "start": 33, "end": 43, "text": "Fig 2 that", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Results on AI-AI Image Guess Game", "sec_num": "5.1" }, { "text": "Language Generation We observe a marginal increase of perplexity from SL-Q to RL-Q in Table 2, thus demonstrating that there is a bottleneck when applying RL to improve the language generation. By decoupling the policy learning from the language generation and alternatively optimizing the dialog policy and language model, our RL-Q-IG avoids language deviation while still achieving an optimal dialog policy for the image retrieval task. To further evaluate the contribution from the RL and alternative training curriculum, we conduct two ablation studies. RL-Q-IG-NA is fine-tuned with a policy learning objective that excludes alternatively applying the language model loss. While RL-Q-IG-NA only achieves an in- Table 4 : Evaluation results on the human-AI image guessing game initialized with human generated captions cremental improvement over the full framework RL-Q-IG in terms of the PMR rate with less than 0.1%, it suffers from a dramatic increase of perplexity from 61.42 to 363.88, thus suggesting that alternatively applying the supervised learning objective can prevent the language model from deviating from the human language distribution. We additionally apply policy learning on the question decoder of SL-Q-IG and follow the RL fine-tuning process in (Das et al., 2017b) to train the agent, RL-Q-IG-W. While applying word-level RL enables RL-Q to achieve a moderate improvement over SL-Q in terms of PMR, we did not observe, the same degree of advantage in RL-Q-IG-W over SL-Q-IG. Additionally, RL-Q-IG-W is affected by a marginal increase in perplexity in comparison to the SL pre-trained agent, which approves the drawbacks of applying RL on a large action space in language generation.", "cite_spans": [ { "start": 1271, "end": 1290, "text": "(Das et al., 2017b)", "ref_id": "BIBREF2" } ], "ref_spans": [ { "start": 716, "end": 723, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Results on AI-AI Image Guess Game", "sec_num": "5.1" }, { "text": "The performance of a dialog agent evaluated with a user simulator does not necessarily reflect its performance on real human (de Vries et al., 2016) . We conduct human evaluation on different dialog agents. From the results summarized in Table 3 and Table 4 , we observe a consistent optimal performance of our method from conversations with AI agent to conversations with real human. Our RL-Q-IG significantly outperforms the baseline RL agent in all criteria for both settings. RL-Q-IG's advantage over SL-Q-IG is not significant in the game when agents are primed with ground truth image caption. 
This observation correlates with the result in the Human-AI game, as both RL-Q-IG and SL-Q-IG achieve superior PMR over 96% when presented with the ground truth caption. However, if a human gen-erated caption is given, the performance of the SL pre-trained agent suffers a big drop in all metrics except fluency while our RL agent maintains similar performance. Applying RL to fine-tune the dialog system enables the agent to generate more consistent dialogs in unseen scenarios. We also notice a degradation of the baseline RL agent from its performance with the user simulator, which suggests deviation from natural language is due to the sub-optimal RL training on a large action space.", "cite_spans": [ { "start": 125, "end": 148, "text": "(de Vries et al., 2016)", "ref_id": "BIBREF11" } ], "ref_spans": [ { "start": 238, "end": 257, "text": "Table 3 and Table 4", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Results on Human-AI Image Guess Game", "sec_num": "5.2" }, { "text": "We conduct a qualitative analysis on the generated dialogs from the three models with human players. Besides a marginal improvement over the RL baseline model and SL pretrained agent in terms of decreased repetition and grammar mistakes, there is a distinct superiority in regards to the relevance to the image caption in the questions generated from our RL agent. For example, in Table 9, we demonstrate the three dialogs generated by RL-Q-IG, SL-Q-IG and RL-Q on one game. Given the image caption bunches of bananas hang on a wall and arranged for sale., RL-Q and SL-Q-IG ask very general questions that are not related to the caption such as \"planes\", \"zoo\" and \"animals\". In comparison, our agent asks high-quality questions regarding the caption that covers \"bananas\" and \"fruits\". These questions help our RL agent obtain useful information to guess the target image. This advantage is also evident from the results of comparative evaluation on the degree of relevance of the questions in Table 5 . We credit the positive result to the dialog policy, which explores multiple paths to conduct the conversation. The optimal path will involve a set of questions that obtains the maximum information of the target image such that it can construct the best estimation of the target image.", "cite_spans": [], "ref_spans": [ { "start": 995, "end": 1002, "text": "Table 5", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Results on Human-AI Image Guess Game", "sec_num": "5.2" }, { "text": "Prefered (%) RL-Q 8.93 SL-Q-ImGuess 39.90 RL-Q-IG 51.20 We present a novel framework for building a taskoriented visual dialog system. We model the agent to simultaneously optimize two actions: guessing the image and generating effective questions. We achieve this simultaneous optimization through alternatively applying reinforcement learning to obtain an effective image guessing policy, whilst also applying supervised learning to enhance the quality of generated questions. By decoupling the policy learning from language generation, we overcome language degeneration in the word-level reinforcement learning framework. Both analytical and human evaluation suggests our proposed framework leads to a higher task completion rate and an improved dialog quality.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": null }, { "text": "In the future, we plan to collect a fashion retrieval visual dialog dataset which simulates a realistic application for multi-modal dialog systems. 
To address the limitation of a high image retrieval rate with just the use of captions from the VisDial dataset, we plan to format a challenging candidate image pool in which images are visually similar to each other. This will incentivize the dialog system to conduct multiple rounds of dialog in order to retrieve the target image successfully. Furthermore, we will explore additional task-oriented settings where we can decouple task accomplishment from language generation to evaluate the extent our framework can generalize to other conversational tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": null }, { "text": "We demonstrate our web interface for our Human-AI guessing game in Fig 3. At the beginning of every game the human player is presented with a random sampled target image. For the trained Qbot, 20 images are sampled as the candidate pool with the objective of retrieving the target image. At the end of the conversation, the Q-Bot's guess is revealed to the human player followed by a re-quest to evaluate the quality of the conversation in the four aspects introduced in section 4.4 on a scale from 1 to 5.", "cite_spans": [], "ref_spans": [ { "start": 67, "end": 73, "text": "Fig 3.", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "A Human Evaluation Interface", "sec_num": null }, { "text": "Our human relevancy evaluation interface is demonstrated in Fig 4. We present side-by-side questions generated in the same game from each of the three different models and ask the human evaluators to pick the conversation that is most semantically relevant to the target image caption. (Das et al., 2017a) . The center section is a chat platform for human to communicate with a trained Q-Bot. On the right hand side are the 20 candidate images sampled for the Q-Bot to retrieve the target image. Q: are they in city? Q: is it sunny? A: I'm not sure A: yes they are A: it seems yes Q: is it sunny? Q: is it sunny? Q: is this in city? A: yes it is A: yes it is A: it is in city Q: are they on road? Q: any other people? Q: are there any people in picture? A: yes they are A: no other people A: there are two people Q: are they in parking lot? Q: animals? Q: is this in home or restaurant? A: No they're not A: no other animals A: it is outside Table 7 : A dialog example with the human generated caption: two men in formal wear standing next to a monster truck. Blue highlights ideal relevant questions and orange indicates less relevant questions.", "cite_spans": [ { "start": 286, "end": 305, "text": "(Das et al., 2017a)", "ref_id": "BIBREF1" } ], "ref_spans": [ { "start": 60, "end": 66, "text": "Fig 4.", "ref_id": "FIGREF3" }, { "start": 942, "end": 949, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "A Human Evaluation Interface", "sec_num": null }, { "text": "Image RL-Q-ImGuess SL-Q-ImGuess RL-Q Q: how many horses are there? Q: how many horses are there? Q: is this in park? A: there are no horses A: There are no horses, only bulls A: maybe it is in park Q: are they in wild? Q: what color are horses? Q: are there any people in picture? A: no, they are in village A: they are white and black A: yes there is one picture Q: is there fence? Q: are they in field? Q: is this in home or restaurant? A: yes, there is fence A: Yes they are A: it is in neither of those Q: any people? Q: is it sunny? Q: are there any people in picture? A: yes, a person A: Yes it is A: yes there is one person Q: animals? Q: is it sunny? Q: is this in restaurant? 
A: yes, several bulls A: Yes it is A: no it is not : A dialog example with the human generated caption: a man holding a kite while a girl tries to fly it. Blue indicates ideal relevant questions and orange indicates poor relevance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Human Evaluation Interface", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Evaluating visual conversational agents via cooperative human-ai games", "authors": [ { "first": "Prithvijit", "middle": [], "last": "Chattopadhyay", "suffix": "" }, { "first": "Deshraj", "middle": [], "last": "Yadav", "suffix": "" }, { "first": "Viraj", "middle": [], "last": "Prabhu", "suffix": "" }, { "first": "Arjun", "middle": [], "last": "Chandrasekaran", "suffix": "" }, { "first": "Abhishek", "middle": [], "last": "Das", "suffix": "" }, { "first": "Stefan", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Dhruv", "middle": [], "last": "Batra", "suffix": "" }, { "first": "Devi", "middle": [], "last": "Parikh", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Prithvijit Chattopadhyay, Deshraj Yadav, Viraj Prabhu, Arjun Chandrasekaran, Abhishek Das, Stefan Lee, Dhruv Batra, and Devi Parikh. 2017. Evaluating vi- sual conversational agents via cooperative human-ai games. CoRR, abs/1708.05122.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Visual Dialog", "authors": [ { "first": "Abhishek", "middle": [], "last": "Das", "suffix": "" }, { "first": "Satwik", "middle": [], "last": "Kottur", "suffix": "" }, { "first": "Khushi", "middle": [], "last": "Gupta", "suffix": "" }, { "first": "Avi", "middle": [], "last": "Singh", "suffix": "" }, { "first": "Deshraj", "middle": [], "last": "Yadav", "suffix": "" }, { "first": "M", "middle": [ "F" ], "last": "Jos\u00e9", "suffix": "" }, { "first": "Devi", "middle": [], "last": "Moura", "suffix": "" }, { "first": "Dhruv", "middle": [], "last": "Parikh", "suffix": "" }, { "first": "", "middle": [], "last": "Batra", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, Jos\u00e9 M.F. Moura, Devi Parikh, and Dhruv Batra. 2017a. Visual Dialog. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Learning cooperative visual dialog agents with deep reinforcement learning", "authors": [ { "first": "Abhishek", "middle": [], "last": "Das", "suffix": "" }, { "first": "Satwik", "middle": [], "last": "Kottur", "suffix": "" }, { "first": "M", "middle": [ "F" ], "last": "Jos\u00e9", "suffix": "" }, { "first": "Stefan", "middle": [], "last": "Moura", "suffix": "" }, { "first": "Dhruv", "middle": [], "last": "Lee", "suffix": "" }, { "first": "", "middle": [], "last": "Batra", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the IEEE International Conference on Computer Vision (ICCV)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Abhishek Das, Satwik Kottur, Jos\u00e9 M.F. Moura, Ste- fan Lee, and Dhruv Batra. 2017b. Learning coop- erative visual dialog agents with deep reinforcement learning. 
In Proceedings of the IEEE International Conference on Computer Vision (ICCV).", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Dialog-based interactive image retrieval", "authors": [ { "first": "Xiaoxiao", "middle": [], "last": "Guo", "suffix": "" }, { "first": "Hui", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Yu", "middle": [], "last": "Cheng", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Rennie", "suffix": "" }, { "first": "Rog\u00e9rio", "middle": [], "last": "Schmidt Feris", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiaoxiao Guo, Hui Wu, Yu Cheng, Steven Rennie, and Rog\u00e9rio Schmidt Feris. 2018. Dialog-based interac- tive image retrieval. CoRR, abs/1805.00145.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Deal or no deal? end-to-end learning for negotiation dialogues", "authors": [ { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Denis", "middle": [], "last": "Yarats", "suffix": "" }, { "first": "Yann", "middle": [ "N" ], "last": "Dauphin", "suffix": "" }, { "first": "Devi", "middle": [], "last": "Parikh", "suffix": "" }, { "first": "Dhruv", "middle": [], "last": "Batra", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mike Lewis, Denis Yarats, Yann N. Dauphin, Devi Parikh, and Dhruv Batra. 2017. Deal or no deal? end-to-end learning for negotiation dialogues. CoRR, abs/1706.05125.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Deep reinforcement learning for dialogue generation", "authors": [ { "first": "Jiwei", "middle": [], "last": "Li", "suffix": "" }, { "first": "Will", "middle": [], "last": "Monroe", "suffix": "" }, { "first": "Alan", "middle": [], "last": "Ritter", "suffix": "" }, { "first": "Michel", "middle": [], "last": "Galley", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Jurafsky", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jiwei Li, Will Monroe, Alan Ritter, Michel Galley, Jianfeng Gao, and Dan Jurafsky. 2016. Deep rein- forcement learning for dialogue generation. CoRR, abs/1606.01541.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Lets go public! taking a spoken dialog system to the real world", "authors": [ { "first": "Antoine", "middle": [], "last": "Raux", "suffix": "" }, { "first": "Brian", "middle": [], "last": "Langner", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Bohus", "suffix": "" }, { "first": "Alan", "middle": [ "W" ], "last": "Black", "suffix": "" }, { "first": "Maxine", "middle": [], "last": "Eskenazi", "suffix": "" } ], "year": 2005, "venue": "Proc. of Interspeech", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Antoine Raux, Brian Langner, Dan Bohus, Alan W Black, and Maxine Eskenazi. 2005. Lets go pub- lic! taking a spoken dialog system to the real world. In in Proc. 
of Interspeech 2005.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Hierarchical neural network generative models for movie dialogues", "authors": [ { "first": "Iulian", "middle": [], "last": "Vlad Serban", "suffix": "" }, { "first": "Alessandro", "middle": [], "last": "Sordoni", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "Aaron", "middle": [ "C" ], "last": "Courville", "suffix": "" }, { "first": "Joelle", "middle": [], "last": "Pineau", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Iulian Vlad Serban, Alessandro Sordoni, Yoshua Ben- gio, Aaron C. Courville, and Joelle Pineau. 2015. Hierarchical neural network generative models for movie dialogues. CoRR, abs/1507.04808.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Sentiment adaptive end-to-end dialog systems", "authors": [ { "first": "Weiyan", "middle": [], "last": "Shi", "suffix": "" }, { "first": "Zhou", "middle": [], "last": "Yu", "suffix": "" } ], "year": 2018, "venue": "CoRR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Weiyan Shi and Zhou Yu. 2018. Sentiment adaptive end-to-end dialog systems. CoRR, abs/1804.10731.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Very deep convolutional networks for large-scale image recognition", "authors": [ { "first": "K", "middle": [], "last": "Simonyan", "suffix": "" }, { "first": "A", "middle": [], "last": "Zisserman", "suffix": "" } ], "year": 2015, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. Simonyan and A. Zisserman. 2015. Very deep con- volutional networks for large-scale image recogni- tion. In International Conference on Learning Rep- resentations.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "troduction to Reinforcement Learning", "authors": [ { "first": "Richard", "middle": [ "S" ], "last": "Sutton", "suffix": "" }, { "first": "Andrew", "middle": [ "G" ], "last": "Barto", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Richard S. Sutton and Andrew G. Barto. 1998. In- troduction to Reinforcement Learning, 1st edition. MIT Press, Cambridge, MA, USA.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Guesswhat?! visual object discovery through multi-modal dialogue", "authors": [ { "first": "Florian", "middle": [], "last": "Harm De Vries", "suffix": "" }, { "first": "Sarath", "middle": [], "last": "Strub", "suffix": "" }, { "first": "Olivier", "middle": [], "last": "Chandar", "suffix": "" }, { "first": "Hugo", "middle": [], "last": "Pietquin", "suffix": "" }, { "first": "Aaron", "middle": [ "C" ], "last": "Larochelle", "suffix": "" }, { "first": "", "middle": [], "last": "Courville", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Harm de Vries, Florian Strub, Sarath Chandar, Olivier Pietquin, Hugo Larochelle, and Aaron C. Courville. 2016. Guesswhat?! visual object discovery through multi-modal dialogue. 
CoRR, abs/1611.08481.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Partially observable markov decision processes for spoken dialog systems", "authors": [ { "first": "Jason", "middle": [ "D" ], "last": "Williams", "suffix": "" }, { "first": "Steve", "middle": [], "last": "Young", "suffix": "" } ], "year": 2007, "venue": "Comput. Speech Lang", "volume": "21", "issue": "2", "pages": "393--422", "other_ids": { "DOI": [ "10.1016/j.csl.2006.06.008" ] }, "num": null, "urls": [], "raw_text": "Jason D. Williams and Steve Young. 2007. Partially observable markov decision processes for spoken di- alog systems. Comput. Speech Lang., 21(2):393- 422.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Multimodal hierarchical reinforcement learning policy for task-oriented visual dialog", "authors": [ { "first": "Jiaping", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Tiancheng", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Zhou", "middle": [], "last": "Yu", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jiaping Zhang, Tiancheng Zhao, and Zhou Yu. 2018. Multimodal hierarchical reinforcement learn- ing policy for task-oriented visual dialog. CoRR, abs/1805.03257.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Asking the difficult questions: Goal-oriented visual question generation via intermediate rewards", "authors": [ { "first": "Junjie", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Qi", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Chunhua", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Jian", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Anton", "middle": [], "last": "Van Den", "suffix": "" }, { "first": "", "middle": [], "last": "Hengel", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Junjie Zhang, Qi Wu, Chunhua Shen, Jian Zhang, Jian- feng Lu, and Anton van den Hengel. 2017. Asking the difficult questions: Goal-oriented visual ques- tion generation via intermediate rewards. CoRR, abs/1711.07614.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Rethinking action spaces for reinforcement learning in end-to-end dialog agents with latent variable models", "authors": [ { "first": "Tiancheng", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Kaige", "middle": [], "last": "Xie", "suffix": "" }, { "first": "Maxine", "middle": [], "last": "Esk\u00e9nazi", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tiancheng Zhao, Kaige Xie, and Maxine Esk\u00e9nazi. 2019. Rethinking action spaces for reinforcement learning in end-to-end dialog agents with latent vari- able models. CoRR, abs/1902.08858.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "The proposed end-to-end framework of the conversation agent for GuessWhich task-oriented visual dialog task", "type_str": "figure", "uris": null, "num": null }, "FIGREF1": { "text": "The percentile mean rank (PMR) over the 5round dialog in the AI-AI image guessing game", "type_str": "figure", "uris": null, "num": null }, "FIGREF2": { "text": "The web interface for human-AI guessing game. 
The left image is a target image randomly sampled from", "type_str": "figure", "uris": null, "num": null }, "FIGREF3": { "text": "The user interface for human relevancy experiment. The conversation from the three different models on the same game setting are presented in the three boxes. On top of every conversation is the caption for the target image. Followed by that are the five questions asked by the Q-Bot in that game.", "type_str": "figure", "uris": null, "num": null }, "TABREF1": { "text": "A dialog example with the ground truth caption: bunches of bananas hang on a wall and arranged for sale. blue indicates ideal relevant questions and orange indicates less relevant questions.", "num": null, "type_str": "table", "html": null, "content": "" }, "TABREF3": { "text": "", "num": null, "type_str": "table", "html": null, "content": "
RL-Q-IG-NA performs best in PMR and RL-Q-IG performs best in perplexity
" }, "TABREF5": { "text": "Evaluation results on the human-AI image guessing game initialized with ground truth captions", "num": null, "type_str": "table", "html": null, "content": "
Model | Win (%) | Fluency | Relevance | Comprehension | Diversity
RL-Q | 29.2 | 4.04 | 2.88 | 2.71 | 2.29
SL-Q-IG | 40.6 | 4.16 | 3.19 | 2.75 | 2.69
RL-Q-IG | 67.6 | 4.23 | 3.74 | 3.32 | 3.06
" }, "TABREF6": { "text": "Results on comparative evaluation of relevance on the human-AI image guessing dialogs", "num": null, "type_str": "table", "html": null, "content": "
6 Conclusion and Future Work
" }, "TABREF8": { "text": "A dialog example with the human generated caption: there is a plant in a vase and cookies. Blue highlights diverse questions and orange indicates poor diversity.", "num": null, "type_str": "table", "html": null, "content": "
Image | RL-Q-ImGuess | SL-Q-ImGuess | RL-Q
Q: are men old? | Q: how old are men? | Q: what color is hat?
A: No they are not | A: 30 years old | A: there is no hat
Q: are they in uniform?
" }, "TABREF9": { "text": "A dialog example with the human generated caption: an image of running with the bulls outside. Blue indicates ideal relevant questions and orange highlights irrelevant/repeating questions.", "num": null, "type_str": "table", "html": null, "content": "
Image | RL-Q-ImGuess | SL-Q-ImGuess | RL-Q
Q: what color is kite? | Q: what color is horse? | Q: is there only 1 person?
A: It is orange | A: there is no horse | A: no there is not
Q: is it sunny? | Q: how old is man? | Q: is this in park?
A: Yes it is | A: he is young | A: yes it is
Q: any other people? | Q: is he wearing hat? | Q: are there any people in picture?
A: Yes there are two people | A: no, he is not wearing hat | A: yes there are
Q: any animals? | Q: what color is horse? | Q: is there any other food on table?
A: No there is not | A: there is no horse | A: no food on the table
Q: any trees? | Q: is it sunny? | Q: is there anything else on table?
A: Yes, there are several trees | A: yes it is | A: nothing . .
" }, "TABREF10": { "text": "", "num": null, "type_str": "table", "html": null, "content": "" } } } }