{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:28:23.241247Z" }, "title": "Improving the Naturalness and Diversity of Referring Expression Generation models using Minimum Risk Training", "authors": [ { "first": "Nikolaos", "middle": [], "last": "Panagiaris", "suffix": "", "affiliation": { "laboratory": "", "institution": "Edinburgh Napier University", "location": { "postCode": "EH10 5DT" } }, "email": "n.panagaris@napier.ac.uk" }, { "first": "Emma", "middle": [], "last": "Hart", "suffix": "", "affiliation": { "laboratory": "", "institution": "Napier University", "location": { "addrLine": "10 Colinton Road Edinburgh", "postCode": "EH10 5DT", "settlement": "Edinburgh" } }, "email": "e.hart@napier.ac.uk" }, { "first": "Dimitra", "middle": [], "last": "Gkatzia", "suffix": "", "affiliation": { "laboratory": "", "institution": "Napier University", "location": { "addrLine": "10 Colinton Road Edinburgh", "postCode": "EH10 5DT", "settlement": "Edinburgh" } }, "email": "d.gkatzia@napier.ac.uk" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In this paper we consider the problem of optimizing neural Referring Expression Generation (REG) models with sequence level objectives. Recently reinforcement learning (RL) techniques have been adopted to train deep end-to-end systems to directly optimize sequence-level objectives. However, there are two issues associated with RL training: (1) effectively applying RL is challenging, and (2) the generated sentences lack in diversity and naturalness due to deficiencies in the generated word distribution, smaller vocabulary size, and repetitiveness of frequent words or phrases. To alleviate these issues, we propose a novel strategy for training REG models, using minimum risk training (MRT) with maximum likelihood estimation (MLE) and we show that our approach outperforms RL w.r.t naturalness and diversity of the output. Specifically, our approach achieves an increase in CIDEr scores between 23%-57% in two datasets. We further demonstrate the robustness of the proposed method through a detailed comparison with different REG models.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "In this paper we consider the problem of optimizing neural Referring Expression Generation (REG) models with sequence level objectives. Recently reinforcement learning (RL) techniques have been adopted to train deep end-to-end systems to directly optimize sequence-level objectives. However, there are two issues associated with RL training: (1) effectively applying RL is challenging, and (2) the generated sentences lack in diversity and naturalness due to deficiencies in the generated word distribution, smaller vocabulary size, and repetitiveness of frequent words or phrases. To alleviate these issues, we propose a novel strategy for training REG models, using minimum risk training (MRT) with maximum likelihood estimation (MLE) and we show that our approach outperforms RL w.r.t naturalness and diversity of the output. Specifically, our approach achieves an increase in CIDEr scores between 23%-57% in two datasets. We further demonstrate the robustness of the proposed method through a detailed comparison with different REG models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Referring expression generation (REG) aims at generating utterances that help anchoring an object within an image. 
Such descriptions are called referring expressions (REs) (Krahmer and van Deemter, 2012) . Early work focused on datasets with relatively simple visual stimuli (Viethen et al., 2013; Viethen and Dale, 2010; utilizing synthesized images of objects in artificial scenes. The recently released datasets Ref-CLEF, RefCOCO, RefCOCO+ and RefCOCOg (Kazemzadeh et al., 2014; Yu et al., 2016; which contain natural images of cluttered scenes, led to a surge of interest in using deep neural networks for REG. Such approaches utilize the encoder-decoder paradigm originally proposed for machine translation (Sutskever et al., 2014; Cho et al., 2014) and since have been widely used to various other NLG sub-fields Guo et al., 2018; Vinyals and Le, 2015; Li et al., 2016; Xu et al., 2015) . The encoderdecoder model consists of a deep convolutional neural network (CNN) (Krizhevsky et al., 2012) to encode the visual features into a fixed-size latent representation, and a variation of recurrent neural network (RNN) (Jain and Medsker, 1999) , e.g. a Long Short Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) network to generate the output.", "cite_spans": [ { "start": 172, "end": 203, "text": "(Krahmer and van Deemter, 2012)", "ref_id": "BIBREF12" }, { "start": 275, "end": 297, "text": "(Viethen et al., 2013;", "ref_id": "BIBREF34" }, { "start": 298, "end": 321, "text": "Viethen and Dale, 2010;", "ref_id": "BIBREF33" }, { "start": 415, "end": 481, "text": "Ref-CLEF, RefCOCO, RefCOCO+ and RefCOCOg (Kazemzadeh et al., 2014;", "ref_id": null }, { "start": 482, "end": 498, "text": "Yu et al., 2016;", "ref_id": "BIBREF41" }, { "start": 712, "end": 736, "text": "(Sutskever et al., 2014;", "ref_id": "BIBREF30" }, { "start": 737, "end": 754, "text": "Cho et al., 2014)", "ref_id": "BIBREF2" }, { "start": 819, "end": 836, "text": "Guo et al., 2018;", "ref_id": "BIBREF6" }, { "start": 837, "end": 858, "text": "Vinyals and Le, 2015;", "ref_id": "BIBREF35" }, { "start": 859, "end": 875, "text": "Li et al., 2016;", "ref_id": "BIBREF14" }, { "start": 876, "end": 892, "text": "Xu et al., 2015)", "ref_id": "BIBREF40" }, { "start": 974, "end": 999, "text": "(Krizhevsky et al., 2012)", "ref_id": "BIBREF13" }, { "start": 1121, "end": 1145, "text": "(Jain and Medsker, 1999)", "ref_id": "BIBREF9" }, { "start": 1185, "end": 1219, "text": "(Hochreiter and Schmidhuber, 1997)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The encoder-decoder model is typically trained to maximize the likelihood of a word given the history of generated words so far. This training approach is referred to as \"Teacher-Forcing\" . Although intuitive to train a model on token-level, during generation a model is evaluated based on its ability to optimize towards sequence level metrics resulting in a discrepancy between training and testing objectives. Furthermore, a second problem that stems from \"Teacher-Forcing\" is that during training, the model uses the groundtruth words to predict the next one, while during testing uses its own predictions. 
This missmatch, coined as exposure bias (Ranzato et al., 2016) , results in error accumulation during generation.", "cite_spans": [ { "start": 651, "end": 673, "text": "(Ranzato et al., 2016)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Recently reinforcement learning (RL) (Sutton and Barto, 2018) techniques have been adopted to alleviate the exposure bias problem and directly optimize the non-differentiable task specific metrics. For instance, Ranzato et al. (2016) propose a method that builds upon the REINFORCE algorithm to directly optimize the non-differential test metrics and reports promising results in machine translation, while Bahdanau et al. (2016) utilizes an Actor-critic method, that involves the training of an additional value network to normalize the reward.", "cite_spans": [ { "start": 37, "end": 61, "text": "(Sutton and Barto, 2018)", "ref_id": "BIBREF31" }, { "start": 212, "end": 233, "text": "Ranzato et al. (2016)", "ref_id": "BIBREF26" }, { "start": 407, "end": 429, "text": "Bahdanau et al. (2016)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "However, training with RL is a non-trivial task due to a number of limitations: (1) high variance of the gradient (Rennie et al., 2017); (2) lack of per-token advantage, i.e. the REINFORCE algorithm makes the assumption that every token contributes equally to the whole sequence ; and (3) reward configuration (Bahdanau et al., 2016; Ranzato et al., 2016) . Furthermore, effectively applying RL to REG has not been explored, with the exception of (Yu et al., 2017) who incorporate an additional module to reward discriminative REs by updating the speaker with a policy gradient algorithm. However, little is reported of how the RL was configured. To the best of our knowledge, this is the first work to thoroughly propose how to effectively train REG models with RL.", "cite_spans": [ { "start": 310, "end": 333, "text": "(Bahdanau et al., 2016;", "ref_id": "BIBREF0" }, { "start": 334, "end": 355, "text": "Ranzato et al., 2016)", "ref_id": "BIBREF26" }, { "start": 447, "end": 464, "text": "(Yu et al., 2017)", "ref_id": "BIBREF42" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Furthermore, beside the aforementioned limitations of RL methods there is another problem that is often overlooked. While directly optimizing the evaluation metrics one can achieve higher scores, the generated text lacks diversity due to repeated ngrams (Wang and Chan, 2019) . Our analysis shows that RL trained models are strongly biased towards frequent REs leading to smaller vocabulary and deficiencies in the generated word distribution.", "cite_spans": [ { "start": 254, "end": 275, "text": "(Wang and Chan, 2019)", "ref_id": "BIBREF37" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To address these issues we propose the use of minimum risk training (MRT) (Och, 2003) as an alternative way of optimizing REG systems on sequence level. Minimum risk training aims at minimizing the expected loss over training data by taking automatic evaluation metrics into consideration. The MRT objective has the following advantages over MLE. First, it can directly optimize sequence level objectives that are not necessarily differentiable. 
Second, while MLE maximizes the likelihood of the training data, MRT introduces a notion of ranking amongst candidate sequences by discriminating between sequences. Thus, by minimizing the risk, we expect to find a distribution that approximates well the ground-truth distribution. Furthermore, the MRT objective is similar to the REINFORCE algorithm in a sense that both maximize an expected reward or cost. However, there are two fundamental advantages of the MRT over RL: (1) the REINFORCE algorithm typically utilizes one sample in order to approximate the expectation, whereas the MRT objective considers multiple sequences making it sample and data sufficient; and (2) the MRT objective intuitively estimates the expected risk over a set of candidate sequences, whereas the REINFORCE algorithm typically relies on the baseline reward to determine effectively the sign of the gradient.", "cite_spans": [ { "start": 74, "end": 85, "text": "(Och, 2003)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Therefore, our main contributions are as follows: Firstly, we conduct an extensive analysis and benchmarking of RL training strategies for REG, by exploring how different aspects such as the reward and the baseline reward configuration affect REG models (Section 8.1). Our experiments reveal how to best train REG models using reinforcement learning. Secondly, we show that models optimised for CIDEr also achieve higher scores in all other metrics (BLEU etc.) even when compared to models directly optimised on them. Although our RL approach outperforms the state-of-art, RL still suffers from the limitations discussed earlier. Therefore, we propose a novel training strategy for REG which combines MRT with MLE and we show its effectiveness in comparison to a number of RL training strategies w.r.t naturalness, diversity and informativeness (Section 8.2). Our approach achieves improvements between 33.5%-38.7% and 23.4%-57.8% in terms of CIDEr on RefCOCO and RefCOCO+ respectively compared to previously proposed approaches. Finally, a detailed analysis shows that when a REG model is trained with the proposed approach, uses a larger vocabulary, produces longer referring expressions and generates more uni-grams and bi-grams.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Early work in referring expression generation can be dated back to the early 1970s (Winograd, 1972) . The traditional view of REG is a two step procedure where the REG model accounts for the content selection and determination of the referential form (Krahmer and van Deemter, 2012) . However, the large body of work in REG focuses on the determination of content for definite descriptions (Krahmer and van Deemter, 2012) . 
Algorithms such as the full brevity and the incremental algorithm (Dale and Reiter, 1995) have as foundation the Gricean maxims (Grice, 1975) , that provide insights of how people behave in different communication scenarios (Krahmer and van Deemter, 2012) .", "cite_spans": [ { "start": 83, "end": 99, "text": "(Winograd, 1972)", "ref_id": "BIBREF38" }, { "start": 251, "end": 282, "text": "(Krahmer and van Deemter, 2012)", "ref_id": "BIBREF12" }, { "start": 390, "end": 421, "text": "(Krahmer and van Deemter, 2012)", "ref_id": "BIBREF12" }, { "start": 490, "end": 513, "text": "(Dale and Reiter, 1995)", "ref_id": "BIBREF3" }, { "start": 552, "end": 565, "text": "(Grice, 1975)", "ref_id": "BIBREF5" }, { "start": 648, "end": 679, "text": "(Krahmer and van Deemter, 2012)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Recently due to the availability of larger and more complex natural image datasets, such as Ref-COCO (Yu et al., 2016; there is a surge of interest in applying deep learning methods. Neural REG approaches rely on incorporating contextual information by using visual features, appearance attributes , location features (Yu et al., 2016) and global image features as target object representation. In their seminal work, use a convolutional neural network to extract visual features and an LSTM to generate the expression trained on Maximum Mutual Information objective. Yu et al. (2016) propose a unified framework where a speaker module generates REs, a listener module comprehends REs, and a reinforcer module provides guidance towards informative REs. Lastly, Zarrie\u00df and Schlangen (2018) examine the impact that variations of beam search have in the length of REs.", "cite_spans": [ { "start": 101, "end": 118, "text": "(Yu et al., 2016;", "ref_id": "BIBREF41" }, { "start": 318, "end": 335, "text": "(Yu et al., 2016)", "ref_id": "BIBREF41" }, { "start": 568, "end": 584, "text": "Yu et al. (2016)", "ref_id": "BIBREF41" }, { "start": 761, "end": 789, "text": "Zarrie\u00df and Schlangen (2018)", "ref_id": "BIBREF43" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Although there are not published attempts on optimizing neural REG systems on a sequence level, we will review a number of works from the wider field of natural language generation. Ranzato et al. (2016) were the first to adopt the REINFORCE algorithm in order to optimize the encoder-decoder model. The discovery that baselines can effectively reduce the variance of the gradient estimation led to a significant body of work in NLG. Murphy et al. (2017) used fully connected layers to predict the baseline and used Monte Carlo rollouts to approximate the state-action value. Bahdanau et al. (2016) utilize an actor-critic framework and combine it with temporal difference learning. The state-action value was modelled by a separate RNN. Rennie et al. 2017propose the utilization of the output of the model at the test time to normalize the reward. Although MRT has a long history in training linear model for structured predictions, it has only be used in neural machine translation (Shen et al., 2016; Edunov et al., 2018) as an alternative to MLE training. In this work, however, we apply MRT to REG as an alternative to RL and we compare the output of those two training strategies in terms of naturalness and diversity.", "cite_spans": [ { "start": 182, "end": 203, "text": "Ranzato et al. 
(2016)", "ref_id": "BIBREF26" }, { "start": 434, "end": 454, "text": "Murphy et al. (2017)", "ref_id": "BIBREF23" }, { "start": 576, "end": 598, "text": "Bahdanau et al. (2016)", "ref_id": "BIBREF0" }, { "start": 984, "end": 1003, "text": "(Shen et al., 2016;", "ref_id": "BIBREF29" }, { "start": 1004, "end": 1024, "text": "Edunov et al., 2018)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "As this work focuses primarily on the training objectives for neural REG models, we adopt a standard encoder-decoder architecture language model similar to (Rennie et al., 2017; . The encoder is a CNN network that extracts the representation of the target object. Then this representation is embedded through a linear projection layer W I . The words are represented as one-hot vector, projected to the same space as the visual representation through a linear embedding layer. The start of each sequence is denoted by a special BOS token, while the special stop token EOS denotes the end of the sequence. The decoder, which is responsible for the generation of REs is modeled as an LSTM network. The image features are used only as an input to t = 0 in order to initialize the LSTM based on the visual contents. Then, at each time step t, its output depends on the previously generated words and on the hidden state, which encodes the knowledge of the observed input up to this time step.", "cite_spans": [ { "start": 156, "end": 177, "text": "(Rennie et al., 2017;", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "REG model", "sec_num": "3" }, { "text": "The parameters \u03b8 of the model are learned by maximizing the likelihood of the observed sequence. Specifically, given N training pairs the training objective is defined as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "REG model", "sec_num": "3" }, { "text": "L \u03b8 = 1 N N n=1 log p(y n |o n , I n \u03b8) = 1 N N n=1 T n t=1 log p(y n t |y n 1:t\u22121 , o n , I n \u03b8) (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "REG model", "sec_num": "3" }, { "text": "where o n is the n'th object in the I n image, y n = (y n 1 , . . . , y n T n ) is the ground truth referring expression of the n'th object and N is the total number of training examples.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "REG model", "sec_num": "3" }, { "text": "The generation process can be cast into a reinforcement learning process as first described in (Ranzato et al., 2016) . Within the classic reinforcement learning paradigm, an agent performs an action under a specific policy \u03c0. The nature of the policy is application dependent. Within REG the language model can be seen as an agent that interacts with its environment (i.e. the previously generated words and the visual features at each time step t). The parameters \u03b8 of the agent define a policy \u03c0 \u03b8 . The agent selects an action, which is a candidate token from the vocabulary under the policy, until it generates the EOS token. Once the agent reaches the end of the sequence it observes a terminal reward r, which is the score for generating a RE\u0177 n given an object o and a ground truth referring expression (or a set of referring expressions) y. The reward is a scalar produced by any evaluation metric such as CIDEr. 
Therefore, the training aims at parameterizing the agent in order to maximize the reward as follows:", "cite_spans": [ { "start": 95, "end": 117, "text": "(Ranzato et al., 2016)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Training REG with Reinforcement Learning", "sec_num": "4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L \u03b8 = N n=1 E\u0177 \u223c\u03c0 \u03b8 (\u0177 n |o n ,y n ) r(\u0177 n |o n , y n ) = N n=1 \u0177\u2208Y \u03c0 \u03b8 (\u0177|o n , y n )r(\u0177 n |o n , y n ),", "eq_num": "(2)" } ], "section": "Training REG with Reinforcement Learning", "sec_num": "4" }, { "text": "where N is the number of examples in the training set and Y denotes the entire space of all possible output referring expressions, which is intractable to enumerate or score with a model. Instead, RE-INFORCE allows to optimize the gradient of the expected reward by sampling\u0177 from the policy p(y|o, ) during training. Thus, it aims to maximize the following objective:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training REG with Reinforcement Learning", "sec_num": "4" }, { "text": "L \u03b8 = N n=1 r(\u0177 n |o n , y n ),\u0177 n \u223c \u03c0 \u03b8 (\u0177|o n , y n ) (3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training REG with Reinforcement Learning", "sec_num": "4" }, { "text": "An inherent challenge of the REINFORCE algorithm is that typically leads to highly unstable training due to the noise in gradient estimation and reward computation (Rennie et al., 2017; Ranzato et al., 2016) . Thus, in the next sections we explore a number of methods that have been proposed in literature that stabilize training. Specifically, for reward computation we study: (1) how to sample the candidate samples;", "cite_spans": [ { "start": 164, "end": 185, "text": "(Rennie et al., 2017;", "ref_id": "BIBREF27" }, { "start": 186, "end": 207, "text": "Ranzato et al., 2016)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Training REG with Reinforcement Learning", "sec_num": "4" }, { "text": "(2) which reward function to use; and (3) whether reward sampling is beneficial. Furthermore, as a variance reduction technique we explore the applicability of self-critical training proposed for image captioning (Rennie et al., 2017). Lastly, we explore whether the combination of MLE training with RL improves the diversity and naturalness of the output.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training REG with Reinforcement Learning", "sec_num": "4" }, { "text": "The standard training for REG poses two uncommon challenges for RL. First, the action space in REG problems is a high-dimensional discrete space that it is intractable, while in the classic RL paradigm the common scenario is a smaller discrete action space (e.g. games (Mnih et al., 2015) ), or a relatively low dimension continuous space of actions (e.g. robotics (Lillicrap et al., 2016) ). Hence, the first important factor is the search strategy for generating the sequence of actions. Secondly, the reward for REG is naturally sparse since each token of the training sequence is assigned the same reward value. Note that, the reward is observed when the full sequence is produced. Thus, we explore whether a cumulative reward is better than the terminal reward. 
This process is known as reward shaping (Ng et al., 1999) .", "cite_spans": [ { "start": 269, "end": 288, "text": "(Mnih et al., 2015)", "ref_id": "BIBREF22" }, { "start": 365, "end": 389, "text": "(Lillicrap et al., 2016)", "ref_id": "BIBREF15" }, { "start": 807, "end": 824, "text": "(Ng et al., 1999)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Reward Configuration", "sec_num": "4.1" }, { "text": "We consider two search strategies for generating sequences. The first is beam search, that finds the most likely sequence by performing a greedy breadth-first search over a limited search space. Specifically, each candidate sequence is expanded from left to right selecting all possible tokens from the vocabulary at a time. From this set, the top \u2212 k candidate sequences with the highest probabilities are selected, and the beam search process continues until the top \u2212 k candidates with the highest probability are returned. The second strategy is random sampling, which randomly samples from the model's distribution at every time-step until the end of the sequence token is produced.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reward Configuration", "sec_num": "4.1" }, { "text": "Balancing between exploration and exploitation is a major challenge in RL. For instance, it may be required for an agent to pick an action associated with the highest expected reward (i.e. exploitation). However, in this scenario it may fail to learn more rewarding actions. Therefore exploration, that is the choice of new actions and the visit of new states, may also be beneficial. Beam search focuses on producing high probability sequences and therefore is considered as an exploitation strategy, while random sampling introduces more diverse sequences and thus contributes towards the exploration of the action states. However, due to the fact that the actions are being sampled from the model being optimized the exploration is de facto limited.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reward Configuration", "sec_num": "4.1" }, { "text": "Although we aim to optimize a REG system to produce sequences that maximize a sequence level metric, simply awarding this score at the last step of a complete episode (sequence generation) provides naturally a sparse training signal. An agent however picks a number of actions in order to produce a sequence (dependent on the length of the sequence). In other words, assigning a terminal reward to the entire sequence is equivalent to a uniform tokenlevel reward. Dense rewards can be easier to learn from, thus we explore the use of reward shaping (Ng et al., 1999) as proposed in (Bahdanau et al., 2016) . Specifically, given the sequence of actions (i.e. words) y 1 ...y t\u22121 executed by the agent until time step t, the intermediate reward is calculated as:", "cite_spans": [ { "start": 549, "end": 566, "text": "(Ng et al., 1999)", "ref_id": "BIBREF24" }, { "start": 582, "end": 605, "text": "(Bahdanau et al., 2016)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Reward Configuration", "sec_num": "4.1" }, { "text": "r t (\u0177 t , y) = r(\u0177 1...t , y) \u2212 r(\u0177 1...t\u22121 , y)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reward Configuration", "sec_num": "4.1" }, { "text": "by comparing the incomplete sequence with the ground truth. 
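A minimal sketch of this prefix-scoring computation is given below (illustrative only; the metric argument stands for any sequence-level scorer such as the terminal reward above):

```python
# Illustrative sketch: per-token shaped rewards obtained by scoring every
# prefix of the sampled expression and differencing consecutive prefix scores.
def shaped_rewards(sampled_re, ground_truth_res, metric):
    # metric(prefix_tokens, ground_truth_res) -> scalar score
    rewards, prev = [], 0.0
    for t in range(1, len(sampled_re) + 1):
        score = metric(sampled_re[:t], ground_truth_res)
        rewards.append(score - prev)  # r_t = r(y_1..t, y) - r(y_1..t-1, y)
        prev = score
    return rewards  # the shaped rewards telescope back to the terminal reward
```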
Thus, at time step t the model's parameters are updated based on the cumulative reward.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reward Configuration", "sec_num": "4.1" }, { "text": "Another important weakness of the REINFORCE algorithm is that it exhibits high variance that leads to unstable training without proper contextdependent normalization. An intuitive way to reduce the variance is to reduce the magnitude of the learning signal by subtracting a quantity, called a baseline. It can be any value as long as it is independent of the parameters of the agent. For instance, one can sample N sequences of actions and update the gradient by averaging over the N sequences. In this case, the baseline could be the mean of the rewards of the N sequences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Variance Reduction with Self-Critical Training", "sec_num": "4.2" }, { "text": "As another solution to reduce the variance of the gradient estimator, Rennie et al. 2017proposed a self-critical training scheme. In order to calculate the baseline reward under this training strategy, two independent sequences are produced:\u0177, which is obtained by sampling from the policy, and\u0177 g , the baseline output, obtained by performing greedy search. Thus, the training aims to minimize the following objective:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Variance Reduction with Self-Critical Training", "sec_num": "4.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L \u03b8 = N n=1 (r(\u0177|o n , y n ) \u2212 r(\u0177 g |o n , y n ))", "eq_num": "(4)" } ], "section": "Variance Reduction with Self-Critical Training", "sec_num": "4.2" }, { "text": "The minimization of L \u03b8 is analogous of maximizing the conditional likelihood of the sampled sequence\u0177 if it obtains a higher reward than the baseline\u0177 g , thus increasing the reward expectation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Variance Reduction with Self-Critical Training", "sec_num": "4.2" }, { "text": "Beside the aforementioned problems, there are two other limitations that are often overlooked. First, while these methods can directly optimize the non-differentiable rewards and improve the performance of evaluation metrics, the generated text suffers from lack of diversity due to repetition of common n-grams. The second limitation is that the approximation of the reward is based on one sample which is data and sample inefficient. To address these limitations we explore a principled alternative to the REINFORCE algorithm, the minimum risk training (Och, 2003) . Minimum risk training (MRT) minimizes the value of a given task-specific cost function, i.e. risk, over the training data at sequence level. Specifically, let x denote a fixed-size representation of the input, then the set Y(x (s) ) denotes the set of all possible referring expressions generated by the model with parameters \u03b8. For a given candidate sequence y and ground truth referring expression y, MRT defines a cost function \u2206(y , y) which is the semantic distance between y and the standard y. The cost function can be any function that captures the discrepancy between the model's prediction and the ground truth. 
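For instance, one simple choice (shown here only as an illustrative sketch; CIDEr or a weighted combination of metrics can be substituted) is the complement of a sentence-level similarity:

```python
# Illustrative sketch: a cost (risk) function for MRT defined as one minus a
# sentence-level similarity, so candidates closer to the ground truth incur
# lower risk.
def mrt_cost(candidate, ground_truth_res, metric):
    # metric returns a similarity score in [0, 1], e.g. sentence-level BLEU
    return 1.0 - metric(candidate, ground_truth_res)
```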
Formally, the objective function of MRT is the following:", "cite_spans": [ { "start": 555, "end": 566, "text": "(Och, 2003)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Minimum Risk Training for Referring Expression Generation", "sec_num": "5" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L MRT = N n=1 E Y(x) \u2206(y , y (n) ).", "eq_num": "(5)" } ], "section": "Minimum Risk Training for Referring Expression Generation", "sec_num": "5" }, { "text": "where E Y(x) denotes the expectation over the set of all possible candidate sequences Y(x (n) ). However, as previously mentioned enumerating and scoring candidate sequences over the entire space is intractable. Instead, we sample a subset S(x) \u2282 Y(x) to approximate the probability distribution, and formalize the objective function as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Minimum Risk Training for Referring Expression Generation", "sec_num": "5" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L MRT = S s=1 y \u2208S(x (s) ) p(y |x (s) ) y * \u2208S(x (s) ) p(y * |x (s) ) \u2206(y , y (s) )", "eq_num": "(6)" } ], "section": "Minimum Risk Training for Referring Expression Generation", "sec_num": "5" }, { "text": "The MRT objective minimizes the expected value of a cost function which enables us to optimize REG models with respect to specific evaluation metrics of the task. In this work we explore the use of various REG evaluation metrics such as CIDEr and BLEU and combination of those. Furthermore, for the construction of the subset of the candidate sequences we consider online setting, specifically we regenerate the candidate set for each training sample. Again we consider random sampling and beam search as search strategies (see Section 4.1). Moreover, we also considered offline generation, that is the candidate sequences are generated before training and never refreshed. However, we found that it leads to inferior performance and thus was not included.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Minimum Risk Training for Referring Expression Generation", "sec_num": "5" }, { "text": "We also experiment with combining the MLE training objective either RL or MRT. The motivation of the loss combination is to maintain good tokenlevel accuracy while optimizing on the sequencelevel. In other words, using an evaluation metric as a reward can suppress the probability of the words that do not increase the metric score, and thus concentrate the distribution to a single point. Thus, we explore a combined objective in order to scale the peakiness of the output distribution. 
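Before giving the combined objectives, we sketch the sampled risk of Equation 6 below (a minimal PyTorch-style illustration, not our exact implementation):

```python
# Illustrative sketch of the sampled MRT risk (Equation 6): renormalise the
# model probabilities over the sampled candidate set S(x) and take the
# expectation of the costs under this renormalised distribution.
import torch

def mrt_risk(candidate_logprobs, candidate_costs):
    # candidate_logprobs: tensor [K], sum of log p(y'|x) for K sampled candidates
    # candidate_costs:    tensor [K], costs Delta(y', y), e.g. 1 - CIDEr
    q = torch.softmax(candidate_logprobs, dim=0)  # p(y'|x) / sum p(y*|x) over S(x)
    return (q * candidate_costs).sum()            # expected risk over S(x)
```

Returning to the combined objective: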
Specifically, the weighted combination of MLE (Equation 1) with RL objective (Equation 4 ) is defined as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Combined objectives", "sec_num": "6" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L weighed RL = (1 \u2212 \u03b1) * L mle + \u03b1 * L rl ,", "eq_num": "(7)" } ], "section": "Combined objectives", "sec_num": "6" }, { "text": "Equivalently, combing the MRT objective (Equation 6 ) with MLE we have:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Combined objectives", "sec_num": "6" }, { "text": "L weighed MRT = (1 \u2212 \u03b1) * L mle + \u03b1 * L MRT , (8)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Combined objectives", "sec_num": "6" }, { "text": "where \u03b1 is a scaling factor controlling the difference in magnitude between the combined objectives.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Combined objectives", "sec_num": "6" }, { "text": "We trained our models on RefCOCO and Ref-COCO+ (Yu et al., 2016) . Although both datasets contain similar images since they are built upon the MSCOCO dataset (Lin et al., 2014) , the textual properties of their expressions are different due to different data collection objectives. In particular, for ReFCOCO+, the use of absolute location words (e.g. top right, bottom left, etc.) was not allowed and thus the RE are appearance focused, while for the RefCOCO the use of location is essential in order for the target object to be successfully individualized. Furthermore, for each dataset different test splits are provided. The predefined test splits for both datasets are divided between person vs object splits. In particular, images containing people are in \"TestA\" and images that contain all other object categories are in \"TestB\".", "cite_spans": [ { "start": 47, "end": 64, "text": "(Yu et al., 2016)", "ref_id": "BIBREF41" }, { "start": 158, "end": 176, "text": "(Lin et al., 2014)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "7.1" }, { "text": "Visual Features The visual representation used is a 4101-dimensional vector that is a concatenation of: (1) a 2048-dimensional vector of the target object region; (2) a 2048-dimensional vector representation of the whole image that serves as context features and (3) object location features as presented in (Yu et al., 2016) . As main feature extractor we used ResNet-152 . In more detail, for the object region features, the aspect ratio of the region was kept constant and was scaled to 224 \u00d7 224 resolution. The margins were padded with the mean pixel value, following .", "cite_spans": [ { "start": 308, "end": 325, "text": "(Yu et al., 2016)", "ref_id": "BIBREF41" } ], "ref_spans": [], "eq_spans": [], "section": "Implementation Details", "sec_num": "7.2" }, { "text": "Training For our language model, we set the dimension of LSTM hidden state, image feature embeddings, and word embeddings to 512. The batch size is set to 128 images. The learning rate is initialized to be 5 \u00d7 10 \u22124 , and then annealed by shrinking it by a factor of 0.8 for every three epochs. Both the RL and MRT models are trained according to the following scheme: We first pretrain the language model using MLE, optimized with Adam (Kingma and Ba, 2014). 
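(A single update step of the subsequent fine-tuning stage, which uses the weighted objective of Equations 7-8, is sketched below; this is illustrative pseudocode in which mle_loss and sequence_loss are hypothetical helpers.)

```python
# Illustrative sketch of one fine-tuning step with the combined objective:
# (1 - alpha) * L_mle + alpha * L_seq, where L_seq is either the self-critical
# RL loss or the MRT risk. The helper methods on the model are hypothetical.
def finetune_step(model, optimizer, batch, alpha=0.9):
    # alpha = 0.9 is the best MLE/RL trade-off found in Section 8.1; the MLE
    # term keeps the output distribution from collapsing onto a single peak.
    optimizer.zero_grad()
    loss = (1.0 - alpha) * model.mle_loss(batch) + alpha * model.sequence_loss(batch)
    loss.backward()
    optimizer.step()
    return loss.item()
```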
At each epoch, we evaluate the model on the validation set and select the model with the best CIDEr score as an initialization for RL and MRT training. We then run RL or MRT training initialized with the MLE model to optimize the CIDEr metric using ADAM with a fixed learning rate 5 \u00d7 10 \u22125 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Implementation Details", "sec_num": "7.2" }, { "text": "For evaluation we opt for automatic metrics. Specifically, in order to measure the naturalness of referring expressions we use the standard automatic metrics that have been used in REG Zarrie\u00df and Schlangen, 2018; Yu et al., 2016) that compare the generated referring expression with the human ones: BLEU 1 for unigrams, CIDEr and METEOR. In order to evaluate the diversity, we report: (1) the average length of referring expressions (ASL) (2) the number of unique words of the generated corpus; (Voc) and (3) the average number of unique bigrams per 1000 bigrams (TTR). (van Miltenburg et al., 2018) .", "cite_spans": [ { "start": 185, "end": 213, "text": "Zarrie\u00df and Schlangen, 2018;", "ref_id": "BIBREF43" }, { "start": 214, "end": 230, "text": "Yu et al., 2016)", "ref_id": "BIBREF41" }, { "start": 571, "end": 600, "text": "(van Miltenburg et al., 2018)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "7.3" }, { "text": "We first explore a number context-dependent normalization factors that affect the RL training described in Section 4. Regarding the reward configuration (see Section 4.1) we explore: (1) which reward function to use to evaluate the sequences;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluating different RL training strategies", "sec_num": "8.1" }, { "text": "(2) which search strategy will be used to sample the actions from the policy; and (3) whether reward normalization further stabilizes the training.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluating different RL training strategies", "sec_num": "8.1" }, { "text": "Reward Function: First we compare various evaluation measures as reward functions, namely CIDEr, BLEU and METEOR as well as metrics combinations. A summary of the results is given in Table 1 , where RL stands for the REINFORCE algorithm. We present the performance of the MLE model we used for the initialization of the RL training. As expected, optimizing towards a particular evaluation metric during training leads to an increase on that particular metric during testing. However, the benefits are not comparable with those gained when optimizing CIDEr. Specifically, CIDEr optimization leads to improvements in scores for all other metrics as opposed to directly optimize them. A notable exception is the combination of CIDER+BLEU where BLEU score is higher compared to optimizing only for CIDEr. Therefore, for the rest of the paper, all RL models are based on CIDEr optimization. Table 2 : Results of different search strategies for reward computation and variance reduction. \"RS\" stands for random sampling, while \"BS\" refers to beam search and \"GD\" for greedy decoding. \"SCTS\" refers to self-critical training. 
Shaping denotes that we used reward shaping.", "cite_spans": [], "ref_spans": [ { "start": 183, "end": 190, "text": "Table 1", "ref_id": "TABREF1" }, { "start": 886, "end": 893, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Evaluating different RL training strategies", "sec_num": "8.1" }, { "text": "Action sampling strategy: So far we sampled the words using random sampling. Next, we compare beam search and random sampling as search strategies to sample the words. The results are shown in Table 2 . Although beam search (with width of 2) has been the de facto decoding strategy for neural REG systems, it produces inferior results when compared to random sampling. We hypothesize due to the deterministic nature of beam search, the sampled sequences are often duplicates and thus uninformative for the gradient estimation, while the stochasticity of sampling generates sequences with exploratory usefulness for the gradient estimation and it results in more natural-sounding expressions.", "cite_spans": [], "ref_spans": [ { "start": 193, "end": 200, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Evaluating different RL training strategies", "sec_num": "8.1" }, { "text": "Self-critical training for REG: We next investigate, whether the inclusion of a baseline is an effective way of stabilizing the training by reducing the variance of the gradient. We follow the self-critical training strategy that utilizes the output of the greedy decoding to normalize the rewards. We further investigate random sampling and greedy decoding as search strategies. Table 3 depicts the results. Self-critical training improves over the REINFORCE algorithm, which indicates that the variance of the gradient is significant in neural REG. However, we notice that instead of using the greedy decoding that is originally proposed in (Rennie et al., 2017) random sampling is a better choice.", "cite_spans": [], "ref_spans": [ { "start": 380, "end": 387, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Evaluating different RL training strategies", "sec_num": "8.1" }, { "text": "Combining MLE with RL: Next we evaluate the combination of self-critical objective with MLE. Figure 1 shows the results on the validation set. The best trade-off between MLE and RL objectives in our experiment is when \u03b1 = 0.9 . Table 3 depicts the results on the test set where we observe that the weighed combination of MLE and SCTS objective further improves the quality of the generated expressions.", "cite_spans": [], "ref_spans": [ { "start": 93, "end": 101, "text": "Figure 1", "ref_id": "FIGREF0" }, { "start": 228, "end": 235, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Evaluating different RL training strategies", "sec_num": "8.1" }, { "text": "In this subsection, we report the results for training a REG model with minimum risk training and we compare it with MLE. Training with MRT requires generating and scoring multiple candidate referring expressions for each input. Thus, we explore two factors: (1) which search strategy should be used Table 3 : System results: CIDEr and BLEU scores; average sentence length (ASL); vocabulary size (Voc); meansegmented bigram ratio (TTR); SCTS denotes self-critical training with random sampling as baseline; MRT denotes minimum risk training with candidate size of 5 for RefCOCO and size of 8 for ReFCOCO+. 
to generate the candidate sequences;", "cite_spans": [], "ref_spans": [ { "start": 300, "end": 307, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Evaluating Minimum Risk Training for REG", "sec_num": "8.2" }, { "text": "(2) and how many sequences we should generate for one input. We found that random sampling performs better than beam search both in terms of CIDEr score and is considerably faster. Thus, Figure 2 compares different set sizes on the validation set when random sampling is used. For RefCOCO we choose candidate set size of 5, while for RefCOCO+ 8. Table 3 presents the results on the test set. Optimizing the REG model with MRT improves both CIDEr and BLEU by several figures over the MLE.", "cite_spans": [], "ref_spans": [ { "start": 187, "end": 195, "text": "Figure 2", "ref_id": "FIGREF1" }, { "start": 346, "end": 353, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Evaluating Minimum Risk Training for REG", "sec_num": "8.2" }, { "text": "Our final experiment compares MRT to RL training w.r.t naturalness and diversity. Table 3 shows all sequence level optimization methods used. When analyzing the effect that different training methods have on naturalness and diversity of the referring expressions a few clear patterns can be observed:", "cite_spans": [], "ref_spans": [ { "start": 82, "end": 89, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Comparison of MRT to RL training", "sec_num": "8.3" }, { "text": "(1) SCTS has the lowest diversity and naturalness (i.e. BLEU score) and highest repetition among all models;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison of MRT to RL training", "sec_num": "8.3" }, { "text": "(2) Out of the 4 different test sets, SCTS has the highest accuracy CIDEr scores when compared to MLE and MRT training;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison of MRT to RL training", "sec_num": "8.3" }, { "text": "(3) combining the SCTS loss with MLE improves sightly the accuracy, naturalness and diversity of the produced referring expressions. Still, however, the diversity is considerably lower than MLE and MRT. (4) Minimum risk training improves over MLE in all tests sets. However, when compared to SCTS it only produces higher CIDEr in only one case (i.e. Re-fCOCO testB); (5) MRT has the highest diversity and naturalness compared to the other two training strategies; (6) combing the MRT loss with MLE further improves the diversity and naturalness of the generated referring expressions. In particular, as can be seen in Table 3 , the MLE + MRT loss achieves the highest scores in all categories, except in testB+ where the combination of two losses produces inferior results in terms of CIDEr. Examples of generated REs are illustrated in Figure 3 . In all images presented in Figure 3 , we observe that the proposed MLE + MRT model improves over all compared training objectives in inferring more pragmatically adequate referring expressions by using, for example, precise appearance and location attributes (e.g. 
\"man with hand on chin\" and \"left side of pic brown thing in front\") or negations (e.g \"cat no reflection\")", "cite_spans": [], "ref_spans": [ { "start": 618, "end": 625, "text": "Table 3", "ref_id": null }, { "start": 837, "end": 845, "text": "Figure 3", "ref_id": "FIGREF2" }, { "start": 875, "end": 883, "text": "Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Comparison of MRT to RL training", "sec_num": "8.3" }, { "text": "In this work we considered the problem of optimizing referring expression generation models with sequence level objectives. Specifically, we firstly provide a comprehensive comparison of different aspects of configuring REG models with RL training. We found that (1) random sampling is a better search strategy than beam search; (2) we showed that using random sampling with self-critical training improves CIDEr scores; (3) incorporating reward shaping improves the performance; (4) we showed that combining RL objectives with MLE is beneficial to the training, resulting in higher CIDEr scores and diversity. However, there is a considerable gap between MLE and RL methods w.r.t. to diversity. Thus, as an alternative to RL we proposed the use of minimum risk training. We showed that MRT combined with MLE produces superior re- sults in terms of naturalness and diversity of the referring expressions compared to both MLE and RL training. While we have focused on analyzing the performance of the presented models with automated evaluation metrics, we intend to further verify these results in a human evaluation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "9" } ], "back_matter": [ { "text": "We wish to thank NVIDIA for its kind donation of the GPU used in the presented experiments. DG is supported under the EPSRC project CiViL (EP/T014598/1).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": "10" } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "An actor-critic algorithm for sequence prediction", "authors": [ { "first": "Dzmitry", "middle": [], "last": "Bahdanau", "suffix": "" }, { "first": "Philemon", "middle": [], "last": "Brakel", "suffix": "" }, { "first": "Kelvin", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Anirudh", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Lowe", "suffix": "" }, { "first": "Joelle", "middle": [], "last": "Pineau", "suffix": "" }, { "first": "Aaron", "middle": [], "last": "Courville", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, and Yoshua Bengio. 2016. An actor-critic algorithm for sequence prediction. 
arXiv e-prints, abs/1607.07086.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Scheduled sampling for sequence prediction with recurrent neural networks", "authors": [ { "first": "Samy", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "Oriol", "middle": [], "last": "Vinyals", "suffix": "" }, { "first": "Navdeep", "middle": [], "last": "Jaitly", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" } ], "year": 2015, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Samy Bengio, Oriol Vinyals, Navdeep Jaitly, and Noam Shazeer. 2015. Scheduled sampling for se- quence prediction with recurrent neural networks. In Advances in Neural Information Processing Sys- tems.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "On the properties of neural machine translation: Encoder-decoder approaches", "authors": [ { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Bart", "middle": [], "last": "Van Merri\u00ebnboer", "suffix": "" }, { "first": "Dzmitry", "middle": [], "last": "Bahdanau", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kyunghyun Cho, Bart van Merri\u00ebnboer, Dzmitry Bah- danau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder-decoder ap- proaches. In Proceedings of the Eighth Workshop on Syntax, Semantics and Structure in Statistical Trans- lation (SSST-8).", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Computational interpretations of the gricean maxims in the generation of referring expressions", "authors": [ { "first": "Robert", "middle": [], "last": "Dale", "suffix": "" }, { "first": "Ehud", "middle": [], "last": "Reiter", "suffix": "" } ], "year": 1995, "venue": "Cognitive Science", "volume": "19", "issue": "2", "pages": "233--263", "other_ids": { "DOI": [ "10.1016/0364-0213(95)90018-7" ] }, "num": null, "urls": [], "raw_text": "Robert Dale and Ehud Reiter. 1995. Computational interpretations of the gricean maxims in the gener- ation of referring expressions. Cognitive Science, 19(2):233-263.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Classical structured prediction losses for sequence to sequence learning", "authors": [ { "first": "Sergey", "middle": [], "last": "Edunov", "suffix": "" }, { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Auli", "suffix": "" }, { "first": "David", "middle": [], "last": "Grangier", "suffix": "" }, { "first": "Marc'aurelio", "middle": [], "last": "Ranzato", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "355--364", "other_ids": { "DOI": [ "10.18653/v1/N18-1033" ] }, "num": null, "urls": [], "raw_text": "Sergey Edunov, Myle Ott, Michael Auli, David Grang- ier, and Marc'Aurelio Ranzato. 2018. Classical structured prediction losses for sequence to se- quence learning. 
In Proceedings of the 2018 Con- ference of the North American Chapter of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long Papers), pages 355-364, New Orleans, Louisiana. Association for Computational Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Logic and conversation", "authors": [ { "first": "P", "middle": [], "last": "Herbert", "suffix": "" }, { "first": "", "middle": [], "last": "Grice", "suffix": "" } ], "year": 1975, "venue": "Syntax and Semantics", "volume": "3", "issue": "", "pages": "41--58", "other_ids": {}, "num": null, "urls": [], "raw_text": "Herbert P. Grice. 1975. Logic and conversation. In Pe- ter Cole and Jerry L. Morgan, editors, Syntax and Semantics: Vol. 3: Speech Acts, pages 41-58. Aca- demic Press, New York.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Soft layer-specific multi-task summarization with entailment and question generation", "authors": [ { "first": "Han", "middle": [], "last": "Guo", "suffix": "" }, { "first": "Ramakanth", "middle": [], "last": "Pasunuru", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Bansal", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Han Guo, Ramakanth Pasunuru, and Mohit Bansal. 2018. Soft layer-specific multi-task summarization with entailment and question generation. In Pro- ceedings of the 56th Annual Meeting of the Associa- tion for Computational Linguistics (ACL).", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Deep residual learning for image recognition", "authors": [ { "first": "Kaiming", "middle": [], "last": "He", "suffix": "" }, { "first": "Xiangyu", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Shaoqing", "middle": [], "last": "Ren", "suffix": "" }, { "first": "Jian", "middle": [], "last": "Sun", "suffix": "" } ], "year": 2016, "venue": "2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recog- nition. In 2016 IEEE Conference on Computer Vi- sion and Pattern Recognition (CVPR).", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Long short-term memory", "authors": [ { "first": "Sepp", "middle": [], "last": "Hochreiter", "suffix": "" }, { "first": "J\u00fcrgen", "middle": [], "last": "Schmidhuber", "suffix": "" } ], "year": 1997, "venue": "Neural Computation", "volume": "9", "issue": "8", "pages": "1735--1780", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Recurrent Neural Networks: Design and Applications", "authors": [ { "first": "C", "middle": [], "last": "Lakhmi", "suffix": "" }, { "first": "Larry", "middle": [ "R" ], "last": "Jain", "suffix": "" }, { "first": "", "middle": [], "last": "Medsker", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lakhmi C. Jain and Larry R. Medsker. 1999. Recur- rent Neural Networks: Design and Applications, 1st edition. 
CRC Press, Inc., Boca Raton, FL, USA.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "ReferItGame: Referring to objects in photographs of natural scenes", "authors": [ { "first": "Sahar", "middle": [], "last": "Kazemzadeh", "suffix": "" }, { "first": "Vicente", "middle": [], "last": "Ordonez", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Matten", "suffix": "" }, { "first": "Tamara", "middle": [], "last": "Berg", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "787--798", "other_ids": { "DOI": [ "10.3115/v1/D14-1086" ] }, "num": null, "urls": [], "raw_text": "Sahar Kazemzadeh, Vicente Ordonez, Mark Matten, and Tamara Berg. 2014. ReferItGame: Referring to objects in photographs of natural scenes. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 787-798, Doha, Qatar. Association for Com- putational Linguistics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Adam: A method for stochastic optimization", "authors": [ { "first": "P", "middle": [], "last": "Diederik", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Kingma", "suffix": "" }, { "first": "", "middle": [], "last": "Ba", "suffix": "" } ], "year": 2014, "venue": "the 3rd International Conference for Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. Cite arxiv:1412.6980Comment: Published as a confer- ence paper at the 3rd International Conference for Learning Representations, San Diego, 2015.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Computational generation of referring expressions: A survey", "authors": [ { "first": "Emiel", "middle": [], "last": "Krahmer", "suffix": "" }, { "first": "", "middle": [], "last": "Kees Van Deemter", "suffix": "" } ], "year": 2012, "venue": "Comput. Linguist", "volume": "38", "issue": "1", "pages": "173--218", "other_ids": {}, "num": null, "urls": [], "raw_text": "Emiel Krahmer and Kees van Deemter. 2012. Compu- tational generation of referring expressions: A sur- vey. Comput. Linguist., 38(1):173-218.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Imagenet classification with deep convolutional neural networks", "authors": [ { "first": "Alex", "middle": [], "last": "Krizhevsky", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Geoffrey", "middle": [ "E" ], "last": "Hinton", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 25th Conference on Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hin- ton. 2012. Imagenet classification with deep convo- lutional neural networks. 
In Proceedings of the 25th Conference on Advances in Neural Information Pro- cessing Systems.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Deep reinforcement learning for dialogue generation", "authors": [ { "first": "Jiwei", "middle": [], "last": "Li", "suffix": "" }, { "first": "Will", "middle": [], "last": "Monroe", "suffix": "" }, { "first": "Alan", "middle": [], "last": "Ritter", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Jurafsky", "suffix": "" }, { "first": "Michel", "middle": [], "last": "Galley", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jiwei Li, Will Monroe, Alan Ritter, Dan Jurafsky, Michel Galley, and Jianfeng Gao. 2016. Deep rein- forcement learning for dialogue generation. In Pro- ceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP).", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Continuous control with deep reinforcement learning", "authors": [ { "first": "Timothy", "middle": [ "P" ], "last": "Lillicrap", "suffix": "" }, { "first": "Jonathan", "middle": [ "J" ], "last": "Hunt", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Pritzel", "suffix": "" }, { "first": "Nicolas", "middle": [], "last": "Heess", "suffix": "" } ], "year": 2016, "venue": "ICLR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. 2016. Continuous control with deep reinforcement learning. In ICLR.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Microsoft coco: Common objects in context", "authors": [ { "first": "Tsung-Yi", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Maire", "suffix": "" }, { "first": "Serge", "middle": [], "last": "Belongie", "suffix": "" }, { "first": "James", "middle": [], "last": "Hays", "suffix": "" }, { "first": "Pietro", "middle": [], "last": "Perona", "suffix": "" }, { "first": "Deva", "middle": [], "last": "Ramanan", "suffix": "" }, { "first": "Piotr", "middle": [], "last": "Doll\u00e1r", "suffix": "" }, { "first": "C", "middle": [ "Lawrence" ], "last": "Zitnick", "suffix": "" } ], "year": 2014, "venue": "Computer Vision -ECCV", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Doll\u00e1r, and C. Lawrence Zitnick. 2014. Microsoft coco: Common objects in context. In Computer Vision - ECCV.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Referring expression generation and comprehension via attributes", "authors": [ { "first": "Jingyu", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Liang", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Ming-Hsuan", "middle": [], "last": "Yang", "suffix": "" } ], "year": 2017, "venue": "The IEEE International Conference on Computer Vision (ICCV)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jingyu Liu, Liang Wang, and Ming-Hsuan Yang. 2017. Referring expression generation and comprehension via attributes. 
In The IEEE International Confer- ence on Computer Vision (ICCV).", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Generation and comprehension of unambiguous object descriptions", "authors": [ { "first": "J", "middle": [], "last": "Mao", "suffix": "" }, { "first": "J", "middle": [], "last": "Huang", "suffix": "" }, { "first": "A", "middle": [], "last": "Toshev", "suffix": "" }, { "first": "O", "middle": [], "last": "Camburu", "suffix": "" }, { "first": "A", "middle": [], "last": "Yuille", "suffix": "" }, { "first": "K", "middle": [], "last": "Murphy", "suffix": "" } ], "year": 2016, "venue": "2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Mao, J. Huang, A. Toshev, O. Camburu, A. Yuille, and K. Murphy. 2016. Generation and comprehen- sion of unambiguous object descriptions. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Generation and comprehension of unambiguous object descriptions", "authors": [ { "first": "Junhua", "middle": [], "last": "Mao", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Toshev", "suffix": "" }, { "first": "Oana", "middle": [], "last": "Camburu", "suffix": "" }, { "first": "Alan", "middle": [], "last": "Yuille", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Murphy", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Junhua Mao, Jonathan Huang, Alexander Toshev, Oana Camburu, Alan Yuille, and Kevin Murphy. 2016. Generation and comprehension of unambiguous ob- ject descriptions. In Proceedings of the IEEE Con- ference on Computer Vision and Pattern Recognition (CVPR).", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Measuring the diversity of automatic image descriptions", "authors": [ { "first": "Desmond", "middle": [], "last": "Emiel Van Miltenburg", "suffix": "" }, { "first": "Piek", "middle": [], "last": "Elliott", "suffix": "" }, { "first": "", "middle": [], "last": "Vossen", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 27th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "1730--1741", "other_ids": {}, "num": null, "urls": [], "raw_text": "Emiel van Miltenburg, Desmond Elliott, and Piek Vossen. 2018. Measuring the diversity of automatic image descriptions. In Proceedings of the 27th Inter- national Conference on Computational Linguistics, pages 1730-1741, Santa Fe, New Mexico, USA. As- sociation for Computational Linguistics.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Generating expressions that refer to visual objects", "authors": [ { "first": "Margaret", "middle": [], "last": "Mitchell", "suffix": "" }, { "first": "Ehud", "middle": [ "Baruch" ], "last": "Kees Van Deemter", "suffix": "" }, { "first": "", "middle": [], "last": "Reiter", "suffix": "" } ], "year": 2013, "venue": "Proc of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Margaret Mitchell, Kees van Deemter, and Ehud Baruch Reiter. 2013. 
Generating expressions that refer to visual objects. In Proc of the 2013 Confer- ence of the North American Chapter of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies. Association for Computational Linguistics.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Dharshan Kumaran, Daan Wierstra, Shane Legg, and Demis Hassabis", "authors": [ { "first": "Volodymyr", "middle": [], "last": "Mnih", "suffix": "" }, { "first": "Koray", "middle": [], "last": "Kavukcuoglu", "suffix": "" }, { "first": "David", "middle": [], "last": "Silver", "suffix": "" }, { "first": "Andrei", "middle": [ "A" ], "last": "Rusu", "suffix": "" }, { "first": "Joel", "middle": [], "last": "Veness", "suffix": "" }, { "first": "Marc", "middle": [ "G" ], "last": "Bellemare", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Graves", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Riedmiller", "suffix": "" }, { "first": "Andreas", "middle": [ "K" ], "last": "Fidjeland", "suffix": "" }, { "first": "Georg", "middle": [], "last": "Ostrovski", "suffix": "" }, { "first": "Stig", "middle": [], "last": "Petersen", "suffix": "" } ], "year": 2015, "venue": "Nature", "volume": "518", "issue": "7540", "pages": "529--533", "other_ids": { "DOI": [ "10.1038/nature14236" ] }, "num": null, "urls": [], "raw_text": "Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fid- jeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, and Demis Hassabis. 2015. Human-level con- trol through deep reinforcement learning. Nature, 518(7540):529-533.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Improved image captioning via policy gradient optimization of spider", "authors": [ { "first": "Kevin", "middle": [], "last": "Murphy", "suffix": "" }, { "first": "Ning", "middle": [], "last": "Ye", "suffix": "" }, { "first": "Sergio", "middle": [], "last": "Guadarrama", "suffix": "" }, { "first": "Siqi", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Zhenhai", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kevin Murphy, Ning Ye, Sergio Guadarrama, Siqi Liu, and Zhenhai Zhu. 2017. Improved image captioning via policy gradient optimization of spider.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Policy invariance under reward transformations: Theory and application to reward shaping", "authors": [ { "first": "Andrew", "middle": [ "Y" ], "last": "Ng", "suffix": "" }, { "first": "Daishi", "middle": [], "last": "Harada", "suffix": "" }, { "first": "Stuart", "middle": [ "J" ], "last": "", "suffix": "" } ], "year": 1999, "venue": "Proceedings of the Sixteenth International Conference on Machine Learning, ICML '99", "volume": "", "issue": "", "pages": "278--287", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andrew Y. Ng, Daishi Harada, and Stuart J. Rus- sell. 1999. Policy invariance under reward trans- formations: Theory and application to reward shap- ing. In Proceedings of the Sixteenth International Conference on Machine Learning, ICML '99, page 278-287, San Francisco, CA, USA. 
Morgan Kauf- mann Publishers Inc.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Minimum error rate training in statistical machine translation", "authors": [ { "first": "Franz Josef", "middle": [], "last": "Och", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "160--167", "other_ids": { "DOI": [ "10.3115/1075096.1075117" ] }, "num": null, "urls": [], "raw_text": "Franz Josef Och. 2003. Minimum error rate training in statistical machine translation. In Proceedings of the 41st Annual Meeting of the Association for Compu- tational Linguistics, pages 160-167, Sapporo, Japan. Association for Computational Linguistics.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Sequence level training with recurrent neural networks", "authors": [ { "first": "Aurelio", "middle": [], "last": "Marc", "suffix": "" }, { "first": "Sumit", "middle": [], "last": "Ranzato", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Chopra", "suffix": "" }, { "first": "Wojciech", "middle": [], "last": "Auli", "suffix": "" }, { "first": "", "middle": [], "last": "Zaremba", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 4th International Conference on Learning Representations ICLR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2016. Sequence level train- ing with recurrent neural networks. In Proceed- ings of the 4th International Conference on Learn- ing Representations ICLR.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Self-critical sequence training for image captioning", "authors": [ { "first": "J", "middle": [], "last": "Steven", "suffix": "" }, { "first": "Etienne", "middle": [], "last": "Rennie", "suffix": "" }, { "first": "Youssef", "middle": [], "last": "Marcheret", "suffix": "" }, { "first": "Jarret", "middle": [], "last": "Mroueh", "suffix": "" }, { "first": "Vaibhava", "middle": [], "last": "Ross", "suffix": "" }, { "first": "", "middle": [], "last": "Goel", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Steven J. Rennie, Etienne Marcheret, Youssef Mroueh, Jarret Ross, and Vaibhava Goel. 2017. Self-critical sequence training for image captioning. 
In 2017", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "IEEE Conference on Computer Vision and Pattern Recognition (CVPR)", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "IEEE Conference on Computer Vision and Pattern Recognition (CVPR).", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Minimum risk training for neural machine translation", "authors": [ { "first": "Shiqi", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Yong", "middle": [], "last": "Cheng", "suffix": "" }, { "first": "Zhongjun", "middle": [], "last": "He", "suffix": "" }, { "first": "Wei", "middle": [], "last": "He", "suffix": "" }, { "first": "Hua", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Maosong", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Yang", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1683--1692", "other_ids": { "DOI": [ "10.18653/v1/P16-1159" ] }, "num": null, "urls": [], "raw_text": "Shiqi Shen, Yong Cheng, Zhongjun He, Wei He, Hua Wu, Maosong Sun, and Yang Liu. 2016. Minimum risk training for neural machine translation. In Pro- ceedings of the 54th Annual Meeting of the Associa- tion for Computational Linguistics (Volume 1: Long Papers), pages 1683-1692, Berlin, Germany. Asso- ciation for Computational Linguistics.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Sequence to sequence learning with neural networks", "authors": [ { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Oriol", "middle": [], "last": "Vinyals", "suffix": "" }, { "first": "Quoc V", "middle": [], "last": "Le", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 27th International Conference on Neural Information Processing Systems", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Proceedings of the 27th International Conference on Neural Information Processing Systems.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Reinforcement Learning: An Introduction, second edition", "authors": [ { "first": "Richard", "middle": [ "S" ], "last": "Sutton", "suffix": "" }, { "first": "Andrew", "middle": [ "G" ], "last": "Barto", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Richard S. Sutton and Andrew G. Barto. 2018. Rein- forcement Learning: An Introduction, second edi- tion. The MIT Press.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Abstractive document summarization with a graphbased attentional neural model", "authors": [ { "first": "Jiwei", "middle": [], "last": "Tan", "suffix": "" }, { "first": "Xiaojun", "middle": [], "last": "Wan", "suffix": "" }, { "first": "Jianguo", "middle": [], "last": "Xiao", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jiwei Tan, Xiaojun Wan, and Jianguo Xiao. 2017. Abstractive document summarization with a graph- based attentional neural model. 
In Proceedings of the 55th Annual Meeting of the Association for Com- putational Linguistics (ACL).", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Speakerdependent variation in content selection for referring expression generation", "authors": [ { "first": "Jette", "middle": [], "last": "Viethen", "suffix": "" }, { "first": "Robert", "middle": [], "last": "Dale", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the Australasian Language Technology Association Workshop 2010", "volume": "", "issue": "", "pages": "81--89", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jette Viethen and Robert Dale. 2010. Speaker- dependent variation in content selection for referring expression generation. In Proceedings of the Aus- tralasian Language Technology Association Work- shop 2010, pages 81-89, Melbourne, Australia.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Graphs and spatial relations in the generation of referring expressions", "authors": [ { "first": "Jette", "middle": [], "last": "Viethen", "suffix": "" }, { "first": "Margaret", "middle": [], "last": "Mitchell", "suffix": "" }, { "first": "Emiel", "middle": [], "last": "Krahmer", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 14th European Workshop on Natural Language Generation", "volume": "", "issue": "", "pages": "72--81", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jette Viethen, Margaret Mitchell, and Emiel Krahmer. 2013. Graphs and spatial relations in the generation of referring expressions. In Proceedings of the 14th European Workshop on Natural Language Genera- tion, pages 72-81, Sofia, Bulgaria. Association for Computational Linguistics.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "A neural conversational model", "authors": [ { "first": "Oriol", "middle": [], "last": "Vinyals", "suffix": "" }, { "first": "V", "middle": [], "last": "Quoc", "suffix": "" }, { "first": "", "middle": [], "last": "Le", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Oriol Vinyals and Quoc V. Le. 2015. A neural conver- sational model. CoRR, abs/1506.05869.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Show and tell: A neural image caption generator", "authors": [ { "first": "Oriol", "middle": [], "last": "Vinyals", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Toshev", "suffix": "" }, { "first": "Samy", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "Dumitru", "middle": [], "last": "Erhan", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. 2015. Show and tell: A neural im- age caption generator. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recog- nition (CVPR).", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Describing like humans: On diversity in image captioning", "authors": [ { "first": "Qingzhong", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Antoni", "middle": [ "B" ], "last": "Chan", "suffix": "" } ], "year": 2019, "venue": "The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Qingzhong Wang and Antoni B. Chan. 2019. 
Describ- ing like humans: On diversity in image captioning. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR).", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Understanding Natural Language", "authors": [ { "first": "Terry", "middle": [], "last": "Winograd", "suffix": "" } ], "year": 1972, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Terry Winograd. 1972. Understanding Natural Lan- guage. Academic Press, Inc., USA.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Google's neural machine translation system: Bridging the gap between human and machine translation", "authors": [ { "first": "Yonghui", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Schuster", "suffix": "" }, { "first": "Zhifeng", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Quoc", "middle": [ "V" ], "last": "Le", "suffix": "" }, { "first": "Mohammad", "middle": [], "last": "Norouzi", "suffix": "" }, { "first": "Wolfgang", "middle": [], "last": "Macherey", "suffix": "" }, { "first": "Maxim", "middle": [], "last": "Krikun", "suffix": "" }, { "first": "Yuan", "middle": [], "last": "Cao", "suffix": "" }, { "first": "Qin", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Klaus", "middle": [], "last": "Macherey", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Klingner", "suffix": "" }, { "first": "Apurva", "middle": [], "last": "Shah", "suffix": "" }, { "first": "Melvin", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "Xiaobing", "middle": [], "last": "Liu", "suffix": "" }, { "first": "\u0141ukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Stephan", "middle": [], "last": "Gouws", "suffix": "" }, { "first": "Yoshikiyo", "middle": [], "last": "Kato", "suffix": "" }, { "first": "Taku", "middle": [], "last": "Kudo", "suffix": "" }, { "first": "Hideto", "middle": [], "last": "Kazawa", "suffix": "" }, { "first": "Keith", "middle": [], "last": "Stevens", "suffix": "" }, { "first": "George", "middle": [], "last": "Kurian", "suffix": "" }, { "first": "Nishant", "middle": [], "last": "Patil", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2016, "venue": "Oriol Vinyals", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin John- son, Xiaobing Liu, \u0141ukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rud- nick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. 
CoRR, abs/1609.08144.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Show, attend and tell: Neural image caption generation with visual attention", "authors": [ { "first": "Kelvin", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Ba", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Kiros", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Aaron", "middle": [], "last": "Courville", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Salakhudinov", "suffix": "" }, { "first": "Rich", "middle": [], "last": "Zemel", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 32nd International Conference on Machine Learning", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, Rich Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual atten- tion. In Proceedings of the 32nd International Con- ference on Machine Learning.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Modeling Context in Referring Expressions", "authors": [ { "first": "Licheng", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Poirson", "suffix": "" }, { "first": "Shan", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Alexander", "middle": [ "C" ], "last": "Berg", "suffix": "" }, { "first": "Tamara", "middle": [ "L" ], "last": "Berg", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 14th European Conference on Computer Vision (ECCV)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Licheng Yu, Patrick Poirson, Shan Yang, Alexander C. Berg, and Tamara L. Berg. 2016. Modeling Context in Referring Expressions. In Proceedings of the 14th European Conference on Computer Vision (ECCV).", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "A joint speaker-listener-reinforcer model for referring expressions", "authors": [ { "first": "Licheng", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Hao", "middle": [], "last": "Tan", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Bansal", "suffix": "" }, { "first": "Tamara", "middle": [ "L" ], "last": "Berg", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition CVPR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Licheng Yu, Hao Tan, Mohit Bansal, and Tamara L. Berg. 2017. A joint speaker-listener-reinforcer model for referring expressions. In Proceedings of the IEEE Conference on Computer Vision and Pat- tern Recognition CVPR.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "Decoding strategies for neural referring expression generation", "authors": [ { "first": "Sina", "middle": [], "last": "Zarrie\u00df", "suffix": "" }, { "first": "David", "middle": [], "last": "Schlangen", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 11th International Conference on Natural Language Generation", "volume": "", "issue": "", "pages": "503--512", "other_ids": { "DOI": [ "10.18653/v1/W18-6563" ] }, "num": null, "urls": [], "raw_text": "Sina Zarrie\u00df and David Schlangen. 2018. Decoding strategies for neural referring expression generation. 
In Proceedings of the 11th International Conference on Natural Language Generation, pages 503-512, Tilburg University, The Netherlands. Association for Computational Linguistics.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Validation set CIDEr scores for different values of \u03b1 for combining MLE with either RL objective (see Equation 7 ) or MRT objective (see Equation 8 ). Best viewed in color.", "uris": null, "num": null, "type_str": "figure" }, "FIGREF1": { "text": "Validation set CIDEr scores for different candidate set sizes for the MRT model. Best viewed in color.", "uris": null, "num": null, "type_str": "figure" }, "FIGREF2": { "text": "Examples of objects and expressions drawn from both RefCOCO and RefCOCO+ datasets. The target object is highlighted with a red box.", "uris": null, "num": null, "type_str": "figure" }, "TABREF1": { "html": null, "type_str": "table", "text": "Performance of different reward functions on RefCOCO dataset (the same trend applies to RefCOCO+ and thus omitted). RL stands for the REINFORCE algorithm. Optimizing the training for the CIDEr metric increases all evaluation metrics significantly. All models were decoded using greedy decoding. The performance of the seed model is also reported. The best overall values for each metric are emphasized with bold.", "content": "
Method             | testA                | testB                | testA+               | testB+
                   | BLEU  METEOR CIDEr   | BLEU  METEOR CIDEr   | BLEU  METEOR CIDEr   | BLEU  METEOR CIDEr
MLE                | 0.542 0.200  0.841   | 0.614 0.258  1.507   | 0.481 0.179  0.715   | 0.409 0.173  0.829
RL + RS            | 0.569 0.222  0.954   | 0.625 0.277  1.564   | 0.469 0.185  0.745   | 0.286 0.163  0.913
RL + BS            | 0.561 0.217  0.946   | 0.617 0.270  1.549   | 0.465 0.184  0.743   | 0.277 0.160  0.901
RL + RS + Shaping  | 0.574 0.223  0.957   | 0.628 0.278  1.567   | 0.473 0.181  0.752   | 0.279 0.162  0.915
RL + BS + Shaping  | 0.565 0.219  0.948   | 0.618 0.272  1.552   | 0.468 0.155  0.749   | 0.275 0.161  0.904
SCTS + RS          | 0.593 0.231  1.012   | 0.638 0.290  1.607   | 0.481 0.194  0.809   | 0.282 0.165  0.942
SCTS + GD          | 0.583 0.227  0.995   | 0.635 0.279  1.585   | 0.461 0.185  0.761   | 0.276 0.163  0.934
", "num": null } } } }