{
"paper_id": "D19-1010",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:58:04.406844Z"
},
"title": "Guided Dialog Policy Learning: Reward Estimation for Multi-Domain Task-Oriented Dialog",
"authors": [
{
"first": "Ryuichi",
"middle": [],
"last": "Takanobu",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Hanlin",
"middle": [],
"last": "Zhu",
"suffix": "",
"affiliation": {
"laboratory": "IIIS",
"institution": "Tsinghua University",
"location": {
"settlement": "Beijing",
"country": "China"
}
},
"email": ""
},
{
"first": "Minlie",
"middle": [],
"last": "Huang",
"suffix": "",
"affiliation": {},
"email": "aihuang@tsinghua.edu.cn"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Dialog policy decides what and how a taskoriented dialog system will respond, and plays a vital role in delivering effective conversations. Many studies apply Reinforcement Learning to learn a dialog policy with the reward function which requires elaborate design and pre-specified user goals. With the growing needs to handle complex goals across multiple domains, such manually designed reward functions are not affordable to deal with the complexity of real-world tasks. To this end, we propose Guided Dialog Policy Learning, a novel algorithm based on Adversarial Inverse Reinforcement Learning for joint reward estimation and policy optimization in multi-domain task-oriented dialog. The proposed approach estimates the reward signal and infers the user goal in the dialog sessions. The reward estimator evaluates the state-action pairs so that it can guide the dialog policy at each dialog turn. Extensive experiments on a multi-domain dialog dataset show that the dialog policy guided by the learned reward function achieves remarkably higher task success than state-of-the-art baselines. * Corresponding author U: I'm looking for a hotel to stay that has 5 stars and cheap price range. S: I am sorry that there is no such hotel, would you like to reserve a 3-star hotel as an alternative? U: I'd prefer a 4-star hotel even if it's a bit expensive. Oh, and I need parking. S: OK, I find a moderately priced 4-star hotel that includes parking and free wifi. U: Are there any places to eat around it? S: Many. Japanese, Indian, French, etc. What kind of food would you like?",
"pdf_parse": {
"paper_id": "D19-1010",
"_pdf_hash": "",
"abstract": [
{
"text": "Dialog policy decides what and how a taskoriented dialog system will respond, and plays a vital role in delivering effective conversations. Many studies apply Reinforcement Learning to learn a dialog policy with the reward function which requires elaborate design and pre-specified user goals. With the growing needs to handle complex goals across multiple domains, such manually designed reward functions are not affordable to deal with the complexity of real-world tasks. To this end, we propose Guided Dialog Policy Learning, a novel algorithm based on Adversarial Inverse Reinforcement Learning for joint reward estimation and policy optimization in multi-domain task-oriented dialog. The proposed approach estimates the reward signal and infers the user goal in the dialog sessions. The reward estimator evaluates the state-action pairs so that it can guide the dialog policy at each dialog turn. Extensive experiments on a multi-domain dialog dataset show that the dialog policy guided by the learned reward function achieves remarkably higher task success than state-of-the-art baselines. * Corresponding author U: I'm looking for a hotel to stay that has 5 stars and cheap price range. S: I am sorry that there is no such hotel, would you like to reserve a 3-star hotel as an alternative? U: I'd prefer a 4-star hotel even if it's a bit expensive. Oh, and I need parking. S: OK, I find a moderately priced 4-star hotel that includes parking and free wifi. U: Are there any places to eat around it? S: Many. Japanese, Indian, French, etc. What kind of food would you like?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Dialog policy, deciding the next action that the dialog agent should take at each turn, is a crucial component of a task-oriented dialog system. Among many models, Reinforcement Learning (RL) is commonly used to learn dialog policy (Fatemi et al., 2016; Peng et al., 2017; Yarats and Lewis, 2018; Lei et al., 2018; He et al., 2018; , where users are modeled as a part of the environment and the policy is learned through interactions with users.",
"cite_spans": [
{
"start": 232,
"end": 253,
"text": "(Fatemi et al., 2016;",
"ref_id": "BIBREF8"
},
{
"start": 254,
"end": 272,
"text": "Peng et al., 2017;",
"ref_id": "BIBREF19"
},
{
"start": 273,
"end": 296,
"text": "Yarats and Lewis, 2018;",
"ref_id": "BIBREF33"
},
{
"start": 297,
"end": 314,
"text": "Lei et al., 2018;",
"ref_id": "BIBREF16"
},
{
"start": 315,
"end": 331,
"text": "He et al., 2018;",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "While it is too expensive to learn directly from real users since RL requires a large number of Table 1 : An example of the multi-domain task-oriented dialog between the user (U) and the system (S). The dialog proceeds successfully because the system informs the user that no matching hotel exists (the first turn), identifies the new user goal about parking (the second turn), and shifts the topic to the restaurant domain (the third turn), which well understands the user's demand. samples to train, most existing studies use datadriven approaches to build a dialog system from conversational corpora (Zhao and Eskenazi, 2016; Dhingra et al., 2017; Shi and Yu, 2018) , where a common strategy is to build a user simulator, and then to learn dialog policy through making simulated interactions between an agent and the simulator. A typical reward function on policy learning consists of a small negative penalty at each turn to encourage a shorter session, and a large positive reward when the session ends successfully if the agent completes the user goal.",
"cite_spans": [
{
"start": 603,
"end": 628,
"text": "(Zhao and Eskenazi, 2016;",
"ref_id": "BIBREF35"
},
{
"start": 629,
"end": 650,
"text": "Dhingra et al., 2017;",
"ref_id": "BIBREF7"
},
{
"start": 651,
"end": 668,
"text": "Shi and Yu, 2018)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [
{
"start": 96,
"end": 103,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "However, specifying an effective reward function is challenging in task-oriented dialog. On one hand, the short dialogs resulted from the negative constant rewards are not always efficient. The agent may end a session too quickly to complete the task properly. For example, it is inappropriate to book a 3-star hotel without confirming with the user at the first turn in Table 1 . On the other hand, an explicit user goal is essential to evaluate the task success in the reward design, but user goals are hardly available in real situations (Su et al., 2016) . In addition, the user goal may change as the conversation proceeds. For instance, the user introduces a new requirement for the parking information at the second turn in Table 1 .",
"cite_spans": [
{
"start": 541,
"end": 558,
"text": "(Su et al., 2016)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [
{
"start": 371,
"end": 378,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 731,
"end": 738,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Unlike a handcrafted reward function that only evaluates the task success at the end of a session, a good reward function should be able to guide the policy dynamically to complete the task during the conversation. We refer to this as the reward sparsity issue. Furthermore, the reward function is often manually tweaked until the dialog policy performs desired behaviors. With the growing needs for the system to handle complex tasks across multiple domains, a more sophisticated reward function would be designed, which poses a serious challenge to manually trade off those different factors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we propose a novel model for learning task-oriented dialog policy. The model includes a robust dialog reward estimator based on Inverse Reinforcement Learning (IRL). The main idea is to automatically infer the reward and goal that motivates human behaviors and interactions from the real human-human dialog sessions. Different from conventional IRL that learns a reward function first and then trains the policy, we integrate Adversarial Learning (AL) into the method so that the policy and reward estimator can be learned simultaneously in an alternate way, thus improving each other during training. To deal with reward sparsity, the reward estimator evaluates the generated dialog session using state-action pairs instead of the entire session, which provides reward signals at each dialog turn and guides dialog policy learning better.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To evaluate the proposed approach, we conduct our experiments on a multi-domain, multi-intent task-oriented dialog corpus. The corpus involves large state and action spaces, multiple decision making in one turn, which makes it more challenging for the reward estimator to infer the user goal. Furthermore, we experiment with two different user simulators. The contributions of our work are in three folds:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We build a reward estimator via Inverse Reinforcement Learning (IRL) to infer an appropriate reward from multi-domain dialog sessions, in order to avoid manual design of reward function.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We integrate Adversarial Learning (AL) to train the policy and estimator simultaneously, and evaluate the policy using state-action pairs to better guide dialog policy learning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We conduct experiments on the multidomain, multi-intent task-oriented dialog corpus, with different types of user simulators. Results show the superiority of our model to the state-of-the-art baselines.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2 Related Work",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Some recent efforts have been paid to multidomain task-oriented dialog systems where users converse with the agent across multiple domains.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Domain Dialog Policy Learning",
"sec_num": "2.1"
},
{
"text": "A natural way to handle multi-domain dialog systems is to learn multiple independent singledomain sub-policies (Wang et al., 2014; Ga\u0161i\u0107 et al., 2015; Cuay\u00e1huitl et al., 2016) . Multidomain dialog completion was also addressed by hierarchical RL which decomposes the task into several sub-tasks in terms of temporal order (Peng et al., 2017) or space abstraction , but the hierarchical structure can be very complex and constraints between different domains should be considered if an agent conveys multiple intents.",
"cite_spans": [
{
"start": 111,
"end": 130,
"text": "(Wang et al., 2014;",
"ref_id": "BIBREF30"
},
{
"start": 131,
"end": 150,
"text": "Ga\u0161i\u0107 et al., 2015;",
"ref_id": "BIBREF11"
},
{
"start": 151,
"end": 175,
"text": "Cuay\u00e1huitl et al., 2016)",
"ref_id": "BIBREF6"
},
{
"start": 322,
"end": 341,
"text": "(Peng et al., 2017)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Domain Dialog Policy Learning",
"sec_num": "2.1"
},
{
"text": "Handcrafted reward functions for dialog policy learning require elaborate design. Several reward learning algorithms have been proposed to find better rewards, including supervised learning on expert dialogs (Li et al., 2014) , online active learning from user feedback (Su et al., 2016) , multiobject RL to aggregate measurements of various aspects of user satisfaction (Ultes et al., 2017) , etc. However, these methods still require some knowledge about user goals or annotations of dialog ratings from real users. Boularias et al. (2010) and Barahona and Cerisara (2014) ",
"cite_spans": [
{
"start": 208,
"end": 225,
"text": "(Li et al., 2014)",
"ref_id": "BIBREF17"
},
{
"start": 270,
"end": 287,
"text": "(Su et al., 2016)",
"ref_id": "BIBREF26"
},
{
"start": 371,
"end": 391,
"text": "(Ultes et al., 2017)",
"ref_id": "BIBREF28"
},
{
"start": 518,
"end": 541,
"text": "Boularias et al. (2010)",
"ref_id": "BIBREF2"
},
{
"start": 546,
"end": 574,
"text": "Barahona and Cerisara (2014)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Reward Learning in Dialog Systems",
"sec_num": "2.2"
},
{
"text": "We propose Guided Dialog Policy Learning (GDPL), a flexible and practical method on joint reward learning and policy optimization for multidomain task-oriented dialog systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Guided Dialog Policy Learning",
"sec_num": "3"
},
{
"text": "The overview of the full model is depicted in Fig. 1 . The framework consists of three modules: a multi-domain Dialog State Tracker (DST) at the dialog act level, a dialog policy module for deciding the next dialog act, and a reward estimator for policy evaluation. Specifically, given a set of collected human dialog sessions D = {\u03c4 1 , \u03c4 2 , . . . }, each dialog session \u03c4 is a trajectory of state-action pairs",
"cite_spans": [],
"ref_spans": [
{
"start": 46,
"end": 53,
"text": "Fig. 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Overview",
"sec_num": "3.1"
},
{
"text": "{s u 0 , a u 0 , s 0 , a 0 , s u 1 , a u 1 , s 1 , a 1 , . . . }.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overview",
"sec_num": "3.1"
},
{
"text": "The user simulator \u00b5(a u , t u |s u ) posts a response a u according to the user dialog state s u where t u denotes a binary terminal signal indicating whether the user wants to end the dialog session. The dialog policy \u03c0 \u03b8 (a|s) decides the action a according to the cur-rent state s and interacts with the simulator \u00b5. During the conversation, DST records the action from one dialog party and returns the state to the other party for deciding what action to take in the next step. Then, the reward estimator f \u03c9 (s, a) evaluates the quality of the response from the dialog policy, by comparing it with sampled human dialog sessions from the corpus. The dialog policy \u03c0 and the reward estimator f are MLPs parameterized by \u03b8, \u03c9 respectively. Note that our approach does not need any human supervision during training, and modeling a user simulator is beyond the scope of this paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overview",
"sec_num": "3.1"
},
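To make the alternation above concrete, the following is a minimal, runnable PyTorch sketch of one reward-estimator update followed by one policy update. The network sizes, the Bernoulli (multi-label dialog act) parameterization, and the plain REINFORCE policy step are illustrative assumptions for a toy setting, not the authors' implementation (which optimizes the policy with PPO, Sec. 3.4); random tensors stand in for real dialog states and actions.

```python
# Toy sketch of the alternating GDPL-style update (Sec. 3.1): push f_w up on human
# (state, action) pairs and down on simulated ones, then update the policy with the
# per-turn signal f_w(s, a) - log pi(a|s). Dimensions and data are placeholders.
import torch
import torch.nn as nn

STATE_DIM, ACT_DIM = 16, 8  # assumed toy sizes

policy = nn.Sequential(nn.Linear(STATE_DIM, 32), nn.ReLU(), nn.Linear(32, ACT_DIM))
estimator = nn.Sequential(nn.Linear(STATE_DIM + ACT_DIM, 32), nn.ReLU(), nn.Linear(32, 1))
opt_pi = torch.optim.Adam(policy.parameters(), lr=1e-3)
opt_f = torch.optim.Adam(estimator.parameters(), lr=1e-3)

def f_w(s, a):
    """Reward estimator f_w(s, a): one scalar per (state, action) pair."""
    return estimator(torch.cat([s, a], dim=-1)).squeeze(-1)

for step in range(3):
    s_real = torch.rand(32, STATE_DIM)                 # human pairs (placeholder data)
    a_real = (torch.rand(32, ACT_DIM) > 0.5).float()
    s_sim = torch.rand(32, STATE_DIM)                  # states visited by the current policy
    dist = torch.distributions.Bernoulli(logits=policy(s_sim))  # multi-label dialog acts
    a_sim = dist.sample()

    # Reward estimator step: ascend E_D[f_w] - E_pi[f_w] (cf. Eq. (2) later).
    loss_f = -(f_w(s_real, a_real).mean() - f_w(s_sim, a_sim).mean())
    opt_f.zero_grad(); loss_f.backward(); opt_f.step()

    # Policy step (REINFORCE stand-in for PPO), reward r = f_w(s, a) - log pi(a|s).
    log_pi = dist.log_prob(a_sim).sum(dim=-1)
    r = (f_w(s_sim, a_sim) - log_pi).detach()
    loss_pi = -(r * log_pi).mean()
    opt_pi.zero_grad(); loss_pi.backward(); opt_pi.step()
```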
{
"text": "In the subsequent subsections, we will first explain the state, action, and DST used in our algorithm. Then, the algorithm is introduced in a session level, and last followed by a decomposition of state-action pair level.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overview",
"sec_num": "3.1"
},
{
"text": "A dialog state tracker keeps track of the dialog session to update the dialog state (Williams et al., 2016; . It records informable slots about the constraints from users and requestable slots that indicates what users want to inquiry. DST maintains a separate belief state for each slot. Given a user action, the belief state of its slot type is updated according to its slot value (Roy et al., 2000) . Action and state in our algorithm are defined as follows: Action : Each system action a or user action a u is a subset of dialog act set A as there may be multiple intents in one dialog turn. A dialog act is an abstract representation of an intention (Stolcke et al., 2000) , which can be represented in a quadruple composed of domain, intent, slot type and slot value in the multi-domain setting (e.g. [restaurant, inform, food, Italian]). In practice, dialog acts are delexicalized in the dialog policy. We replace the slot value with a count placeholder and refill it with the true value according to the entity selected from the external database, which allows the system to operate on unseen values.",
"cite_spans": [
{
"start": 84,
"end": 107,
"text": "(Williams et al., 2016;",
"ref_id": "BIBREF32"
},
{
"start": 383,
"end": 401,
"text": "(Roy et al., 2000)",
"ref_id": "BIBREF20"
},
{
"start": 655,
"end": 677,
"text": "(Stolcke et al., 2000)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Domain Dialog State Tracker",
"sec_num": "3.2"
},
{
"text": "State : At dialog turn t 1 , the system state s t = [a u t ; a t\u22121 ; b t ; q t ] consists of (I) user action at current turn a u t ; (II) system action at the last turn a t\u22121 ; (III) all belief state b t from DST; and (IV) embedding vectors of the number of query results q t from the external database.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Domain Dialog State Tracker",
"sec_num": "3.2"
},
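As a concrete illustration of the concatenated state s_t = [a^u_t ; a_{t-1} ; b_t ; q_t]: the sketch below is a hypothetical encoding, not the exact feature layout used in the paper; the dialog-act inventory, belief slots, and database-count buckets are invented for the example.

```python
# Illustrative construction of s_t = [a^u_t ; a_{t-1} ; b_t ; q_t] (Sec. 3.2).
# The act inventory, belief slots, and result-count buckets are made up; only the
# "concatenate the four parts" scheme follows the text.
import numpy as np

DIALOG_ACTS = ["hotel-inform-stars", "hotel-request-parking",
               "restaurant-inform-food", "restaurant-request-area"]
BELIEF_SLOTS = ["hotel-stars", "hotel-price", "restaurant-food", "restaurant-area"]
QUERY_BUCKETS = 4  # number of DB results, clipped into {0, 1, 2, 3+}

def multi_hot(acts, inventory):
    v = np.zeros(len(inventory), dtype=np.float32)
    for act in acts:
        v[inventory.index(act)] = 1.0
    return v

def build_state(user_acts, last_sys_acts, belief, n_results):
    a_u = multi_hot(user_acts, DIALOG_ACTS)         # (I) user action at current turn
    a_prev = multi_hot(last_sys_acts, DIALOG_ACTS)  # (II) system action at last turn
    b = np.array([1.0 if belief.get(slot) else 0.0 for slot in BELIEF_SLOTS],
                 dtype=np.float32)                  # (III) flattened belief state
    q = np.zeros(QUERY_BUCKETS, dtype=np.float32)   # (IV) bucketed DB result count
    q[min(n_results, QUERY_BUCKETS - 1)] = 1.0
    return np.concatenate([a_u, a_prev, b, q])

s_t = build_state(user_acts=["hotel-inform-stars", "hotel-request-parking"],
                  last_sys_acts=["hotel-inform-stars"],
                  belief={"hotel-stars": "4", "hotel-price": "cheap"},
                  n_results=3)
print(s_t.shape)  # (16,)
```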
{
"text": "As our model works at the dialog act level, DST can be simply implemented by extracting the slots from actions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Domain Dialog State Tracker",
"sec_num": "3.2"
},
{
"text": "Based on maximum entropy IRL (Ziebart et al., 2008) , the reward estimator maximizes the log likelihood of observed human dialog sessions to infer the underlying goal,",
"cite_spans": [
{
"start": 29,
"end": 51,
"text": "(Ziebart et al., 2008)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Session Level Reward Estimation",
"sec_num": "3.3"
},
{
"text": "\u03c9 * = argmax \u03c9 E \u03c4 \u223cD [f \u03c9 (\u03c4 )], f \u03c9 (\u03c4 ) = log p \u03c9 (\u03c4 ) = log e R\u03c9(\u03c4 ) Z \u03c9 , R \u03c9 (\u03c4 ) = T t=0 \u03b3 t r \u03c9 (s t , a t ), Z \u03c9 = \u03c4 e R\u03c9(\u03c4 ) .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Session Level Reward Estimation",
"sec_num": "3.3"
},
{
"text": "where f models human dialogs as a Boltzmann distribution (Ziebart et al., 2008) , R stands for the return of a session, i.e. \u03b3-discounted cumulative rewards, and Z is the corresponding partition function. The dialog policy is encouraged to mimic human dialog behaviors. It maximizes the expected entropy-regularized return E \u03c0 [R] + H(\u03c0) (Ziebart et al., 2010) based on the principle of maximum entropy through minimizing the KL-divergence between the policy distribution and Boltzmann distribution,",
"cite_spans": [
{
"start": 57,
"end": 79,
"text": "(Ziebart et al., 2008)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Session Level Reward Estimation",
"sec_num": "3.3"
},
{
"text": "J \u03c0 (\u03b8) = \u2212KL(\u03c0 \u03b8 (\u03c4 )||p \u03c9 (\u03c4 )) = E \u03c4 \u223c\u03c0 [f \u03c9 (\u03c4 ) \u2212 log \u03c0 \u03b8 (\u03c4 )] = E \u03c4 \u223c\u03c0 [R \u03c9 (\u03c4 )] \u2212 log Z \u03c9 + H(\u03c0 \u03b8 ),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Session Level Reward Estimation",
"sec_num": "3.3"
},
{
"text": "where the term log Z \u03c9 is independent to \u03b8, and H(\u2022) denotes the entropy of a model. Intuitively, maximizing entropy is to resolve the ambiguity of language that many optimal policies can explain a set of natural dialog sessions. With the aid of the likelihood ratio trick, the gradient for the dialog policy is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Session Level Reward Estimation",
"sec_num": "3.3"
},
{
"text": "\u2207 \u03b8 J \u03c0 = E \u03c4 \u223c\u03c0 [(f \u03c9 (\u03c4 ) \u2212 log \u03c0 \u03b8 (\u03c4 ))\u2207 \u03b8 log \u03c0 \u03b8 (\u03c4 )].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Session Level Reward Estimation",
"sec_num": "3.3"
},
{
"text": "In the fashion of AL, the reward estimator aims to distinguish real human sessions and generated sessions from the dialog policy. Therefore, it minimizes KL-divergence with the empirical distribution, while maximizing the KL-divergence with the policy distribution, Similarly, H(p) and H(\u03c0 \u03b8 ) is independent to \u03c9, so the gradient for the reward estimator yields",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Session Level Reward Estimation",
"sec_num": "3.3"
},
{
"text": "J f (\u03c9)=\u2212KL(p D (\u03c4 )||p \u03c9 (\u03c4 ))+KL(\u03c0 \u03b8 (\u03c4 )||p \u03c9 (\u03c4 )) =E \u03c4 \u223cD [f \u03c9 (\u03c4 )]+H(p)\u2212E \u03c4 \u223c\u03c0 [f \u03c9 (\u03c4 )]\u2212H(\u03c0 \u03b8 ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Session Level Reward Estimation",
"sec_num": "3.3"
},
{
"text": "\u2207 \u03c9 J f = E \u03c4 \u223cD [\u2207 \u03c9 f \u03c9 (\u03c4 )] \u2212 E \u03c4 \u223c\u03c0 [\u2207 \u03c9 f \u03c9 (\u03c4 )].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Session Level Reward Estimation",
"sec_num": "3.3"
},
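For completeness (this intermediate step is implicit in the paper), the expansion of J_f and its gradient follow directly from the definitions above, with p_D the empirical distribution of human sessions and f_\u03c9(\u03c4) = log p_\u03c9(\u03c4):

```latex
% Expanding the two KL terms using f_w(tau) = log p_w(tau):
-\mathrm{KL}(p_D \,\|\, p_\omega)
   = -\mathbb{E}_{\tau \sim D}\big[\log p_D(\tau) - f_\omega(\tau)\big]
   = \mathbb{E}_{\tau \sim D}\big[f_\omega(\tau)\big] + H(p_D),
\qquad
\mathrm{KL}(\pi_\theta \,\|\, p_\omega)
   = \mathbb{E}_{\tau \sim \pi}\big[\log \pi_\theta(\tau) - f_\omega(\tau)\big]
   = -\mathbb{E}_{\tau \sim \pi}\big[f_\omega(\tau)\big] - H(\pi_\theta).
% H(p_D) and H(pi_theta) do not depend on omega, so differentiating J_f gives
\nabla_\omega J_f
   = \mathbb{E}_{\tau \sim D}\big[\nabla_\omega f_\omega(\tau)\big]
   - \mathbb{E}_{\tau \sim \pi}\big[\nabla_\omega f_\omega(\tau)\big].
```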
{
"text": "So far, the reward estimation uses the entire session \u03c4 , which can be very inefficient because of reward sparsity and may be of high variance due to the different lengths of sessions. Here we decompose a session \u03c4 into state-action pairs (s, a) in the reward estimator to address the issues. Therefore, the loss functions for the dialog policy and the reward estimator become respectively as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "State-Action Level Reward Estimation",
"sec_num": "3.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "J \u03c0 (\u03b8) = E s,a\u223c\u03c0 [ T k=t \u03b3 k\u2212t (f \u03c9 (s k , a k ) \u2212 log \u03c0 \u03b8 (a k |s k ))], (1) J f (\u03c9) = E s,a\u223cD [f \u03c9 (s, a)] \u2212 E s,a\u223c\u03c0 [f \u03c9 (s, a)],",
"eq_num": "(2)"
}
],
"section": "State-Action Level Reward Estimation",
"sec_num": "3.4"
},
{
"text": "where T is the number of dialog turns. Since the reward estimator evaluates a state-action pair, it can guide the dialog policy at each dialog turn with the predicted rewardr \u03c9 (s, a) = f \u03c9 (s, a) \u2212 log \u03c0 \u03b8 (a|s). Moreover, the reward estimator f \u03c9 can be transformed to a reward approximator g \u03c9 and a shaping term h \u03c9 according to (Fu et al., 2018) to recover an interpretable and robust reward from real human sessions. Formally,",
"cite_spans": [
{
"start": 333,
"end": 350,
"text": "(Fu et al., 2018)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "State-Action Level Reward Estimation",
"sec_num": "3.4"
},
{
"text": "f \u03c9 (s t , a t , s t+1 ) = g \u03c9 (s t , a t )+\u03b3h \u03c9 (s t+1 )\u2212h \u03c9 (s t ),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "State-Action Level Reward Estimation",
"sec_num": "3.4"
},
{
"text": "where we replace the state-action pair (s t , a t ) with the state-action-state triple (s t , a t , s t+1 ) as the input of the reward estimator. Note that, different from the objective in (Fu et al., 2018) that learns a discriminator in the form D \u03c9 (s, a) = p\u03c9(s,a) p\u03c9(s,a)+\u03c0(a|s) , GDPL directly optimizes f \u03c9 , which avoids unstable or vanishing gradient issue in vanilla GAN (Arjovsky et al., 2017) .",
"cite_spans": [
{
"start": 189,
"end": 206,
"text": "(Fu et al., 2018)",
"ref_id": "BIBREF10"
},
{
"start": 380,
"end": 403,
"text": "(Arjovsky et al., 2017)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "State-Action Level Reward Estimation",
"sec_num": "3.4"
},
{
"text": "In practice, we apply Proximal Policy Optimization (PPO) (Schulman et al., 2017) , a simple and stable policy based RL algorithm using a constant clipping mechanism as the soft constraint for dialog policy optimization,",
"cite_spans": [
{
"start": 57,
"end": 80,
"text": "(Schulman et al., 2017)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "State-Action Level Reward Estimation",
"sec_num": "3.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "J \u03c0 (\u03b8)=E s,a\u223c\u03c0 [min{\u03b2 t\u00c2t , clip(\u03b2 t ,1\u2212 ,1+ )\u00c2 t }], (3) A t =\u03b4 t + \u03b3\u03bb\u00c2 t+1 , \u03b4 t =r t + \u03b3V \u03b8 (s t+1 ) \u2212 V \u03b8 (s t ), J V (\u03b8)=\u2212(V \u03b8 (s t ) \u2212 T k=t \u03b3 k\u2212tr k ) 2 ,",
"eq_num": "(4)"
}
],
"section": "State-Action Level Reward Estimation",
"sec_num": "3.4"
},
{
"text": "where V \u03b8 is the approximate value function, \u03b2 t = \u03c0 \u03b8 (at|st) \u03c0 \u03b8 old (at|st) is the ratio of the probability under the new and old policies,\u00c2 is the estimated advantage, \u03b4 is TD residual, \u03bb and are hyper-parameters. In summary, a brief script for GPDL algorithm is shown in Algorithm 1.",
"cite_spans": [
{
"start": 71,
"end": 78,
"text": "(at|st)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "State-Action Level Reward Estimation",
"sec_num": "3.4"
},
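The PPO update in Eqs. (3)-(4) can be written compactly as below; this is a generic PPO-clip plus GAE implementation on a single toy trajectory, with random placeholders for the per-turn estimated rewards and illustrative hyper-parameter values rather than the ones from the paper.

```python
# Generic PPO-clip + GAE step on one toy trajectory (cf. Eqs. (3)-(4)).
# Rewards r_hat are random placeholders for the learned per-turn rewards.
import torch
import torch.nn as nn

STATE_DIM, ACT_DIM = 16, 8
gamma, lam, eps = 0.99, 0.95, 0.2   # illustrative values

policy = nn.Sequential(nn.Linear(STATE_DIM, 32), nn.ReLU(), nn.Linear(32, ACT_DIM))
value = nn.Sequential(nn.Linear(STATE_DIM, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(list(policy.parameters()) + list(value.parameters()), lr=3e-4)

T = 6
states = torch.rand(T, STATE_DIM)
old_dist = torch.distributions.Bernoulli(logits=policy(states).detach())
actions = old_dist.sample()
old_logp = old_dist.log_prob(actions).sum(-1)     # log pi_old(a_t | s_t)
r_hat = torch.rand(T)                             # per-turn estimated rewards

with torch.no_grad():
    v = value(states).squeeze(-1)
    v_next = torch.cat([v[1:], torch.zeros(1)])   # V(s_{T+1}) = 0 at the end of the session
    delta = r_hat + gamma * v_next - v            # TD residual delta_t
    adv, ret = torch.zeros(T), torch.zeros(T)
    running_adv, running_ret = 0.0, 0.0
    for t in reversed(range(T)):                  # A_t = delta_t + gamma * lambda * A_{t+1}
        running_adv = delta[t] + gamma * lam * running_adv
        running_ret = r_hat[t] + gamma * running_ret
        adv[t] = running_adv
        ret[t] = running_ret                      # Monte Carlo target for V (Eq. (4))

new_dist = torch.distributions.Bernoulli(logits=policy(states))
new_logp = new_dist.log_prob(actions).sum(-1)
ratio = torch.exp(new_logp - old_logp)            # beta_t
loss_pi = -torch.min(ratio * adv, torch.clamp(ratio, 1 - eps, 1 + eps) * adv).mean()
loss_v = ((value(states).squeeze(-1) - ret) ** 2).mean()
opt.zero_grad(); (loss_pi + loss_v).backward(); opt.step()
```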
{
"text": "We use MultiWOZ , a multi-domain, multi-intent task-oriented dialog corpus that contains 7 domains, 13 intents, 25 slot types, 10,483 dialog sessions, and 71,544 dialog turns in our experiments. Among all the sessions, 1,000 each are used for validation and test. During the data collection process, a user is asked to follow a pre-specified user goal, but it encourages the user to change its goal during the session and the changed goal is also stored, so the collected dialogs are much closer to reality. The corpus also provides the ontology that defines all the entity attributes for the external database.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data and Simulators",
"sec_num": "4.1"
},
{
"text": "We apply two user simulators as the interaction environment for the agent. One is the agendabased user simulator (Schatzmann et al., 2007) which uses heuristics, and the other is a datadriven neural model, namely, Variational Hierarchical User Simulator (VHUS) derived from (G\u00fcr et al., 2018) . Both simulators initialize a user goal when the dialog starts 2 , provide the agent with a simulated user response at each dialog turn, and work at the dialog act level. Since the original corpus only annotates the dialog acts at the system side, we use the annotation at the user side from ConvLab (Lee et al., 2019) to implement the two simulators.",
"cite_spans": [
{
"start": 113,
"end": 138,
"text": "(Schatzmann et al., 2007)",
"ref_id": "BIBREF21"
},
{
"start": 274,
"end": 292,
"text": "(G\u00fcr et al., 2018)",
"ref_id": "BIBREF12"
},
{
"start": 594,
"end": 612,
"text": "(Lee et al., 2019)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data and Simulators",
"sec_num": "4.1"
},
{
"text": "Evaluation of a task-oriented dialog mainly consists of the cost (dialog turns) and task success (inform F1 & match rate). The definition of inform F1 and match rate is explained as follows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "4.2"
},
{
"text": "Inform F1 : This evaluates whether all the requested information (e.g. address, phone number of a hotel) has been informed. Here we compute the F1 score so that a policy which greedily answers all the attributes of an entity will only get a high recall but a low precision.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "4.2"
},
{
"text": "Match rate : This evaluates whether the booked entities match all the indicated constraints (e.g. Japanese food in the center of the city) for all domains. If the agent fails to book an entity in one domain, it will obtain 0 score on that domain. This metric ranges from 0 to 1 for each domain, and the average on all domains stands for the score of a session.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "4.2"
},
{
"text": "Finally, a dialog is considered successful only if all the information is provided (i.e. inform recall = 1) and the entities are correctly booked (i.e. match rate = 1) as well 3 . Dialog success is either 0 or 1 for each session.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "4.2"
},
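The three metrics can be computed directly from a session summary; the representation below (sets of requested/informed slot names and per-domain booking flags) is a simplified assumption for illustration, and the edge-case handling mirrors the footnote about sessions with no requests.

```python
# Simplified computation of inform precision/recall/F1, match rate, and task success
# (Sec. 4.2). The session summary format is an illustrative assumption.
def inform_scores(requested, informed):
    """requested / informed: sets of requested and provided slot names."""
    hit = len(requested & informed)
    precision = hit / len(informed) if informed else 1.0
    recall = hit / len(requested) if requested else 1.0   # nothing requested -> vacuously 1
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

def match_rate(booked_ok_per_domain):
    """booked_ok_per_domain: {domain: 1.0 if the booked entity meets all constraints else 0.0}."""
    if not booked_ok_per_domain:
        return 1.0                                        # no booking required in this session
    return sum(booked_ok_per_domain.values()) / len(booked_ok_per_domain)

def task_success(requested, informed, booked_ok_per_domain):
    _, recall, _ = inform_scores(requested, informed)
    return int(recall == 1.0 and match_rate(booked_ok_per_domain) == 1.0)

print(task_success({"address", "phone"}, {"address", "phone", "postcode"},
                   {"hotel": 1.0, "restaurant": 1.0}))    # -> 1
```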
{
"text": "Both the dialog policy \u03c0(a|s) and the value function V (s) are implemented with two hidden layer MLPs. For the reward estimator f (s, a), it is split into two networks g(s, a) and h(s) according to the proposed algorithm, where each is a one hidden layer MLP. The activation function is all Relu for MLPs. We use Adam as the optimization algorithm. The hyper-parameters of GDPL used in our experiments are shown in Table 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 415,
"end": 422,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Implementation Details",
"sec_num": "4.3"
},
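A minimal PyTorch rendering of the layout just described: the reward estimator is split into a one-hidden-layer g(s, a) and a one-hidden-layer h(s), combined as f_\u03c9(s_t, a_t, s_{t+1}) = g(s_t, a_t) + \u03b3 h(s_{t+1}) - h(s_t); the hidden size and \u03b3 below are illustrative, not the hyper-parameters from Table 2.

```python
# Sketch of the split reward estimator (Sec. 3.4 / Sec. 4.3):
# f(s_t, a_t, s_{t+1}) = g(s_t, a_t) + gamma * h(s_{t+1}) - h(s_t),
# where g and h are one-hidden-layer MLPs with ReLU. Sizes are illustrative.
import torch
import torch.nn as nn

class RewardEstimator(nn.Module):
    def __init__(self, state_dim, act_dim, hidden=64, gamma=0.99):
        super().__init__()
        self.gamma = gamma
        self.g = nn.Sequential(nn.Linear(state_dim + act_dim, hidden), nn.ReLU(),
                               nn.Linear(hidden, 1))
        self.h = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU(),
                               nn.Linear(hidden, 1))

    def forward(self, s, a, s_next):
        g = self.g(torch.cat([s, a], dim=-1)).squeeze(-1)
        return g + self.gamma * self.h(s_next).squeeze(-1) - self.h(s).squeeze(-1)

f = RewardEstimator(state_dim=16, act_dim=8)
s, a, s_next = torch.rand(4, 16), torch.rand(4, 8), torch.rand(4, 16)
print(f(s, a, s_next).shape)   # torch.Size([4])
```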
{
"text": "First of all, we introduce three baselines that use handcrafted reward functions. Following (Peng et al., 2017) , the agent receives a positive reward of 2L for success at the end of each dialog, or a negative reward of \u2212L for failure, where L is the maximum number of turns in each dialog and is set to 40 in our experiments. Furthermore, the agent receives a reward of \u22121 at each turn so that a shorter dialog is encouraged.",
"cite_spans": [
{
"start": 92,
"end": 111,
"text": "(Peng et al., 2017)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "4.4"
},
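For reference, the handcrafted baseline reward described above boils down to a few lines; the function signature is an illustrative assumption.

```python
# Handcrafted baseline reward (Sec. 4.4): -1 at every turn, plus 2L on success or -L on
# failure at the end of the dialog, with L = 40. The signature is illustrative.
MAX_TURNS = 40  # L

def handcrafted_reward(done, success):
    r = -1                                  # per-turn penalty encourages shorter sessions
    if done:
        r += 2 * MAX_TURNS if success else -MAX_TURNS
    return r

print(handcrafted_reward(done=False, success=False))  # -1
print(handcrafted_reward(done=True, success=True))    # 79
```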
{
"text": "Bayesian Committee Machine for dialog management based on Gaussian process, which decomposes the dialog policy into several domainspecific policies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GP-MBCM (Ga\u0161i\u0107 et al., 2015): Multi-domain",
"sec_num": null
},
{
"text": "ACER (Wang et al., 2017) : Actor-Critic RL policy with Experience Replay, a sample efficient learning algorithm that has low variance and scales well with large discrete action spaces.",
"cite_spans": [
{
"start": 5,
"end": 24,
"text": "(Wang et al., 2017)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "GP-MBCM (Ga\u0161i\u0107 et al., 2015): Multi-domain",
"sec_num": null
},
{
"text": "PPO (Schulman et al., 2017) : The same as the dialog policy in GDPL. Then, we also compare with another strong baseline that involves reward learning.",
"cite_spans": [
{
"start": 4,
"end": 27,
"text": "(Schulman et al., 2017)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "GP-MBCM (Ga\u0161i\u0107 et al., 2015): Multi-domain",
"sec_num": null
},
{
"text": "ALDM (Liu and Lane, 2018) : Adversarial Learning Dialog Model that learns dialog rewards with a Bi-LSTM encoding the dialog sequence as the discriminator to predict the task success. The reward is only estimated at the end of the session and is further used to optimize the dialog policy.",
"cite_spans": [
{
"start": 5,
"end": 25,
"text": "(Liu and Lane, 2018)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "GP-MBCM (Ga\u0161i\u0107 et al., 2015): Multi-domain",
"sec_num": null
},
{
"text": "For a fair comparison, each method is pretrained for 5 epoches by simple imitation learning on the state-action pairs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GP-MBCM (Ga\u0161i\u0107 et al., 2015): Multi-domain",
"sec_num": null
},
{
"text": "The performance of each approach that interacts with the agenda-based user simulator is shown in Table 3 . GDPL achieves extremely high performance in the task success on account of the substantial improvement in inform F1 and match rate over the baselines. Since the reward estimator of GDPL evaluates state-action pairs, it can always guide the dialog policy during the conversation thus leading the dialog policy to a successful strategy, which also indirectly demonstrates that the reward estimator has learned a reasonable reward at each dialog turn. Surprisingly, GDPL even outperforms human in completing the task, and its average dialog turns are close to those of humans, though GDPL is inferior in terms of match rate. Humans almost manage to make a reservation in each session, which contributes to high task success. However, it is also interesting to find that human have low inform F1, and that may explain why the task is not always completed successfully. Actually, there have high recall (86.75%) but low precision (54.43%) in human dialogs when answering the requested information. This is possibly because during data collection human users forget to ask for all required information of the task, as reported in (Su et al., 2016) .",
"cite_spans": [
{
"start": 1231,
"end": 1248,
"text": "(Su et al., 2016)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [
{
"start": 97,
"end": 104,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Main Results",
"sec_num": "5.1"
},
{
"text": "ACER and PPO obtain high performance in inform F1 and match rate as well. However, they obtain poor performance on the overall task success, even when they are provided with the designed reward that already knows the real user goals. This is because they only receive the reward about the success at the last turn and fail to understand what the user needs or detect the change of user goals.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Main Results",
"sec_num": "5.1"
},
{
"text": "Though ALDM obtains a lower inform F1 and match rate than PPO, it gets a slight improvement Table 4 : KL-divergence between different dialog policy and the human dialog KL(\u03c0 turns ||p turns ), where \u03c0 turns denotes the discrete distribution over the number of dialog turns of simulated sessions between the policy \u03c0 and the agenda-based user simulator, and p turns for the real human-human dialog.",
"cite_spans": [],
"ref_spans": [
{
"start": 92,
"end": 99,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Main Results",
"sec_num": "5.1"
},
{
"text": "on task success by encoding the entire session in its reward estimator. This demonstrates that learning effective rewards can help the policy to capture user intent shift, but the reward sparsity issue remains unsolved. This may explain why the gain is limited, and ALDM even has longer dialog turns than others. In conclusion, the dialog policy benefits from the guidance of the reward estimator per dialog turn. Moreover, GDPL can establish an efficient dialog thanks to the learned rewards that infer human behaviors. Table 4 shows that GDPL has the smallest KL-divergence to the human on the number of dialog turns over the baselines, which implies that GDPL behaves more like the human. It seems that all the approaches generate many more short dialogs (dialog turns less than 3) than human, but GDPL generates far less long dialogs (dialog turns larger than 11) than other baselines except GP-MBCM. Most of the long dialog sessions fail to reach a task success.",
"cite_spans": [],
"ref_spans": [
{
"start": 521,
"end": 528,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Main Results",
"sec_num": "5.1"
},
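The statistic in Table 4 is a discrete KL divergence between turn-count histograms; a small helper is shown below, where the add-epsilon smoothing (to keep the divergence finite when a bin is empty) is an assumption, not a detail given in the paper.

```python
# KL(pi_turns || p_turns) between turn-count histograms (cf. Table 4).
from collections import Counter
from math import log

def turn_kl(policy_turns, human_turns, eps=1e-8):
    """policy_turns / human_turns: lists with the number of turns of each session."""
    cp, ch = Counter(policy_turns), Counter(human_turns)
    n_p, n_h = len(policy_turns), len(human_turns)
    kl = 0.0
    for b in set(cp) | set(ch):
        p = cp[b] / n_p + eps    # pi_turns(b)
        q = ch[b] / n_h + eps    # p_turns(b)
        kl += p * log(p / q)
    return kl

print(round(turn_kl([4, 5, 5, 6, 8], [5, 6, 7, 7, 9, 12]), 3))
```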
{
"text": "We also observe that GP-MBCM tries to provide many dialog acts to avoid the negative penalty at each turn, which results in a very low inform F1 and short dialog turns. However, as explained in the introduction, a shorter dialog is not always the best. The dialog generated by GP-MBCM is too short to complete the task successfully. GP-MBCM is a typical case that focuses too much on the cost of the dialog due to the handcrafted reward function and fails to realize the true target that helps the users to accomplish their goals.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Main Results",
"sec_num": "5.1"
},
{
"text": "Ablation test is investigated in Table 3 . GDPLsess sums up all the rewards at each turn to the last turn and does not give any other reward before the dialog terminates, while GDPL-discr is to use the discriminator form as (Fu et al., 2018) in the reward estimator. It is perceptible that GDPL has better performance than GDPL-sess on the task success and is comparable regarding the dialog turns, so it can be concluded that GDPL does benefit from the guidance of the reward estimator at each dialog turn, and well addresses the reward sparsity issue. GDPL also outperforms GDPL-discr which means directly optimizing f \u03c9 improves the stability of AL.",
"cite_spans": [
{
"start": 224,
"end": 241,
"text": "(Fu et al., 2018)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 33,
"end": 40,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Ablation Study",
"sec_num": "5.2"
},
{
"text": "The performance that the agent interacts with VHUS is presented in it often gives unreasonable responses. Therefore, it is more laborious for the dialog policy to learn a proper strategy with the neural user simulator. All the methods cause a significant drop in performance when interacting with VHUS. ALDM even gets worse performance than ACER and PPO. In comparison, GDPL is still comparable with ACER and PPO, obtains a better match rate, and even achieves higher task success. This indicates that GDPL has learned a more robust reward function than ALDM. Fig. 2 shows the performance with the different number of domains in the user goal. In comparison with other approaches, GDPL is more scalable to the number of domains and achieves the best performance in all metrics. PPO suffers from the increasing number of the domain and has remarkable drops in all metrics. This demonstrates the limited capability for the handcrafted reward function to handle complex tasks across multiple domains in the dialog.",
"cite_spans": [],
"ref_spans": [
{
"start": 560,
"end": 566,
"text": "Fig. 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Interaction with Neural Simulator",
"sec_num": "5.3"
},
{
"text": "ALDM also has a serious performance degradation with 2 domains, but it is interesting to find that ALDM performs much better with 3 domains than with 2 domains. We further observe that ALDM performs well on the taxi domain, most of which appear in the dialogs with 3 domains. Taxi domain has the least slots for constraints and requests, which makes it easier to learn a reward about that domain, thus leading ALDM to a local optimal. In general, our reward estimator has higher effectiveness and scalability.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Goal across Multiple Domains",
"sec_num": "5.4"
},
{
"text": "For human evaluation, we hire Amazon Mechanical Turkers to state their preferences between GDPL and other methods. Because all the policies work at dialog act level, we generate the texts Table 7 : Return distribution of GDPL on each metric. The first row counts the dialog sessions that get the full score of the corresponding metric, and the results of the rest sessions are included in the second row.",
"cite_spans": [],
"ref_spans": [
{
"start": 188,
"end": 195,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Human Evaluation",
"sec_num": "5.5"
},
{
"text": "from dialog acts using hand-crafted templates to make the dialog readable. Given a certain user goal, Turkers first read two simulated dialog sessions, one from the interaction between GDPL and the agenda-based user simulator, the other from another baseline with the same simulator. Then, they are asked to judge which dialog is better (win, draw or lose) according to different subjective assessments. In addition to Task Success, we examine another two measures concerning Dialog Cost in the human evaluation: Efficiency such as dialog turn cost or response delay, and Quality such as redundant information or inappropriate reply (Walker et al., 1997) . Since the text is generated by templates for all policies, we do not evaluate language generation here (including grammar, diversity, etc.). We randomly sample 300 user goals from the test set, 100 each for one baseline, and each session pair is evaluated by 3 Turkers. Table 6 presents the results of human evaluation. GDPL outperforms three baselines significantly in all aspects (sign test, p-value < 0.01) except for the quality compared with ACER. Among all the baselines, GDPL obtains the most preference against PPO. Note that the difference between PPO and GDPL is only in the reward signal. This again demonstrates the advantage of reward learning in GDPL over the handcrafted reward function. The agreement on the superiority of GDPL between objective rating in Table 3 and human preference here also indicates that the automatic metrics used in our experiments is reliable to reflect user satisfaction to some extent.",
"cite_spans": [
{
"start": 633,
"end": 654,
"text": "(Walker et al., 1997)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [
{
"start": 927,
"end": 934,
"text": "Table 6",
"ref_id": "TABREF10"
},
{
"start": 1429,
"end": 1436,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Human Evaluation",
"sec_num": "5.5"
},
{
"text": "To provide an insight into the learned reward function itself, Table 7 provides a quantitative evaluation on the learned rewards by showing the distribution of the return R = t \u03b3 t r t according to each metric. It should be noted that some sessions do not have inform F1 because the user does not request any information, and similarly for match Reward U: I'm looking for a cheap Italian restaurant in the center of the city. S: There are 3 restaurants available for you. U: Can I get the address of one? S: The first result is Pizza Hut City Centre, address is Regent Street City Centre. U: Sounds good. Can you book me a table for 2 at 11:45 on Wednesday? S: You're all set. Your reference number is 7GAWK763. U: I also need a train on Thursday. S: Where are you departing from and heading for? U: From Bishops Stortford to Cambridge. S: Is there a specific time you would like to travel at? U: I would perfer to leave after 21:00. S: There is a train leaving at 21:29. Would you like me to make a reservation? U: Yes, I need the train ID, please. S: The ID is TR6336. Is there any other help you need? U: That will be all. Have a good day! Bye. S: Glad to be of help. Good day to you too. rate. It can be observed that the learned reward function has good interpretability in that the reward is positive when the dialog gets a full score on each metric, and negative otherwise. Fig. 3 gives an illustration of the learned reward at each dialog turn as a qualitative evaluation. In the beginning, the agent is unaware of the user goal thus it starts with a low reward. As the dialog proceeds, the agent has collected enough information from the user, then books the restaurant successfully and the reward remarkably increases at the third turn. The reward continues to grow stably after the topic shifts to the train domain. Again, the agent offers the correct train ID given sufficient information. Since the user has been informed all the requested information and the restaurant and train are both booked successfully, the user leaves the session with satisfaction at last, and the reward rises to the top as well. In brief, the learned reward can well reflect the current state of the dialog. It is also noticeable that the dialog policy manages to express multiple intents during the session.",
"cite_spans": [],
"ref_spans": [
{
"start": 63,
"end": 70,
"text": "Table 7",
"ref_id": null
},
{
"start": 1381,
"end": 1387,
"text": "Fig. 3",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Reward Evaluation",
"sec_num": "5.6"
},
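The per-session return in Table 7 is simply the discounted sum of the learned per-turn rewards; a short helper (with an arbitrary illustrative gamma and toy reward values) makes the quantity explicit.

```python
# Discounted return R = sum_t gamma^t * r_hat_t of the learned per-turn rewards (Table 7).
def session_return(rewards, gamma=0.99):
    return sum((gamma ** t) * r for t, r in enumerate(rewards))

print(session_return([0.2, 0.5, 1.3, 2.0]))  # a toy session whose reward rises over turns
```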
{
"text": "In this paper, we propose a guided policy learning method for joint reward estimation and policy optimization in multi-domain task-oriented dialog. The method is based on Adversarial Inverse Reinforcement Learning. Extensive experiments demonstrate the effectiveness of our proposed ap- 4 Refer to the appendix for the dialog acts. proach and that it can achieve higher task success and better user satisfaction than state-of-theart baselines.",
"cite_spans": [
{
"start": 287,
"end": 288,
"text": "4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "Though the action space A of the dialog policy is defined as the set of all dialog acts, it should be noted that GDPL can be equipped with NLU modules that identify the dialog acts expressed in utterance, and with NLG modules that generate utterances from dialog acts. In this way, we can construct the framework in an end-to-end scenario.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "The agenda-based user simulator is powerful to provide a simulated interaction for the dialog policy learning, however, it needs careful design and is lack of generalization. While training a neural user simulator is quite challenging due to the high diversity of user modeling and the difficulty of defining a proper reward function, GDPL may offer some solutions for multi-agent dialog policy learning where the user is regarded as another agent and trained with the system agent simultaneously. We leave this as the future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "We regard a user turn and a system turn as one dialog turn throughout the paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Refer to the appendix for user goal generation.3 If the user does not request any information in the session, this will just compute match rate, and similarly for inform recall.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was supported by the National Science Foundation of China (Grant No. 61936010 / 61876096) and the National Key R&D Program of China (Grant No. 2018YFC0830200). We would like to thank THUNUS NExT Joint-Lab for the support, anonymous reviewers for their valuable suggestions, and our lab mate Qi Zhu for helpful discussions. The code is available at https: //github.com/truthless11/GDPL.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgement",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Wasserstein generative adversarial networks",
"authors": [
{
"first": "Martin",
"middle": [],
"last": "Arjovsky",
"suffix": ""
},
{
"first": "Soumith",
"middle": [],
"last": "Chintala",
"suffix": ""
},
{
"first": "L\u00e9on",
"middle": [],
"last": "Bottou",
"suffix": ""
}
],
"year": 2017,
"venue": "34th International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "214--223",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martin Arjovsky, Soumith Chintala, and L\u00e9on Bottou. 2017. Wasserstein generative adversarial networks. In 34th International Conference on Machine Learn- ing, pages 214-223.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Bayesian inverse reinforcement learning for modeling conversational agents in a virtual environment",
"authors": [
{
"first": "Lina",
"middle": [
"M"
],
"last": "",
"suffix": ""
},
{
"first": "Rojas",
"middle": [],
"last": "Barahona",
"suffix": ""
},
{
"first": "Christophe",
"middle": [],
"last": "Cerisara",
"suffix": ""
}
],
"year": 2014,
"venue": "15th International Conference on Computational Linguistics and Intelligent Text Processing",
"volume": "",
"issue": "",
"pages": "503--514",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lina M Rojas Barahona and Christophe Cerisara. 2014. Bayesian inverse reinforcement learning for model- ing conversational agents in a virtual environment. In 15th International Conference on Computational Linguistics and Intelligent Text Processing, pages 503-514.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Learning the reward model of dialogue pomdps from data",
"authors": [
{
"first": "Abdeslam",
"middle": [],
"last": "Boularias",
"suffix": ""
},
{
"first": "Brahim",
"middle": [],
"last": "Hamid R Chinaei",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Chaib-Draa",
"suffix": ""
}
],
"year": 2010,
"venue": "24th Annual Conference on Neural Information Processing Systems, Workshop on Machine Learning for Assistive Technologies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abdeslam Boularias, Hamid R Chinaei, and Brahim Chaib-draa. 2010. Learning the reward model of dialogue pomdps from data. In 24th Annual Con- ference on Neural Information Processing Systems, Workshop on Machine Learning for Assistive Tech- nologies.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Multiwoz: A largescale multi-domain wizard-of-oz dataset for taskoriented dialogue modelling",
"authors": [
{
"first": "Pawe\u0142",
"middle": [],
"last": "Budzianowski",
"suffix": ""
},
{
"first": "Tsung-Hsien",
"middle": [],
"last": "Wen",
"suffix": ""
},
{
"first": "Bo-Hsiang",
"middle": [],
"last": "Tseng",
"suffix": ""
},
{
"first": "I\u00f1igo",
"middle": [],
"last": "Casanueva",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Ultes",
"suffix": ""
},
{
"first": "Milica",
"middle": [],
"last": "Osman Ramadan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ga\u0161i\u0107",
"suffix": ""
}
],
"year": 2018,
"venue": "Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "5016--5026",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pawe\u0142 Budzianowski, Tsung-Hsien Wen, Bo-Hsiang Tseng, I\u00f1igo Casanueva, Stefan Ultes, Osman Ra- madan, and Milica Ga\u0161i\u0107. 2018. Multiwoz: A large- scale multi-domain wizard-of-oz dataset for task- oriented dialogue modelling. In 2018 Conference on Empirical Methods in Natural Language Process- ing, pages 5016-5026.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Feudal reinforcement learning for dialogue management in large domains",
"authors": [
{
"first": "I\u00f1igo",
"middle": [],
"last": "Casanueva",
"suffix": ""
},
{
"first": "Pawe\u0142",
"middle": [],
"last": "Budzianowski",
"suffix": ""
},
{
"first": "Pei-Hao",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Ultes",
"suffix": ""
},
{
"first": "Lina M Rojas",
"middle": [],
"last": "Barahona",
"suffix": ""
},
{
"first": "Bo-Hsiang",
"middle": [],
"last": "Tseng",
"suffix": ""
},
{
"first": "Milica",
"middle": [],
"last": "Ga\u0161i\u0107",
"suffix": ""
}
],
"year": 2018,
"venue": "2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "714--719",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "I\u00f1igo Casanueva, Pawe\u0142 Budzianowski, Pei-Hao Su, Stefan Ultes, Lina M Rojas Barahona, Bo-Hsiang Tseng, and Milica Ga\u0161i\u0107. 2018. Feudal reinforce- ment learning for dialogue management in large do- mains. In 2018 Conference of the North Ameri- can Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 714-719.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Agent-aware dropout dqn for safe and efficient on-line dialogue policy learning",
"authors": [
{
"first": "Lu",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Xiang",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Cheng",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Runzhe",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2017,
"venue": "2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2454--2464",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lu Chen, Xiang Zhou, Cheng Chang, Runzhe Yang, and Kai Yu. 2017. Agent-aware dropout dqn for safe and efficient on-line dialogue policy learning. In 2017 Conference on Empirical Methods in Natu- ral Language Processing, pages 2454-2464.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Deep reinforcement learning for multi-domain dialogue systems",
"authors": [
{
"first": "Heriberto",
"middle": [],
"last": "Cuay\u00e1huitl",
"suffix": ""
},
{
"first": "Seunghak",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Ashley",
"middle": [],
"last": "Williamson",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Carse",
"suffix": ""
}
],
"year": 2016,
"venue": "30th Annual Conference on Neural Information Processing Systems, Deep Reinforcement Learning Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Heriberto Cuay\u00e1huitl, Seunghak Yu, Ashley Williamson, and Jacob Carse. 2016. Deep re- inforcement learning for multi-domain dialogue systems. In 30th Annual Conference on Neural In- formation Processing Systems, Deep Reinforcement Learning Workshop.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Towards end-to-end reinforcement learning of dialogue agents for information access",
"authors": [
{
"first": "Bhuwan",
"middle": [],
"last": "Dhingra",
"suffix": ""
},
{
"first": "Lihong",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Xiujun",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Yun-Nung",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Faisal",
"middle": [],
"last": "Ahmed",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Deng",
"suffix": ""
}
],
"year": 2017,
"venue": "55th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "484--495",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bhuwan Dhingra, Lihong Li, Xiujun Li, Jianfeng Gao, Yun-Nung Chen, Faisal Ahmed, and Li Deng. 2017. Towards end-to-end reinforcement learning of dia- logue agents for information access. In 55th Annual Meeting of the Association for Computational Lin- guistics, pages 484-495.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Policy networks with two-stage training for dialogue systems",
"authors": [
{
"first": "Mehdi",
"middle": [],
"last": "Fatemi",
"suffix": ""
},
{
"first": "Layla",
"middle": [
"El"
],
"last": "Asri",
"suffix": ""
},
{
"first": "Hannes",
"middle": [],
"last": "Schulz",
"suffix": ""
},
{
"first": "Jing",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Kaheer",
"middle": [],
"last": "Suleman",
"suffix": ""
}
],
"year": 2016,
"venue": "17th Annual Meeting of the Special Interest Group on Discourse and Dialogue",
"volume": "",
"issue": "",
"pages": "101--110",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mehdi Fatemi, Layla El Asri, Hannes Schulz, Jing He, and Kaheer Suleman. 2016. Policy networks with two-stage training for dialogue systems. In 17th An- nual Meeting of the Special Interest Group on Dis- course and Dialogue, pages 101-110.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "A connection between generative adversarial networks, inverse reinforcement learning, and energy-based models",
"authors": [
{
"first": "Chelsea",
"middle": [],
"last": "Finn",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Christiano",
"suffix": ""
},
{
"first": "Pieter",
"middle": [],
"last": "Abbeel",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Levine",
"suffix": ""
}
],
"year": 2016,
"venue": "30th Annual Conference on Neural Information Processing Systems, Workshop on Adversarial Training",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chelsea Finn, Paul Christiano, Pieter Abbeel, and Sergey Levine. 2016. A connection between gen- erative adversarial networks, inverse reinforcement learning, and energy-based models. In 30th Annual Conference on Neural Information Processing Sys- tems, Workshop on Adversarial Training.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Learning robust rewards with adversarial inverse reinforcement learning",
"authors": [
{
"first": "Justin",
"middle": [],
"last": "Fu",
"suffix": ""
},
{
"first": "Katie",
"middle": [],
"last": "Luo",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Levine",
"suffix": ""
}
],
"year": 2018,
"venue": "6th International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Justin Fu, Katie Luo, and Sergey Levine. 2018. Learn- ing robust rewards with adversarial inverse rein- forcement learning. In 6th International Conference on Learning Representations.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Policy committee for adaptation in multidomain spoken dialogue systems",
"authors": [
{
"first": "Milica",
"middle": [],
"last": "Ga\u0161i\u0107",
"suffix": ""
},
{
"first": "Nikola",
"middle": [],
"last": "Mrk\u0161i\u0107",
"suffix": ""
},
{
"first": "Pei-Hao",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Vandyke",
"suffix": ""
},
{
"first": "Tsung-Hsien",
"middle": [],
"last": "Wen",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Young",
"suffix": ""
}
],
"year": 2015,
"venue": "IEEE Workshop on Automatic Speech Recognition and Understanding",
"volume": "",
"issue": "",
"pages": "806--812",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Milica Ga\u0161i\u0107, Nikola Mrk\u0161i\u0107, Pei-Hao Su, David Vandyke, Tsung-Hsien Wen, and Steve Young. 2015. Policy committee for adaptation in multi- domain spoken dialogue systems. In 2015 IEEE Workshop on Automatic Speech Recognition and Understanding, pages 806-812.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "User modeling for task oriented dialogues",
"authors": [
{
"first": "Izzeddin",
"middle": [],
"last": "G\u00fcr",
"suffix": ""
},
{
"first": "Dilek",
"middle": [],
"last": "Hakkani-T\u00fcr",
"suffix": ""
},
{
"first": "Gokhan",
"middle": [],
"last": "T\u00fcr",
"suffix": ""
},
{
"first": "Pararth",
"middle": [],
"last": "Shah",
"suffix": ""
}
],
"year": 2018,
"venue": "IEEE Spoken Language Technology Workshop",
"volume": "",
"issue": "",
"pages": "900--906",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Izzeddin G\u00fcr, Dilek Hakkani-T\u00fcr, Gokhan T\u00fcr, and Pararth Shah. 2018. User modeling for task oriented dialogues. In 2018 IEEE Spoken Language Technol- ogy Workshop, pages 900-906.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Decoupling strategy and generation in negotiation dialogues",
"authors": [
{
"first": "He",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Derek",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Anusha",
"middle": [],
"last": "Balakrishnan",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2018,
"venue": "2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2333--2343",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "He He, Derek Chen, Anusha Balakrishnan, and Percy Liang. 2018. Decoupling strategy and generation in negotiation dialogues. In 2018 Conference on Em- pirical Methods in Natural Language Processing, pages 2333-2343.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Generative adversarial imitation learning",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Ho",
"suffix": ""
},
{
"first": "Stefano",
"middle": [],
"last": "Ermon",
"suffix": ""
}
],
"year": 2016,
"venue": "30th Annual Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "4565--4573",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan Ho and Stefano Ermon. 2016. Generative ad- versarial imitation learning. In 30th Annual Con- ference on Neural Information Processing Systems, pages 4565-4573.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Convlab: Multi-domain end-to-end dialog system platform",
"authors": [
{
"first": "Sungjin",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Qi",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Ryuichi",
"middle": [],
"last": "Takanobu",
"suffix": ""
},
{
"first": "Zheng",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yaoqin",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Xiang",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Jinchao",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Baolin",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Xiujun",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Minlie",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
}
],
"year": 2019,
"venue": "57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "64--69",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sungjin Lee, Qi Zhu, Ryuichi Takanobu, Zheng Zhang, Yaoqin Zhang, Xiang Li, Jinchao Li, Baolin Peng, Xiujun Li, Minlie Huang, and Jianfeng Gao. 2019. Convlab: Multi-domain end-to-end dialog system platform. In 57th Annual Meeting of the Associa- tion for Computational Linguistics, pages 64-69.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Sequicity: Simplifying task-oriented dialogue systems with single sequence-to-sequence architectures",
"authors": [
{
"first": "Wenqiang",
"middle": [],
"last": "Lei",
"suffix": ""
},
{
"first": "Xisen",
"middle": [],
"last": "Jin",
"suffix": ""
},
{
"first": "Min-Yen",
"middle": [],
"last": "Kan",
"suffix": ""
},
{
"first": "Zhaochun",
"middle": [],
"last": "Ren",
"suffix": ""
},
{
"first": "Xiangnan",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Dawei",
"middle": [],
"last": "Yin",
"suffix": ""
}
],
"year": 2018,
"venue": "56th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1437--1447",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wenqiang Lei, Xisen Jin, Min-Yen Kan, Zhaochun Ren, Xiangnan He, and Dawei Yin. 2018. Sequic- ity: Simplifying task-oriented dialogue systems with single sequence-to-sequence architectures. In 56th Annual Meeting of the Association for Computa- tional Linguistics, pages 1437-1447.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Temporal supervised learning for inferring a dialog policy from example conversations",
"authors": [
{
"first": "Lihong",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "He",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Jason",
"middle": [
"D"
],
"last": "Williams",
"suffix": ""
}
],
"year": 2014,
"venue": "IEEE Spoken Language Technology Workshop",
"volume": "",
"issue": "",
"pages": "312--317",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lihong Li, He He, and Jason D Williams. 2014. Tem- poral supervised learning for inferring a dialog pol- icy from example conversations. In 2014 IEEE Spo- ken Language Technology Workshop, pages 312- 317.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Adversarial learning of task-oriented neural dialog models",
"authors": [
{
"first": "Bing",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Ian",
"middle": [],
"last": "Lane",
"suffix": ""
}
],
"year": 2018,
"venue": "19th Annual Meeting of the Special Interest Group on Discourse and Dialogue",
"volume": "",
"issue": "",
"pages": "350--359",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bing Liu and Ian Lane. 2018. Adversarial learning of task-oriented neural dialog models. In 19th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 350-359.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Composite task-completion dialogue policy learning via hierarchical deep reinforcement learning",
"authors": [
{
"first": "Baolin",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Xiujun",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Lihong",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Asli",
"middle": [],
"last": "Celikyilmaz",
"suffix": ""
},
{
"first": "Sungjin",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kam-Fai",
"middle": [],
"last": "Wong",
"suffix": ""
}
],
"year": 2017,
"venue": "2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2231--2240",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Baolin Peng, Xiujun Li, Lihong Li, Jianfeng Gao, Asli Celikyilmaz, Sungjin Lee, and Kam-Fai Wong. 2017. Composite task-completion dialogue policy learning via hierarchical deep reinforcement learn- ing. In 2017 Conference on Empirical Methods in Natural Language Processing, pages 2231-2240.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Spoken dialogue management using probabilistic reasoning",
"authors": [
{
"first": "Nicholas",
"middle": [],
"last": "Roy",
"suffix": ""
},
{
"first": "Joelle",
"middle": [],
"last": "Pineau",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Thrun",
"suffix": ""
}
],
"year": 2000,
"venue": "38th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "93--100",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nicholas Roy, Joelle Pineau, and Sebastian Thrun. 2000. Spoken dialogue management using proba- bilistic reasoning. In 38th Annual Meeting of the As- sociation for Computational Linguistics, pages 93- 100.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Agenda-based user simulation for bootstrapping a pomdp dialogue system",
"authors": [
{
"first": "Jost",
"middle": [],
"last": "Schatzmann",
"suffix": ""
},
{
"first": "Blaise",
"middle": [],
"last": "Thomson",
"suffix": ""
},
{
"first": "Karl",
"middle": [],
"last": "Weilhammer",
"suffix": ""
},
{
"first": "Hui",
"middle": [],
"last": "Ye",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Young",
"suffix": ""
}
],
"year": 2007,
"venue": "2007 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "149--152",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jost Schatzmann, Blaise Thomson, Karl Weilhammer, Hui Ye, and Steve Young. 2007. Agenda-based user simulation for bootstrapping a pomdp dialogue system. In 2007 Conference of the North Ameri- can Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 149-152.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Proximal policy optimization algorithms",
"authors": [
{
"first": "John",
"middle": [],
"last": "Schulman",
"suffix": ""
},
{
"first": "Filip",
"middle": [],
"last": "Wolski",
"suffix": ""
},
{
"first": "Prafulla",
"middle": [],
"last": "Dhariwal",
"suffix": ""
},
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Oleg",
"middle": [],
"last": "Klimov",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1707.06347"
]
},
"num": null,
"urls": [],
"raw_text": "John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. 2017. Proxi- mal policy optimization algorithms. arXiv preprint arXiv:1707.06347.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Bootstrapping a neural conversational agent with dialogue self-play, crowdsourcing and on-line reinforcement learning",
"authors": [
{
"first": "Pararth",
"middle": [],
"last": "Shah",
"suffix": ""
},
{
"first": "Dilek",
"middle": [],
"last": "Hakkani-T\u00fcr",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Gokhan",
"middle": [],
"last": "T\u00fcr",
"suffix": ""
}
],
"year": 2018,
"venue": "2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "41--51",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pararth Shah, Dilek Hakkani-T\u00fcr, Bing Liu, and Gokhan T\u00fcr. 2018. Bootstrapping a neural conver- sational agent with dialogue self-play, crowdsourc- ing and on-line reinforcement learning. In 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 41-51.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Sentiment adaptive end-to-end dialog systems",
"authors": [
{
"first": "Weiyan",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "Zhou",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2018,
"venue": "56th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1509--1519",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Weiyan Shi and Zhou Yu. 2018. Sentiment adaptive end-to-end dialog systems. In 56th Annual Meet- ing of the Association for Computational Linguis- tics, pages 1509-1519.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Dialogue act modeling for automatic tagging and recognition of conversational speech",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "Stolcke",
"suffix": ""
},
{
"first": "Klaus",
"middle": [],
"last": "Ries",
"suffix": ""
},
{
"first": "Noah",
"middle": [],
"last": "Coccaro",
"suffix": ""
},
{
"first": "Elizabeth",
"middle": [],
"last": "Shriberg",
"suffix": ""
},
{
"first": "Rebecca",
"middle": [],
"last": "Bates",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Taylor",
"suffix": ""
},
{
"first": "Rachel",
"middle": [],
"last": "Martin",
"suffix": ""
},
{
"first": "Carol",
"middle": [],
"last": "Van Ess-Dykema",
"suffix": ""
},
{
"first": "Marie",
"middle": [],
"last": "Meteer",
"suffix": ""
}
],
"year": 2000,
"venue": "Computational linguistics",
"volume": "26",
"issue": "3",
"pages": "339--373",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andreas Stolcke, Klaus Ries, Noah Coccaro, Eliza- beth Shriberg, Rebecca Bates, Daniel Jurafsky, Paul Taylor, Rachel Martin, Carol Van Ess-Dykema, and Marie Meteer. 2000. Dialogue act modeling for au- tomatic tagging and recognition of conversational speech. Computational linguistics, 26(3):339-373.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "On-line active reward learning for policy optimisation in spoken dialogue systems",
"authors": [
{
"first": "Pei-Hao",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "Milica",
"middle": [],
"last": "Ga\u0161i\u0107",
"suffix": ""
},
{
"first": "Nikola",
"middle": [],
"last": "Mrk\u0161i\u0107",
"suffix": ""
},
{
"first": "Lina M Rojas",
"middle": [],
"last": "Barahona",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Ultes",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Vandyke",
"suffix": ""
},
{
"first": "Tsung-Hsien",
"middle": [],
"last": "Wen",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Young",
"suffix": ""
}
],
"year": 2016,
"venue": "54th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2431--2441",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pei-Hao Su, Milica Ga\u0161i\u0107, Nikola Mrk\u0161i\u0107, Lina M Ro- jas Barahona, Stefan Ultes, David Vandyke, Tsung- Hsien Wen, and Steve Young. 2016. On-line active reward learning for policy optimisation in spoken di- alogue systems. In 54th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 2431- 2441.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Discriminative deep dyna-q: Robust planning for dialogue policy learning",
"authors": [
{
"first": "Shang-Yu",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "Xiujun",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Jingjing",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yun-Nung",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2018,
"venue": "2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "3813--3823",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shang-Yu Su, Xiujun Li, Jianfeng Gao, Jingjing Liu, and Yun-Nung Chen. 2018. Discriminative deep dyna-q: Robust planning for dialogue policy learn- ing. In 2018 Conference on Empirical Methods in Natural Language Processing, pages 3813-3823.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Reward-balancing for statistical spoken dialogue systems using multi-objective reinforcement learning",
"authors": [
{
"first": "Stefan",
"middle": [],
"last": "Ultes",
"suffix": ""
},
{
"first": "Pawe\u0142",
"middle": [],
"last": "Budzianowski",
"suffix": ""
},
{
"first": "I\u00f1igo",
"middle": [],
"last": "Casanueva",
"suffix": ""
},
{
"first": "Nikola",
"middle": [],
"last": "Mrk\u0161i\u0107",
"suffix": ""
},
{
"first": "Lina M Rojas",
"middle": [],
"last": "Barahona",
"suffix": ""
},
{
"first": "Pei-Hao",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "Tsung-Hsien",
"middle": [],
"last": "Wen",
"suffix": ""
},
{
"first": "Milica",
"middle": [],
"last": "Ga\u0161i\u0107",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Young",
"suffix": ""
}
],
"year": 2017,
"venue": "18th Annual Meeting of the Special Interest Group on Discourse and Dialogue",
"volume": "",
"issue": "",
"pages": "65--70",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stefan Ultes, Pawe\u0142 Budzianowski, I\u00f1igo Casanueva, Nikola Mrk\u0161i\u0107, Lina M Rojas Barahona, Pei-Hao Su, Tsung-Hsien Wen, Milica Ga\u0161i\u0107, and Steve Young. 2017. Reward-balancing for statistical spo- ken dialogue systems using multi-objective rein- forcement learning. In 18th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 65-70.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Paradise: A framework for evaluating spoken dialogue agents",
"authors": [
{
"first": "Marilyn",
"middle": [
"A"
],
"last": "Walker",
"suffix": ""
},
{
"first": "Diane",
"middle": [
"J"
],
"last": "Litman",
"suffix": ""
},
{
"first": "Candace",
"middle": [
"A"
],
"last": "Kamm",
"suffix": ""
},
{
"first": "Alicia",
"middle": [],
"last": "Abella",
"suffix": ""
}
],
"year": 1997,
"venue": "35th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "271--280",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marilyn A Walker, Diane J Litman, Candace A Kamm, and Alicia Abella. 1997. Paradise: A framework for evaluating spoken dialogue agents. In 35th Annual Meeting of the Association for Computational Lin- guistics, pages 271-280.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Policy learning for domain selection in an extensible multidomain spoken dialogue system",
"authors": [
{
"first": "Zhuoran",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Hongliang",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Guanchun",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Tian",
"suffix": ""
},
{
"first": "Hua",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Haifeng",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2014,
"venue": "Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "57--67",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhuoran Wang, Hongliang Chen, Guanchun Wang, Hao Tian, Hua Wu, and Haifeng Wang. 2014. Policy learning for domain selection in an extensible multi- domain spoken dialogue system. In 2014 Confer- ence on Empirical Methods in Natural Language Processing, pages 57-67.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Sample efficient actor-critic with experience replay",
"authors": [
{
"first": "Ziyu",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Bapst",
"suffix": ""
},
{
"first": "Nicolas",
"middle": [],
"last": "Heess",
"suffix": ""
},
{
"first": "Volodymyr",
"middle": [],
"last": "Mnih",
"suffix": ""
},
{
"first": "Remi",
"middle": [],
"last": "Munos",
"suffix": ""
},
{
"first": "Koray",
"middle": [],
"last": "Kavukcuoglu",
"suffix": ""
},
{
"first": "Nando",
"middle": [],
"last": "De Freitas",
"suffix": ""
}
],
"year": 2017,
"venue": "5th International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ziyu Wang, Victor Bapst, Nicolas Heess, Volodymyr Mnih, Remi Munos, Koray Kavukcuoglu, and Nando de Freitas. 2017. Sample efficient actor-critic with experience replay. In 5th International Confer- ence on Learning Representations.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "The dialog state tracking challenge series: A review",
"authors": [
{
"first": "Jason",
"middle": [
"D"
],
"last": "Williams",
"suffix": ""
},
{
"first": "Antoine",
"middle": [],
"last": "Raux",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Henderson",
"suffix": ""
}
],
"year": 2016,
"venue": "Dialogue & Discourse",
"volume": "7",
"issue": "3",
"pages": "4--33",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason D Williams, Antoine Raux, and Matthew Hen- derson. 2016. The dialog state tracking challenge series: A review. Dialogue & Discourse, 7(3):4-33.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Hierarchical text generation and planning for strategic dialogue",
"authors": [
{
"first": "Denis",
"middle": [],
"last": "Yarats",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
}
],
"year": 2018,
"venue": "35th International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "5587--5595",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Denis Yarats and Mike Lewis. 2018. Hierarchical text generation and planning for strategic dialogue. In 35th International Conference on Machine Learn- ing, pages 5587-5595.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Memoryaugmented dialogue management for task-oriented dialogue systems",
"authors": [
{
"first": "Zheng",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Minlie",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Zhongzhou",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Feng",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Haiqing",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Xiaoyan",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2019,
"venue": "ACM Transactions on Information Systems",
"volume": "37",
"issue": "3",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zheng Zhang, Minlie Huang, Zhongzhou Zhao, Feng Ji, Haiqing Chen, and Xiaoyan Zhu. 2019. Memory- augmented dialogue management for task-oriented dialogue systems. ACM Transactions on Informa- tion Systems, 37(3):34.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Towards end-to-end learning for dialog state tracking and management using deep reinforcement learning",
"authors": [
{
"first": "Tiancheng",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Maxine",
"middle": [],
"last": "Eskenazi",
"suffix": ""
}
],
"year": 2016,
"venue": "17th Annual Meeting of the Special Interest Group on Discourse and Dialogue",
"volume": "",
"issue": "",
"pages": "1--10",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tiancheng Zhao and Maxine Eskenazi. 2016. Towards end-to-end learning for dialog state tracking and management using deep reinforcement learning. In 17th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 1-10.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Modeling interaction via the principle of maximum causal entropy",
"authors": [
{
"first": "Brian",
"middle": [
"D"
],
"last": "Ziebart",
"suffix": ""
},
{
"first": "J",
"middle": [
"Andrew"
],
"last": "Bagnell",
"suffix": ""
},
{
"first": "Anind",
"middle": [
"K"
],
"last": "Dey",
"suffix": ""
}
],
"year": 2010,
"venue": "27th International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "1255--1262",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brian D Ziebart, J Andrew Bagnell, and Anind K Dey. 2010. Modeling interaction via the principle of max- imum causal entropy. In 27th International Confer- ence on Machine Learning, pages 1255-1262.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Maximum entropy inverse reinforcement learning",
"authors": [
{
"first": "Brian",
"middle": [
"D"
],
"last": "Ziebart",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Maas",
"suffix": ""
},
{
"first": "J",
"middle": [
"Andrew"
],
"last": "Bagnell",
"suffix": ""
},
{
"first": "Anind",
"middle": [
"K"
],
"last": "Dey",
"suffix": ""
}
],
"year": 2008,
"venue": "23rd AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "1433--1438",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brian D Ziebart, Andrew Maas, J Andrew Bagnell, and Anind K Dey. 2008. Maximum entropy inverse re- inforcement learning. In 23rd AAAI Conference on Artificial Intelligence, pages 1433-1438.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "Reward visualization of a dialog session simulated between GDPL and the agenda-based user simulator that contains restaurant and train domains.4",
"uris": null,
"type_str": "figure"
},
"TABREF1": {
"content": "<table><tr><td>4</td><td>Update the reward estimator f by</td></tr><tr><td/><td>maximizing J f w.r.t. \u03c9 adversarially</td></tr><tr><td/><td>(Eq. 2)</td></tr><tr><td>5</td><td>Compute the estimated reward of each</td></tr><tr><td/><td>state-action pair in D \u03a0 ,</td></tr><tr><td/><td>r = f \u03c9 (s, a) \u2212 log \u03c0 \u03b8 (a|s)</td></tr><tr><td/><td>. 3</td></tr><tr><td/><td>and Eq. 4)</td></tr><tr><td colspan=\"2\">7 end</td></tr></table>",
"html": null,
"num": null,
"text": "Guided Dialog Policy Learning Require: Dialog corpus D, User simulator \u00b5 1 foreach training iteration do Sample human dialog sessions D H from D randomly Collect the dialog sessions D \u03a0 by executing the dialog policy \u03c0 and interacting with \u00b5, a u \u223c \u00b5(\u2022|s u ), a \u223c \u03c0(\u2022|s), where s is maintained by DST",
"type_str": "table"
},
"TABREF3": {
"content": "<table/>",
"html": null,
"num": null,
"text": "Hyper-parameter settings.",
"type_str": "table"
},
"TABREF5": {
"content": "<table><tr><td>: Performance of different dialog agents on</td></tr><tr><td>the multi-domain dialog corpus by interacting with the</td></tr><tr><td>agenda-based user simulator. All the results except</td></tr><tr><td>\"dialog turns\" are shown in percentage terms. Real</td></tr><tr><td>human-human performance computed from the test set</td></tr><tr><td>(i.e. the last row) serves as the upper bounds.</td></tr></table>",
"html": null,
"num": null,
"text": "",
"type_str": "table"
},
"TABREF8": {
"content": "<table/>",
"html": null,
"num": null,
"text": "Performance of different agents on the neural user simulator.",
"type_str": "table"
},
"TABREF9": {
"content": "<table><tr><td>. VHUS has poor</td></tr><tr><td>performance on multi-domain dialog. It some-</td></tr><tr><td>times becomes insensible about the dialog act so</td></tr></table>",
"html": null,
"num": null,
"text": "",
"type_str": "table"
},
"TABREF10": {
"content": "<table/>",
"html": null,
"num": null,
"text": "The count of human preference on dialog session pairs that GDPL wins (W), draws with (D) or loses to (L) other methods based on different criteria. One method wins the other if the majority prefer the former one.",
"type_str": "table"
}
}
}
}