{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:24:13.750637Z" }, "title": "Neural Language Generation for a Turkish Task-Oriented Dialogue System", "authors": [ { "first": "Artun", "middle": [ "Burak" ], "last": "Mecik", "suffix": "", "affiliation": { "laboratory": "", "institution": "MEF University", "location": { "settlement": "Istanbul", "country": "TURKEY" } }, "email": "mecika@mef.edu.tr" }, { "first": "Volkan", "middle": [], "last": "Ozer", "suffix": "", "affiliation": { "laboratory": "", "institution": "MEF University", "location": { "settlement": "Istanbul", "country": "TURKEY" } }, "email": "ozerv@mef.edu.tr" }, { "first": "Batuhan", "middle": [], "last": "Bilgin", "suffix": "", "affiliation": { "laboratory": "", "institution": "MEF University", "location": { "settlement": "Istanbul", "country": "TURKEY" } }, "email": "bilginba@mef.edu.tr" }, { "first": "Tuna", "middle": [], "last": "Cakar", "suffix": "", "affiliation": { "laboratory": "", "institution": "MEF University", "location": { "settlement": "Istanbul", "country": "TURKEY" } }, "email": "cakart@mef.edu.tr" }, { "first": "Seniz", "middle": [], "last": "Demir", "suffix": "", "affiliation": { "laboratory": "", "institution": "MEF University", "location": { "settlement": "Istanbul", "country": "TURKEY" } }, "email": "demirse@mef.edu.tr" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Rapidly growing language and speech-enabled technologies contribute to the development of task-oriented dialogue systems. The demand for better user engagement has been increasing at an accelerating pace and this brings new remarkable challenges including the generation of informative and natural system utterances. In this work, our ultimate goal is to develop a Turkish task-oriented dialogue system that enables users to navigate over a map in order to get informed about dining venues that best match their preferences and make reservations based on received recommendations. This paper presents the pipeline architecture of our dialogue system with a particular focus on the language generator. We utilize an open source framework for building the components of our system and develop a sequenceto-sequence (Seq2Seq) neural model for language generation. This pioneering work is the first that proposes the use of a neural generation model in a Turkish conversational system. Our evaluations suggest that Turkish neural generation from meaning representations given in the form of dialogue acts is effective, but still in need of further improvements.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "Rapidly growing language and speech-enabled technologies contribute to the development of task-oriented dialogue systems. The demand for better user engagement has been increasing at an accelerating pace and this brings new remarkable challenges including the generation of informative and natural system utterances. In this work, our ultimate goal is to develop a Turkish task-oriented dialogue system that enables users to navigate over a map in order to get informed about dining venues that best match their preferences and make reservations based on received recommendations. This paper presents the pipeline architecture of our dialogue system with a particular focus on the language generator. We utilize an open source framework for building the components of our system and develop a sequenceto-sequence (Seq2Seq) neural model for language generation. 
This pioneering work is the first that proposes the use of a neural generation model in a Turkish conversational system. Our evaluations suggest that Turkish neural generation from meaning representations given in the form of dialogue acts is effective, but still in need of further improvements.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "In the last decades, task-oriented dialogue systems with human-like communication capabilities have been widely deployed in applications with commercial value such as restaurant reservation (Henderson et al., 2019) and online shopping (Yan et al., 2017) . As opposed to open-domain dialogue systems without a clear dialogue goal, these systems present adequate intelligence in understanding user utterances and taking actions in response to accomplish constrained tasks. Task-oriented dialogue systems that can converse naturally with users through text or auditory conversation have received increasing attention of language and speech communities. Conventional taskoriented dialogue systems combine different modules in a pipeline architecture (Raux et al., 2005) : i) language understanding (Gupta et al., 2019) , ii) dialogue state tracking (Lee and Stent, 2016) , iii) dialogue policy (English and Heeman, 2005) , and iv) natural language generation (Zhu et al., 2019) . These modules are independently trained and optimized with separate objective functions. Pipeline architectures often suffer from cascaded error propagation and a change in the output representation of a previous module also affects subsequent modules. Recent end-to-end task-oriented dialogue systems (Liu and Lane, 2018; Wen et al., 2017) mitigate these problems by training a single model directly from data without distinguishing individual modules and optimizing a single objective function. Although end-to-end systems enable multi-domain adaptation by minimizing laborious feature engineering, they unfortunately might generate generic utterances or utterances that are repetitive.", "cite_spans": [ { "start": 190, "end": 214, "text": "(Henderson et al., 2019)", "ref_id": "BIBREF16" }, { "start": 235, "end": 253, "text": "(Yan et al., 2017)", "ref_id": "BIBREF52" }, { "start": 746, "end": 765, "text": "(Raux et al., 2005)", "ref_id": "BIBREF36" }, { "start": 794, "end": 814, "text": "(Gupta et al., 2019)", "ref_id": "BIBREF14" }, { "start": 845, "end": 866, "text": "(Lee and Stent, 2016)", "ref_id": "BIBREF20" }, { "start": 890, "end": 916, "text": "(English and Heeman, 2005)", "ref_id": "BIBREF12" }, { "start": 955, "end": 973, "text": "(Zhu et al., 2019)", "ref_id": "BIBREF56" }, { "start": 1278, "end": 1298, "text": "(Liu and Lane, 2018;", "ref_id": "BIBREF23" }, { "start": 1299, "end": 1316, "text": "Wen et al., 2017)", "ref_id": "BIBREF50" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "End users face utterances generated by dialogue systems and their satisfaction heavily depends on the quality and semantic coherence of these productions. The natural language generation module is mainly responsible for producing informative and fluent utterances that engage users and improve their experiences. The input to this module is often a dialog act given in a semantic form that either conveys or requests information as directed by the dialogue policy (Zhao and Kawahara, 2019) . 
A dialogue act is a meaning representation of an action (i.e., system or user) that can be realized using one or more sentences. Depending on the action type (e.g., greeting, inform, or confirm), dialog acts contain one or more slots (attributes) of different types (e.g., numeric or string) to fulfill the meaning (e.g., inform(name=\"Green Food\",phone=415986223)).", "cite_spans": [ { "start": 464, "end": 489, "text": "(Zhao and Kawahara, 2019)", "ref_id": "BIBREF54" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Early research methods of language generation for task-oriented dialogue systems include manually-crafted rules and templates. This kind of generation is adequate to cover all information captured in a dialog act, but it lacks preferred flexibility, requires heavy manual effort, and necessitates domain expertise. Although these issues hinder scalability across different domains, they can be addressed by statistical generation approaches which can learn human writing patterns directly from annotated data. Recently, neural generation models have become a common approach for joint learning of sentence planning to cover all selected information and surface realization to incorporate that content in a fluent text. However, it is not straightforward to find large amounts of domainspecific labeled data (real conversational data) for training statistical or neural generation models, and it is yet infeasible for some languages including the morphologically rich language Turkish.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this study, we describe our efforts towards building a task-oriented dialogue system for Turkish that enables users to navigate over a map and reach descriptive information of dining venues based on their preferences until a venue is booked for reservation. The system, implemented as a mobile application, interacts with users through an interface where textual and visual modalities are employed. In the current version, all venues that match user preferences are listed on a map and the user is presented with a single sentence description of any venue selected on that map. Although our goal is to enhance this work to a venue recommendation and reservation system where more sophisticated human-like conversations can take place, the system currently engages in a limited dialogue with end users mainly due to the lack of labeled conversational corpora for Turkish in this domain. We use the RASA open-source machinelearning based framework (Bocklisch et al., 2017) to develop natural language understanding and dialogue management components of the system. We also leverage knowledge obtained from a humanannotated English conversational data in restaurant reservation domain to imitate humans while building our dialogue policies.", "cite_spans": [ { "start": 949, "end": 973, "text": "(Bocklisch et al., 2017)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, our focus is on the language generation component of the system which is implemented as a sequence-to-sequence (Seq2Seq) neural model. To our best knowledge, this work is the first that utilizes a neural generation model for producing task-oriented Turkish utterances. 
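As a concrete reference point for the dialog acts our generator consumes, the short sketch below shows one possible in-memory representation of a dialogue act and its rendering into the inform(...) notation used above; the class and field names are illustrative assumptions, not the system's actual implementation.

```python
from dataclasses import dataclass
from typing import Any, Dict

@dataclass
class DialogAct:
    """Illustrative container for a dialog act: an action type plus slot-value pairs."""
    act_type: str          # e.g., "inform", "request", "confirm"
    slots: Dict[str, Any]  # slot name -> slot value

    def render(self) -> str:
        # Render in the act_type(slot=value, ...) notation used in the text.
        args = ", ".join(
            f'{k}="{v}"' if isinstance(v, str) else f"{k}={v}"
            for k, v in self.slots.items()
        )
        return f"{self.act_type}({args})"

act = DialogAct("inform", {"name": "Green Food", "phone": 415986223})
print(act.render())  # inform(name="Green Food", phone=415986223)
```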
The literature does not report any study to show how effective neural models are in generating Turkish sentences from dialog acts in terms of coverage and correspondence to human generated texts. In this study, we report the system performance using automatic evaluation metrics over our corpus of 4200 pairs of dialog acts and reference sentences collected via crowdsourcing. In our experiments, we also assess the impact of delexicalization on the quality of generated utterances where verbalizations of rare words in dialogue acts are targeted.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Previous research on pipelined dialogue systems has focused on improving the performance of individual components in the architecture. Rule-based parsing methods (Denis et al., 2006) , multiclass classification algorithms such as SVMs (Sarikaya et al., 2016) , and deep convex networks were shown to be effective in detecting user's intent. Promising results were also achieved with the use of recurrent (Yao et al., 2013) and recently hierarchical (Zhao and Kawahara, 2019) neural networks. Mapping textual spans of an utterance to slots in a dialogue act was often considered as a sequence tagging problem and quite good results were achieved with maximum entropy models such as conditional random fields (CRFs) and stochastic finite state transducers (Raymond and Riccardi, 2007) . Deep belief networks (Deoras and Sarikaya, 2013) , convex networks , and bidirectional long short-term memory networks (Jaech et al., 2016) were later shown to outperform CRF-based approaches. A variety of different approaches have emerged for dialogue state tracking. A tracker that benefits from domain independent rules and basic probability (Wang and Lemon, 2013) , and a CRF-based discriminative approach (Ren et al., 2013) achieved comparable performances to machine-learning based methods. The effectiveness of neural models was also exploited for state tracking task. One pioneering work combined an RNN model with delexicalized feature representations in order to generalize it to unseen slots and values, and with an online unsupervised adaptation approach to exploit unlabeled data (Henderson et al., 2014 ). An RNN model was later used to train a state tracker capable of working across different domains (Mrk\u0161i\u0107 et al., 2015) . Recently, dialogue state tracking was tackled as a reading comprehension problem and addressed using an attention-based neural network (Gao et al., 2019) . Reinforcement learning was heavily utilized for learning dialogue policies (Cuay\u00e1huitl, 2017; Shah et al., 2016; Weisz et al., 2018) . 
Recent experiments suggested that utilizing pre-trained language models in task-oriented dialogue components is a promising approach (Wu et al., 2020) .", "cite_spans": [ { "start": 162, "end": 182, "text": "(Denis et al., 2006)", "ref_id": "BIBREF8" }, { "start": 235, "end": 258, "text": "(Sarikaya et al., 2016)", "ref_id": "BIBREF40" }, { "start": 404, "end": 422, "text": "(Yao et al., 2013)", "ref_id": "BIBREF53" }, { "start": 449, "end": 474, "text": "(Zhao and Kawahara, 2019)", "ref_id": "BIBREF54" }, { "start": 754, "end": 782, "text": "(Raymond and Riccardi, 2007)", "ref_id": "BIBREF37" }, { "start": 806, "end": 833, "text": "(Deoras and Sarikaya, 2013)", "ref_id": "BIBREF9" }, { "start": 904, "end": 924, "text": "(Jaech et al., 2016)", "ref_id": "BIBREF18" }, { "start": 1130, "end": 1152, "text": "(Wang and Lemon, 2013)", "ref_id": "BIBREF46" }, { "start": 1195, "end": 1213, "text": "(Ren et al., 2013)", "ref_id": "BIBREF38" }, { "start": 1578, "end": 1601, "text": "(Henderson et al., 2014", "ref_id": "BIBREF15" }, { "start": 1702, "end": 1723, "text": "(Mrk\u0161i\u0107 et al., 2015)", "ref_id": "BIBREF28" }, { "start": 1861, "end": 1879, "text": "(Gao et al., 2019)", "ref_id": "BIBREF13" }, { "start": 1957, "end": 1975, "text": "(Cuay\u00e1huitl, 2017;", "ref_id": null }, { "start": 1976, "end": 1994, "text": "Shah et al., 2016;", "ref_id": "BIBREF42" }, { "start": 1995, "end": 2014, "text": "Weisz et al., 2018)", "ref_id": "BIBREF47" }, { "start": 2150, "end": 2167, "text": "(Wu et al., 2020)", "ref_id": "BIBREF51" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Although many generation methods have been proposed so far, they can be broadly classified into three types. Rule or template based approaches require significant expertise and human effort, and the number of manually constructed templates is limited (Jur\u010d\u00ed\u010dek et al., 2014; Mitchell et al., 2014) . On the other hand, stochastic or statistical approaches enable less monotonic generation by training a generator from data directly (Mairesse et al., 2010; Mairesse and Walker, 2011; Oh and Rudnicky, 2000) . Recent developments in neural networks have enabled generation to be handled as a transformation from meaning representations to system responses via a single model. In a work that simulates the few-shot learning setting with scarce annotated data, a multilayer transformer model was trained for generating responses and generalization to new domains was achieved by utilizing pretrained language models (Peng et al., 2020) . The work of Wen et al. (Wen et al., 2015a) jointly utilized recurrent and convolutional neural networks for realizing the content of a dialog act, and the RNN-based generator that encodes one-hot representation of the dialog act as its initial state was trained with semantically unaligned data. Semantically controlled long short-term memory was also explored for training a generator from unaligned data where sentence planning and surface realization are jointly optimized (Wen et al., 2015b) . A recent work employed a Seq2Seq generator with attention using GRU cells to capture the semantic content of dialog acts and used a language model to achieve naturalness in generated utterances (Zhu et al., 2019 ). 
Our work is most similar to the work of Du\u0161ek and Jur\u010d\u00ed\u010dek (Du\u0161ek and Jur\u010d\u00ed\u010dek, 2016) but their dialog act representation formed by concatenating triples of act type, slot name, and slot value differs from our input representation.", "cite_spans": [ { "start": 251, "end": 274, "text": "(Jur\u010d\u00ed\u010dek et al., 2014;", "ref_id": "BIBREF19" }, { "start": 275, "end": 297, "text": "Mitchell et al., 2014)", "ref_id": "BIBREF27" }, { "start": 432, "end": 455, "text": "(Mairesse et al., 2010;", "ref_id": "BIBREF25" }, { "start": 456, "end": 482, "text": "Mairesse and Walker, 2011;", "ref_id": "BIBREF26" }, { "start": 483, "end": 505, "text": "Oh and Rudnicky, 2000)", "ref_id": "BIBREF32" }, { "start": 912, "end": 931, "text": "(Peng et al., 2020)", "ref_id": "BIBREF34" }, { "start": 1410, "end": 1429, "text": "(Wen et al., 2015b)", "ref_id": "BIBREF49" }, { "start": 1626, "end": 1643, "text": "(Zhu et al., 2019", "ref_id": "BIBREF56" }, { "start": 1687, "end": 1732, "text": "Du\u0161ek and Jur\u010d\u00ed\u010dek (Du\u0161ek and Jur\u010d\u00ed\u010dek, 2016)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Turkish, a morphologically rich language with free-constituent order, has been in focus of language processing research for many years (Oflazer and Saraclar, 2018) . However, Turkish language generation has been relatively less-studied up to now. Scarcity of available data and lack of annotations are some of the obstacles to developing robust systems with high performances. Previous generation literature is restricted to some well-known problems of surface form generation (Cicekli and Korkmaz, 1998; Ayan, 2000) and text summarization (Nuzumlal\u0131 and\u00d6zg\u00fcr, 2014; \u00c7 agdas Can Birant et al., 2016). Recently, template-based language generation was employed in a venue recommendation system (Elifoglu and G\u00fcng\u00f6r, 2018) where a distinct template for each venue property is used. To our best knowledge, Turkish text generation from structured data has not been yet exploited. Moreover, there is no prior knowledge as to whether the use of neural models in generating utterances from dialog acts is effective or not, especially in domains with a very limited amount of annotated data. Our work reports first empirical evaluations that measure the usability and effectiveness of a neural model in this task.", "cite_spans": [ { "start": 135, "end": 163, "text": "(Oflazer and Saraclar, 2018)", "ref_id": "BIBREF31" }, { "start": 477, "end": 504, "text": "(Cicekli and Korkmaz, 1998;", "ref_id": "BIBREF4" }, { "start": 505, "end": 516, "text": "Ayan, 2000)", "ref_id": "BIBREF0" }, { "start": 692, "end": 719, "text": "(Elifoglu and G\u00fcng\u00f6r, 2018)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Our task-oriented dialogue system is implemented as a mobile application and exhibits the traditional pipeline architecture. A user utterance is processed by three downstream components before a dialog act is transferred to the language generation component. 
In the rest of this section, the mobile application, and the language understanding and dialogue management components are described in detail.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "System Architecture", "sec_num": "3" }, { "text": "Users interact with our mobile application through an interface where they rely on menus that display listings of choices for different properties of dining venues. At any time while using the application, users can search for venues exhibiting different properties by choosing any of these alternatives. As shown in Figure 1 -a, a user is initially asked to specify venue properties being sought (i.e., its location, customer rating, price range, and type of served food). All venues that exhibit these properties are listed on a map of the selected region (Figure 1-b ) and the user can navigate between these venues. If the user selects a listed venue on the map, a single sentence description of the venue along with some of the matching properties are presented to the user in a separate window at the bottom of the screen. That description is produced by our neural generator using the meaning representation passed from the system. On this map view, the user can also update venue properties from the menu given on the upper left corner (the red icon) and start a completely new search (Figure 1-c) . Although it is not fully implemented yet, the user will engage in a dialogue with the system over this map view (using the blue icon on the upper right corner), and get recommendations/make reservations in the future.", "cite_spans": [], "ref_spans": [ { "start": 317, "end": 325, "text": "Figure 1", "ref_id": "FIGREF0" }, { "start": 558, "end": 569, "text": "(Figure 1-b", "ref_id": "FIGREF0" }, { "start": 1093, "end": 1105, "text": "(Figure 1-c)", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Mobile Application", "sec_num": "3.1" }, { "text": "This component identifies user's intent from a given utterance by classifying it into predefined classes. Moreover, it extracts information related to that intent and uses them to fill corresponding slots. In the current implementation, we use the RASA NLU framework (Bocklisch et al., 2017) for building our language understanding component. The RASA NLU combines embeddings of word tokens that appear in a sentence in order to obtain a representation of the sentence. An SVM classifier trained on these sentences then classifies a given utterance into one or more intents. For entity extraction, the framework offers different extractors and we train a CRF extractor using our custom entities. To train a Turkish intent classifier and an entity extractor, we use our dataset and some manually translated examples from an English dataset in the restaurant domain (Novikova et al., 2017) . For each sentence in our collection, we manually determine the intent and annotate text spans that correspond to different entities with appropriate tags. ", "cite_spans": [ { "start": 267, "end": 291, "text": "(Bocklisch et al., 2017)", "ref_id": "BIBREF1" }, { "start": 864, "end": 887, "text": "(Novikova et al., 2017)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Natural Language Understanding", "sec_num": "3.2" }, { "text": "This component maintains the current dialogue state by keeping user's intents and a dialogue history (dialogue state tracker). Its main responsibility is to estimate the user's goal at each turn of the dialogue. 
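The hand-off between the understanding component described above and this state tracker can be pictured as follows; the sketch is illustrative only and does not reproduce the exact RASA output format.

```python
# Illustrative shape of the parse result handed from language understanding to
# the dialogue state tracker; field names loosely mirror a RASA NLU parse but
# are assumptions, not the framework's exact output.
parsed = {
    "text": "Kadıköy'de ucuz bir restoran arıyorum",
    "intent": {"name": "inform", "confidence": 0.91},
    "entities": [
        {"entity": "region", "value": "Kadıköy", "start": 0, "end": 7},
        {"entity": "price_range", "value": "ucuz", "start": 11, "end": 15},
    ],
}

def update_state(state: dict, parse: dict) -> dict:
    """Fold extracted entities into the tracked slot values for the current goal."""
    for entity in parse["entities"]:
        state[entity["entity"]] = entity["value"]
    return state

print(update_state({}, parsed))  # {'region': 'Kadıköy', 'price_range': 'ucuz'}
```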
The dialogue history is treated as an abstraction of previous dialogue turns. Moreover, it behaves as the decision maker of the whole system and takes appropriate actions according to a policy by considering the current dialogue state. Due to the lack of available Turkish dialogue conversations that we can use for training a dialogue management component, we first analyze the E2E dialogue challenge dataset that consists of English conversations in the restaurant reservation domain. By processing the provided dialogues and manually filtering intents and entities that are out of our scope, we then compile training data for our dialogue manager. Since our focus here is to mimic natural conversations rather than to model the language, this data collection approach enables us to train our language-independent dialogue manager with 2800 different representations of actual conversations of varying length. Using an RNN-based approach, the RASA Core dialogue engine learns policies from our training data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dialogue Management", "sec_num": "3.3" }, { "text": "We develop a sequence-to-sequence (Seq2Seq) model (Sha et al., 2018) as our generation component. The model takes a dialog act as input and produces a single Turkish sentence that preferably conveys all the information expressed in that act. Since there is no available data that we can use to train the model, we first conduct human subject experiments in order to collect a small-sized corpus as our starting point.", "cite_spans": [ { "start": 50, "end": 67, "text": "Sha et al., 2018)", "ref_id": "BIBREF41" } ], "ref_spans": [], "eq_spans": [], "section": "Neural Turkish Generation Component", "sec_num": "4" }, { "text": "A dialog act is a logical representation of meaning that might be expressed using one or more sentences. Each dialog act contains an action type (i.e., what is intended to be conveyed by the system or user) and a set of slot-value pairs associated with that action (e.g., the properties of a venue in focus). Since our goal is to engage in dialogue with end users, restricting the system to only describing properties of a venue is not adequate. Moreover, the number of slots that might be associated with an action type is too large to be listed in a single sentence of moderate complexity. In order to determine the action types and slots to be utilized, we explore similar well-studied datasets compiled for other languages (SFRest (Wen et al., 2015b), E2E (Novikova et al., 2017) , Bagel (Mairesse et al., 2010) ). Nine different action types are incorporated into the current version, and these action types and slots will be extended in the future:", "cite_spans": [ { "start": 773, "end": 796, "text": "(Novikova et al., 2017)", "ref_id": "BIBREF29" }, { "start": 805, "end": 828, "text": "(Mairesse et al., 2010)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Corpus Collection", "sec_num": "4.1" }, { "text": "• greeting: Greet the user", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpus Collection", "sec_num": "4.1" }, { "text": "We conduct a data collection study with 90 participants where each participant is presented with 45-50 dialog acts of different action types. The participants are asked to express a given dialog act in a single sentence and to use all slots given in the act. Moreover, they are told not to rely on their commonsense knowledge or use any information that might be inferred from the given ones. In the study, greeting and goodbye actions are not used.
Each dialog act contains two to four randomly chosen slots in addition to the name of the venue in focus. It is guaranteed that a participant receives different sets of slots for the same action type even if the number of slots is the same. We use both real and artificial data in order to fill in slot values. Information about a small set of dining venues is obtained from an online restaurant search service and that information is augmented with artificial information in order to expand the collection. For instance, new dialog acts are produced by adding new neighbour restaurants to existing dialog acts without any neighbourhood information. Each dialog act is presented to four different participants. At the end, 4200 dialog act and reference sentence pairs are collected. Figure 3 shows two dialog acts with three reference sentences from our collection. (type='inform', name='Lezzet Mekanı', customer_satisfaction='Yüksek', cuisines='Tatlı, Dünya Mutfağı Yemekleri', price_range='Pahalı', region='Caddebostan, İstanbul') i) Lezzet Mekanı, İstanbul Caddebostan'da, tatlı ve dünya mutfağı yemekleri servis eden pahalı fakat lezzetli yemekleriyle müşteri memnuniyetini üst seviyede tutan bir mekandır.", "cite_spans": [ { "start": 1344, "end": 1507, "text": "(type='inform', name='Lezzet Mekanı', customer_satisfaction='Yüksek', cuisines='Tatlı, Dünya Mutfağı Yemekleri', price_range='Pahalı', region='Caddebostan, İstanbul')", "ref_id": null } ], "ref_spans": [ { "start": 1261, "end": 1269, "text": "Figure 3", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Corpus Collection", "sec_num": "4.1" }, { "text": "(Lezzet Mekanı is a place in Caddebostan, Istanbul that serves desserts and world cuisine dishes and keeps customer satisfaction at the highest level with its expensive but delicious dishes.)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpus Collection", "sec_num": "4.1" }, { "text": "ii) Dünya mutfağına ait yemekler ve tatlılar bulabileceğiniz, müşteri memnuniyeti konusunda çok başarılı olmasına rağmen fiyatları pahalı olan Lezzet Mekanı, İstanbul Caddebostan'da bulunmaktadır. (Lezzet Mekanı, where you can find desserts and dishes from the world cuisine, is very successful in customer satisfaction though it is expensive, and is located in Caddebostan, Istanbul.) iii) İstanbul Caddebostan'da tatlılar ile dünya mutfağına ait yemekler yenebilecek Lezzet Mekanı, pahalı fiyata yemekler sunan ve müşterilerin çok memnun olduğu bir restorandır. ", "cite_spans": [ { "start": 354, "end": 377, "text": "Caddebostan, Istanbul.)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Corpus Collection", "sec_num": "4.1" }, { "text": "A dialog act is represented as a sequence of field-value pairs (e.g., field_1 = value_1) where the first pair corresponds to the action type and the rest are slot-value pairs. The value of a field might contain a single word or a sequence of words. The field name (f_x) and its position in the value sequence (p_x) are used to represent each word (w_x). To represent the position of a word in a sequence, its position from the beginning of the sequence (p_x^+) and from the end of the sequence (p_x^-) are used.
Therefore, a word that appears in a field value is represented as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input Representation", "sec_num": "4.2" }, { "text": "R_x = (f_x, p_x^+, p_x^-).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input Representation", "sec_num": "4.2" }, { "text": "All punctuation characters in field values are represented similarly. Table 2 shows the representations of all words in the dialog act (type = 'inform only', name = 'Denizaltı Restaurant', cuisine = 'Kafeterya Ürünleri, Türk Yemekleri', region = 'Urla, İzmir', near = 'VVapiano'). In this example, the value of the name field consists of two words, namely Denizaltı and Restaurant. The word Denizaltı is the first word starting from the beginning of the value sequence and the second word from the end of the sequence. Therefore, its representation is (name,1,2). Each word in a field value (w_x) and its representation (R_x) are encoded into four embeddings that are then concatenated to form the final input embedding of the encoder (i_e = w_e ⊕ f_e ⊕ p_e^+ ⊕ p_e^-). A reference sentence already has a sequence of word tokens and thus each token is encoded into a word embedding only:", "cite_spans": [], "ref_spans": [ { "start": 70, "end": 77, "text": "Table 2", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Input Representation", "sec_num": "4.2" }, { "text": "• Word embedding: Vector representation of the word (w_e)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input Representation", "sec_num": "4.2" }, { "text": "• Field embedding: Vector representation of the field name (f_e)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input Representation", "sec_num": "4.2" }, { "text": "• Beginning position embedding: Vector representation of the position from the beginning of the field value (p_e^+)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input Representation", "sec_num": "4.2" }, { "text": "• End position embedding: Vector representation of the position from the end of the field value (p_e^-)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input Representation", "sec_num": "4.2" }, { "text": "To capture the temporal processing and feedback requirements of sequence learning, we approach the generation problem with a recurrent neural network (RNN) based solution. RNN models are of great utility in computing the current output with respect to previous computations kept in hidden states, and their processing power makes them widely applicable to speech recognition (Hsu et al., 2016; Prabhavalkar et al., 2017) and language processing studies (Socher et al., 2011; Daza and Frank, 2018). In our work, dialog acts and reference sentences are variable-length sequences. Thus, we formulate our generation task as sequence-to-sequence (Seq2Seq) learning (Sutskever et al., 2014), a type of RNN with an encoder-decoder structure. Our model uses a long short-term memory (LSTM) based RNN to encode the input sequence into hidden states. A second LSTM-based RNN is used to decode the hidden states and generate the output sequence.
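Before turning to the LSTM equations, the sketch below illustrates how a dialog act such as the Table 2 example is unrolled into the per-word records (w_x, f_x, p_x^+, p_x^-) that feed the encoder; it is a simplified illustration under the assumptions noted in the comments, not our training code.

```python
# A simplified unrolling of a dialog act into the encoder records described in
# Section 4.2: each word w_x is paired with its field name f_x and its positions
# p_x+ (from the beginning) and p_x- (from the end) of the field value.
def unroll(fields: dict):
    # fields: the act type first (as the 'type' field), then the slot-value pairs,
    # mirroring the Table 2 example. A real tokenizer would also split punctuation
    # into separate tokens, which are represented the same way.
    records = []
    for field, value in fields.items():
        words = str(value).split()
        for i, word in enumerate(words):
            records.append((word, field, i + 1, len(words) - i))  # (w_x, f_x, p_x+, p_x-)
    return records

act = {"type": "inform only", "name": "Denizaltı Restaurant", "region": "Urla, İzmir"}
for record in unroll(act):
    print(record)
# ('Denizaltı', 'name', 1, 2) and ('Restaurant', 'name', 2, 1), as in Table 2.
# Each component is then looked up in its own embedding table and concatenated:
# i_e = w_e ⊕ f_e ⊕ p_e^+ ⊕ p_e^-.
```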
Given that x_t and h_t are the input and the hidden state at time step t; i, f, and o are the input, forget, and output gates; and C and C̃ are the cell and candidate cell states, the computations used with LSTM units are as follows:", "cite_spans": [ { "start": 374, "end": 392, "text": "(Hsu et al., 2016;", "ref_id": "BIBREF17" }, { "start": 393, "end": 419, "text": "Prabhavalkar et al., 2017)", "ref_id": "BIBREF35" }, { "start": 452, "end": 473, "text": "(Socher et al., 2011;", "ref_id": "BIBREF43" }, { "start": 474, "end": 495, "text": "Daza and Frank, 2018)", "ref_id": "BIBREF6" }, { "start": 663, "end": 687, "text": "(Sutskever et al., 2014)", "ref_id": "BIBREF44" } ], "ref_spans": [], "eq_spans": [], "section": "Sequence-to-Sequence Generation Model", "sec_num": "4.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "i_t = σ(W_i · [h_{t-1}, x_t] + b_i); f_t = σ(W_f · [h_{t-1}, x_t] + b_f); o_t = σ(W_o · [h_{t-1}, x_t] + b_o); C̃_t = tanh(W_C · [h_{t-1}, x_t] + b_C); C_t = f_t * C_{t-1} + i_t * C̃_t; h_t = o_t * tanh(C_t)", "eq_num": "(1)" } ], "section": "Sequence-to-Sequence Generation Model", "sec_num": "4.3" }, { "text": "Neural models often suffer from rare words while generating text from data since their verbalization cannot be predicted properly. Delexicalization is one of the most studied solutions to this issue, where such words are replaced with placeholders in the data before it is used for training. Texts produced by the generation model are then post-processed to replace these placeholders with the actual words that appear in the original data. For this study, we delexicalize our original collection (-Del) to obtain a second version of our dataset (+Del). We only replace the content words of slots that take verbatim string values (e.g., name and region in Table 1) and leave slots with categorical values (e.g., cuisine and price range) untouched. Some dialog acts differ only in slot values that are not replaced during delexicalization; these are counted as distinct acts in the second dataset. The number of placeholders in our delexicalized dataset corresponds to 17.71% of all words in the reference sentences. Table 3 presents token-based statistics for both datasets. We train two models on both the original and the delexicalized dataset: the sequence-to-sequence model described in Section 4.3 without attention (Model Att-) and the same model augmented with an attention mechanism (Model Att+).", "cite_spans": [], "ref_spans": [ { "start": 622, "end": 629, "text": "Table 1", "ref_id": "TABREF1" }, { "start": 1021, "end": 1028, "text": "Table 3", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Evaluation", "sec_num": "5" }, { "text": "We perform experiments to fine-tune model parameters by optimizing the BLEU score on the development set. The models reported here use a single hidden layer and 700 LSTM units in the encoder and the decoder. Word embeddings of length 400, field embeddings of length 50, and position embeddings of length 5 are used. The number of epochs is set to 10 and the Adam optimizer with a learning rate of 0.003 is used. We compare our models with a prior structure-aware Seq2Seq generation model (Model SA) whose primary focus is to generate one-sentence biographies from Wikipedia infoboxes and which models the structure and the content of infobox tables separately. In addition to learning what to convey in the output, the model also learns how to order the selected content.
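As an aside, the delexicalization step described at the beginning of this section can be sketched as follows; the placeholder format and the set of verbatim-valued slots are assumptions made for illustration and do not reproduce our exact preprocessing.

```python
# A rough sketch of delexicalization/relexicalization: verbatim (free-text) slot
# values are replaced with placeholders before training, and placeholders in
# generated text are mapped back to the original values afterwards.
VERBATIM_SLOTS = {"name", "region", "near"}  # assumed list of verbatim-valued slots

def delexicalize(slots: dict, sentence: str):
    """Replace free-text slot values with placeholders in the act and the sentence."""
    delex_slots, mapping = {}, {}
    for slot, value in slots.items():
        if slot in VERBATIM_SLOTS:
            placeholder = f"<{slot}>"
            delex_slots[slot] = placeholder
            mapping[placeholder] = value
            sentence = sentence.replace(value, placeholder)
        else:
            delex_slots[slot] = value  # categorical values stay untouched
    return delex_slots, sentence, mapping

def relexicalize(generated: str, mapping: dict) -> str:
    """Put the original slot values back into a generated utterance."""
    for placeholder, value in mapping.items():
        generated = generated.replace(placeholder, value)
    return generated

slots = {"name": "Lezzet Mekanı", "price_range": "Pahalı", "region": "Caddebostan, İstanbul"}
reference = "Lezzet Mekanı, Caddebostan, İstanbul'da bulunan pahalı bir mekandır."
_, delex_ref, mapping = delexicalize(slots, reference)
print(delex_ref)                         # <name>, <region>'da bulunan pahalı bir mekandır.
print(relexicalize(delex_ref, mapping))  # back to the original wording
```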
To train this structure-aware generation model with dual attention, we process all dialog acts in our dataset as infobox tables, where the action type is treated as the infobox table type and the remaining slot-value pairs as the field-value pairs of the infobox. The same model parameters are used in learning. Our input collection of dialog act and reference sentence pairs is split into a training set of 3360 pairs, a validation set of 420 pairs, and a test set of 420 pairs. Table 4 presents the distribution of action types in these sets. In our experiments, we evaluate the effectiveness of the models in producing utterances from dialog acts and leave an evaluation of the fluency and naturalness of these productions to future work. Here, we report performances using three evaluation metrics: BLEU (Papineni et al., 2002) , ROUGE-n and ROUGE-L f-measures (Lin, 2004) , and slot error rate (SER) (Riou et al., 2019) . The slot error rate is computed as (M+R)/N, where M and R correspond to the number of missing and redundant slots in the generated utterance, and N is the total number of slots in the corresponding dialogue act. For each model, we perform 5 runs with different random initializations on both datasets. Table 5 presents the computed average scores. The model without attention (Model Att-), not surprisingly, fails to learn the generation task effectively and receives the lowest scores on all metrics. In addition, repetitive slot values and very similar sentences for different dialog acts are frequently observed in its productions. On the other hand, we observe that our model with attention (Model Att+) achieves the highest BLEU and ROUGE scores on the original dataset (-Del). However, our model is behind the structure-aware model (Model SA) on the delexicalized dataset (+Del) with respect to the BLEU score and higher-order n-grams (ROUGE-3 and ROUGE-4). This gap might be attributed to the fact that the structure-aware model is better at producing longer matching sequences than our model, which is also supported by the ROUGE-L scores. Both models exhibit large performance improvements on the delexicalized dataset, where BLEU scores are more than doubled. The measured positive impact of delexicalization on the structure-aware model is larger than what we observe with our model. The contribution of the delexicalized dataset to Model SA is mainly observed on longer word sequences (e.g., from 0.063 to 0.328 in ROUGE-3).", "cite_spans": [ { "start": 1511, "end": 1534, "text": "(Papineni et al., 2002)", "ref_id": "BIBREF33" }, { "start": 1567, "end": 1578, "text": "(Lin, 2004)", "ref_id": "BIBREF22" }, { "start": 1607, "end": 1626, "text": "(Riou et al., 2019)", "ref_id": "BIBREF39" } ], "ref_spans": [ { "start": 1930, "end": 1937, "text": "Table 5", "ref_id": "TABREF9" } ], "eq_spans": [], "section": "Evaluation", "sec_num": "5" }, { "text": "Although BLEU and ROUGE evaluations validate the word-based performances of these models, they do not provide any insights into content quality, particularly the accuracy of the selected content and the slot coverage of these models. On both datasets, our model with attention achieves the best slot error rates, and delexicalization improves its performance by approximately 5%. The structure-aware model performs similarly only on the delexicalized dataset, but the improvement it achieves there is more substantial than that seen in our model.
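For reference, the slot error rate defined above (SER = (M+R)/N) can be computed along the following lines; exact substring matching is used here only as a simplification of a proper slot-matching procedure.

```python
# Illustrative computation of the slot error rate, SER = (M + R) / N, where M and
# R are the numbers of missing and redundant slot realizations in a generated
# utterance and N is the number of slots in the dialogue act.
def slot_error_rate(slot_values, generated: str) -> float:
    text = generated.lower()
    missing = sum(1 for v in slot_values if v.lower() not in text)
    redundant = sum(max(text.count(v.lower()) - 1, 0) for v in slot_values)
    return (missing + redundant) / len(slot_values)

slots = ["Lezzet Mekanı", "pahalı", "Caddebostan"]
utterance = "Lezzet Mekanı, Caddebostan'da bulunan pahalı bir restorandır."
print(slot_error_rate(slots, utterance))  # 0.0 -> every slot realized exactly once
```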
These results demonstrate that both models need further improvements to better cover slot values, resulting in less repeated or omitted information in produced utterances.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "5" }, { "text": "There are two major drawbacks of our model. First, it learns from a corpus that is relatively small in comparison with many available datasets compiled for other languages. Second, it suffers from semantically similar entities in the dataset (e.g., cuisine or region): entities that appear more frequently than others in the training data are selected by the model regardless of what is provided in the dialogue act. We argue that with a larger training corpus and a more effective attention mechanism, our generation performance would be improved in the future.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "5" }, { "text": "This work presents our efforts towards developing a Turkish task-oriented dialogue system for venue recommendation and reservation. The current system is implemented using a pipeline approach, and its natural language understanding and dialogue management components are built using the RASA open-source framework. In order to generate utterances from dialogue act representations, we develop a sequence-to-sequence neural model with attention. The model is trained with a small-sized Turkish corpus consisting of pairs of dialogue acts and reference sentences. To the best of our knowledge, this work is the first that investigates the use of Turkish neural generation in dialogue systems and measures the effectiveness of conversational generation from structured input in a morphologically rich language. In the future, we plan to collect a larger corpus and improve the performance of our generator. Moreover, enhancing the dialogue capabilities of our overall system and qualitatively evaluating the performance of the generation model are among our future plans.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" } ], "back_matter": [ { "text": "This work is supported by TUBITAK-ARDEB under grant number 117E977.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Morphosyntactic generation of turkish from predicate-argument structure", "authors": [ { "first": "", "middle": [], "last": "Burcu Karagol Ayan", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the COLING Student Session", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Burcu Karagol Ayan. 2000. Morphosyntactic generation of Turkish from predicate-argument structure. In Proceedings of the COLING Student Session.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Rasa: Open source language understanding and dialogue management", "authors": [ { "first": "Tom", "middle": [], "last": "Bocklisch", "suffix": "" }, { "first": "Joey", "middle": [], "last": "Faulkner", "suffix": "" }, { "first": "Nick", "middle": [], "last": "Pawlowski", "suffix": "" }, { "first": "Alan", "middle": [], "last": "Nichol", "suffix": "" } ], "year": 2017, "venue": "ArXiv", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tom Bocklisch, Joey Faulkner, Nick Pawlowski, and Alan Nichol. 2017. Rasa: Open source language understanding and dialogue management.
ArXiv, abs/1712.05181.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "A survey to text summarization methods for turkish", "authors": [ { "first": "\u00c7", "middle": [], "last": "Agdas Can", "suffix": "" }, { "first": "\u00d6zg\u00fcn", "middle": [], "last": "Birant", "suffix": "" }, { "first": "", "middle": [], "last": "Kosaner", "suffix": "" }, { "first": "Aktas", "middle": [], "last": "And\u00f6zlem", "suffix": "" } ], "year": 2016, "venue": "International Journal of Computer Applications", "volume": "144", "issue": "", "pages": "23--28", "other_ids": {}, "num": null, "urls": [], "raw_text": "\u00c7 agdas Can Birant,\u00d6zg\u00fcn Kosaner, and\u00d6zlem Aktas. 2016. A survey to text summarization methods for turkish. International Journal of Computer Applica- tions, 144:23-28.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "A survey on dialogue systems: Recent advances and new frontiers", "authors": [ { "first": "Hongshen", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Xiaorui", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Dawei", "middle": [], "last": "Yin", "suffix": "" }, { "first": "Jiliang", "middle": [], "last": "Tang", "suffix": "" } ], "year": 2017, "venue": "Special Interest Group on Knowledge Discovery in Data Explorations Newsletter", "volume": "19", "issue": "2", "pages": "25--35", "other_ids": { "DOI": [ "10.1145/3166054.3166058" ] }, "num": null, "urls": [], "raw_text": "Hongshen Chen, Xiaorui Liu, Dawei Yin, and Jiliang Tang. 2017. A survey on dialogue systems: Re- cent advances and new frontiers. Special Inter- est Group on Knowledge Discovery in Data Explo- rations Newsletter, 19(2):25-35.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Generation of simple turkish sentences with systemicfunctional grammar", "authors": [ { "first": "Ilyas", "middle": [], "last": "Cicekli", "suffix": "" }, { "first": "Turgay", "middle": [], "last": "Korkmaz", "suffix": "" } ], "year": 1998, "venue": "Proceedings of the Joint Conferences on New Methods in Language Processing and Computational Natural Language Learning", "volume": "", "issue": "", "pages": "165--173", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ilyas Cicekli and Turgay Korkmaz. 1998. Gener- ation of simple turkish sentences with systemic- functional grammar. In Proceedings of the Joint Conferences on New Methods in Language Process- ing and Computational Natural Language Learning, page 165-173, USA. Association for Computational Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "SimpleDS: A Simple Deep Reinforcement Learning Dialogue System", "authors": [], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Heriberto Cuay\u00e1huitl. 2017. SimpleDS: A Simple Deep Reinforcement Learning Dialogue System. Springer.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "A sequence-tosequence model for semantic role labeling", "authors": [ { "first": "Angel", "middle": [], "last": "Daza", "suffix": "" }, { "first": "Anette", "middle": [], "last": "Frank", "suffix": "" } ], "year": 2018, "venue": "Proceedings of The Third Workshop on Representation Learning for NLP", "volume": "", "issue": "", "pages": "207--216", "other_ids": { "DOI": [ "10.18653/v1/W18-3027" ] }, "num": null, "urls": [], "raw_text": "Angel Daza and Anette Frank. 2018. A sequence-to- sequence model for semantic role labeling. 
In Pro- ceedings of The Third Workshop on Representation Learning for NLP, pages 207-216, Melbourne, Aus- tralia. Association for Computational Linguistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Use of kernel deep convex networks and end-to-end learning for spoken language understanding", "authors": [ { "first": "Gokhan", "middle": [], "last": "Li Deng", "suffix": "" }, { "first": "Xiaodong", "middle": [], "last": "Tur", "suffix": "" }, { "first": "Dilek", "middle": [], "last": "He", "suffix": "" }, { "first": "", "middle": [], "last": "Hakkani-Tur", "suffix": "" } ], "year": 2012, "venue": "SLT 2012 -Proceedings", "volume": "", "issue": "", "pages": "210--215", "other_ids": {}, "num": null, "urls": [], "raw_text": "li Deng, Gokhan Tur, Xiaodong He, and Dilek Hakkani- Tur. 2012. Use of kernel deep convex networks and end-to-end learning for spoken language under- standing. In 2012 IEEE Workshop on Spoken Lan- guage Technology, SLT 2012 -Proceedings, pages 210-215.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "A deep-parsing approach to natural language understanding in dialogue system: Results of a corpus-based evaluation", "authors": [ { "first": "Alexandre", "middle": [], "last": "Denis", "suffix": "" }, { "first": "Matthieu", "middle": [], "last": "Quignard", "suffix": "" }, { "first": "Guillaume", "middle": [], "last": "Pitel", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC'06)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexandre Denis, Matthieu Quignard, and Guillaume Pitel. 2006. A deep-parsing approach to natural lan- guage understanding in dialogue system: Results of a corpus-based evaluation. In Proceedings of the Fifth International Conference on Language Re- sources and Evaluation (LREC'06).", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Deep belief network based semantic taggers for spoken language understanding", "authors": [ { "first": "Anoop", "middle": [], "last": "Deoras", "suffix": "" }, { "first": "Ruhi", "middle": [], "last": "Sarikaya", "suffix": "" } ], "year": 2013, "venue": "INTERSPEECH", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anoop Deoras and Ruhi Sarikaya. 2013. Deep belief network based semantic taggers for spoken language understanding. In INTERSPEECH.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Sequence-tosequence generation for spoken dialogue via deep syntax trees and strings", "authors": [ { "first": "Ond\u0159ej", "middle": [], "last": "Du\u0161ek", "suffix": "" }, { "first": "Filip", "middle": [], "last": "Jur\u010d\u00ed\u010dek", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "45--51", "other_ids": { "DOI": [ "10.18653/v1/P16-2008" ] }, "num": null, "urls": [], "raw_text": "Ond\u0159ej Du\u0161ek and Filip Jur\u010d\u00ed\u010dek. 2016. Sequence-to- sequence generation for spoken dialogue via deep syntax trees and strings. 
In Proceedings of the 54th Annual Meeting of the Association for Compu- tational Linguistics (Volume 2: Short Papers), pages 45-51.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "A restaurant recommendation system for turkish based on user conversations", "authors": [ { "first": "M", "middle": [], "last": "Elifoglu", "suffix": "" }, { "first": "T", "middle": [], "last": "G\u00fcng\u00f6r", "suffix": "" } ], "year": 2018, "venue": "26th Signal Processing and Communications Applications Conference (SIU)", "volume": "", "issue": "", "pages": "1--4", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Elifoglu and T. G\u00fcng\u00f6r. 2018. A restaurant recom- mendation system for turkish based on user conver- sations. In 2018 26th Signal Processing and Com- munications Applications Conference (SIU), pages 1-4.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Learning mixed initiative dialog strategies by using reinforcement learning on both conversants", "authors": [ { "first": "Michael", "middle": [], "last": "English", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Heeman", "suffix": "" } ], "year": 2005, "venue": "Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1011--1018", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael English and Peter Heeman. 2005. Learning mixed initiative dialog strategies by using reinforce- ment learning on both conversants. In Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Lan- guage Processing, pages 1011-1018.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Dialog state tracking: A neural reading comprehension approach", "authors": [ { "first": "Shuyang", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Abhishek", "middle": [], "last": "Sethi", "suffix": "" }, { "first": "Sanchit", "middle": [], "last": "Agarwal", "suffix": "" }, { "first": "Tagyoung", "middle": [], "last": "Chung", "suffix": "" }, { "first": "Dilek", "middle": [], "last": "Hakkani-Tur", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 20th Annual SIGdial Meeting on Discourse and Dialogue", "volume": "", "issue": "", "pages": "264--273", "other_ids": { "DOI": [ "10.18653/v1/W19-5917" ] }, "num": null, "urls": [], "raw_text": "Shuyang Gao, Abhishek Sethi, Sanchit Agarwal, Tagy- oung Chung, and Dilek Hakkani-Tur. 2019. Dia- log state tracking: A neural reading comprehension approach. In Proceedings of the 20th Annual SIG- dial Meeting on Discourse and Dialogue, pages 264- 273.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "CASA-NLU: Context-aware selfattentive natural language understanding for taskoriented chatbots", "authors": [ { "first": "Arshit", "middle": [], "last": "Gupta", "suffix": "" }, { "first": "Peng", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Garima", "middle": [], "last": "Lalwani", "suffix": "" }, { "first": "Mona", "middle": [], "last": "Diab", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "1285--1290", "other_ids": { "DOI": [ "10.18653/v1/D19-1127" ] }, "num": null, "urls": [], "raw_text": "Arshit Gupta, Peng Zhang, Garima Lalwani, and Mona Diab. 2019. 
CASA-NLU: Context-aware self- attentive natural language understanding for task- oriented chatbots. In Proceedings of the 2019 Con- ference on Empirical Methods in Natural Language Processing and the 9th International Joint Confer- ence on Natural Language Processing (EMNLP- IJCNLP), pages 1285-1290.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Robust dialog state tracking using delexicalised recurrent neural networks and unsupervised adaptation", "authors": [ { "first": "Matthew", "middle": [], "last": "Henderson", "suffix": "" }, { "first": "Blaise", "middle": [], "last": "Thomson", "suffix": "" }, { "first": "Steve", "middle": [ "J" ], "last": "Young", "suffix": "" } ], "year": 2014, "venue": "IEEE Spoken Language Technology Workshop (SLT)", "volume": "", "issue": "", "pages": "360--365", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matthew Henderson, Blaise Thomson, and Steve J. Young. 2014. Robust dialog state tracking using delexicalised recurrent neural networks and unsu- pervised adaptation. 2014 IEEE Spoken Language Technology Workshop (SLT), pages 360-365.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Polyresponse: A rank-based approach to task-oriented dialogue with application in restaurant search and booking", "authors": [ { "first": "Matthew", "middle": [], "last": "Henderson", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Vulic", "suffix": "" }, { "first": "Inigo", "middle": [], "last": "Casanueva", "suffix": "" }, { "first": "Pawe\u0142", "middle": [], "last": "Budzianowski", "suffix": "" }, { "first": "Daniela", "middle": [], "last": "Gerz", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Coope", "suffix": "" }, { "first": "Georgios", "middle": [], "last": "Spithourakis", "suffix": "" }, { "first": "Nikola", "middle": [], "last": "Tsung Hsien Wen", "suffix": "" }, { "first": "Pei-Hao", "middle": [], "last": "Mrksic", "suffix": "" }, { "first": "", "middle": [], "last": "Su", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 EMNLP and the 9th IJCNLP", "volume": "", "issue": "", "pages": "181--186", "other_ids": { "DOI": [ "10.18653/v1/D19-3031" ] }, "num": null, "urls": [], "raw_text": "Matthew Henderson, Ivan Vulic, Inigo Casanueva, Pawe\u0142 Budzianowski, Daniela Gerz, Sam Coope, Georgios Spithourakis, Tsung Hsien Wen, Nikola Mrksic, and Pei-Hao Su. 2019. Polyresponse: A rank-based approach to task-oriented dialogue with application in restaurant search and booking. In Pro- ceedings of the 2019 EMNLP and the 9th IJCNLP, pages 181-186.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "A prioritized grid long short-term memory rnn for speech recognition", "authors": [ { "first": "Wei-Ning", "middle": [], "last": "Hsu", "suffix": "" }, { "first": "Yu", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "James", "middle": [ "R" ], "last": "Glass", "suffix": "" } ], "year": 2016, "venue": "IEEE Spoken Language Technology Workshop (SLT)", "volume": "", "issue": "", "pages": "467--473", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wei-Ning Hsu, Yu Zhang, and James R. Glass. 2016. A prioritized grid long short-term memory rnn for speech recognition. 
2016 IEEE Spoken Language Technology Workshop (SLT), pages 467-473.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Domain adaptation of recurrent neural networks for natural language understanding", "authors": [ { "first": "Aaron", "middle": [], "last": "Jaech", "suffix": "" }, { "first": "Larry", "middle": [], "last": "Heck", "suffix": "" }, { "first": "Mari", "middle": [], "last": "Ostendorf", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1604.00117" ] }, "num": null, "urls": [], "raw_text": "Aaron Jaech, Larry Heck, and Mari Ostendorf. 2016. Domain adaptation of recurrent neural networks for natural language understanding. arXiv preprint arXiv:1604.00117.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Alex: A statistical dialogue systems framework", "authors": [ { "first": "Filip", "middle": [], "last": "Jur\u010d\u00ed\u010dek", "suffix": "" }, { "first": "Ond\u0159ej", "middle": [], "last": "Du\u0161ek", "suffix": "" }, { "first": "Ond\u0159ej", "middle": [], "last": "Pl\u00e1tek", "suffix": "" }, { "first": "Luk\u00e1\u0161", "middle": [], "last": "Zilka", "suffix": "" } ], "year": 2014, "venue": "Text, Speech and Dialogue", "volume": "", "issue": "", "pages": "587--594", "other_ids": {}, "num": null, "urls": [], "raw_text": "Filip Jur\u010d\u00ed\u010dek, Ond\u0159ej Du\u0161ek, Ond\u0159ej Pl\u00e1tek, and Luk\u00e1\u0161 Zilka. 2014. Alex: A statistical dialogue systems framework. In Text, Speech and Dialogue, pages 587-594. Springer International Publishing.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Task lineages: Dialog state tracking for flexible interaction", "authors": [ { "first": "Sungjin", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Amanda", "middle": [], "last": "Stent", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 17th Annual Meeting of the Special Interest Group on Discourse and Dialogue", "volume": "", "issue": "", "pages": "11--21", "other_ids": { "DOI": [ "10.18653/v1/W16-3602" ] }, "num": null, "urls": [], "raw_text": "Sungjin Lee and Amanda Stent. 2016. Task lineages: Dialog state tracking for flexible interaction. In Pro- ceedings of the 17th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 11-21.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Microsoft dialogue challenge: Building end-to-end task-completion dialogue systems", "authors": [ { "first": "Xiujun", "middle": [], "last": "Li", "suffix": "" }, { "first": "Sarah", "middle": [], "last": "Panda", "suffix": "" }, { "first": "Jingjing", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1807.11125" ] }, "num": null, "urls": [], "raw_text": "Xiujun Li, Sarah Panda, Jingjing Liu, and Jianfeng Gao. 2018. Microsoft dialogue challenge: Building end-to-end task-completion dialogue systems. arXiv preprint arXiv:1807.11125.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "ROUGE: A package for automatic evaluation of summaries", "authors": [ { "first": "Chin-Yew", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2004, "venue": "Text Summarization Branches Out", "volume": "", "issue": "", "pages": "74--81", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chin-Yew Lin. 2004. ROUGE: A package for auto- matic evaluation of summaries. 
In Text Summariza- tion Branches Out, pages 74-81.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "End-to-end learning of task-oriented dialogs", "authors": [ { "first": "Bing", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Ian", "middle": [], "last": "Lane", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Student Research Workshop", "volume": "", "issue": "", "pages": "67--73", "other_ids": { "DOI": [ "10.18653/v1/N18-4010" ] }, "num": null, "urls": [], "raw_text": "Bing Liu and Ian Lane. 2018. End-to-end learning of task-oriented dialogs. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Student Research Workshop, pages 67-73.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Table-to-text generation by structure-aware seq2seq learning", "authors": [ { "first": "Tianyu", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Kexiang", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Lei", "middle": [], "last": "Sha", "suffix": "" }, { "first": "Baobao", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Zhifang", "middle": [], "last": "Sui", "suffix": "" } ], "year": 2017, "venue": "CoRR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tianyu Liu, Kexiang Wang, Lei Sha, Baobao Chang, and Zhifang Sui. 2017. Table-to-text generation by structure-aware seq2seq learning. In CoRR.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Phrase-based statistical language generation using graphical models and active learning", "authors": [ { "first": "Fran\u00e7ois", "middle": [], "last": "Mairesse", "suffix": "" }, { "first": "Milica", "middle": [], "last": "Ga\u0161i\u0107", "suffix": "" }, { "first": "Filip", "middle": [], "last": "Jur\u010d\u00ed\u010dek", "suffix": "" }, { "first": "Simon", "middle": [], "last": "Keizer", "suffix": "" }, { "first": "Blaise", "middle": [], "last": "Thomson", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Steve", "middle": [ "Young" ], "last": "", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1552--1561", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fran\u00e7ois Mairesse, Milica Ga\u0161i\u0107, Filip Jur\u010d\u00ed\u010dek, Simon Keizer, Blaise Thomson, Kai Yu, and Steve Young. 2010. Phrase-based statistical language generation using graphical models and active learning. In Pro- ceedings of the 48th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 1552- 1561.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Controlling user perceptions of linguistic style: Trainable generation of personality traits", "authors": [ { "first": "Fran\u00e7ois", "middle": [], "last": "Mairesse", "suffix": "" }, { "first": "Marilyn", "middle": [ "A" ], "last": "Walker", "suffix": "" } ], "year": 2011, "venue": "Computational Linguistics", "volume": "37", "issue": "3", "pages": "455--488", "other_ids": { "DOI": [ "10.1162/COLI_a_00063" ] }, "num": null, "urls": [], "raw_text": "Fran\u00e7ois Mairesse and Marilyn A. Walker. 2011. Con- trolling user perceptions of linguistic style: Train- able generation of personality traits. 
Computational Linguistics, 37(3):455-488.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Crowdsourcing language generation templates for dialogue systems", "authors": [ { "first": "Margaret", "middle": [], "last": "Mitchell", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Bohus", "suffix": "" }, { "first": "Ece", "middle": [], "last": "Kamar", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the INLG and SIGDIAL 2014 Joint Session", "volume": "", "issue": "", "pages": "172--180", "other_ids": { "DOI": [ "10.3115/v1/W14-5003" ] }, "num": null, "urls": [], "raw_text": "Margaret Mitchell, Dan Bohus, and Ece Kamar. 2014. Crowdsourcing language generation templates for dialogue systems. In Proceedings of the INLG and SIGDIAL 2014 Joint Session, pages 172-180.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Multidomain dialog state tracking using recurrent neural networks", "authors": [ { "first": "Nikola", "middle": [], "last": "Mrk\u0161i\u0107", "suffix": "" }, { "first": "Diarmuid\u00f3", "middle": [], "last": "S\u00e9aghdha", "suffix": "" }, { "first": "Blaise", "middle": [], "last": "Thomson", "suffix": "" }, { "first": "Milica", "middle": [], "last": "Ga\u0161i\u0107", "suffix": "" }, { "first": "Pei-Hao", "middle": [], "last": "Su", "suffix": "" }, { "first": "David", "middle": [], "last": "Vandyke", "suffix": "" }, { "first": "Tsung-Hsien", "middle": [], "last": "Wen", "suffix": "" }, { "first": "Steve", "middle": [], "last": "Young", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", "volume": "2", "issue": "", "pages": "794--799", "other_ids": { "DOI": [ "10.3115/v1/P15-2130" ] }, "num": null, "urls": [], "raw_text": "Nikola Mrk\u0161i\u0107, Diarmuid\u00d3 S\u00e9aghdha, Blaise Thom- son, Milica Ga\u0161i\u0107, Pei-Hao Su, David Vandyke, Tsung-Hsien Wen, and Steve Young. 2015. Multi- domain dialog state tracking using recurrent neural networks. In Proceedings of the 53rd Annual Meet- ing of the Association for Computational Linguistics and the 7th International Joint Conference on Natu- ral Language Processing (Volume 2: Short Papers), pages 794-799.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "The E2E dataset: New challenges for endto-end generation", "authors": [ { "first": "Jekaterina", "middle": [], "last": "Novikova", "suffix": "" }, { "first": "Ond\u0159ej", "middle": [], "last": "Du\u0161ek", "suffix": "" }, { "first": "Verena", "middle": [], "last": "Rieser", "suffix": "" } ], "year": 2017, "venue": "Proc. of the 18th Annual SIGdial Meeting on Discourse and Dialogue", "volume": "", "issue": "", "pages": "201--206", "other_ids": { "DOI": [ "10.18653/v1/W17-5525" ] }, "num": null, "urls": [], "raw_text": "Jekaterina Novikova, Ond\u0159ej Du\u0161ek, and Verena Rieser. 2017. The E2E dataset: New challenges for end- to-end generation. In Proc. 
of the 18th Annual SIG- dial Meeting on Discourse and Dialogue, pages 201- 206.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Analyzing stemming approaches for Turkish multi-document summarization", "authors": [ { "first": "Yavuz", "middle": [], "last": "Muhammed", "suffix": "" }, { "first": "Arzucan\u00f6zg\u00fcr", "middle": [], "last": "Nuzumlal\u0131", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "702--706", "other_ids": { "DOI": [ "10.3115/v1/D14-1077" ] }, "num": null, "urls": [], "raw_text": "Muhammed Yavuz Nuzumlal\u0131 and Arzucan\u00d6zg\u00fcr. 2014. Analyzing stemming approaches for Turkish multi-document summarization. In Proceedings of the 2014 Conference on Empirical Methods in Natu- ral Language Processing (EMNLP), pages 702-706.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Turkish Natural Language Processing", "authors": [ { "first": "Kemal", "middle": [], "last": "Oflazer", "suffix": "" }, { "first": "Murat", "middle": [], "last": "Saraclar", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kemal Oflazer and Murat Saraclar. 2018. Turkish Nat- ural Language Processing, 1st. edition. Springer.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Stochastic language generation for spoken dialogue systems", "authors": [ { "first": "Alice", "middle": [ "H" ], "last": "Oh", "suffix": "" }, { "first": "Alexander", "middle": [ "I" ], "last": "Rudnicky", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the 2000 ANLP/NAACL Workshop on Conversational Systems", "volume": "3", "issue": "", "pages": "27--32", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alice H. Oh and Alexander I. Rudnicky. 2000. Stochas- tic language generation for spoken dialogue systems. In Proceedings of the 2000 ANLP/NAACL Workshop on Conversational Systems -Volume 3, page 27-32, USA. Association for Computational Linguistics.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Bleu: a method for automatic evaluation of machine translation", "authors": [ { "first": "Kishore", "middle": [], "last": "Papineni", "suffix": "" }, { "first": "Salim", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "Todd", "middle": [], "last": "Ward", "suffix": "" }, { "first": "Wei-Jing", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "311--318", "other_ids": { "DOI": [ "10.3115/1073083.1073135" ] }, "num": null, "urls": [], "raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. 
In Proceedings of the 40th Annual Meeting of the Association for Compu- tational Linguistics, pages 311-318.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Few-shot natural language generation for task-oriented dialog", "authors": [ { "first": "Baolin", "middle": [], "last": "Peng", "suffix": "" }, { "first": "Chenguang", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Chunyuan", "middle": [], "last": "Li", "suffix": "" }, { "first": "Xiujun", "middle": [], "last": "Li", "suffix": "" }, { "first": "Jinchao", "middle": [], "last": "Li", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Zeng", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Baolin Peng, Chenguang Zhu, Chunyuan Li, Xiujun Li, Jinchao Li, Michael Zeng, and Jianfeng Gao. 2020. Few-shot natural language generation for task-oriented dialog.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "A comparison of sequence-to-sequence models for speech recognition", "authors": [ { "first": "Rohit", "middle": [], "last": "Prabhavalkar", "suffix": "" }, { "first": "Kanishka", "middle": [], "last": "Rao", "suffix": "" }, { "first": "Tara", "middle": [ "N" ], "last": "Sainath", "suffix": "" }, { "first": "Bo", "middle": [], "last": "Li", "suffix": "" }, { "first": "Leif", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "Navdeep", "middle": [], "last": "Jaitly", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 18th International Speech Communication Association", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rohit Prabhavalkar, Kanishka Rao, Tara N. Sainath, Bo Li, Leif Johnson, and Navdeep Jaitly. 2017. A comparison of sequence-to-sequence models for speech recognition. In Proceedings of the 18th In- ternational Speech Communication Association (In- terspeech).", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Let's go public! taking a spoken dialog system to the real world", "authors": [ { "first": "Antoine", "middle": [], "last": "Raux", "suffix": "" }, { "first": "Brian", "middle": [], "last": "Langner", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Bohus", "suffix": "" }, { "first": "Alan", "middle": [ "W" ], "last": "Black", "suffix": "" }, { "first": "Maxine", "middle": [], "last": "Eskenazi", "suffix": "" } ], "year": 2005, "venue": "Proceedings of Interspeech", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Antoine Raux, Brian Langner, Dan Bohus, Alan W Black, and Maxine Eskenazi. 2005. Let's go pub- lic! taking a spoken dialog system to the real world. In Proceedings of Interspeech 2005.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Generative and discriminative algorithms for spoken language understanding", "authors": [ { "first": "Christian", "middle": [], "last": "Raymond", "suffix": "" }, { "first": "Giuseppe", "middle": [], "last": "Riccardi", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the Eighth Annual Conference of the International Speech Communication Association", "volume": "", "issue": "", "pages": "1605--1608", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christian Raymond and Giuseppe Riccardi. 2007. Gen- erative and discriminative algorithms for spoken lan- guage understanding. 
In Proceedings of the Eighth Annual Conference of the International Speech Com- munication Association, pages 1605-1608.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Dialog state tracking using conditional random fields", "authors": [ { "first": "Hang", "middle": [], "last": "Ren", "suffix": "" }, { "first": "Weiqun", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Yan", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Yonghong", "middle": [], "last": "Yan", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the SIGDIAL 2013 Conference", "volume": "", "issue": "", "pages": "457--461", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hang Ren, Weiqun Xu, Yan Zhang, and Yonghong Yan. 2013. Dialog state tracking using conditional ran- dom fields. In Proceedings of the SIGDIAL 2013 Conference, pages 457-461.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Reinforcement adaptation of an attention-based neural natural language generator for spoken dialogue systems", "authors": [ { "first": "Matthieu", "middle": [], "last": "Riou", "suffix": "" }, { "first": "Bassam", "middle": [], "last": "Jabaian", "suffix": "" }, { "first": "St\u00e9phane", "middle": [], "last": "Huet", "suffix": "" }, { "first": "Fabrice", "middle": [], "last": "Lef\u00e8vre", "suffix": "" } ], "year": 2019, "venue": "Dialogue & Discourse", "volume": "10", "issue": "", "pages": "1--19", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matthieu Riou, Bassam Jabaian, St\u00e9phane Huet, and Fabrice Lef\u00e8vre. 2019. Reinforcement adaptation of an attention-based neural natural language genera- tor for spoken dialogue systems. Dialogue & Dis- course, 10:1-19.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "An overview of end-to-end language understanding and dialog management for personal digital assistants", "authors": [ { "first": "R", "middle": [], "last": "Sarikaya", "suffix": "" }, { "first": "P", "middle": [ "A" ], "last": "Crook", "suffix": "" }, { "first": "A", "middle": [], "last": "Marin", "suffix": "" }, { "first": "M", "middle": [], "last": "Jeong", "suffix": "" }, { "first": "J", "middle": [ "P" ], "last": "Robichaud", "suffix": "" }, { "first": "A", "middle": [], "last": "Celikyilmaz", "suffix": "" }, { "first": "Y", "middle": [ "B" ], "last": "Kim", "suffix": "" }, { "first": "A", "middle": [], "last": "Rochette", "suffix": "" }, { "first": "O", "middle": [ "Z" ], "last": "Khan", "suffix": "" }, { "first": "X", "middle": [], "last": "Liu", "suffix": "" }, { "first": "D", "middle": [], "last": "Boies", "suffix": "" }, { "first": "T", "middle": [], "last": "Anastasakos", "suffix": "" }, { "first": "Z", "middle": [], "last": "Feizollahi", "suffix": "" }, { "first": "N", "middle": [], "last": "Ramesh", "suffix": "" }, { "first": "H", "middle": [], "last": "Suzuki", "suffix": "" }, { "first": "R", "middle": [], "last": "Holenstein", "suffix": "" }, { "first": "E", "middle": [], "last": "Krawczyk", "suffix": "" }, { "first": "V", "middle": [], "last": "Radostev", "suffix": "" } ], "year": 2016, "venue": "IEEE Spoken Language Technology Workshop", "volume": "", "issue": "", "pages": "391--397", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Sarikaya, P. A. Crook, A. Marin, M. Jeong, J. P. Ro- bichaud, A. Celikyilmaz, Y. B. Kim, A. Rochette, O. Z. Khan, X. Liu, D. Boies, T. Anastasakos, Z. Feizollahi, N. Ramesh, H. Suzuki, R. Holen- stein, E. Krawczyk, and V. Radostev. 2016. 
An overview of end-to-end language understanding and dialog management for personal digital assistants. In 2016 IEEE Spoken Language Technology Work- shop (SLT), pages 391-397.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Orderplanning neural text generation from structured data", "authors": [ { "first": "Lei", "middle": [], "last": "Sha", "suffix": "" }, { "first": "Lili", "middle": [], "last": "Mou", "suffix": "" }, { "first": "Tianyu", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Pascal", "middle": [], "last": "Poupart", "suffix": "" }, { "first": "Sujian", "middle": [], "last": "Li", "suffix": "" }, { "first": "Baobao", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Zhifang", "middle": [], "last": "Sui", "suffix": "" } ], "year": 2018, "venue": "Proc. of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18)", "volume": "", "issue": "", "pages": "5414--5421", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lei Sha, Lili Mou, Tianyu Liu, Pascal Poupart, Sujian Li, Baobao Chang, and Zhifang Sui. 2018. Order- planning neural text generation from structured data. In Proc. of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), pages 5414-5421.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Interactive reinforcement learning for taskoriented dialogue management", "authors": [ { "first": "Pararth", "middle": [], "last": "Shah", "suffix": "" }, { "first": "Dilek", "middle": [], "last": "Hakkani-Tur", "suffix": "" }, { "first": "Larry", "middle": [], "last": "Heck", "suffix": "" } ], "year": 2016, "venue": "Workshop on Deep Learning for Action and Interaction (NIPS)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pararth Shah, Dilek Hakkani-Tur, and Larry Heck. 2016. Interactive reinforcement learning for task- oriented dialogue management. In Workshop on Deep Learning for Action and Interaction (NIPS).", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "Parsing natural scenes and natural language with recursive neural networks", "authors": [ { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Cliff", "middle": [], "last": "Chiung", "suffix": "" }, { "first": "-Yu", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Andrew", "middle": [ "Y" ], "last": "Ng", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 28th International Conference on International Conference on Machine Learning", "volume": "", "issue": "", "pages": "129--136", "other_ids": {}, "num": null, "urls": [], "raw_text": "Richard Socher, Cliff Chiung-Yu Lin, Andrew Y. Ng, and Christopher D. Manning. 2011. Parsing natu- ral scenes and natural language with recursive neu- ral networks. In Proceedings of the 28th Interna- tional Conference on International Conference on Machine Learning, page 129-136, Madison, WI, USA. 
Omnipress.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "Sequence to sequence learning with neural networks", "authors": [ { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Oriol", "middle": [], "last": "Vinyals", "suffix": "" }, { "first": "V", "middle": [], "last": "Quoc", "suffix": "" }, { "first": "", "middle": [], "last": "Le", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 27th International Conference on Neural Information Processing Systems", "volume": "2", "issue": "", "pages": "3104--3112", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Proceedings of the 27th International Conference on Neural Information Processing Systems -Volume 2, page 3104-3112, Cambridge, MA, USA. MIT Press.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "Towards deeper understanding: Deep convex networks for semantic utterance classification", "authors": [ { "first": "G", "middle": [], "last": "Tur", "suffix": "" }, { "first": "L", "middle": [], "last": "Deng", "suffix": "" }, { "first": "D", "middle": [], "last": "Hakkani-T\u00fcr", "suffix": "" }, { "first": "X", "middle": [], "last": "He", "suffix": "" } ], "year": 2012, "venue": "2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)", "volume": "", "issue": "", "pages": "5045--5048", "other_ids": {}, "num": null, "urls": [], "raw_text": "G. Tur, L. Deng, D. Hakkani-T\u00fcr, and X. He. 2012. To- wards deeper understanding: Deep convex networks for semantic utterance classification. In 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5045-5048.", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "A simple and generic belief tracking mechanism for the dialog state tracking challenge: On the believability of observed information", "authors": [ { "first": "Zhuoran", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Oliver", "middle": [], "last": "Lemon", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the SIGDIAL 2013 Conference", "volume": "", "issue": "", "pages": "423--432", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhuoran Wang and Oliver Lemon. 2013. A simple and generic belief tracking mechanism for the dialog state tracking challenge: On the believability of ob- served information. In Proceedings of the SIGDIAL 2013 Conference, pages 423-432.", "links": null }, "BIBREF47": { "ref_id": "b47", "title": "Sample efficient deep reinforcement learning for dialogue systems with large action spaces", "authors": [ { "first": "Gellert", "middle": [], "last": "Weisz", "suffix": "" }, { "first": "Pawel", "middle": [], "last": "Budzianowski", "suffix": "" }, { "first": "Pei-Hao", "middle": [], "last": "Su", "suffix": "" }, { "first": "Milica", "middle": [], "last": "Gasic", "suffix": "" } ], "year": 2018, "venue": "IEEE/ACM Transactions Audio, Speech and Language Processing", "volume": "26", "issue": "11", "pages": "2083--2097", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gellert Weisz, Pawel Budzianowski, Pei-Hao Su, and Milica Gasic. 2018. Sample efficient deep reinforce- ment learning for dialogue systems with large action spaces. 
IEEE/ACM Transactions Audio, Speech and Language Processing, 26(11):2083-2097.", "links": null }, "BIBREF48": { "ref_id": "b48", "title": "Stochastic language generation in dialogue using recurrent neural networks with convolutional sentence reranking", "authors": [ { "first": "Milica", "middle": [], "last": "Tsung-Hsien Wen", "suffix": "" }, { "first": "Dongho", "middle": [], "last": "Ga\u0161i\u0107", "suffix": "" }, { "first": "Nikola", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Pei-Hao", "middle": [], "last": "Mrk\u0161i\u0107", "suffix": "" }, { "first": "David", "middle": [], "last": "Su", "suffix": "" }, { "first": "Steve", "middle": [], "last": "Vandyke", "suffix": "" }, { "first": "", "middle": [], "last": "Young", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue", "volume": "", "issue": "", "pages": "275--284", "other_ids": { "DOI": [ "10.18653/v1/W15-4639" ] }, "num": null, "urls": [], "raw_text": "Tsung-Hsien Wen, Milica Ga\u0161i\u0107, Dongho Kim, Nikola Mrk\u0161i\u0107, Pei-Hao Su, David Vandyke, and Steve Young. 2015a. Stochastic language generation in di- alogue using recurrent neural networks with convo- lutional sentence reranking. In Proceedings of the 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 275-284.", "links": null }, "BIBREF49": { "ref_id": "b49", "title": "Semantically conditioned LSTM-based natural language generation for spoken dialogue systems", "authors": [ { "first": "Milica", "middle": [], "last": "Tsung-Hsien Wen", "suffix": "" }, { "first": "Nikola", "middle": [], "last": "Ga\u0161i\u0107", "suffix": "" }, { "first": "Pei-Hao", "middle": [], "last": "Mrk\u0161i\u0107", "suffix": "" }, { "first": "David", "middle": [], "last": "Su", "suffix": "" }, { "first": "Steve", "middle": [], "last": "Vandyke", "suffix": "" }, { "first": "", "middle": [], "last": "Young", "suffix": "" } ], "year": 2015, "venue": "Proc. of the 2015 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1711--1721", "other_ids": { "DOI": [ "10.18653/v1/D15-1199" ] }, "num": null, "urls": [], "raw_text": "Tsung-Hsien Wen, Milica Ga\u0161i\u0107, Nikola Mrk\u0161i\u0107, Pei- Hao Su, David Vandyke, and Steve Young. 2015b. Semantically conditioned LSTM-based natural lan- guage generation for spoken dialogue systems. In Proc. 
of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1711-1721.", "links": null }, "BIBREF50": { "ref_id": "b50", "title": "A network-based end-to-end trainable task-oriented dialogue system", "authors": [ { "first": "David", "middle": [], "last": "Tsung-Hsien Wen", "suffix": "" }, { "first": "Nikola", "middle": [], "last": "Vandyke", "suffix": "" }, { "first": "Milica", "middle": [], "last": "Mrksic", "suffix": "" }, { "first": "Lina", "middle": [ "Maria" ], "last": "Gasic", "suffix": "" }, { "first": "Pei-Hao", "middle": [], "last": "Rojas-Barahona", "suffix": "" }, { "first": "Stefan", "middle": [], "last": "Su", "suffix": "" }, { "first": "Steve", "middle": [ "J" ], "last": "Ultes", "suffix": "" }, { "first": "", "middle": [], "last": "Young", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "438--449", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tsung-Hsien Wen, David Vandyke, Nikola Mrksic, Milica Gasic, Lina Maria Rojas-Barahona, Pei-Hao Su, Stefan Ultes, and Steve J. Young. 2017. A network-based end-to-end trainable task-oriented di- alogue system. In Proceedings of the 15th Confer- ence of the European Chapter of the Association for Computational Linguistics, pages 438-449. Associ- ation for Computational Linguistics.", "links": null }, "BIBREF51": { "ref_id": "b51", "title": "Tod-bert: Pre-trained natural language understanding for task-oriented dialogues", "authors": [ { "first": "Chien-Sheng", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Hoi", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Caiming", "middle": [], "last": "Xiong", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chien-Sheng Wu, Steven Hoi, Richard Socher, and Caiming Xiong. 2020. Tod-bert: Pre-trained natural language understanding for task-oriented dialogues.", "links": null }, "BIBREF52": { "ref_id": "b52", "title": "Building task-oriented dialogue systems for online shopping", "authors": [ { "first": "Zhao", "middle": [], "last": "Yan", "suffix": "" }, { "first": "Nan", "middle": [], "last": "Duan", "suffix": "" }, { "first": "Peng", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Jianshe", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Zhoujun", "middle": [], "last": "Li", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, AAAI'17", "volume": "", "issue": "", "pages": "4618--4625", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhao Yan, Nan Duan, Peng Chen, Ming Zhou, Jianshe Zhou, and Zhoujun Li. 2017. Building task-oriented dialogue systems for online shopping. In Proceed- ings of the Thirty-First AAAI Conference on Artifi- cial Intelligence, AAAI'17, page 4618-4625. 
AAAI Press.", "links": null }, "BIBREF53": { "ref_id": "b53", "title": "Recurrent neural networks for language understanding", "authors": [ { "first": "Kaisheng", "middle": [], "last": "Yao", "suffix": "" }, { "first": "Geoffrey", "middle": [], "last": "Zweig", "suffix": "" }, { "first": "Mei-Yuh", "middle": [], "last": "Hwang", "suffix": "" }, { "first": "Yangyang", "middle": [], "last": "Shi", "suffix": "" }, { "first": "Dong", "middle": [], "last": "Yu", "suffix": "" } ], "year": 2013, "venue": "Proceedings of Interspeech", "volume": "", "issue": "", "pages": "2524--2528", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kaisheng Yao, Geoffrey Zweig, Mei-Yuh Hwang, Yangyang Shi, and Dong Yu. 2013. Recurrent neural networks for language understanding. In Proceed- ings of Interspeech, pages 2524-2528.", "links": null }, "BIBREF54": { "ref_id": "b54", "title": "Joint dialog act segmentation and recognition in human conversations using attention to dialog context", "authors": [ { "first": "Tianyu", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Tatsuya", "middle": [], "last": "Kawahara", "suffix": "" } ], "year": 2019, "venue": "Computer Speech & Language", "volume": "57", "issue": "", "pages": "108--127", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tianyu Zhao and Tatsuya Kawahara. 2019. Joint dialog act segmentation and recognition in human conver- sations using attention to dialog context. Computer Speech & Language, 57:108 -127.", "links": null }, "BIBREF55": { "ref_id": "b55", "title": "A review of the research on dialogue management of task-oriented systems", "authors": [ { "first": "Yin Jiang", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Yan", "middle": [ "Ling" ], "last": "Li", "suffix": "" }, { "first": "Min", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2019, "venue": "Journal of Physics: Conference Series", "volume": "1267", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yin Jiang Zhao, Yan Ling Li, and Min Lin. 2019. A review of the research on dialogue management of task-oriented systems. Journal of Physics: Confer- ence Series, 1267:012025.", "links": null }, "BIBREF56": { "ref_id": "b56", "title": "Multi-task learning for natural language generation in task-oriented dialogue", "authors": [ { "first": "Chenguang", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Zeng", "suffix": "" }, { "first": "Xuedong", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "1261--1266", "other_ids": { "DOI": [ "10.18653/v1/D19-1123" ] }, "num": null, "urls": [], "raw_text": "Chenguang Zhu, Michael Zeng, and Xuedong Huang. 2019. Multi-task learning for natural language gen- eration in task-oriented dialogue. 
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1261-1266.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "(a) Opening screen view (b) Map listing view (c) Map listing+New search view.", "type_str": "figure", "num": null, "uris": null }, "FIGREF1": { "text": "For instance, Figure 2 shows a sentence and a part of its annotation.", "type_str": "figure", "num": null, "uris": null }, "FIGREF2": { "text": "An annotated training data example for NLU.", "type_str": "figure", "num": null, "uris": null }, "FIGREF3": { "text": "Examples of dialog acts and reference sentences.", "type_str": "figure", "num": null, "uris": null }, "TABREF1": { "content": "
", "html": null, "text": "", "type_str": "table", "num": null }, "TABREF2": { "content": "
(type = 'compare', name = 'Cafe Botanica', price_range = 'Ortalama', other_venues_names = 'Mayday Cafe Bar, Mevlana Lokantası, Cafe de Kedi', other_venues_price_range = 'Ucuz')
i)
", "html": null, "text": "(Lezzet Mekan in Istanbul Caddebostan, where you can eat desserts and dishes from the world cuisine, is a restaurant that offers expensive dishes and where customers are very satisfied.) Cafe Botanica; ucuz fiyatl Mayday Cafe Bar, Mevlana Lokantas , Cafe de Kedi'ye k yasla ortalama fiyatl bir mekand r. (Cafe Botanica is an average-priced venue compared to the cheaply priced Mayday Cafe Bar, Mevlana Lokantas and Cafe de Kedi.) ii) Cafe Botanica ortalama fiyatlardayken Mayday Cafe Bar, Mevlana Lokantas ve Cafe de Kedi ucuz mekanlard r (While Cafe Botanica is at average prices, Mayday Cafe Bar, Mevlana Restaurant and Cafe are cheap venues.) iii) Ortalama fiyatlar yla bilinen Cafe Botanica, Mayday Cafe Bar, Mevlana Lokantas ve Cafe de Kedi gibi mekanlar n ucuz men\u00fclerine k yasla pahal kalmaktad r (Cafe Botanica which is known with its average prices is expensive compared to the venues with cheap menus Mayday Cafe Bar, Mevlana Lokantas and Cafe de Kedi.)", "type_str": "table", "num": null }, "TABREF4": { "content": "", "html": null, "text": "Word representations.", "type_str": "table", "num": null }, "TABREF6": { "content": "
Act Type        Training  Validation  Test
inform          1690      220         200
inform only     448       57          45
inform not      662       81          93
inform all      109       14          20
request         217       24          34
compare         120       11          12
compare only    114       13          16
", "html": null, "text": "Properties of input datasets.", "type_str": "table", "num": null }, "TABREF7": { "content": "", "html": null, "text": "Distribution of action types in datasets.", "type_str": "table", "num": null }, "TABREF9": { "content": "
", "html": null, "text": "Performance scores of different models.", "type_str": "table", "num": null } } } }