{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T01:18:06.006774Z" }, "title": "ADVISER: A Toolkit for Developing Multi-modal, Multi-domain and Socially-engaged Conversational Agents", "authors": [ { "first": "Chia-Yu", "middle": [], "last": "Li", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Stuttgart", "location": {} }, "email": "" }, { "first": "Daniel", "middle": [], "last": "Ortega", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Stuttgart", "location": {} }, "email": "" }, { "first": "Dirk", "middle": [], "last": "V\u00e4th", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Stuttgart", "location": {} }, "email": "" }, { "first": "Florian", "middle": [], "last": "Lux", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Stuttgart", "location": {} }, "email": "" }, { "first": "Lindsey", "middle": [], "last": "Vanderlyn", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Stuttgart", "location": {} }, "email": "" }, { "first": "Maximilian", "middle": [], "last": "Schmidt", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Stuttgart", "location": {} }, "email": "" }, { "first": "Michael", "middle": [], "last": "Neumann", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Stuttgart", "location": {} }, "email": "" }, { "first": "Moritz", "middle": [], "last": "V\u00f6lkel", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Stuttgart", "location": {} }, "email": "" }, { "first": "Pavel", "middle": [], "last": "Denisov", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Stuttgart", "location": {} }, "email": "" }, { "first": "Sabrina", "middle": [], "last": "Jenne", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Stuttgart", "location": {} }, "email": "" }, { "first": "Zorica", "middle": [], "last": "Kacarevic", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Stuttgart", "location": {} }, "email": "" }, { "first": "Ngoc", "middle": [ "Thang" ], "last": "Vu", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Stuttgart", "location": {} }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We present ADVISER 1-an open-source, multi-domain dialog system toolkit that enables the development of multi-modal (incorporating speech, text and vision), sociallyengaged (e.g. emotion recognition, engagement level prediction and backchanneling) conversational agents. The final Python-based implementation of our toolkit is flexible, easy to use, and easy to extend not only for technically experienced users, such as machine learning researchers, but also for less technically experienced users, such as linguists or cognitive scientists, thereby providing a flexible platform for collaborative research.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "We present ADVISER 1-an open-source, multi-domain dialog system toolkit that enables the development of multi-modal (incorporating speech, text and vision), sociallyengaged (e.g. emotion recognition, engagement level prediction and backchanneling) conversational agents. 
The final Python-based implementation of our toolkit is flexible, easy to use, and easy to extend not only for technically experienced users, such as machine learning researchers, but also for less technically experienced users, such as linguists or cognitive scientists, thereby providing a flexible platform for collaborative research.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Dialog systems or chatbots, both text-based and multi-modal, have received much attention in recent years, with an increasing number of dialog systems in both industrial contexts such as Amazon Alexa, Apple Siri, Microsoft Cortana, Google Duplex, XiaoIce (Zhou et al., 2018) and Furhat 2 , as well as academia such as MuMMER (Foster et al., 2016) and Alana (Curry et al., 2018) . However, open-source toolkits and frameworks for developing such systems are rare, especially for developing multi-modal systems comprised of speech, text, and vision. Most of the existing toolkits are designed for developing dialog systems focused only on core dialog components, with or without the option to access external speech processing services (Bohus and Rudnicky, 2009; Baumann and Schlangen, 2012; Lison and Kennington, 2016; Ultes et al., 2017; Ortega et al., 2019; Lee et al., 2019) .", "cite_spans": [ { "start": 255, "end": 274, "text": "(Zhou et al., 2018)", "ref_id": "BIBREF37" }, { "start": 325, "end": 346, "text": "(Foster et al., 2016)", "ref_id": "BIBREF11" }, { "start": 357, "end": 377, "text": "(Curry et al., 2018)", "ref_id": "BIBREF7" }, { "start": 734, "end": 760, "text": "(Bohus and Rudnicky, 2009;", "ref_id": "BIBREF4" }, { "start": 761, "end": 789, "text": "Baumann and Schlangen, 2012;", "ref_id": "BIBREF1" }, { "start": 790, "end": 817, "text": "Lison and Kennington, 2016;", "ref_id": "BIBREF17" }, { "start": 818, "end": 837, "text": "Ultes et al., 2017;", "ref_id": "BIBREF28" }, { "start": 838, "end": 858, "text": "Ortega et al., 2019;", "ref_id": "BIBREF22" }, { "start": 859, "end": 876, "text": "Lee et al., 2019)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To the best of our knowledge, there are only two toolkits, proposed in (Foster et al., 2016) and (Bohus et al., 2017) , that support developing dialog agents using multi-modal processing and social signals (Wagner et al., 2013) . Both provide a decent platform for building systems, however, to the best of our knowledge, the former is not open-source, and the latter is based on the .NET platform, which could be less convenient for non-technical users such as linguists and cognitive scientists, who play an important role in dialog research.", "cite_spans": [ { "start": 71, "end": 92, "text": "(Foster et al., 2016)", "ref_id": "BIBREF11" }, { "start": 97, "end": 117, "text": "(Bohus et al., 2017)", "ref_id": "BIBREF3" }, { "start": 206, "end": 227, "text": "(Wagner et al., 2013)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we introduce a new version of ADVISER -previously a text-based, multi-domain dialog system toolkit (Ortega et al., 2019) -that supports multi-modal dialogs, including speech, text and vision information processing. This provides a new option for building dialog systems that is open-source and Python-based for easy use and fast prototyping. 
The toolkit is designed in such a way that it is modular, flexible, transparent, and user-friendly for both technically experienced and less technically experienced users.", "cite_spans": [ { "start": 114, "end": 135, "text": "(Ortega et al., 2019)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Furthermore, we add novel features to AD-VISER, allowing it to process social signals and to incorporate them into the dialog flow. We believe that these features will be key to developing humanlike dialog systems because it is well-known that social signals, such as emotional states and engagement levels, play an important role in human computer interaction (McTear et al., 2016) . However in contrast to open-ended dialog systems (Weizenbaum, 1966) , our toolkit focuses on task-oriented applications (Bobrow et al., 1977) , such as searching for a lecturer at the university (Ortega et al., 2019) . The purpose we envision for dialog systems developed using our toolkit is not the same as the objective of a social chatbot such as XiaoIce (Zhou et al., 2018) . Rather than promoting \"an AI companion with an emotional connection to satisfy the human need for communication, affection, and social belonging\" (Zhou et al., 2018) , ADVISER helps develop dialog systems that support users in efficiently fulfilling concrete goals, while at the same time considering social signals such as emotional states and engagement levels so as to remain friendly and likeable.", "cite_spans": [ { "start": 361, "end": 382, "text": "(McTear et al., 2016)", "ref_id": "BIBREF18" }, { "start": 434, "end": 452, "text": "(Weizenbaum, 1966)", "ref_id": "BIBREF34" }, { "start": 505, "end": 526, "text": "(Bobrow et al., 1977)", "ref_id": "BIBREF2" }, { "start": 580, "end": 601, "text": "(Ortega et al., 2019)", "ref_id": "BIBREF22" }, { "start": 744, "end": 763, "text": "(Zhou et al., 2018)", "ref_id": "BIBREF37" }, { "start": 912, "end": 931, "text": "(Zhou et al., 2018)", "ref_id": "BIBREF37" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The main objective of this work is to develop a multi-domain dialog system toolkit that allows for multi-modal information processing and that provides different modules for extracting social signals such as emotional states and for integrating them into the decision making process. The toolkit should be easy to use and extend for users of all levels of technical experience, providing a flexible collaborative research platform.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Objectives", "sec_num": "2" }, { "text": "We extend and substantially modify our previous, text-based dialog system toolkit (Ortega et al., 2019) while following the same design choices. This means that our toolkit is meant to optimize the following four criteria: Modularity, Flexibility, Transparency and User-friendliness at different levels. This is accomplished by decomposing the dialog system into independent modules (services), which in turn are either rule-based, machine learning-based or both. 
These services can easily be combined in different orders/architectures, providing users with flexible options to design new dialog architectures.", "cite_spans": [ { "start": 82, "end": 103, "text": "(Ortega et al., 2019)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Toolkit Design", "sec_num": "2.1" }, { "text": "Multi-modality The main challenges in handling multi-modality are a) the design of a synchronization infrastructure and b) the large range of different latencies from different modalities. To alleviate the former, we use the publisher/subscriber software pattern presented in section 4 to synchronize signals coming from different sources. This software pattern also allows for services to run in a distributed manner. By assigning computationally heavy tasks such as speech recognition and speech synthesis to a more powerful computing node, it is possible to reduce differences in latency when processing different modalities, therefore achieving more natural interactions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Challenges & Proposed Solutions", "sec_num": "2.2" }, { "text": "Socially-Engaged Systems Determining the ideal scope of a socially-engaged dialog system is a complex issue, that is which information should be extracted from users and how the system can best react to these signals. Here we focus on two major social signals: emotional states and engagement levels (see section 3.1), and maintain an internal user state to track them over the course of a dialog. Note that the toolkit is designed in such a way that any social signal could be extracted and leveraged in the dialog manager. In order to react to social signals extracted from the user, we provide an initial affective policy module (see section 3.5) and an initial affective NLG module (see section 3.7), which could be easily extended to more sophisticated behavior. Furthermore, we provide a backchanneling module that enables the dialog system to give feedback to users during conversations. Utilizing these features could lead to increased trust and enhance the impression of an empathetic system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Challenges & Proposed Solutions", "sec_num": "2.2" }, { "text": "We present the three modules of ADVISER for processing social signals: (a) emotion recognition, (b) engagement level prediction, and (c) backchanneling. Figure 1 illustrates an example of our system tracking emotion states and engagement levels.", "cite_spans": [], "ref_spans": [ { "start": 153, "end": 161, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Social Signal Processing", "sec_num": "3.1" }, { "text": "Multi-modal Emotion Recognition For recognizing a user's emotional state, all three available modalities -text, audio, and vision -can potentially be exploited, as they can deliver complementary information (Zeng et al., 2009) . Therefore, the emotion recognition module can subscribe to the particular input streams of interest (see section 4 for details) and apply emotion prediction either in a time-continuous fashion or discretely per turn.", "cite_spans": [ { "start": 207, "end": 226, "text": "(Zeng et al., 2009)", "ref_id": "BIBREF36" } ], "ref_spans": [], "eq_spans": [], "section": "Social Signal Processing", "sec_num": "3.1" }, { "text": "In our example implementation in the toolkit, we integrate speech emotion recognition, i.e. using the acoustic signal as features. 
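For illustration, a classifier of the kind described in the next sentences can be sketched as follows: log Mel filterbank features fed into a small convolutional network. This is a minimal sketch only; the layer sizes and pooling choices are illustrative assumptions and do not reproduce the exact architecture of Neumann and Vu (2017).

import torch
import torch.nn as nn

class SpeechEmotionCNN(nn.Module):
    """Minimal CNN over log Mel filterbank features of shape (batch, 1, n_mels, n_frames)."""
    def __init__(self, n_mels: int = 40, n_classes: int = 4):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),  # pool to a fixed size, independent of utterance length
        )
        self.classifier = nn.Linear(32 * 4 * 4, n_classes)

    def forward(self, log_mels: torch.Tensor) -> torch.Tensor:
        # returns unnormalized scores over the emotion classes
        return self.classifier(self.conv(log_mels).flatten(1))

# e.g. scores over {angry, happy, neutral, sad} for a 3-second utterance at 100 frames per second
model = SpeechEmotionCNN()
scores = model(torch.randn(1, 1, 40, 300))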
Based on the work presented in (Neumann and Vu, 2017) we use log Mel filterbank coefficients as input to convolutional neural networks (CNNs). For the sake of modularity, three separate models are employed for predicting different types of labels: (a) basic emotions {angry, happy, neutral, sad}, (b) arousal levels {low, medium, high}, and (c) valence levels {negative, neutral, positive}. The models are trained on the IEMOCAP dataset (Busso et al., 2008) . The output of the emotion recognition module consists of three predictions per user turn, which can then be used by the user state tracker (see section 3.4). For future releases, we plan to incorporate multiple training datasets as well as visual features.", "cite_spans": [ { "start": 162, "end": 184, "text": "(Neumann and Vu, 2017)", "ref_id": "BIBREF19" }, { "start": 568, "end": 588, "text": "(Busso et al., 2008)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Social Signal Processing", "sec_num": "3.1" }, { "text": "Engagement Level Prediction User engagement is closely related to states such as boredom and level of interest, with implications for user satisfaction and task success (Forbes-Riley et al., 2012; Schuller et al., 2009) . In ADVISER, we assume that eye activity serves as an indicator of various mental states (Schuller et al., 2009; Niu et al., 2018) and implement a gaze tracker that monitors the user's direction of focus via webcam.", "cite_spans": [ { "start": 169, "end": 196, "text": "(Forbes-Riley et al., 2012;", "ref_id": "BIBREF10" }, { "start": 197, "end": 219, "text": "Schuller et al., 2009)", "ref_id": "BIBREF26" }, { "start": 310, "end": 333, "text": "(Schuller et al., 2009;", "ref_id": "BIBREF26" }, { "start": 334, "end": 351, "text": "Niu et al., 2018)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Social Signal Processing", "sec_num": "3.1" }, { "text": "Using OpenFace 2.2.0, a toolkit for facial behavior analysis (Baltrusaitis et al., 2018), we extract the features gaze angle x and gaze angle y, which capture left-right and up-down eye movement, for each frame and compute the deviation from the central point of the screen. If the deviation exceeds a certain threshold for a certain number of seconds, the user is assumed to look away from the screen, thereby disengaging. Thus, the output of our engagement level prediction module is the binary decision {looking, not looking}. Both the spatial and temporal sensitivity can be adjusted, such that developers have the option to decide how far and how long the user's gaze can stray from the central point until they are considered to be disengaged. In an adaptive system, this information could be used to select re-engagement strategies, e.g. using an affective template (see section 3.7).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Social Signal Processing", "sec_num": "3.1" }, { "text": "Backchanneling In a conversation, a backchannel (BC) is a soft interjection from the listener to the speaker, with the purpose of signaling acknowledgment or reacting to what was just uttered. Backchannels contribute to a successful conversation flow (Clark and Krych, 2004) . Therefore, we add an acoustic backchannel module to create a more human-like dialog experience. For backchannel prediction, we extract 13 Mel-frequency-cepstral coefficients from the user's speech signal, which form the input to the convolutional neural network based on Ortega et al. (2020) . 
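The feature extraction step for this module can be sketched as follows (a minimal example assuming the librosa library, a placeholder file name and a 16 kHz sampling rate; the toolkit's own audio pipeline may differ in framing and normalization):

import librosa

# Compute 13 MFCCs per frame from the user's speech signal; the resulting
# (13, n_frames) matrix serves as input to the backchannel CNN.
signal, sample_rate = librosa.load("user_turn.wav", sr=16000)  # placeholder path
mfccs = librosa.feature.mfcc(y=signal, sr=sample_rate, n_mfcc=13)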
The model assigns one of three categories from the proactive backchanneling theory (Goodwin, 1986) to each user utterance {no-backchannel, backchannel-continuer and backchannel-assessment}. The predicted category is used to add the backchannel realization, such as Right or Uh-huh, to the next system response.", "cite_spans": [ { "start": 251, "end": 274, "text": "(Clark and Krych, 2004)", "ref_id": "BIBREF6" }, { "start": 562, "end": 568, "text": "(2020)", "ref_id": null }, { "start": 654, "end": 669, "text": "(Goodwin, 1986)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Social Signal Processing", "sec_num": "3.1" }, { "text": "Automatic Speech Recognition (ASR) The speech recognition module receives a speech signal as input, which can come from an internal or external microphone, and outputs decoded text. The specific realization of ASR can be interchanged or adapted, for example for new languages or different ASR methods. We provide an end-to-end ASR model for English based on the Transformer neural network architecture. We use the end-to-end speech processing toolkit ESPnet (Watanabe et al., 2018) and the IMS-speech English multi-dataset recipe (Denisov and Vu, 2019) , updated to match the LibriSpeech Transformer-based system in ESPnet (Karita et al., 2019) and to include more training data. Training data comprises the LibriSpeech, Switchboard, TED-LIUM 3, AMI, WSJ, Common Voice 3, SWC, VoxForge and M-AILABS datasets with a total amount of 3249 hours. As input features, 80-dimensional log Mel filterbank coefficients are used. Output of the ASR model is a sequence of subword units, which include single characters as well as combinations of several characters, making the model lexicon independent.", "cite_spans": [ { "start": 458, "end": 481, "text": "(Watanabe et al., 2018)", "ref_id": "BIBREF33" }, { "start": 530, "end": 552, "text": "(Denisov and Vu, 2019)", "ref_id": "BIBREF9" }, { "start": 623, "end": 644, "text": "(Karita et al., 2019)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Speech Processing", "sec_num": "3.2" }, { "text": "Speech Synthesis For ADVISER's voice output, we use the ESPnet-TTS toolkit , which is an extension of the ESPnet toolkit mentioned above. We use FastSpeech as the synthesis model speeding up mel-spectrogram generation by a factor of 270 and voice generation by a factor of 38 compared to autoregressive Transformer TTS (Ren et al., 2019) . We use a Parallel Wave-GAN (Yamamoto et al., 2020) to generate waveforms that is computationally efficient and achieves a high mean opinion score of 4.16. The FastSpeech and WaveGAN models were trained with 24 hours of the LJSpeech dataset from a single speaker (Ito, 2017) and are capable of generating voice output in real-time when using a GPU. The synthesis can run on any device in a distributed system. Additionally, we optimize the synthesizer for abbreviations, such as Prof., Univ., IMS, NLP, ECTS and PhD, as well as for German proper names, such as street names. These optimizations can be easily extended.", "cite_spans": [ { "start": 319, "end": 337, "text": "(Ren et al., 2019)", "ref_id": "BIBREF23" }, { "start": 367, "end": 390, "text": "(Yamamoto et al., 2020)", "ref_id": "BIBREF35" }, { "start": 602, "end": 613, "text": "(Ito, 2017)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Speech Processing", "sec_num": "3.2" }, { "text": "Turn Taking To make interacting with the system more natural, we use a naive end-of-utterance detection. 
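The heuristic, spelled out in the remainder of this paragraph, can be sketched roughly as follows (chunk handling, threshold and patience values are illustrative assumptions, not the toolkit's exact parameters):

import numpy as np

def record_until_silence(chunks, threshold=0.05, max_silent_chunks=10):
    """Naive end-of-utterance detection: stop once a fixed number of consecutive
    chunks stays below an amplitude threshold. `chunks` is assumed to be an
    iterable of float arrays, e.g. short blocks delivered by an audio backend."""
    recording, silent = [], 0
    for chunk in chunks:
        recording.append(chunk)
        if np.max(np.abs(chunk)) < threshold:  # highest absolute peak of this chunk
            silent += 1
        else:
            silent = 0
        if silent >= max_silent_chunks:        # enough consecutive quiet chunks: stop recording
            break
    return np.concatenate(recording)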
Users indicate the start of their turn by pressing a hotkey, so they can choose to pause the interaction. The highest absolute peak of each recording chunk is then compared with a predefined threshold. If a certain number of sequential chunks do not peak above the threshold, the recording stops. We are currently in the process of planning more sophisticated turn taking models, such as Skantze et al. (2015) .", "cite_spans": [ { "start": 493, "end": 514, "text": "Skantze et al. (2015)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Speech Processing", "sec_num": "3.2" }, { "text": "The natural language understanding (NLU) unit parses the textual user input (De Mori et al., 2008) -or the output from the speech recognition system -and extracts the user action type, generally referred to as intent in goal-oriented dialog systems (e.g. Inform and Request), as well as the corresponding slots and values. The domain-independent, rule-based NLU presented in Ortega et al. (2019) is integrated into ADVISER and adapted to the new domains presented in section 5.", "cite_spans": [ { "start": 80, "end": 98, "text": "Mori et al., 2008)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Natural Language Understanding", "sec_num": "3.3" }, { "text": "Belief State Tracking (BST): The BST tracks the history of user informs and the user action types, requests, with one BST entry per turn. This information is stored in a dictionary structure that is built up as the user provides more details and the system has a better understanding of user intent.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "State Tracking", "sec_num": "3.4" }, { "text": "User State Tracking (UST): Similar to the BST, the UST tracks the history of the user's state over the course of a dialog, with one entry per turn. In the current implementation, the user state consists of the user's engagement level, valence, arousal, and emotion category (details in section 3.1).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "State Tracking", "sec_num": "3.4" }, { "text": "Policies To determine the correct system action, we provide three types of policy services: a handcrafted and a reinforcement learning policy for finding entities from a database (Ortega et al., 2019) , as well as a handcrafted policy for looking up information through an API call. Both handcrafted policies use a series of rules to help the user find a single entity or, once an entity has been found (or directly provided by the user), find information about that entity. The reinforcement learning (RL) policy's action-value function is approximated by a neural network which outputs a value for each possible system action, given the vectorized representation of a turn's belief state as input. The neural network is constructed as proposed in V\u00e4th and Vu (2019) following a duelling architecture (Wang et al., 2016) . It consists of two separate calculation streams, each with its own layers, where the final layer yields the action-value function. 
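A schematic PyTorch sketch of such a duelling action-value network is shown below (layer sizes are illustrative; the actual implementation follows Väth and Vu (2019) and may differ in detail):

import torch
import torch.nn as nn

class DuellingQNetwork(nn.Module):
    """Duelling architecture: a shared encoder followed by separate value and
    advantage streams, recombined into one action-value estimate per action."""
    def __init__(self, beliefstate_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(beliefstate_dim, hidden), nn.ReLU())
        self.value_stream = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1))
        self.advantage_stream = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, n_actions))

    def forward(self, beliefstate_vector: torch.Tensor) -> torch.Tensor:
        h = self.encoder(beliefstate_vector)
        value = self.value_stream(h)          # V(s): shape (batch, 1)
        advantage = self.advantage_stream(h)  # A(s, a): shape (batch, n_actions)
        # Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)
        return value + advantage - advantage.mean(dim=1, keepdim=True)

# one Q-value per possible system action for a single vectorized belief state
q_values = DuellingQNetwork(beliefstate_dim=100, n_actions=20)(torch.randn(1, 100))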
For off-policy batch-training, we make use of prioritized experience replay (Schaul et al., 2015) .", "cite_spans": [ { "start": 179, "end": 200, "text": "(Ortega et al., 2019)", "ref_id": "BIBREF22" }, { "start": 802, "end": 821, "text": "(Wang et al., 2016)", "ref_id": "BIBREF32" }, { "start": 1031, "end": 1052, "text": "(Schaul et al., 2015)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Dialog Policies", "sec_num": "3.5" }, { "text": "Affective Policy In addition, we have also implemented a rule-based affective policy service that can be used to determine the system's emotional response. As this policy is domain-agnostic, predicting the next system emotion output rather than the next system action, it can be used alongside any of the previously mentioned policies.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dialog Policies", "sec_num": "3.5" }, { "text": "User Simulator To support automatic evaluation and to train the RL policy, we provide a user simulator service outputting at the user acts level. As we are concerned with task-oriented dialogs here, the user simulator has an agenda-based (Schatzmann et al., 2007) architecture and is randomly assigned a goal at the beginning of the dialog. Each turn, it then works to first respond to the system utterance, and then after to fulfill its own goal. When the system utterance also works toward fulfilling the user goal, the RL policy is rewarded by achieving a shorter total dialog turn count (Ortega et al., 2019) .", "cite_spans": [ { "start": 238, "end": 263, "text": "(Schatzmann et al., 2007)", "ref_id": "BIBREF24" }, { "start": 591, "end": 612, "text": "(Ortega et al., 2019)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Dialog Policies", "sec_num": "3.5" }, { "text": "ADVISER supports three options to access information from external information sources. In addition to being able to query information from SQL-based databases, we add two new options that includes querying information via APIs and from knowledge bases (e.g. Wikidata (Vrande\u010di\u0107 and Kr\u00f6tzsch, 2014) ). For example, when a user asks a simple question -Where was Dirk Nowitzki born?, our pretrained neural network predicts the topic entity -Dirk Nowitzki -and the relation -place of birth.", "cite_spans": [ { "start": 268, "end": 298, "text": "(Vrande\u010di\u0107 and Kr\u00f6tzsch, 2014)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "External Information Resources", "sec_num": "3.6" }, { "text": "Then, the answer is automatically looked up using Wikidata's SPARQL endpoint.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "External Information Resources", "sec_num": "3.6" }, { "text": "In the NLG service, the semantic representation of the system act is transformed into natural language. ADVISER currently uses a template-based approach to NLG in which each possible system act is mapped to exactly one utterance. A special syntax using placeholders reduces the number of templates needed and accounts for correct morphological inflections (Ortega et al., 2019) . Additionally, we developed an affective NLG service, which allows for different templates to be used depending on the user's emotional state. This enables a more sensitive/adaptive system. For example, if the user is sad and the system does not understand the user's input, it might try to establish common ground to prevent their mood from getting worse due to the bad news. 
An example response would be \"As much as I would love to help, I am a bit confused\" rather than the more neutral \"Sorry I am a bit confused\". One set of NLG templates can be specified for each possible emotional state. At runtime, the utterance is then generated from the template associated with the current system emotion and system action.", "cite_spans": [ { "start": 356, "end": 377, "text": "(Ortega et al., 2019)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Natural Language Generation (NLG)", "sec_num": "3.7" }, { "text": "4 Software Architecture", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Natural Language Generation (NLG)", "sec_num": "3.7" }, { "text": "To allow for maximum flexibility in combining and reusing components, we consider a dialog system as a group of services which communicate asynchronously by publishing/subscribing to certain topics. A service is called as soon as at least one message for all its subscribed topics is received and may additionally publish to one or more topics. Services can elect to receive the most recent message for a topic (e.g. up-to-date belief state) or a list of all messages for that topic since the last service call (e.g. a list of video frames). Constructing a dialog system in this way allows us to break free from a pipeline architecture. Each step in the dialog process is represented by one or more services which can operate in parallel or sequentially. For example, tasks like video and speech capture may be performed and processed in parallel before being synchronized by a user state tracking module subscribing to input from both sources. Figure 2 illustrates the system architecture. For debugging purposes, we provide a utility to draw the dialog graph, showing the information flow between services, including remote services, and any inconsistencies in publish/subscribe connections.", "cite_spans": [], "ref_spans": [ { "start": 945, "end": 953, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Dialog as a Collection of Services", "sec_num": "4.1" }, { "text": "Services are location-transparent and may thus be distributed across multiple machines. A central dialog system discovers local and remote services and provides synchronization guarantees for dialog initialization and termination. Distribution of services enables, for instance, a more powerful computer to handle tasks such as real-time text-to-speech generation (see Figure 2 ). This is particularly helpful when multiple resource-heavy tasks are combined into a single dialog system.", "cite_spans": [], "ref_spans": [ { "start": 369, "end": 377, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Support for Distributed Systems", "sec_num": "4.2" }, { "text": "In addition to providing multi-modal support, the publish/subscribe framework also allows for multidomain support by providing a structure which enables arbitrary branching and rejoining of graph structures. When a service is created, users simply specify which domain(s) it should publish/subscribe to. 
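For illustration, a single service in this architecture can be thought of along the following lines (a sketch only: the import path, class and decorator names are assumptions modelled on the description above, not a verbatim excerpt of the toolkit's API):

from services.service import Service, PublishSubscribe  # assumed import path

class TemplateNLG(Service):
    # The method is called as soon as a message for its subscribed topic arrives;
    # the returned dictionary is published under the listed topic name.
    @PublishSubscribe(sub_topics=["sys_act"], pub_topics=["sys_utterance"])
    def generate(self, sys_act=None):
        return {"sys_utterance": f"System act received: {sys_act}"}

A domain-specific service of this kind would additionally be constructed with the domain it is responsible for, so that it only receives and publishes messages belonging to that domain.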
This, in combination with a domain tracking service, allows for seamless integration of domain-agnostic services (such as speech input/output) and domain-specific services (such as NLU/NLG for the lecturers domain).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Support for Multi-Domain Systems", "sec_num": "4.3" }, { "text": "5 Example Use Cases", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Support for Multi-Domain Systems", "sec_num": "4.3" }, { "text": "We provide several example domains to demonstrate ADVISER's functionalities. Databases for lecturers and courses at the Institute for Natural Language Processing (IMS), which we used in the previous version of ADVISER, were adapted to the new system architecture. As example APIs, we implemented a weather domain that makes calls to the OpenWeatherMap API 3 and a mensa domain for gathering information from the dining hall at the University of Stuttgart. Note that affective templates were only added to the lecturers and mensa domains. All domains can be used within the same dialog, simply by switching the topic.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Example Domains", "sec_num": "5.1" }, { "text": "Our toolkit allows for easy creation of a dialog system within a few lines of code as follows. As a first step, a dialog system object is initialized, which is responsible for coordinating the initialization and graceful termination of all dialog services. Talking about multiple domains in one dialog is enabled by creating a simple keyword-based domain tracker which is introduced as the first argument to the dialog system. To make the dialog multi-modal, speech and vision modules are introduced next, along with modules to extract engagement and emotion. So far, all of these modules are domain-agnostic and can be used as shared resources between all domains. Next, domain-dependent services such as NLUs, BSTs and NLGs for weather and mensa are added; a schematic sketch of such a setup is given below.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "An Example Implementation", "sec_num": "5.2" }, { "text": "Other tools for building dialog systems include ConvLab (Lee et al., 2019) , an open-source, text-based dialog system platform that supports both pipelined architectures and an end-to-end neural architecture. ConvLab also provides reusable components and supports multi-domain settings. Other systems are largely text-based, but offer the incorporation of external speech components. In-proTK (Baumann and Schlangen, 2012) , for instance, in which modules communicate by networks via configuration files, uses ASR based on Sphinx-4 and synthesis based on MaryTTS. Similarly, RavenClaw (Bohus and Rudnicky, 2009) provides a framework for creating dialog managers; ASR and synthesis components can be supplied, for example, by connecting to Sphinx and Kalliope. OpenDial (Lison and Kennington, 2016) relies on probabilistic rules and provides options to connect to speech components such as Sphinx. 
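Returning to the example implementation described in section 5.2, such a setup can be sketched as follows (all class and method names are illustrative placeholders based on that description, not the toolkit's verbatim API, and the snippet assumes the corresponding services and domain objects exist):

# Keyword-based domain tracker first, so turns can be routed to the right domain.
domain_tracker = KeywordDomainTracker(domains=[weather_domain, mensa_domain])

dialog_system = DialogSystem(services=[
    domain_tracker,
    # domain-agnostic services, shared across all domains
    SpeechInput(), SpeechOutput(), VideoInput(),
    EngagementTracker(), EmotionRecognition(),
    # domain-dependent services, one set per domain
    NLU(domain=weather_domain), BST(domain=weather_domain), NLG(domain=weather_domain),
    NLU(domain=mensa_domain), BST(domain=mensa_domain), NLG(domain=mensa_domain),
])
dialog_system.run_dialog()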
Multidomain dialog toolkit -PyDial (Ultes et al., 2017) supports connection to DialPort.", "cite_spans": [ { "start": 56, "end": 74, "text": "(Lee et al., 2019)", "ref_id": "BIBREF16" }, { "start": 392, "end": 421, "text": "(Baumann and Schlangen, 2012)", "ref_id": "BIBREF1" }, { "start": 584, "end": 610, "text": "(Bohus and Rudnicky, 2009)", "ref_id": "BIBREF4" }, { "start": 768, "end": 796, "text": "(Lison and Kennington, 2016)", "ref_id": "BIBREF17" }, { "start": 931, "end": 951, "text": "(Ultes et al., 2017)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "As mentioned in the introduction, Microsoft Research's \\psi is an open and extensible platform that supports the development of multi-modal AI systems (Bohus et al., 2017) . It further offers audio and visual processing, such as speech recognition and face tracking, as well as output, such as synthesis and avatar rendering. And the MuMMER (multimodal Mall Entertainment Robot) project (Foster et al., 2016) is based on the SoftBank Robotics Pepper platform, and thereby comprises processing of audio-, visual-and social signals, with the aim to develop a socially engaging robot that can be deployed in public spaces.", "cite_spans": [ { "start": 151, "end": 171, "text": "(Bohus et al., 2017)", "ref_id": "BIBREF3" }, { "start": 387, "end": 408, "text": "(Foster et al., 2016)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "We introduce ADVISER -an open-source, multidomain dialog system toolkit that allows users to easily develop multi-modal and socially-engaged conversational agents. We provide a large variety of functionalities, ranging from speech processing to core dialog system capabilities and social signal processing. With this toolkit, we hope to provide a flexible platform for collaborative research in multi-domain, multi-modal, socially-engaged conversational agents.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "7" }, { "text": "Link to open-source code: https://github.com/ DigitalPhonetics/adviser 2 https://docs.furhat.io", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "OpenFace 2.0: Facial Behavior Analysis Toolkit", "authors": [ { "first": "Tadas", "middle": [], "last": "Baltrusaitis", "suffix": "" }, { "first": "Amir", "middle": [], "last": "Zadeh", "suffix": "" }, { "first": "Chong", "middle": [], "last": "Yao", "suffix": "" }, { "first": "Louis-Philippe", "middle": [], "last": "Lim", "suffix": "" }, { "first": "", "middle": [], "last": "Morency", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tadas Baltrusaitis, Amir Zadeh, Yao Chong Lim, and Louis-Philippe Morency. 2018. OpenFace 2.0: Fa- cial Behavior Analysis Toolkit.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "The In-proTK 2012 Release", "authors": [ { "first": "Timo", "middle": [], "last": "Baumann", "suffix": "" }, { "first": "David", "middle": [], "last": "Schlangen", "suffix": "" } ], "year": 2012, "venue": "NAACL-HLT Workshop on Future Directions and Needs in the Spoken Dialog Community: Tools and Data", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Timo Baumann and David Schlangen. 2012. The In- proTK 2012 Release. 
In NAACL-HLT Workshop on Future Directions and Needs in the Spoken Dialog Community: Tools and Data.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Gus, a frame-driven dialog system", "authors": [ { "first": "G", "middle": [], "last": "Daniel", "suffix": "" }, { "first": "Ronald", "middle": [ "M" ], "last": "Bobrow", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Kaplan", "suffix": "" }, { "first": "Donald", "middle": [ "A" ], "last": "Kay", "suffix": "" }, { "first": "Henry", "middle": [], "last": "Norman", "suffix": "" }, { "first": "Terry", "middle": [], "last": "Thompson", "suffix": "" }, { "first": "", "middle": [], "last": "Winograd", "suffix": "" } ], "year": 1977, "venue": "Artificial Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel G. Bobrow, Ronald M. Kaplan, Martin Kay, Donald A. Norman, Henry Thompson, and Terry Winograd. 1977. Gus, a frame-driven dialog system. Artificial Intelligence.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Rapid Development of Multimodal Interactive Systems: A Demonstration of Platform for Situated Intelligence", "authors": [ { "first": "Dan", "middle": [], "last": "Bohus", "suffix": "" }, { "first": "Sean", "middle": [], "last": "Andrist", "suffix": "" }, { "first": "Mihai", "middle": [], "last": "Jalobeanu", "suffix": "" } ], "year": 2017, "venue": "ICMI '17: Proceedings of the 19th ACM International Conference on Multimodal Interaction", "volume": "", "issue": "", "pages": "493--494", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dan Bohus, Sean Andrist, and Mihai Jalobeanu. 2017. Rapid Development of Multimodal Interactive Sys- tems: A Demonstration of Platform for Situated In- telligence. In ICMI '17: Proceedings of the 19th ACM International Conference on Multimodal Inter- action, pages 493-494.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "The ravenclaw dialog management framework: Architecture and systems", "authors": [ { "first": "Dan", "middle": [], "last": "Bohus", "suffix": "" }, { "first": "Alexander", "middle": [ "I" ], "last": "Rudnicky", "suffix": "" } ], "year": 2009, "venue": "Computer Speech & Language", "volume": "23", "issue": "3", "pages": "332--361", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dan Bohus and Alexander I Rudnicky. 2009. The ravenclaw dialog management framework: Architec- ture and systems. Computer Speech & Language, 23(3):332-361.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Iemocap: Interactive emotional dyadic motion capture database. Language resources and evaluation", "authors": [ { "first": "Carlos", "middle": [], "last": "Busso", "suffix": "" }, { "first": "Murtaza", "middle": [], "last": "Bulut", "suffix": "" }, { "first": "Chi-Chun", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Abe", "middle": [], "last": "Kazemzadeh", "suffix": "" }, { "first": "Emily", "middle": [], "last": "Mower", "suffix": "" }, { "first": "Samuel", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Jeannette", "middle": [ "N" ], "last": "Chang", "suffix": "" }, { "first": "Sungbok", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Shrikanth S", "middle": [], "last": "Narayanan", "suffix": "" } ], "year": 2008, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Carlos Busso, Murtaza Bulut, Chi-Chun Lee, Abe Kazemzadeh, Emily Mower, Samuel Kim, Jean- nette N Chang, Sungbok Lee, and Shrikanth S Narayanan. 
2008. Iemocap: Interactive emotional dyadic motion capture database. Language re- sources and evaluation, 42(4).", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Speaking while monitoring addressees for understanding", "authors": [ { "first": "H", "middle": [], "last": "Clark", "suffix": "" }, { "first": "M", "middle": [], "last": "Krych", "suffix": "" } ], "year": 2004, "venue": "Journal of Memory and Language", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "H. H Clark and M. A Krych. 2004. Speaking while monitoring addressees for understanding. Journal of Memory and Language.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Alana v2: Entertaining and Informative Opendomain Social Dialogue using Ontologies and Entity Linking", "authors": [ { "first": "Amanda Cercas", "middle": [], "last": "Curry", "suffix": "" }, { "first": "Ioannis", "middle": [], "last": "Papaioannou", "suffix": "" }, { "first": "Alessandro", "middle": [], "last": "Suglia", "suffix": "" }, { "first": "Shubham", "middle": [], "last": "Agarwal", "suffix": "" }, { "first": "Igor", "middle": [], "last": "Shalyminov", "suffix": "" }, { "first": "Xinnuo", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Ond\u0213ej", "middle": [], "last": "Dusek", "suffix": "" }, { "first": "Arash", "middle": [], "last": "Eshghi", "suffix": "" }, { "first": "Ioannis", "middle": [], "last": "Konstas", "suffix": "" }, { "first": "Verena", "middle": [], "last": "Rieser", "suffix": "" }, { "first": "Oliver", "middle": [], "last": "Lemon", "suffix": "" } ], "year": 2018, "venue": "1st Proceedings of Alexa Prize", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Amanda Cercas Curry, Ioannis Papaioannou, Alessan- dro Suglia, Shubham Agarwal, Igor Shalyminov, Xinnuo Xu, Ond\u0213ej Dusek, Arash Eshghi, Ioan- nis Konstas, Verena Rieser, and Oliver Lemon. 2018. Alana v2: Entertaining and Informative Open- domain Social Dialogue using Ontologies and Entity Linking. In 1st Proceedings of Alexa Prize.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Spoken language understanding", "authors": [ { "first": "Renato", "middle": [ "De" ], "last": "Mori", "suffix": "" }, { "first": "Fr\u00e9d\u00e9ric", "middle": [], "last": "Bechet", "suffix": "" }, { "first": "Dilek", "middle": [], "last": "Hakkani-Tur", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Mctear", "suffix": "" }, { "first": "Giuseppe", "middle": [], "last": "Riccardi", "suffix": "" }, { "first": "Gokhan", "middle": [], "last": "Tur", "suffix": "" } ], "year": 2008, "venue": "IEEE Signal Processing Magazine", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Renato De Mori, Fr\u00e9d\u00e9ric Bechet, Dilek Hakkani-Tur, Michael McTear, Giuseppe Riccardi, and Gokhan Tur. 2008. Spoken language understanding. IEEE Signal Processing Magazine.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Imsspeech: A speech to text tool", "authors": [ { "first": "Pavel", "middle": [], "last": "Denisov", "suffix": "" }, { "first": "Ngoc", "middle": [ "Thang" ], "last": "Vu", "suffix": "" } ], "year": 2019, "venue": "Studientexte zur Sprachkommunikation: Elektronische Sprachsignalverarbeitung 2019", "volume": "", "issue": "", "pages": "170--177", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pavel Denisov and Ngoc Thang Vu. 2019. Ims- speech: A speech to text tool. 
Studientexte zur Sprachkommunikation: Elektronische Sprachsig- nalverarbeitung 2019, pages 170-177.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Intrinsic and Extrinsic Evaluation of an Automatic User Disengagement Detector for an Uncertainty-Adaptive Spoken Dialogue System", "authors": [ { "first": "Kate", "middle": [], "last": "Forbes-Riley", "suffix": "" }, { "first": "Diane", "middle": [], "last": "Litman", "suffix": "" }, { "first": "Heather", "middle": [], "last": "Friedberg", "suffix": "" }, { "first": "Joanna", "middle": [], "last": "Drummond", "suffix": "" } ], "year": 2012, "venue": "Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "91--102", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kate Forbes-Riley, Diane Litman, Heather Friedberg, and Joanna Drummond. 2012. Intrinsic and Extrin- sic Evaluation of an Automatic User Disengagement Detector for an Uncertainty-Adaptive Spoken Dia- logue System. In Conference of the North Ameri- can Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 91-102, Montr\u00e9al.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "The MuM-MER project: Engaging human-robot interaction in real-world public spaces", "authors": [ { "first": "Mary", "middle": [ "Ellen" ], "last": "Foster", "suffix": "" }, { "first": "Rachid", "middle": [], "last": "Alami", "suffix": "" }, { "first": "Olli", "middle": [], "last": "Gestranius", "suffix": "" }, { "first": "Oliver", "middle": [], "last": "Lemon", "suffix": "" }, { "first": "Marketta", "middle": [], "last": "Niemel\u00e4", "suffix": "" }, { "first": "Jean", "middle": [ "Marc" ], "last": "Odobez", "suffix": "" }, { "first": "Amit Kumar", "middle": [], "last": "Pandey", "suffix": "" } ], "year": 2016, "venue": "Processings of the Eighth International Conference on Social Robotics (ICSR 2016)", "volume": "", "issue": "", "pages": "753--763", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mary Ellen Foster, Rachid Alami, Olli Gestranius, Oliver Lemon, Marketta Niemel\u00e4, Jean Marc Odobez, and Amit Kumar Pandey. 2016. The MuM- MER project: Engaging human-robot interaction in real-world public spaces. In Processings of the Eighth International Conference on Social Robotics (ICSR 2016), pages 753-763. Springer.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Between and within: Alternative sequential treatments of continuers and assessments", "authors": [ { "first": "Charles", "middle": [], "last": "Goodwin", "suffix": "" } ], "year": 1986, "venue": "Journal of Human Studies", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Charles Goodwin. 1986. Between and within: Alterna- tive sequential treatments of continuers and assess- ments. 
Journal of Human Studies.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Espnet-tts: Unified, reproducible, and integratable open source end-to-end text-to", "authors": [ { "first": "Tomoki", "middle": [], "last": "Hayashi", "suffix": "" }, { "first": "Ryuichi", "middle": [], "last": "Yamamoto", "suffix": "" }, { "first": "Katsuki", "middle": [], "last": "Inoue", "suffix": "" }, { "first": "Takenori", "middle": [], "last": "Yoshimura", "suffix": "" }, { "first": "Shinji", "middle": [], "last": "Watanabe", "suffix": "" }, { "first": "Tomoki", "middle": [], "last": "Toda", "suffix": "" }, { "first": "Kazuya", "middle": [], "last": "Takeda", "suffix": "" }, { "first": "Yu", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Xu", "middle": [], "last": "Tan", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomoki Hayashi, Ryuichi Yamamoto, Katsuki Inoue, Takenori Yoshimura, Shinji Watanabe, Tomoki Toda, Kazuya Takeda, Yu Zhang, and Xu Tan. 2019. Espnet-tts: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "The lj speech dataset", "authors": [ { "first": "Keith", "middle": [], "last": "Ito", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Keith Ito. 2017. The lj speech dataset. https:// keithito.com/LJ-Speech-Dataset/.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "A comparative study on transformer vs rnn in speech applications", "authors": [ { "first": "Shigeki", "middle": [], "last": "Karita", "suffix": "" }, { "first": "Nanxin", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Tomoki", "middle": [], "last": "Hayashi", "suffix": "" }, { "first": "Takaaki", "middle": [], "last": "Hori", "suffix": "" }, { "first": "Hirofumi", "middle": [], "last": "Inaguma", "suffix": "" }, { "first": "Ziyan", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Masao", "middle": [], "last": "Someki", "suffix": "" }, { "first": "Nelson", "middle": [ "Enrique" ], "last": "", "suffix": "" }, { "first": "Yalta", "middle": [], "last": "Soplin", "suffix": "" }, { "first": "Ryuichi", "middle": [], "last": "Yamamoto", "suffix": "" }, { "first": "Xiaofei", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2019, "venue": "IEEE Automatic Speech Recognition and Understanding Workshop (ASRU)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shigeki Karita, Nanxin Chen, Tomoki Hayashi, Takaaki Hori, Hirofumi Inaguma, Ziyan Jiang, Masao Someki, Nelson Enrique Yalta Soplin, Ryuichi Yamamoto, Xiaofei Wang, et al. 2019. A comparative study on transformer vs rnn in speech applications. 
In IEEE Automatic Speech Recogni- tion and Understanding Workshop (ASRU).", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "ConvLab: Multi-Domain End-to-End Dialog System Platform", "authors": [ { "first": "Sungjin", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Qi", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Ryuichi", "middle": [], "last": "Takanobu", "suffix": "" }, { "first": "Zheng", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Yaoqin", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Xiang", "middle": [], "last": "Li", "suffix": "" }, { "first": "Jinchao", "middle": [], "last": "Li", "suffix": "" }, { "first": "Baolin", "middle": [], "last": "Peng", "suffix": "" }, { "first": "Xiujun", "middle": [], "last": "Li", "suffix": "" }, { "first": "Minlie", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: System Demonstrations", "volume": "", "issue": "", "pages": "64--69", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sungjin Lee, Qi Zhu, Ryuichi Takanobu, Zheng Zhang, Yaoqin Zhang, Xiang Li, Jinchao Li, Baolin Peng, Xiujun Li, Minlie Huang, and Jianfeng Gao. 2019. ConvLab: Multi-Domain End-to-End Dialog Sys- tem Platform. In Proceedings of the 57th Annual Meeting of the Association for Computational Lin- guistics: System Demonstrations, pages 64-69.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Opendial: A toolkit for developing spoken dialogue systems with probabilistic rules", "authors": [ { "first": "Pierre", "middle": [], "last": "Lison", "suffix": "" }, { "first": "Casey", "middle": [], "last": "Kennington", "suffix": "" } ], "year": 2016, "venue": "Proceedings of ACL-2016 system demonstrations", "volume": "", "issue": "", "pages": "67--72", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pierre Lison and Casey Kennington. 2016. Opendial: A toolkit for developing spoken dialogue systems with probabilistic rules. In Proceedings of ACL- 2016 system demonstrations, pages 67-72.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "The conversational interface", "authors": [ { "first": "Michael", "middle": [], "last": "Frederick Mctear", "suffix": "" }, { "first": "Zoraida", "middle": [], "last": "Callejas", "suffix": "" }, { "first": "David", "middle": [], "last": "Griol", "suffix": "" } ], "year": 2016, "venue": "", "volume": "6", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Frederick McTear, Zoraida Callejas, and David Griol. 2016. The conversational interface, volume 6. Springer.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Attentive convolutional neural network based speech emotion recognition: A study on the impact of input features, signal length, and acted speech", "authors": [ { "first": "Michael", "middle": [], "last": "Neumann", "suffix": "" }, { "first": "Ngoc", "middle": [ "Thang" ], "last": "Vu", "suffix": "" } ], "year": 2017, "venue": "Proceedings of Interspeech", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Neumann and Ngoc Thang Vu. 2017. Atten- tive convolutional neural network based speech emo- tion recognition: A study on the impact of input fea- tures, signal length, and acted speech. 
In Proceed- ings of Interspeech.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Automatic Engagement Prediction with GAP Feature", "authors": [ { "first": "Xuesong", "middle": [], "last": "Niu", "suffix": "" }, { "first": "Hu", "middle": [], "last": "Han", "suffix": "" }, { "first": "Jiabei", "middle": [], "last": "Zeng", "suffix": "" }, { "first": "Xuran", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Shiguang", "middle": [], "last": "Shan", "suffix": "" }, { "first": "Yan", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Songfan", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Xilin", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2018, "venue": "ICMI", "volume": "", "issue": "", "pages": "599--603", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xuesong Niu, Hu Han, Jiabei Zeng, Xuran Sun, Shiguang Shan, Yan Huang, Songfan Yang, and Xilin Chen. 2018. Automatic Engagement Predic- tion with GAP Feature. In ICMI, pages 599-603, Boulder. Association for Computing Machinery.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Oh, Jeez! or uh-huh? A listener-aware Backchannel predictor on ASR transcriptions", "authors": [ { "first": "Daniel", "middle": [], "last": "Ortega", "suffix": "" }, { "first": "Chia-Yu", "middle": [], "last": "Li", "suffix": "" }, { "first": "Thang", "middle": [], "last": "Vu", "suffix": "" } ], "year": 2020, "venue": "ICASSP 2020 -IEEE International Conference on Acoustics, Speech and Signal Processing", "volume": "", "issue": "", "pages": "8064--8068", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Ortega, Chia-Yu Li, and Thang Vu. 2020. Oh, Jeez! or uh-huh? A listener-aware Backchan- nel predictor on ASR transcriptions. In ICASSP 2020 -IEEE International Conference on Acoustics, Speech and Signal Processing, pages 8064-8068.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Adviser: A dialog system framework for education & research", "authors": [ { "first": "Daniel", "middle": [], "last": "Ortega", "suffix": "" }, { "first": "Dirk", "middle": [], "last": "V\u00e4th", "suffix": "" }, { "first": "Gianna", "middle": [], "last": "Weber", "suffix": "" }, { "first": "Lindsey", "middle": [], "last": "Vanderlyn", "suffix": "" }, { "first": "Maximilian", "middle": [], "last": "Schmidt", "suffix": "" }, { "first": "Moritz", "middle": [], "last": "V\u00f6lkel", "suffix": "" }, { "first": "Zorica", "middle": [], "last": "Karacevic", "suffix": "" }, { "first": "Ngoc", "middle": [ "Thang" ], "last": "Vu", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: System Demonstrations", "volume": "", "issue": "", "pages": "93--98", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Ortega, Dirk V\u00e4th, Gianna Weber, Lindsey Van- derlyn, Maximilian Schmidt, Moritz V\u00f6lkel, Zorica Karacevic, and Ngoc Thang Vu. 2019. Adviser: A dialog system framework for education & research. 
In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 93-98.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Fastspeech: Fast, robust and controllable text to speech", "authors": [ { "first": "Yi", "middle": [], "last": "Ren", "suffix": "" }, { "first": "Yangjun", "middle": [], "last": "Ruan", "suffix": "" }, { "first": "Xu", "middle": [], "last": "Tan", "suffix": "" }, { "first": "Tao", "middle": [], "last": "Qin", "suffix": "" }, { "first": "Sheng", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Zhou", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Tie-Yan", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2019, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "3165--3174", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yi Ren, Yangjun Ruan, Xu Tan, Tao Qin, Sheng Zhao, Zhou Zhao, and Tie-Yan Liu. 2019. Fastspeech: Fast, robust and controllable text to speech. In Ad- vances in Neural Information Processing Systems, pages 3165-3174.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Agenda-based user simulation for bootstrapping a pomdp dialogue system", "authors": [ { "first": "Jost", "middle": [], "last": "Schatzmann", "suffix": "" }, { "first": "Blaise", "middle": [], "last": "Thomson", "suffix": "" }, { "first": "Karl", "middle": [], "last": "Weilhammer", "suffix": "" }, { "first": "Hui", "middle": [], "last": "Ye", "suffix": "" }, { "first": "Steve", "middle": [], "last": "Young", "suffix": "" } ], "year": 2007, "venue": "Proceedings of NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jost Schatzmann, Blaise Thomson, Karl Weilhammer, Hui Ye, and Steve Young. 2007. Agenda-based user simulation for bootstrapping a pomdp dialogue sys- tem. In Proceedings of NAACL.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Prioritized experience replay", "authors": [ { "first": "Tom", "middle": [], "last": "Schaul", "suffix": "" }, { "first": "John", "middle": [], "last": "Quan", "suffix": "" }, { "first": "Ioannis", "middle": [], "last": "Antonoglou", "suffix": "" }, { "first": "David", "middle": [], "last": "Silver", "suffix": "" } ], "year": 2015, "venue": "Proceedings of ICLR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tom Schaul, John Quan, Ioannis Antonoglou, and David Silver. 2015. Prioritized experience replay. In Proceedings of ICLR.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Being bored? 
Recognising natural interest by extensive audiovisual integration for real-life application", "authors": [ { "first": "Bj\u00f6rn", "middle": [], "last": "Schuller", "suffix": "" }, { "first": "Ronald", "middle": [], "last": "M\u00fcller", "suffix": "" }, { "first": "Florian", "middle": [], "last": "Eyben", "suffix": "" }, { "first": "J\u00fcrgen", "middle": [], "last": "Gast", "suffix": "" }, { "first": "Benedikt", "middle": [], "last": "H\u00f6rnler", "suffix": "" }, { "first": "Martin", "middle": [], "last": "W\u00f6llmer", "suffix": "" }, { "first": "Gerhard", "middle": [], "last": "Rigoll", "suffix": "" }, { "first": "Anja", "middle": [], "last": "H\u00f6thker", "suffix": "" }, { "first": "Hitoshi", "middle": [], "last": "Konosu", "suffix": "" } ], "year": 2009, "venue": "Image and Vision Computing", "volume": "27", "issue": "", "pages": "1760--1774", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bj\u00f6rn Schuller, Ronald M\u00fcller, Florian Eyben, J\u00fcrgen Gast, Benedikt H\u00f6rnler, Martin W\u00f6llmer, Gerhard Rigoll, Anja H\u00f6thker, and Hitoshi Konosu. 2009. Being bored? Recognising natural interest by extensive audiovisual integration for real-life application. Image and Vision Computing, 27:1760-1774.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Exploring turn-taking cues in multi-party human-robot discussions about objects", "authors": [ { "first": "Gabriel", "middle": [], "last": "Skantze", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Johansson", "suffix": "" }, { "first": "Jonas", "middle": [], "last": "Beskow", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 ACM on International Conference on Multimodal Interaction", "volume": "", "issue": "", "pages": "67--74", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gabriel Skantze, Martin Johansson, and Jonas Beskow. 2015. Exploring turn-taking cues in multi-party human-robot discussions about objects. In Proceedings of the 2015 ACM on International Conference on Multimodal Interaction, pages 67-74.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "PyDial: A Multi-domain Statistical Dialogue System Toolkit", "authors": [ { "first": "Stefan", "middle": [], "last": "Ultes", "suffix": "" }, { "first": "Lina", "middle": [ "M" ], "last": "Rojas Barahona", "suffix": "" }, { "first": "Pei-Hao", "middle": [], "last": "Su", "suffix": "" }, { "first": "David", "middle": [], "last": "Vandyke", "suffix": "" }, { "first": "Dongho", "middle": [], "last": "Kim", "suffix": "" }, { "first": "I\u00f1igo", "middle": [], "last": "Casanueva", "suffix": "" }, { "first": "Pawe\u0142", "middle": [], "last": "Budzianowski", "suffix": "" }, { "first": "Nikola", "middle": [], "last": "Mrk\u0161i\u0107", "suffix": "" }, { "first": "Tsung-Hsien", "middle": [], "last": "Wen", "suffix": "" }, { "first": "Milica", "middle": [], "last": "Gasic", "suffix": "" }, { "first": "Steve", "middle": [], "last": "Young", "suffix": "" } ], "year": 2017, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stefan Ultes, Lina M. Rojas Barahona, Pei-Hao Su, David Vandyke, Dongho Kim, I\u00f1igo Casanueva, Pawe\u0142 Budzianowski, Nikola Mrk\u0161i\u0107, Tsung-Hsien Wen, Milica Gasic, and Steve Young. 2017. PyDial: A Multi-domain Statistical Dialogue System Toolkit. In Proceedings of ACL.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "To combine or not to combine?
a rainbow deep reinforcement learning agent for dialog policies", "authors": [ { "first": "Dirk", "middle": [], "last": "V\u00e4th", "suffix": "" }, { "first": "Ngoc", "middle": [ "Thang" ], "last": "Vu", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 20th Annual SIGdial Meeting on Discourse and Dialogue", "volume": "", "issue": "", "pages": "62--67", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dirk V\u00e4th and Ngoc Thang Vu. 2019. To combine or not to combine? a rainbow deep reinforcement learning agent for dialog policies. In Proceedings of the 20th Annual SIGdial Meeting on Discourse and Dialogue, pages 62-67.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Wikidata: a free collaborative knowledgebase", "authors": [ { "first": "Denny", "middle": [], "last": "Vrande\u010di\u0107", "suffix": "" }, { "first": "Markus", "middle": [], "last": "Kr\u00f6tzsch", "suffix": "" } ], "year": 2014, "venue": "Communications of the ACM", "volume": "57", "issue": "10", "pages": "78--85", "other_ids": {}, "num": null, "urls": [], "raw_text": "Denny Vrande\u010di\u0107 and Markus Kr\u00f6tzsch. 2014. Wikidata: a free collaborative knowledgebase. Communications of the ACM, 57(10):78-85.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "The social signal interpretation (ssi) framework: multimodal signal processing and recognition in real-time", "authors": [ { "first": "Johannes", "middle": [], "last": "Wagner", "suffix": "" }, { "first": "Florian", "middle": [], "last": "Lingenfelser", "suffix": "" }, { "first": "Tobias", "middle": [], "last": "Baur", "suffix": "" }, { "first": "Ionut", "middle": [], "last": "Damian", "suffix": "" }, { "first": "Felix", "middle": [], "last": "Kistler", "suffix": "" }, { "first": "Elisabeth", "middle": [], "last": "Andr\u00e9", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 21st ACM international conference on Multimedia", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Johannes Wagner, Florian Lingenfelser, Tobias Baur, Ionut Damian, Felix Kistler, and Elisabeth Andr\u00e9. 2013. The social signal interpretation (ssi) framework: multimodal signal processing and recognition in real-time. In Proceedings of the 21st ACM international conference on Multimedia.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Dueling network architectures for deep reinforcement learning", "authors": [ { "first": "Ziyu", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Tom", "middle": [], "last": "Schaul", "suffix": "" }, { "first": "Matteo", "middle": [], "last": "Hessel", "suffix": "" }, { "first": "Hado", "middle": [], "last": "Van Hasselt", "suffix": "" }, { "first": "Marc", "middle": [], "last": "Lanctot", "suffix": "" }, { "first": "Nando De", "middle": [], "last": "Freitas", "suffix": "" } ], "year": 2016, "venue": "Proceedings of ICML", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ziyu Wang, Tom Schaul, Matteo Hessel, Hado Van Hasselt, Marc Lanctot, and Nando De Freitas. 2016. Dueling network architectures for deep reinforcement learning.
In Proceedings of ICML.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Espnet: End-to-end speech processing toolkit", "authors": [ { "first": "Shinji", "middle": [], "last": "Watanabe", "suffix": "" }, { "first": "Takaaki", "middle": [], "last": "Hori", "suffix": "" }, { "first": "Shigeki", "middle": [], "last": "Karita", "suffix": "" }, { "first": "Tomoki", "middle": [], "last": "Hayashi", "suffix": "" }, { "first": "Jiro", "middle": [], "last": "Nishitoba", "suffix": "" }, { "first": "Yuya", "middle": [], "last": "Unno", "suffix": "" } ], "year": 2018, "venue": "Interspeech", "volume": "", "issue": "", "pages": "2207--2211", "other_ids": { "DOI": [ "10.21437/Interspeech.2018-1456" ] }, "num": null, "urls": [], "raw_text": "Shinji Watanabe, Takaaki Hori, Shigeki Karita, Tomoki Hayashi, Jiro Nishitoba, Yuya Unno, Nelson Enrique Yalta Soplin, Jahn Heymann, Matthew Wiesner, Nanxin Chen, Adithya Renduchintala, and Tsubasa Ochiai. 2018. Espnet: End-to-end speech processing toolkit. In Interspeech, pages 2207-2211.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "ELIZA: A Computer Program for the Study of Natural Language Communication Between Man and Machine", "authors": [ { "first": "Joseph", "middle": [], "last": "Weizenbaum", "suffix": "" } ], "year": 1966, "venue": "Communications of the ACM", "volume": "9", "issue": "1", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joseph Weizenbaum. 1966. ELIZA: A Computer Program for the Study of Natural Language Communication Between Man and Machine. Communications of the ACM, 9(1).", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Parallel wavegan: A fast waveform generation model based on generative adversarial networks with multi-resolution spectrogram", "authors": [ { "first": "Ryuichi", "middle": [], "last": "Yamamoto", "suffix": "" }, { "first": "Eunwoo", "middle": [], "last": "Song", "suffix": "" }, { "first": "Jae-Min", "middle": [], "last": "Kim", "suffix": "" } ], "year": 2020, "venue": "ICASSP 2020 -IEEE International Conference on Acoustics, Speech and Signal Processing", "volume": "", "issue": "", "pages": "6199--6203", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ryuichi Yamamoto, Eunwoo Song, and Jae-Min Kim. 2020. Parallel wavegan: A fast waveform generation model based on generative adversarial networks with multi-resolution spectrogram. In ICASSP 2020 -IEEE International Conference on Acoustics, Speech and Signal Processing, pages 6199-6203.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "A survey of affect recognition methods: Audio, visual, and spontaneous expressions", "authors": [ { "first": "Zhihong", "middle": [], "last": "Zeng", "suffix": "" }, { "first": "Maja", "middle": [], "last": "Pantic", "suffix": "" }, { "first": "Glenn", "middle": [ "I" ], "last": "Roisman", "suffix": "" }, { "first": "Thomas", "middle": [ "S" ], "last": "Huang", "suffix": "" } ], "year": 2009, "venue": "IEEE transactions on pattern analysis and machine intelligence", "volume": "31", "issue": "", "pages": "39--58", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhihong Zeng, Maja Pantic, Glenn I Roisman, and Thomas S Huang. 2009. A survey of affect recognition methods: Audio, visual, and spontaneous expressions.
IEEE transactions on pattern analysis and machine intelligence, 31(1):39-58.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "The Design and Implementation of XiaoIce, an Empathetic Social Chatbot", "authors": [ { "first": "Li", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Di", "middle": [], "last": "Li", "suffix": "" }, { "first": "Heung-Yeung", "middle": [], "last": "Shum", "suffix": "" } ], "year": 2018, "venue": "Computational Linguistics", "volume": "", "issue": "", "pages": "1--62", "other_ids": {}, "num": null, "urls": [], "raw_text": "Li Zhou, Jianfeng Gao, Di Li, and Heung-Yeung Shum. 2018. The Design and Implementation of XiaoIce, an Empathetic Social Chatbot. Computational Linguistics, pages 1-62.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "num": null, "type_str": "figure", "text": "Tracking emotion states and engagement levels using multi-modal information." }, "FIGREF1": { "uris": null, "num": null, "type_str": "figure", "text": "Example ADVISER toolkit configuration: Grey represents backend components, blue represents domain-specific services, and all other colors represent domain-agnostic services. Two components are run remotely." }, "TABREF1": { "type_str": "table", "num": null, "text": "The weather in Stuttgart on January 28 at 3 PM is 3 degrees celsius with light snow. USER: Thank you, ADVISER, good bye! SYSTEM: Thank you, good bye.", "content": "
The following shows an example dialog.
SYSTEM: Hello, please let me know how I can help you, I can discuss the following domains: Mensa Food and Weather.
USER: I could have something to eat. What does the mensa offer today?
SYSTEM: What type of dish are you looking for?
USER: I would like a main dish.
SYSTEM: Should the meal be vegan?
USER: Yes.
SYSTEM: The meal mediterranean Ebly wheat is served today, is a main dish and is vegan.
USER: Okay, cool, I will go there now! What is the weather like?
", "html": null } } } }