- J. Chiu, Yun Wang, J. Trmal, Daniel Povey, Guoguo Chen, Alexander I. Rudnicky. 2014. Combination of FST and CN search in spoken term detection. Abstract: Spoken Term Detection (STD) focuses on finding instances of a particular spoken word or phrase in an audio corpus. Most STD systems have a two-step pipeline: ASR followed by search. Two approaches to search are common, Confusion Network (CN) based search and Finite State Transducer (FST) based search. In this paper, we examine the combination of these two different search approaches, using the same ASR output. We find that CN search performs better on shorter queries, and FST search performs better on longer queries. By combining the different search results from the same ASR decoding, we achieve better performance compared to either search approach on its own. We also find that this improvement is additive to the usual combination of decoder results using different modeling techniques.
LTI_Alexander_Rudnicky.txt
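The combination described in the entry above lends itself to a simple score-fusion view. Below is a minimal, hypothetical sketch in which hits from a CN-based and an FST-based search over the same ASR output are merged, with the weight shifted toward CN search for short queries and FST search for long ones; the weighting rule, threshold, and overlap bucketing are illustrative assumptions, not the paper's actual method.
```python
# Hypothetical sketch: merge hit lists from CN-based and FST-based STD search
# over the same ASR decoding. The length-dependent weighting and the overlap
# bucketing by start time are illustrative assumptions, not the paper's method.

def combine_hits(cn_hits, fst_hits, query, short_len=2):
    """cn_hits / fst_hits: lists of (start_sec, end_sec, score) for one query."""
    w_cn = 0.7 if len(query.split()) <= short_len else 0.3   # favor CN for short queries
    w_fst = 1.0 - w_cn
    merged = {}
    for hits, w in ((cn_hits, w_cn), (fst_hits, w_fst)):
        for start, end, score in hits:
            key = round(start, 1)                 # crude: same start time == same hit
            merged[key] = max(merged.get(key, 0.0), w * score)
    return sorted(merged.items())

print(combine_hits([(1.2, 1.6, 0.8)], [(1.2, 1.6, 0.6), (7.4, 7.9, 0.5)], "hello"))
```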
- Yun-Nung (Vivian) Chen, Alexander I. Rudnicky. 2014. Two-Stage Stochastic Natural Language Generation for Email Synthesis by Modeling Sender Style and Topic Structure. Abstract: This paper describes a two-stage process for stochastic generation of email, in which the first stage structures the emails according to sender style and topic structure (high-level generation), and the second stage synthesizes text content based on the particulars of an email element and the goals of a given communication (surface-level realization). Synthesized emails were rated in a preliminary experiment. The results indicate that sender style can be detected. In addition we found that stochastic generation performs better if applied at the word level than at an original-sentence level (“template-based”) in terms of email coherence, sentence fluency, naturalness, and preference.
LTI_Alexander_Rudnicky.txt
- Aasish Pappu, Ming Sun, Seshadri Sridharan, Alexander I. Rudnicky. 2014. Conversational Strategies for Robustly Managing Dialog in Public Spaces. Abstract: Open environments present an attention management challenge for conversational systems. We describe a kiosk system (based on Ravenclaw‐Olympus) that uses simple auditory and visual information to interpret human presence and manage the system’s attention. The system robustly differentiates intended interactions from unintended ones at an accuracy of 93% and provides similar task completion rates in both a quiet room and a public space.
LTI_Alexander_Rudnicky.txt
- Aasish Pappu, Alexander I. Rudnicky. 2014. Knowledge Acquisition Strategies for Goal-Oriented Dialog Systems. Abstract: Many goal-oriented dialog agents are expected to identify slot-value pairs in a spoken query, then perform lookup in a knowledge base to complete the task. When the agent encounters unknown slot values, it may ask the user to repeat or reformulate the query. But a robust agent can proactively seek new knowledge from a user, to help reduce subsequent task failures. In this paper, we propose knowledge acquisition strategies for a dialog agent and show their effectiveness. The acquired knowledge can be shown to subsequently contribute to task completion.
LTI_Alexander_Rudnicky.txt
- Yun-Nung (Vivian) Chen, Alexander I. Rudnicky. 2014. Two-Stage Stochastic Email Synthesizer. Abstract: This paper presents the design and implementation details of an email synthesizer using two-stage stochastic natural language generation, where the first stage structures the emails according to sender style and topic structure, and the second stage synthesizes text content based on the particulars of an email structure element and the goals of a given communication for surface realization. The synthesized emails reflect sender style and the intent of communication, which can be further used as synthetic evidence for developing other applications.
LTI_Alexander_Rudnicky.txt
- Longlu Qin, Alexander I. Rudnicky. 2014. Building a vocabulary self-learning speech recognition system. Abstract: This paper presents initial studies on building a vocabulary selflearning speech recognition system that can automatically learn unknown words and expand its recognition vocabulary. Our recognizer can detect and recover out-of-vocabulary (OOV) words in speech, then incorporate OOV words into its lexicon and language model (LM). As a result, these unknown words can be correctly recognized when encountered by the recognizer in future. Specifically, we apply the word-fragment hybrid system framework to detect the presence of OOV words. We propose a better phoneme-to-grapheme (P2G) model so as to correctly recover the written form for more OOV words. Furthermore, we estimate LM scores for OOV words using their syntactic and semantic properties. The experimental results show that more than 40% OOV words are successfully learned from the development data, and about 60% learned OOV words are recognized in the testing data. Index Terms: Vocabulary learning, OOV word detection and recovery, lexicon expansion
LTI_Alexander_Rudnicky.txt
- Yun-Nung (Vivian) Chen, Alexander I. Rudnicky. 2014. Dynamically supporting unexplored domains in conversational interactions by enriching semantics with neural word embeddings. Abstract: Spoken language interfaces are being incorporated into various devices (e.g. smart-phones, smart TVs, etc). However, current technology typically limits conversational interactions to a few narrow predefined domains/topics. For example, dialogue systems for smartphone operation fail to respond when users ask for functions not supported by currently installed applications. We propose to dynamically add application-based domains according to users' requests by using descriptions of applications as a retrieval cue to find relevant applications. The approach uses structured knowledge resources (e.g. Freebase, Wikipedia, FrameNet) to induce types of slots for generating semantic seeds, and enriches the semantics of spoken queries with neural word embeddings, where semantically related concepts can be additionally included for acquiring knowledge that does not exist in the predefined domains. The system can then retrieve relevant applications or dynamically suggest users install applications that support unexplored domains. We find that vendor descriptions provide a reliable source of information for this purpose.
LTI_Alexander_Rudnicky.txt
- M. Kalinyak-Fliszar, N. Martin, E. Keshner, Alexander I. Rudnicky, Justin Y. Shi, G. Teodoro. 2014. Using Virtual Clinicians to Promote Functional Communication Skills in Aphasia. Abstract: Persons with aphasia (PWA) re-enter their community after their rehabilitation program has ended. Thus it is incumbent on rehabilitation specialists to incorporate training in using residual language skills for functional communication [1]. Evidence indicates that language abilities improve with continued treatment, even during chronic stages of aphasia (refs). For optimal generalization, PWA need to practice language in everyday living situations.
Virtual reality technology is a method of providing home-based therapeutic interventions. A valuable potential benefit of virtual reality technology is that it can support the successful generalization of residual language skills to functional communication situations. Traditionally, role-playing [2] and script training [3] have been used to improve functional communication in PWA. A more recent approach has been the adaptation of scripts through the implementation of virtual technology [4].
We report progress on a project that aims to develop a virtual clinician that is capable of recognizing a variety of potential responses in the context of functional communication scenarios. Our goal is to develop a virtual clinician-human interaction system that can be used independently by PWA to practice and improve communication skills. This involves development of software that will support a spoken dialog system (SDS) that can interact autonomously with an individual and can be configured to personalize treatment [5].
As use of virtual technology in aphasia rehabilitation increases, questions about the physical and psychosocial factors that influence successful use of residual communication skills need to be resolved. Thus, a second aim of this project, the topic of this paper, is to determine whether interactive dialogues between a client and virtual clinician differ in the quantity and quality of the client’s language output compared to dialogues between client and human clinician. Although the potential of using virtual clinicians is promising, it must be determined if individuals with aphasia (or other language disorder) will be responsive to the virtual clinician and produce as much language in this context as they would during dialogues with human clinicians.
We addressed two hypotheses in this study:
1. For PWA, practice with dialogues that focus on everyday activities will improve quality and quantity of verbal output in those dialogues.
2. For PWA, dialogues practiced with a virtual clinician and with a human clinician will yield similar amounts of verbal output, as measured by information units in the dialogues.
LTI_Alexander_Rudnicky.txt
- Aasish Pappu, Alexander I. Rudnicky. 2014. Learning situated knowledge bases through dialog. Abstract: To respond to a user’s query, dialog agents can use a knowledge base that is either domain specific, commonsense (e.g., NELL, Freebase) or a combination of both. The drawback is that domain-specific knowledge bases will likely be limited and static; commonsense ones are dynamic but contain general information found on the web and will be sparse with respect to a domain. We address this issue through a system that solicits situational information from its users in a domain that provides information on events (seminar talks) to augment its knowledge base (covering an academic field). We find that this knowledge is consistent and useful and that it provides reliable information to users. We show that, in comparison to a base system, users find that retrievals are more relevant when the system uses its informally acquired knowledge to augment their queries.
LTI_Alexander_Rudnicky.txt
- J. Chiu, Alexander I. Rudnicky. 2014. LACS System Analysis on Retrieval Models for the MediaEval 2014 Search and Hyperlinking Task. Abstract: We describe the LACS submission to the Search sub-task of the Search and Hyperlinking Task at MediaEval 2014. Our experiments investigate how different retrieval models interact with word stemming and stopword removal. On the development data, we segment the subtitle and Automatic Speech Recognition (ASR) transcripts into fixed length time units, and examine the effect of different retrieval models. We find that stemming provides consistent improvement; stopword removal is more sensitive to the retrieval models on the subtitles. These manipulations do not contribute to stable improvement on the ASR transcripts. Our experiments on test data focus on the subtitle. The gap in performance for different retrieval models is much less compared to the development data. We achieved 0.477 MAP on the test data.
LTI_Alexander_Rudnicky.txt
- Yun-Nung (Vivian) Chen, William Yang Wang, Alexander I. Rudnicky. 2014. Leveraging frame semantics and distributional semantics for unsupervised semantic slot induction in spoken dialogue systems. Abstract: Distributional semantics and frame semantics are two representative views on language understanding in the statistical world and the linguistic world, respectively. In this paper, we combine the best of two worlds to automatically induce the semantic slots for spoken dialogue systems. Given a collection of unlabeled audio files, we exploit continuous-valued word embeddings to augment a probabilistic frame-semantic parser that identifies key semantic slots in an unsupervised fashion. In experiments, our results on a real-world spoken dialogue dataset show that the distributional word representations significantly improve the adaptation of FrameNet-style parses of ASR decodings to the target semantic space; that compared to a state-of-the-art baseline, a 13% relative average precision improvement is achieved by leveraging word vectors trained on two 100-billion-word datasets; and that the proposed technology can be used to reduce the costs for designing task-oriented spoken dialogue systems.
LTI_Alexander_Rudnicky.txt
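A rough illustration of how word embeddings can support the adaptation described in the entry above: score each automatically induced slot candidate by its similarity to a few in-domain seed words and rerank. The embeddings, candidates, and seed words below are toy assumptions, and the scoring rule is a hypothetical stand-in for the paper's actual model.
```python
import numpy as np

# Toy word vectors standing in for embeddings trained on a large corpus;
# slot candidates and domain seeds are invented for illustration.
emb = {
    "food":       np.array([0.9, 0.1, 0.0]),
    "price":      np.array([0.1, 0.9, 0.0]),
    "weather":    np.array([0.0, 0.1, 0.9]),
    "restaurant": np.array([0.8, 0.3, 0.0]),
    "cheap":      np.array([0.2, 0.8, 0.1]),
}

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_slot_candidates(candidates, domain_seeds):
    """Score each induced slot candidate by its best similarity to a domain seed word."""
    scores = {c: max(cos(emb[c], emb[s]) for s in domain_seeds) for c in candidates}
    return sorted(scores.items(), key=lambda kv: -kv[1])

# domain-relevant candidates ("food", "price") should outrank "weather"
print(rank_slot_candidates(["food", "price", "weather"], ["restaurant", "cheap"]))
```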
- Longlu Qin, Alexander I. Rudnicky. 2013. Learning better lexical properties for recurrent OOV words. Abstract: Out-of-vocabulary (OOV) words can appear more than once in a conversation or over a period of time. Such multiple instances of the same OOV word provide valuable information for learning the lexical properties of the word. Therefore, we investigated how to estimate better pronunciation, spelling and part-of-speech (POS) label for recurrent OOV words. We first identified recurrent OOV words from the output of a hybrid decoder by applying a bottom-up clustering approach. Then, multiple instances of the same OOV word were used simultaneously to learn properties of the OOV word. The experimental results showed that the bottom-up clustering approach is very effective at detecting the recurrence of OOV words. Furthermore, by using evidence from multiple instances of the same word, the pronunciation accuracy, recovery rate and POS label accuracy of recurrent OOV words can be substantially improved.
LTI_Alexander_Rudnicky.txt
- A. Nanavati, Nitendra Rajput, Saurabh Srivastava, Cumhur Erkut, A. Jylhä, Alexander I. Rudnicky, S. Serafin, M. Turunen. 2013. SiMPE: 8th workshop on speech and sound in mobile and pervasive environments. Abstract: The SiMPE workshop series started in 2006 with the goal of enabling speech processing on mobile and embedded devices. The SiMPE 2012 workshop extended the notion of audio to non-speech "Sounds" and thus the expansion became "Speech and Sound". SiMPE 2010 and 2011 brought together researchers from the speech and the HCI communities. Speech User interaction in cars was a focus area in 2009. Multimodality got more attention in SiMPE 2008. In SiMPE 2007, the focus was on developing regions.
With SiMPE 2013, the 8th in the series, we continue to explore the area of speech along with sound. Akin to language processing and text-to-speech synthesis in the voice-driven interaction loop, sensors can track continuous human activities such as singing, walking, or shaking the mobile phone, and non-speech audio can facilitate continuous interaction. The technologies underlying speech processing and sound processing are quite different and these communities have been working mostly independent of each other. And yet, for multimodal interactions on the mobile, it is perhaps natural to ask whether and how speech and sound can be mixed and used more effectively and naturally.
LTI_Alexander_Rudnicky.txt
- Aasish Pappu, Alexander I. Rudnicky. 2013. Predicting Tasks in Goal-Oriented Spoken Dialog Systems using Semantic Knowledge Bases. Abstract: Goal-oriented dialog agents are expected to recognize user-intentions from an utterance and execute appropriate tasks. Typically, such systems use a semantic parser to solve this problem. However, semantic parsers could fail if user utterances contain out-of-grammar words/phrases or if the semantics of uttered phrases did not match the parser’s expectations. In this work, we have explored a more robust method of task prediction. We define task prediction as a classification problem, rather than “parsing”, and use semantic contexts to improve classification accuracy. Our classifier uses semantic smoothing kernels that can encode information from knowledge bases such as Wordnet, NELL and Freebase.com. Our experiments on two spoken language corpora show that augmenting semantic information from these knowledge bases gives about 30% absolute improvement in task prediction over a parser-based method. Our approach thus helps make a dialog agent more robust to user input and helps reduce the number of turns required to detect intended tasks.
LTI_Alexander_Rudnicky.txt
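One common way to realize a semantic smoothing kernel like the one named in the entry above is K(x, y) = x^T S y, where x and y are bag-of-words vectors and S encodes word-word relatedness drawn from a knowledge base. The sketch below is a hypothetical toy version: the vocabulary, similarity matrix, and utterances are invented for illustration and do not come from the paper.
```python
import numpy as np

# Hypothetical "semantic smoothing" kernel for task classification:
# K(x, y) = x^T S y, where S encodes word relatedness (e.g., from WordNet/NELL).
vocab = ["book", "reserve", "room", "weather"]
S = np.array([
    [1.0, 0.8, 0.1, 0.0],   # "book" is related to "reserve"
    [0.8, 1.0, 0.1, 0.0],
    [0.1, 0.1, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
])

def bow(utterance):
    toks = utterance.lower().split()
    return np.array([toks.count(w) for w in vocab], dtype=float)

def semantic_kernel(x, y):
    return float(x @ S @ y)

u1, u2 = bow("reserve a room"), bow("book a room")
print(semantic_kernel(u1, u2))   # higher than the plain dot product below
print(float(u1 @ u2))
```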
- Longlu Qin, Alexander I. Rudnicky. 2013. Finding recurrent out-of-vocabulary words. Abstract: Out-of-vocabulary (OOV) words can appear more than once in a conversation or over a period of time. Such multiple instances of the same OOV word provide valuable information for estimating the pronunciation or the part-of-speech (POS) tag of the word. But in a conventional OOV word detection system, each OOV word is recognized and treated individually. We therefore investigated how to identify recurrent OOV words in speech recognition. Specifically, we propose to cluster multiple instances of the same OOV word using a bottom-up approach. Phonetic, acoustic and contextual features were collected to measure the distance between OOV candidates. The experimental results show that the bottom-up clustering approach is very effective at detecting the recurrence of OOV words. We also found that the phonetic feature is better than the acoustic and contextual features, and the best performance is achieved when combining all features.
LTI_Alexander_Rudnicky.txt
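The bottom-up grouping of OOV candidates described in the entry above can be pictured as clustering over a phonetic distance, so that recurrences of the same unknown word land in one cluster. The sketch below is a hypothetical, greedy single-pass simplification of agglomerative clustering; the distance metric, threshold, and phone sequences are illustrative only.
```python
# Hypothetical sketch: greedy bottom-up clustering of OOV candidates by
# phonetic edit distance. A real system would also use acoustic and
# contextual features, as the abstract describes.

def edit_distance(a, b):
    dp = [[i + j if i * j == 0 else 0 for j in range(len(b) + 1)] for i in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            dp[i][j] = min(dp[i-1][j] + 1, dp[i][j-1] + 1,
                           dp[i-1][j-1] + (a[i-1] != b[j-1]))
    return dp[len(a)][len(b)]

def cluster_oovs(candidates, threshold=2):
    """candidates: list of phone sequences, e.g. [['K','AE','T'], ...]."""
    clusters = []
    for cand in candidates:
        for cluster in clusters:
            if min(edit_distance(cand, member) for member in cluster) <= threshold:
                cluster.append(cand)       # close enough: same recurring OOV word
                break
        else:
            clusters.append([cand])        # otherwise start a new cluster
    return clusters

print(cluster_oovs([["K", "AE", "T", "N", "IH", "P"],
                    ["K", "AE", "T", "N", "EH", "P"],
                    ["G", "R", "IY", "N", "L", "IY"]]))
```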
- J. Chiu, Alexander I. Rudnicky. 2013. Using conversational word bursts in spoken term detection. Abstract: We describe a language independent word burst feature based on the structure of conversational speech that can be used to improve spoken term detection (STD) performance. Word burst refers to a phenomenon in conversational speech in which particular content words tend to occur in close proximity of each other as a byproduct of the topic under discussion. To take advantage of bursts, we describe a rescoring procedure that can be applied to lattice and confusion network outputs to improve STD performance. This approach is particularly effective when acoustic models are built with limited training data (and ASR performance is relatively poor). We find that word bursts appear in the four languages we examined and that STD performance can be improved for three of them; the remaining language is agglutinative.
LTI_Alexander_Rudnicky.txt
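A minimal, hypothetical sketch of burst-based rescoring as motivated by the entry above: a detection's score is boosted when other detections of the same term fall within a time window, mimicking the word-burst phenomenon. The window size and boost function are assumptions for illustration, not the paper's rescoring procedure.
```python
# Hypothetical word-burst rescoring for spoken term detection.
def burst_rescore(detections, window=30.0, alpha=0.2):
    """detections: list of dicts {term, time, score}; returns rescored copies."""
    rescored = []
    for d in detections:
        neighbors = sum(
            1 for other in detections
            if other is not d and other["term"] == d["term"]
            and abs(other["time"] - d["time"]) <= window
        )
        # boost the score in proportion to nearby detections of the same term
        rescored.append({**d, "score": min(1.0, d["score"] * (1.0 + alpha * neighbors))})
    return rescored

hits = [{"term": "budget", "time": 10.0, "score": 0.4},
        {"term": "budget", "time": 22.0, "score": 0.5},
        {"term": "budget", "time": 300.0, "score": 0.4}]
print(burst_rescore(hits))   # the two nearby hits are boosted; the isolated one is not
```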
- Aasish Pappu, Alexander I. Rudnicky. 2013. Deploying speech interfaces to the masses. Abstract: Speech systems are typically deployed either over phones, e.g. IVR agents, or on embodied agents, e.g. domestic robots. Most of these systems are limited to a particular platform i.e., only accessible by phone or in situated interactions. This limits scalability and potential domain of operation. Our goal is to make speech interfaces more widely available, and we are proposing a new approach for deploying such interfaces on the internet along with traditional platforms. In this work, we describe a lightweight speech interface architecture built on top of Freeswitch, an open source softswitch platform. A softswitch enables us to provide users with access over several types of channels (phone, VOIP, etc.) as well as support multiple users at the same time. We demonstrate two dialog applications developed using this approach: 1) Virtual Chauffeur: a voice based virtual driving experience and 2) Talkie: a speech-based chat bot.
LTI_Alexander_Rudnicky.txt
- Ankur Gandhe, Longlu Qin, Florian Metze, Alexander I. Rudnicky, Ian Lane, Matthias Eck. 2013. Using web text to improve keyword spotting in speech. Abstract: For low resource languages, collecting sufficient training data to build acoustic and language models is time consuming and often expensive. But large amounts of text data, such as online newspapers, web forums or online encyclopedias, usually exist for languages that have a large population of native speakers. This text data can be easily collected from the web and then used to both expand the recognizer's vocabulary and improve the language model. One challenge, however, is normalizing and filtering the web data for a specific task. In this paper, we investigate the use of online text resources to improve the performance of speech recognition specifically for the task of keyword spotting. For the five languages provided in the base period of the IARPA BABEL project, we automatically collected text data from the web using only Limited LP resources. We then compared two methods for filtering the web data, one based on perplexity ranking and the other based on out-of-vocabulary (OOV) word detection. By integrating the web text into our systems, we observed significant improvements in keyword spotting accuracy for four out of the five languages. The best approach obtained an improvement in actual term weighted value (ATWV) of 0.0424 compared to a baseline system trained only on LimitedLP resources. On average, ATWV was improved by 0.0243 across five languages.
LTI_Alexander_Rudnicky.txt
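The perplexity-ranking filter mentioned in the entry above can be pictured as scoring each web sentence with a small in-domain language model and keeping the lowest-perplexity sentences. The sketch below uses a toy add-one-smoothed unigram model purely for illustration; a real system would use an n-gram model trained on the LimitedLP transcripts, and all data here is invented.
```python
import math
from collections import Counter

def train_unigram(corpus):
    counts = Counter(w for sent in corpus for w in sent.split())
    total = sum(counts.values())
    vocab = len(counts) + 1                                      # +1 for an <unk> class
    return lambda w: (counts.get(w, 0) + 1) / (total + vocab)    # add-one smoothing

def perplexity(prob, sentence):
    words = sentence.split()
    logp = sum(math.log(prob(w)) for w in words)
    return math.exp(-logp / max(len(words), 1))

in_domain = ["please call the health clinic", "the clinic opens at nine"]
web = ["clinic hours and contact information", "click here to subscribe now"]
lm = train_unigram(in_domain)
kept = sorted(web, key=lambda s: perplexity(lm, s))[:1]   # keep the best-matching sentence
print(kept)
```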
- M. Marge, Alexander I. Rudnicky. 2013. Towards evaluating recovery strategies for situated grounding problems in human-robot dialogue. Abstract: Robots can use information from their surroundings to improve spoken language communication with people. Even when speech recognition is correct, robots face challenges when interpreting human instructions. These situated grounding problems include referential ambiguities and impossible-to-execute instructions. We present an approach to resolving situated grounding problems through spoken dialogue recovery strategies that robots can invoke to repair these problems. We describe a method for evaluating these strategies in human-robot navigation scenarios.
LTI_Alexander_Rudnicky.txt
- Yun-Nung (Vivian) Chen, William Yang Wang, Alexander I. Rudnicky. 2013. An empirical investigation of sparse log-linear models for improved dialogue act classification. Abstract: Previous work on dialogue act classification has primarily focused on dense generative and discriminative models. However, since the automatic speech recognition (ASR) outputs are often noisy, dense models might generate biased estimates and overfit to the training data. In this paper, we study sparse modeling approaches to improve dialogue act classification, since the sparse models maintain a compact feature space, which is robust to noise. To test this, we investigate various element-wise frequentist shrinkage models such as lasso, ridge, and elastic net, as well as structured sparsity models and a hierarchical sparsity model that embed the dependency structure and interaction among local features. In our experiments on a real-world dataset, when augmenting N-best word and phone level ASR hypotheses with confusion network features, our best sparse log-linear model obtains a relative improvement of 19.7% over a rule-based baseline, a 3.7% significant improvement over a traditional non-sparse log-linear model, and outperforms a state-of-the-art SVM model by 2.2%.
LTI_Alexander_Rudnicky.txt
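As a rough illustration of the dense-versus-sparse contrast in the entry above, the hypothetical sketch below fits L2-, L1-, and elastic-net-penalized logistic regression (log-linear) classifiers with scikit-learn on toy utterances; the features and data are invented placeholders, not the paper's confusion-network features.
```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy dialogue-act data; a real system would use noisy ASR n-best/CN features.
utterances = ["go to the kitchen", "stop right there", "turn left now", "please halt"]
acts = ["command_move", "command_stop", "command_move", "command_stop"]

for penalty, kwargs in [("l2", {}),                                   # dense
                        ("l1", {"solver": "liblinear"}),              # sparse (lasso-like)
                        ("elasticnet", {"solver": "saga", "l1_ratio": 0.5})]:
    clf = make_pipeline(CountVectorizer(ngram_range=(1, 2)),
                        LogisticRegression(penalty=penalty, C=1.0, max_iter=1000, **kwargs))
    clf.fit(utterances, acts)
    print(penalty, clf.predict(["turn right please"]))
```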
- Yun-Nung (Vivian) Chen, William Yang Wang, Alexander I. Rudnicky. 2013. Unsupervised induction and filling of semantic slots for spoken dialogue systems using frame-semantic parsing. Abstract: Spoken dialogue systems typically use predefined semantic slots to parse users' natural language inputs into unified semantic representations. To define the slots, domain experts and professional annotators are often involved, and the cost can be expensive. In this paper, we ask the following question: given a collection of unlabeled raw audios, can we use the frame semantics theory to automatically induce and fill the semantic slots in an unsupervised fashion? To do this, we propose the use of a state-of-the-art frame-semantic parser, and a spectral clustering based slot ranking model that adapts the generic output of the parser to the target semantic space. Empirical experiments on a real-world spoken dialogue dataset show that the automatically induced semantic slots are in line with the reference slots created by domain experts: we observe a mean averaged precision of 69.36% using ASR-transcribed data. Our slot filling evaluations also indicate the promising future of this proposed approach.
LTI_Alexander_Rudnicky.txt
- A. Nanavati, Nitendra Rajput, Alexander I. Rudnicky, M. Turunen, Thomas Sandholm, Cosmin Munteanu, Gerald Penn. 2012. SiMPE: 7th workshop on speech and sound in mobile and pervasive environments. Abstract: The SiMPE workshop series started in 2006 [2] with the goal of enabling speech processing on mobile and embedded devices to meet the challenges of pervasive environments (such as noise) and leveraging the context they offer (such as location). SiMPE 2010 and 2011 brought together researchers from the speech and the HCI communities. Multimodality got more attention in SiMPE 2008 than it had received in the previous years. In SiMPE 2007, the focus was on developing regions. Speech User interaction in cars was a focus area in 2009. With SiMPE 2012, the 7th in the series, we hope to explore the area of speech along with sound. When using the mobile in an eyes-free manner, it is natural and convenient to hear about notifications and events. The arrival of an SMS has used a very simple sound based notification for a long time now. The technologies underlying speech processing and sound processing are quite different and these communities have been working mostly independent of each other. And yet, for multimodal interactions on the mobile, it is perhaps natural to ask whether and how speech and sound can be mixed and used more effectively and naturally.
LTI_Alexander_Rudnicky.txt
- Elijah Mayfield, David Adamson, Alexander I. Rudnicky, Carolyn Penstein Rosé. 2012. Computational representation of discourse practices across populations in task-based dialogue. Abstract: In this work, we employ quantitative methods to describe the discourse practices observed in a direction giving task. We place a special emphasis on comparing differences in strategies between two separate populations and between successful and unsuccessful groups. We isolate differences in these strategies through several novel representations of discourse practices. We find that information sharing, instruction giving, and social feedback strategies are distinct between subpopulations in empirically identifiable ways.
LTI_Alexander_Rudnicky.txt
- Seshadri Sridharan, Yun-Nung (Vivian) Chen, K. Chang, Alexander I. Rudnicky. 2012. NeuroDialog: an EEG-enabled spoken dialog interface. Abstract: Understanding user intent is a difficult problem in Dialog Systems, as they often need to make decisions under uncertainty. Using an inexpensive, consumer grade EEG sensor and a Wizard-of-Oz dialog system, we show that it is possible to detect system misunderstanding even before the user reacts vocally. We also present the design and implementation details of NeuroDialog, a proof-of-concept dialog system that uses an EEG based predictive model to detect system misrecognitions during live interaction.
LTI_Alexander_Rudnicky.txt
- Longlu Qin, Ming Sun, Alexander I. Rudnicky. 2012. System combination for out-of-vocabulary word detection. Abstract: This paper presents a method to improve the out-of-vocabulary (OOV) word detection performance by combining multiple speech recognition systems' outputs. Three different fragment-word hybrid systems, the phone, subword, and graphone systems, were built for detecting OOV words. Then outputs from each individual system were combined using ROVER. Two combination metrics were explored in ROVER, voting by word frequency and voting by both word frequency and word confidence score. The experimental results show that the OOV word detection performance of the ROVER system with confidence scores is better than the ROVER system with only word frequency, as well as any of the individual hybrid systems.
LTI_Alexander_Rudnicky.txt
- Longlu Qin, Alexander I. Rudnicky. 2012. OOV Word Detection using Hybrid Models with Mixed Types of Fragments. Abstract: This paper presents initial studies to improve the out-of-vocabulary (OOV) word detection performance by using mixed types of fragment units in one hybrid system. Three types of fragment units, subwords, syllables, and graphones, were combined in two different ways to build the hybrid lexicon and language model. The experimental results show that hybrid systems with mixed types of fragment units perform better than hybrid systems using only one type of fragment unit. After comparing the OOV word detection performance with the number and length of fragment units of each system, we proposed future work to better utilize mixed types of fragment units in a hybrid system.
LTI_Alexander_Rudnicky.txt
- Aasish Pappu, Alexander I. Rudnicky. 2012. The Structure and Generality of Spoken Route Instructions. Abstract: A robust system that understands route instructions should be able to process instructions generated naturally by humans. Also desirable would be the ability to handle repairs and other modifications to existing instructions. To this end, we collected a corpus of spoken instructions (and modified instructions) produced by subjects provided with an origin and a destination. We found that instructions could be classified into four categories, depending on their intent such as imperative, feedback, or meta comment. We asked a different set of subjects to follow these instructions to determine the usefulness and comprehensibility of individual instructions. Finally, we constructed a semantic grammar and evaluated its coverage. To determine whether instruction-giving forms a predictable sub-language, we tested the grammar on three corpora collected by others and determined that this was largely the case. Our work suggests that predictable sub-languages may exist for well-defined tasks.
LTI_Alexander_Rudnicky.txt
- B. Miller, C. H. Hwang, Yugyung Lee, Janet Roberts, Alexander I. Rudnicky. 2011. The I3S Project: A Mixed, Behavioral and Semantic Approach to Discourse/Dialogue Systems. Abstract: The Intuitive Interfaces to Information Systems (I3S) project at MCC investigated novel approaches to simplifying the construction of spoken dialogue systems. Our goals included designing a system architecture that allows domain independent strategies, such as appropriate conversational gambits, to be separated from domain dependent strategies, such as the most effective prompt to accomplish the immediate task at hand. The system uses plan-based representations for dialogue and domain, and includes such components as a problem solver and plan recognizer. In addition, a new representation called Meta Problem Solving Actions that provides the rationale for problem solving behavior has been introduced to improve overall system behavior and coherency. Other important contributions include the development of a conceptual layer called Interaction Plans that relate Meta Problem Solving Actions to discourse phenomena. We have used these representations to develop an innovative interpretation strategy for user speech acts, using a combination of behavioral and semantic rules and representations to determine the most reasonable interpretation while maintaining real-time response. Our new representations lead to reduced application development and maintenance time. Prototypes have been implemented in an information service-based domain (City Resources).
Overview of Current Technology: As the amount and complexity of interaction between humans and computers increase, the role of the computer is becoming that of collaboration with humans. A key aspect of this role is the support of mixed-initiative interaction (Allen 1999). In order to properly support such interaction, as well as capture the rationale behind communicative and domain actions, an extensive and flexible framework is required. Common frameworks for dialogue systems built to date include graph-based, frame-based and plan-based. We will first look at the advantages and disadvantages of these systems to motivate our design decisions. First generation spoken dialogue management systems typically involve the construction of conversational flowcharts linking possible dialogue states. Each state specifies a prompt, and enumerates possible user responses that transition the dialogue to a new state (e.g., see (McTear 1998)). A practical difficulty with this approach is that conversational context must be explicitly encoded in the conversational graph. Thus, graph-based systems are exponentially hard in the input domain, i.e., every possible state must be explicitly encoded. Second generation approaches (for example, (Ward and Issar 1994)) substantially simplify this development work by making an assumption that possible interactions with the application can be expressed as a set of frames, e.g., see (Hayes and Reddy 1983), and that interaction in the domain can be driven by filling out a frame.
Given that the parser also outputs semantic frames, the application frame can be filled in with a few rules (e.g., what to do when the parsed frame specifies an already filled in slot) as well as rules for generating a prompt back to the user. Frame-and-Rule systems have a number of advantages over the graph-based model. For one, it is relatively simple to allow for a kind of mixed-initiative interaction when the domain supports keyword or phrase-based parsing. A keyword in the input stream allows the parser to make a fairly good guess as to the corresponding semantic frame, and a deep parse is not required. Prompting is simply based on unfilled slots in the application semantic frame. This allows the following sort of interaction: (S) Where would you like to go? (U) I need to leave sometime after 4pm today. (S) Leaving today after 4pm; where would you like to go? Here the system prompt may have been generated on the basis of having a slot for arrival-city in the application semantic frame. If it is currently unfilled, the rule system may select it as the next thing to ask the user. The user, on the other hand, ignores the prompt and supplies a departure time. So long as the parser can recognize the keyword “leave” and realize that a time has been supplied, it can guess that a departure time has been specified regardless of the deeper meaning of the sentence. If this fills in another unfilled slot in the application frame, the system may enter
LTI_Alexander_Rudnicky.txt
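The frame-and-rule interaction described in the entry above (prompting from unfilled slots while accepting whatever slots the parser returns) can be sketched in a few lines. This is a hypothetical toy implementation; the slot names and prompts are invented to mirror the example dialogue in the abstract.
```python
# Hypothetical frame-and-rule dialogue manager: the application frame is a set
# of slots, each parse fills whatever slots it can, and the next system prompt
# is generated from an unfilled slot.

frame = {"departure_city": None, "arrival_city": None, "departure_time": None}
prompts = {
    "departure_city": "Where are you leaving from?",
    "arrival_city": "Where would you like to go?",
    "departure_time": "When would you like to leave?",
}

def update_frame(frame, parsed):
    """parsed: slot/value pairs returned by the semantic parser for one user turn."""
    for slot, value in parsed.items():
        if slot in frame:                    # simple rule: accept any recognized slot
            frame[slot] = value
    return frame

def next_prompt(frame):
    for slot, value in frame.items():
        if value is None:
            return prompts[slot]
    return "Shall I book that?"

# The user ignores the prompt and supplies a departure time instead of a destination.
update_frame(frame, {"departure_time": "after 4pm today"})
print(next_prompt(frame))   # the system still asks about an unfilled slot
```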
- M. Marge, Alexander I. Rudnicky. 2011. Towards Overcoming Miscommunication in Situated Dialogue by Asking Questions. Abstract: Situated dialogue is prominent in the robot navigation task, where a human gives route instructions (i.e., a sequence of navigation commands) to an agent. We propose an approach for situated dialogue agents whereby they use strategies such as asking questions to repair or recover from unclear instructions, namely those that an agent misunderstands or considers ambiguous. Most immediately in this work we study examples from existing human-human dialogue corpora and relate them to our proposed approach.
LTI_Alexander_Rudnicky.txt
- Longlu Qin, Ming Sun, Alexander I. Rudnicky. 2011. OOV Detection and Recovery Using Hybrid Models with Different Fragments. Abstract: In this paper, we address the out-of-vocabulary (OOV) detection and recovery problem by developing three different fragment-word hybrid systems. A fragment language model (LM) and a word LM were trained separately and then combined into a single hybrid LM. Using this hybrid model, the recognizer can recognize any OOVs as fragment sequences. Different types of fragments, such as phones, subwords, and graphones were tested and compared on the WSJ 5k and 20k evaluation sets. The experimental results show that the subword and graphone hybrid systems perform better than the phone hybrid system in both 5k and 20k tasks. Furthermore, given less training data, the subword hybrid system is preferable to the graphone hybrid system.
LTI_Alexander_Rudnicky.txt
- M. Marge, Alexander I. Rudnicky. 2011. The TeamTalk Corpus: Route Instructions in Open Spaces. Abstract: This paper describes the TeamTalk corpus, a new corpus of route instructions consisting of directions given to a robot. Participants provided instructions to a robot that needed to move to a marked location. The environment contained two robots and a symbolic destination marker, all within an open space. The corpus contains the collected speech, speech transcriptions, stimuli, and logs of all participant interactions from the experiment. Route instruction transcriptions are divided into steps and annotated as either metric-based or landmark-based instructions. This corpus captured variability in directions for robots represented in 2-dimensional schematic, 3-dimensional virtual, and natural environments, all in the context of open space navigation.
LTI_Alexander_Rudnicky.txt
- A. Nanavati, Nitendra Rajput, Alexander I. Rudnicky, M. Turunen, A. Kun, Tim Paek, I. Tashev. 2011. SiMPE: 6th Workshop on Speech in Mobile and Pervasive Environments. Abstract: With the proliferation of pervasive devices and the increase in their processing capabilities, client-side speech processing has been emerging as a viable alternative. The SiMPE workshop series started in 2006 [5] with the goal of enabling speech processing on mobile and embedded devices to meet the challenges of pervasive environments (such as noise) and leveraging the context they offer (such as location). SiMPE 2010, the latest in the series, very successfully brought together researchers from the speech and the HCI communities. We believe this is the beginning.
SiMPE 2011, the 6th in the series, will continue to explore issues, possibilities, and approaches for enabling speech processing as well as convenient and effective speech and multimodal user interfaces. Over the years, SiMPE has been evolving too, and since last year, one of our major goals has been to increase the participation of speech/multimodal HCI designers, and increase their interactions with speech processing experts.
Multimodality got more attention in SiMPE 2008 than it had received in the previous years. In SiMPE 2007 [4], the focus was on developing regions. Given the importance of speech in developing regions, SiMPE 2008 had "SiMPE for developing regions" as a topic of interest. Speech User interaction in cars was a focus area in 2009 [2].
LTI_Alexander_Rudnicky.txt
- Cheongjae Lee, Tatsuya Kawahara, Alexander I. Rudnicky. 2011. Collecting Speech Data using Amazon's Mechanical Turk for Evaluating Voice Search System. Abstract: This paper describes a crowd-sourcing method to collect speech data using Amazon’s Mechanical Turk (MTurk). We designed a task (HIT) to collect speech data as an evaluation set for voice search and another task to verify the quality of the collected speech data. More than a thousand utterances were collected very efficiently. It turned out that more than 90% of them are valid with correct transcripts, and reasonable recognition accuracy is achieved. Using the data, we conducted an evaluation of the voice book search system, and confirmed that the combination of slot-based vector space models provides higher search accuracy than the conventional single vector space model.
LTI_Alexander_Rudnicky.txt
- M. Marge, Alexander I. Rudnicky. 2010. Comparing Spoken Language Route Instructions for Robots across Environment Representations. Abstract: Spoken language interaction between humans and robots in natural environments will necessarily involve communication about space and distance. The current study examines people's close-range route instructions for robots and how the presentation format (schematic, virtual or natural) and the complexity of the route affect the content of instructions. We find that people have a general preference for providing metric-based instructions. At the same time, presentation format appears to have less impact on the formulation of these instructions. We conclude that understanding of spatial language requires handling both landmark-based and metric-based expressions.
LTI_Alexander_Rudnicky.txt
- Cheongjae Lee, Alexander I. Rudnicky, G. G. Lee. 2010. Let's Buy Books: Finding eBooks using voice search. Abstract: We describe Let's Buy Books, a dialog system that helps users search for eBook titles. In this paper we compare different vector space approaches to voice search and find that a hybrid approach using a weighted sub-space model smoothed with a general model provides the best performance over different conditions and evaluated using both synthetic queries and queries collected from users through questionnaires.
LTI_Alexander_Rudnicky.txt
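A hypothetical sketch of the slot-based vector space idea in the entry above: per-field (title, author) TF-IDF sub-models are interpolated with a general model built over the whole record. The books, fields, and interpolation weights are toy values, and the smoothing here is a simple linear mixture rather than the paper's exact weighting scheme.
```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

books = [{"title": "the old man and the sea", "author": "ernest hemingway"},
         {"title": "a farewell to arms", "author": "ernest hemingway"},
         {"title": "sea of tranquility", "author": "emily st john mandel"}]

fields = ["title", "author"]
field_vec = {f: TfidfVectorizer().fit([b[f] for b in books]) for f in fields}
general_vec = TfidfVectorizer().fit([" ".join(b.values()) for b in books])

def search(query, w_field=0.6, w_general=0.4):
    scores = np.zeros(len(books))
    for f in fields:                                           # slot-based sub-space models
        q = field_vec[f].transform([query])
        docs = field_vec[f].transform([b[f] for b in books])
        scores += (w_field / len(fields)) * cosine_similarity(q, docs)[0]
    q = general_vec.transform([query])                         # smoothing with a general model
    docs = general_vec.transform([" ".join(b.values()) for b in books])
    scores += w_general * cosine_similarity(q, docs)[0]
    return books[int(scores.argmax())]

print(search("old man and the sea by hemingway")["title"])
```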
- M. Marge, João Miranda, A. Black, Alexander I. Rudnicky. 2010. Towards Improving the Naturalness of Social Conversations with Dialogue Systems. Abstract: We describe an approach to improving the naturalness of a social dialogue system, Talkie, by adding disfluencies and other content-independent enhancements to synthesized conversations. We investigated whether listeners perceive conversations with these improvements as natural (i.e., human-like) as human-human conversations. We also assessed their ability to correctly identify these conversations as between humans or computers. We find that these enhancements can improve the perceived naturalness of conversations for observers "overhearing" the dialogues.
LTI_Alexander_Rudnicky.txt
- M. Marge, S. Banerjee, Alexander I. Rudnicky. 2010. Using the Amazon Mechanical Turk to Transcribe and Annotate Meeting Speech for Extractive Summarization. Abstract: Due to its complexity, meeting speech provides a challenge for both transcription and annotation. While Amazon's Mechanical Turk (MTurk) has been shown to produce good results for some types of speech, its suitability for transcription and annotation of spontaneous speech has not been established. We find that MTurk can be used to produce high-quality transcription and describe two techniques for doing so (voting and corrective). We also show that using a similar approach, high quality annotations useful for summarization systems can also be produced. In both cases, accuracy is comparable to that obtained using trained personnel.
LTI_Alexander_Rudnicky.txt
- Alexander I. Rudnicky, Aasish Pappu, Peng Li, M. Marge, Benjamin Frisch. 2010. Instruction Taking in the TeamTalk System. Abstract: TeamTalk is a dialog framework that supports multiparticipant spoken interaction between humans and robots in a task-oriented setting that requires cooperation and coordination among team members. We describe two new features of the system: the ability for robots to accept and remember location labels, and the ability to learn action sequences. These capabilities were made possible by incorporating an ontology and an instruction understanding component into the system.
LTI_Alexander_Rudnicky.txt
- M. Marge, S. Banerjee, Alexander I. Rudnicky. 2010. Using the Amazon Mechanical Turk for transcription of spoken language. Abstract: We investigate whether Amazon's Mechanical Turk (MTurk) service can be used as a reliable method for transcription of spoken language data. Utterances with varying speaker demographics (native and non-native English, male and female) were posted on the MTurk marketplace together with standard transcription guidelines. Transcriptions were compared against transcriptions carefully prepared in-house through conventional (manual) means. We found that transcriptions from MTurk workers were generally quite accurate. Further, when transcripts for the same utterance produced by multiple workers were combined using the ROVER voting scheme, the accuracy of the combined transcript rivaled that observed for conventional transcription methods. We also found that accuracy is not particularly sensitive to payment amount, implying that high quality results can be obtained at a fraction of the cost and turnaround time of conventional methods.
LTI_Alexander_Rudnicky.txt
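The voting combination of worker transcripts mentioned in the entry above can be approximated as below: align each transcript to a reference transcript and take a word-level majority vote. This is only a hypothetical, difflib-based stand-in for ROVER, which builds a full word transition network over all hypotheses; the transcripts are invented examples.
```python
from collections import Counter
from difflib import SequenceMatcher

def rover_like_vote(transcripts):
    """Combine several transcripts of one utterance by word-level voting."""
    ref = max(transcripts, key=len).split()        # use the longest transcript as reference
    votes = [Counter() for _ in ref]
    for t in transcripts:
        hyp = t.split()
        for tag, a1, a2, b1, b2 in SequenceMatcher(a=ref, b=hyp).get_opcodes():
            # count matching words and same-length substitutions at each reference slot
            if tag in ("equal", "replace") and (a2 - a1) == (b2 - b1):
                for k in range(a2 - a1):
                    votes[a1 + k][hyp[b1 + k]] += 1
    return " ".join(c.most_common(1)[0][0] if c else ref[i]
                    for i, c in enumerate(votes))

workers = ["please send the report by friday",
           "please send a report by friday",
           "please send the report by friday"]
print(rover_like_vote(workers))   # the majority word wins at each position
```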
- Longlu Qin, Alexander I. Rudnicky. 2010. Implementing and Improving MMIE Training in SphinxTrain. Abstract: Discriminative training schemes, such as Maximum Mutual Information Estimation (MMIE), have been used to improve the accuracy of speech recognition systems trained using Maximum Likelihood Estimation (MLE). In this paper, we present the implementation details of MMIE training in SphinxTrain and baseline results for MMIE training on the Wall Street Journal (WSJ) SI84 and SI284 data sets. This paper also introduces an efficient lattice pruning technique that both speeds up the process and increases the impact of MMIE training on recognition accuracy. The proposed pruning technique, based on posterior probability pruning, is shown to provide better performance than MMIE using standard pruning techniques.
LTI_Alexander_Rudnicky.txt
- Longlu Qin, Alexander I. Rudnicky. 2010. The effect of lattice pruning on MMIE training. Abstract: In discriminative training, such as Maximum Mutual Information Estimation (MMIE) training, a word lattice is usually used as a compact representation of many different sentence hypotheses and hence provides an efficient representation of the confusion data. However, in a large vocabulary continuous speech recognition (LVCSR) system trained from hundreds or thousands of hours of training data, the extended Baum-Welch (EBW) computation on the word lattice is still very expensive. In this paper, we investigated the effect of lattice pruning on MMIE training, where we tested the MMIE performance trained with different lattice complexity. A beam pruning and a posterior probability pruning method were applied to generate different sizes of word lattices. The experimental results show that using the posterior probability lattice pruning algorithm, we can save about 40% of the total computation and get the same or more improvement compared to the baseline MMIE result.
LTI_Alexander_Rudnicky.txt
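Posterior-probability pruning, as used in the two entries above, can be illustrated on a toy word lattice: compute each arc's posterior with a forward-backward pass and drop arcs below a threshold before accumulating MMIE statistics. The sketch below is a hypothetical simplification (a single combined log-score per arc, nodes assumed topologically ordered, toy values), not SphinxTrain code.
```python
import math

def logsumexp(xs):
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

def arc_posteriors(arcs, start, end):
    """arcs: list of (from_node, to_node, word, log_score); nodes topologically ordered ints."""
    nodes = sorted({n for f, t, w, s in arcs for n in (f, t)})
    fwd = {n: None for n in nodes}; fwd[start] = 0.0
    for n in nodes:                                     # forward pass
        inc = [fwd[f] + s for f, t, w, s in arcs if t == n and fwd[f] is not None]
        if inc:
            fwd[n] = logsumexp(inc)
    bwd = {n: None for n in nodes}; bwd[end] = 0.0
    for n in reversed(nodes):                           # backward pass
        out = [bwd[t] + s for f, t, w, s in arcs if f == n and bwd[t] is not None]
        if out:
            bwd[n] = logsumexp(out)
    total = fwd[end]
    return [(w, math.exp(fwd[f] + s + bwd[t] - total)) for f, t, w, s in arcs]

def prune(arcs, start, end, threshold=0.05):
    post = arc_posteriors(arcs, start, end)
    return [arc for arc, (w, p) in zip(arcs, post) if p >= threshold]

# tiny lattice: a competing low-scoring word ("fights") on the 1 -> 2 span gets pruned
lattice = [(0, 1, "show", -0.1), (1, 2, "flights", -0.2), (1, 2, "fights", -4.0)]
print(prune(lattice, start=0, end=2))
```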
- A. Nanavati, Nitendra Rajput, Alexander I. Rudnicky, M. Turunen, A. Kun, Tim Paek, I. Tashev. 2010. SiMPE: 5th workshop on speech in mobile and pervasive environments. Abstract: With the proliferation of pervasive devices and the increase in their processing capabilities, client-side speech processing has been emerging as a viable alternative. The SiMPE workshop series started in 2006 [5] with the goal of enabling speech processing on mobile and embedded devices to meet the challenges of pervasive environments (such as noise) and leveraging the context they offer (such as location).
SiMPE 2010, the 5th in the series, will continue to explore issues, possibilities, and approaches for enabling speech processing as well as convenient and effective speech and multimodal user interfaces. Over the years, SiMPE has been evolving too, and since last year, one of our major goals has been to increase the participation of speech/multimodal HCI designers, and increase their interactions with speech processing experts.
Multimodality got more attention in SiMPE 2008 than it had received in the previous years. In SiMPE 2007 [4], the focus was on developing regions. Given the importance of speech in developing regions, SiMPE 2008 had "SiMPE for developing regions" as a topic of interest. Speech User interaction in cars was a focus area in 2009 [2].
Given the multi-disciplinary nature of our goal, we hope that SiMPE will become the prime meeting ground for experts in these varied fields to bring to fruition novel, useful and usable mobile speech applications.
LTI_Alexander_Rudnicky.txt
- S. Banerjee, Alexander I. Rudnicky. 2009. Detecting the Noteworthiness of Utterances in Human Meetings. Abstract: Our goal is to make note-taking easier in meetings by automatically detecting noteworthy utterances in verbal exchanges and suggesting them to meeting participants for inclusion in their notes. To show feasibility of such a process we conducted a Wizard of Oz study where the Wizard picked automatically transcribed utterances that he judged as noteworthy, and suggested their contents to the participants as notes. Over 9 meetings, participants accepted 35% of these suggestions. Further, 41.5% of their notes at the end of the meeting contained Wizard-suggested text. Next, in order to perform noteworthiness detection automatically, we annotated a set of 6 meetings with a 3-level noteworthiness annotation scheme, which is a break from the binary "in summary"/"not in summary" labeling typically used in speech summarization. We report Kappa of 0.44 for the 3-way classification, and 0.58 when two of the 3 labels are merged into one. Finally, we trained an SVM classifier on this annotated data; this classifier's performance lies between that of trivial baselines and inter-annotator agreement.
LTI_Alexander_Rudnicky.txt
- Roni Rosenfeld, Alexander I. Rudnicky, J. Sherwani. 2009. Speech interfaces for information access by low literate users. Abstract: In the developing world, critical information, such as in the field of healthcare, can often mean the difference between life and death. While information and communications technologies enable multiple mechanisms for information access by literate users, there are limited options for information access by low literate users.
In this thesis, I investigate the use of spoken language interfaces by low literate users in the developing world, specifically health information access by community health workers in Pakistan. I present results from five user studies comparing a variety of information access interfaces for these users. I first present a comparison of audio and text comprehension by users of varying literacy levels and with diverse linguistic backgrounds. I also present a comparison of two telephony-based interfaces with different input modalities: touch-tone and speech. Based on these studies, I show that speech interfaces outperform equivalent touch-tone interfaces for both low literate and literate users, and that speech interfaces outperform text interfaces for low literate users.
A further contribution of the thesis is a novel approach for the rapid generation of speech recognition capability in resource-poor languages. Since most languages spoken in the developing world have limited speech resources, it is difficult to create speech recognizers for such languages. My approach leverages existing off-the-shelf technology to create robust, speaker-independent, small-vocabulary speech recognition capability with minimal training data requirements. I empirically show that this method is able to reach recognition accuracies of greater than 90% with very little effort and, even more importantly, little speech technology skill.
The thesis concludes with an exploration of orality as a lens with which to analyze and understand low literate users, as well as recommendations on the design and testing of user interfaces for such users, such as an appreciation for the role of dramatic narrative in content creation for information access systems.
LTI_Alexander_Rudnicky.txt
- Mohit Kumar, Dipanjan Das, Sachin Agarwal, Alexander I. Rudnicky. 2009. Non-textual Event Summarization by Applying Machine Learning to Template-based Language Generation. Abstract: We describe a learning-based system that creates draft reports based on observation of people preparing such reports in a target domain (conference replanning). The reports (or briefings) are based on a mix of text and event data. The latter consist of task creation and completion actions, collected from a wide variety of sources within the target environment. The report drafting system is part of a larger learning-based cognitive assistant system that improves the quality of its assistance based on an opportunity to learn from observation. The system can learn to accurately predict the briefing assembly behavior and shows significant performance improvements relative to a non-learning system, demonstrating that it's possible to create meaningful verbal descriptions of activity from event streams.
LTI_Alexander_Rudnicky.txt
- Kazunori Komatani, Alexander I. Rudnicky. 2009. Predicting Barge-in Utterance Errors by using Implicitly-Supervised ASR Accuracy and Barge-in Rate per User. Abstract: Modeling of individual users is a promising way of improving the performance of spoken dialogue systems deployed for the general public and utilized repeatedly. We define "implicitly-supervised" ASR accuracy per user on the basis of responses following the system's explicit confirmations. We combine the estimated ASR accuracy with the user's barge-in rate, which represents how well the user is accustomed to using the system, to predict interpretation errors in barge-in utterances. Experimental results showed that the estimated ASR accuracy improved prediction performance. Since this ASR accuracy and the barge-in rate are obtainable at runtime, they improve prediction performance without the need for manual labeling.
LTI_Alexander_Rudnicky.txt
- A. Nanavati, Nitendra Rajput, Alexander I. Rudnicky, M. Turunen, A. Kun, Tim Paek, I. Tashev. 2009. SiMPE: Fourth Workshop on Speech in Mobile and Pervasive Environments. Abstract: With the proliferation of pervasive devices and the increase in their processing capabilities, client-side speech processing has been emerging as a viable alternative.
LTI_Alexander_Rudnicky.txt
SiMPE 2009, the fourth in the series, will continue to explore issues, possibilities, and approaches for enabling speech processing as well as convenient and effective speech and multimodal user interfaces. One of our major goals for SiMPE 2009 is to increase the participation of speech/multimodal HCI designers, and increase their interactions with speech processing experts.
LTI_Alexander_Rudnicky.txt
Multimodality got more attention in SiMPE 2008 than it had received in the previous years. In SiMPE 2007 [3], the focus was on developing regions. Given the importance of speech in developing regions, SiMPE 2008 had "SiMPE for developing regions" as a topic of interest. We think of this as a key emerging area for mobile speech applications, and will continue this in 2009 as well.
LTI_Alexander_Rudnicky.txt
- M. Marge, Aasish Pappu, Benjamin Frisch, T. Harris, Alexander I. Rudnicky. 2009. Exploring Spoken Dialog Interaction in Human-Robot Teams. Abstract: We describe TeamTalk: a human-robot interface capable of interpreting spoken dialog interactions between humans and robots in consistent real-world and virtual-world scenarios. The system is used in real environments by human-robot teams to perform tasks associated with treasure hunting. In order to conduct research exploring spoken human-robot interaction, we have developed a virtual platform using USARSim. We describe the system, its use as a high-fidelity simulator with USARSim, and current experiments that benefit from a simulated environment and that would be difficult to implement in real-world scenarios.
LTI_Alexander_Rudnicky.txt
- David Huggins-Daines, Alexander I. Rudnicky. 2009. Combining mixture weight pruning and quantization for small-footprint speech recognition. Abstract: Semi-continuous acoustic models, where the output distributions for all Hidden Markov Model states share a common codebook of Gaussian density functions, are a well-known and proven technique for reducing computation in automatic speech recognition. However, the size of the parameter files, and thus their memory footprint at runtime, can be very large. We demonstrate how non-linear quantization can be combined with a mixture weight distribution pruning technique to halve the size of the models with minimal performance overhead and no increase in error rate.
LTI_Alexander_Rudnicky.txt
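The two techniques in the entry above can be pictured together: prune negligible mixture weights from each state's distribution, then store the survivors as 8-bit log-domain quantized values. The sketch below is a hypothetical illustration that uses a cumulative-mass rule in place of the paper's entropy-based criterion; the weights are toy values and none of this is PocketSphinx/SphinxTrain code.
```python
import numpy as np

def prune_weights(weights, keep_mass=0.95):
    """Keep the largest weights until keep_mass of the probability is covered."""
    order = np.argsort(weights)[::-1]
    kept = np.zeros_like(weights)
    cum = 0.0
    for i in order:
        kept[i] = weights[i]
        cum += weights[i]
        if cum >= keep_mass:
            break
    return kept / kept.sum()                 # renormalize the surviving weights

def quantize_log8(weights, floor=1e-7):
    """Non-linear (log-domain) quantization of mixture weights to 8 bits."""
    logw = np.log(np.maximum(weights, floor))
    lo, hi = logw.min(), logw.max()
    q = np.round((logw - lo) / (hi - lo) * 255).astype(np.uint8)
    dequant = np.exp(q / 255.0 * (hi - lo) + lo)
    return q, dequant

w = np.array([0.55, 0.25, 0.12, 0.05, 0.02, 0.01])
pruned = prune_weights(w)
q, approx = quantize_log8(pruned)
print(pruned, q, approx, sep="\n")
```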
- A. Nanavati, Nitendra Rajput, Alexander I. Rudnicky, M. Turunen. 2008. SiMPE: third workshop on speech in mobile and pervasive environments. Abstract: In the past, voice-based applications have been accessed using unintelligent telephone devices through Voice Browsers that reside on the server. With the proliferation of pervasive devices and the increase in their processing capabilities, client-side speech processing has been emerging as a viable alternative. In SiMPE 2008, the third in the series, we will continue to explore the various possibilities and issues that arise while enabling speech processing on resource-constrained, possibly mobile devices.
In SiMPE 2007 [2], the focus was on developing regions. Given the importance of speech in developing regions, SiMPE 2008 will include "SiMPE for developing regions" as a topic of interest. As a result of discussions in SiMPE 2007, we plan to invite and encourage Speech UI designers to participate in SiMPE 2008. We will also review the progress made over the last two years, in the areas and key problems identified in SiMPE 2006 [3].
LTI_Alexander_Rudnicky.txt
- S. Banerjee, Alexander I. Rudnicky. 2008. An extractive-summarization baseline for the automatic detection of noteworthy utterances in multi-party human-human dialog. Abstract: Our goal is to reduce meeting participants' note-taking effort by automatically identifying utterances whose contents meeting participants are likely to include in their notes. Though note-taking is different from meeting summarization, these two problems are related. In this paper we apply techniques developed in extractive meeting summarization research to the problem of identifying noteworthy utterances. We show that these algorithms achieve an f-measure of 0.14 over a 5-meeting sequence of related meetings. The precision - 0.15 - is triple that of the trivial baseline of simply labeling every utterance as noteworthy. We also introduce the concept of "show-worthy" utterances - utterances that contain information that could conceivably result in a note. We show that such utterances can be recognized with an 81% accuracy (compared to 53% accuracy of a majority classifier). Further, if non-show-worthy utterances are filtered out, the precision of noteworthiness detection improves by 33% relative.
LTI_Alexander_Rudnicky.txt
- David Huggins-Daines, Alexander I. Rudnicky. 2008. Interactive ASR Error Correction for Touchscreen Devices. Abstract: We will demonstrate a novel graphical interface for correcting search errors in the output of a speech recognizer. This interface allows the user to visualize the word lattice by "pulling apart" regions of the hypothesis to reveal a cloud of words similar to the "tag clouds" popular in many Web applications. This interface is potentially useful for dictation on portable touchscreen devices such as the Nokia N800 and other mobile Internet devices.
LTI_Alexander_Rudnicky.txt
- Dipanjan Das, Mohit Kumar, Alexander I. Rudnicky. 2008. Automatic Extraction of Briefing Templates. Abstract: One approach to automatic briefing generation from non-textual events is to segment the task into two major steps, namely, extraction of briefing templates and learning aggregators that collate information from events and automatically fill up the templates. In this paper, we describe two novel unsupervised approaches for extracting briefing templates from human written reports. Since the problem is non-standard, we define our own criteria for evaluating the approaches and demonstrate that both approaches are effective in extracting domain-relevant templates with promising accuracies.
LTI_Alexander_Rudnicky.txt
- David Huggins-Daines, Alexander I. Rudnicky. 2008. Mixture Pruning and Roughening for Scalable Acoustic Models. Abstract: In an automatic speech recognition system using a tied-mixture acoustic model, the main cost in CPU time and memory lies not in the evaluation and storage of Gaussians themselves but rather in evaluating the mixture likelihoods for each state output distribution. Using a simple entropy-based technique for pruning the mixture weight distributions, we can achieve a significant speedup in recognition for a 5000-word vocabulary with a negligible increase in word error rate. This allows us to achieve real-time connected-word dictation on an ARM-based mobile device.
LTI_Alexander_Rudnicky.txt
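The pruning step named above is entropy-based. A minimal sketch of one plausible variant is given below, assuming a per-state budget of roughly exp(entropy) retained densities; the paper's exact criterion and thresholds may differ.

```python
import numpy as np

def prune_mixture_weights(mixw, entropy_scale=1.0, max_keep=None):
    """Zero out low mixture weights per state and renormalize.
    The per-state budget is roughly exp(entropy), so peaked distributions
    keep few densities and flat ones keep many; zeroed entries can then be
    skipped when evaluating state output likelihoods."""
    pruned = np.asarray(mixw, dtype=float).copy()
    for row in pruned:                                   # rows are views; edits stick
        nz = row[row > 0]
        h = -np.sum(nz * np.log(nz))                     # entropy in nats
        keep = int(np.ceil(entropy_scale * np.exp(h)))   # entropy-based budget
        if max_keep is not None:
            keep = min(keep, max_keep)
        keep = min(max(keep, 1), row.size)
        cutoff = np.sort(row)[::-1][keep - 1]            # keep-th largest weight
        row[row < cutoff] = 0.0
        row /= row.sum()
    return pruned
```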
- A. Chotimongkol, Alexander I. Rudnicky. 2008. Acquiring Domain-Specific Dialog Information from Task-Oriented Human-Human Interaction through an Unsupervised Learning. Abstract: We describe an approach for acquiring the domain-specific dialog knowledge required to configure a task-oriented dialog system that uses human-human interaction data. The key aspects of this problem are the design of a dialog information representation and a learning approach that supports capture of domain information from in-domain dialogs. To represent a dialog for learning purposes, we based our representation, the form-based dialog structure representation, on an observable structure. We show that this representation is sufficient for modeling phenomena that occur regularly in several dissimilar task-oriented domains, including information-access and problem-solving. With the goal of ultimately reducing human annotation effort, we examine the use of unsupervised learning techniques in acquiring the components of the form-based representation (i.e. task, subtask, and concept). These techniques include statistical word clustering based on mutual information and Kullback-Leibler distance, TextTiling, HMM-based segmentation, and bisecting K-means document clustering. With some modifications to make these algorithms more suitable for inferring the structure of a spoken dialog, the unsupervised learning algorithms show promise.
LTI_Alexander_Rudnicky.txt
- Alexander I. Rudnicky. 2008. Improving Automatic Meeting-Understanding by Leveraging Meeting Participant Behavior. Abstract: Most office workers participate in multiple meetings on a daily basis. Although surveys show that large parts of these meetings are often not useful to all the participants, it has been shown (Banerjee, Rose, & Rudnicky, 2005) that participants do sometimes need to retrieve information discussed at previous meetings, and that this is usually a difficult task. The human-impact goal of this thesis is to help humans retrieve the information they need from past meetings. Several approaches have been explored in the past to help humans with this retrieval task. These approaches include meeting recording and browsing systems (Cutler et al., 2002), and systems that automatically detect and extract pieces of useful information from the speech, such as action items (Purver, Ehlen, & Niekrasz, 2006). These approaches are often examples of either classic supervised learning (with offline data collection, annotation and model training) or unsupervised learning with some adaptation to the meeting participants. While these approaches make use of the expertise of offline human annotators, we believe that little effort has been made to effectively harness the knowledge that the meeting participants have. Specifically, meeting participants will be the best judges of what information is important in a meeting. This judgment, if properly leveraged, can provide high quality information with which to improve automatic meeting-understanding systems. The challenge of leveraging meeting participant knowledge, however, is that they may have little motivation to provide labeled data to the system without some perceptible and immediate benefit. Moreover, providing such information may be distracting and thus undesirable. Our hypothesis is that despite this challenge, it is possible to motivate the human users of an interactive system to provide supervision. We propose to extract this supervision by designing services that provide the user with immediate benefit, but that are designed in such a way that as the user interacts with the system, his actions can be interpreted as labeled data. Given this labeled data, the system can improve its performance over time. We propose two mechanisms: passive and active supervision extraction. In the passive approach, the system cannot select data points to query labels for, and data acquisition from user actions occurs entirely due to the design of the interface. In the active approach, the system selects data points and queries the user for their labels. Although this is similar to active learning, we are interested in motivating ordinary users to provide data (as opposed to giving the data to labelers). We create this motivation by embedding the queries in an interactive service that gives the user immediate benefit every time a query is made. The user’s responses are then interpreted as labels. We apply these
LTI_Alexander_Rudnicky.txt
- David Huggins-Daines, Alexander I. Rudnicky. 2007. Implicitly Supervised Language Model Adaptation for Meeting Transcription. Abstract: We describe the use of meeting metadata, acquired using a computerized meeting organization and note-taking system, to improve automatic transcription of meetings. By applying a two-step language model adaptation process based on notes and agenda items, we were able to reduce perplexity by 9% and word error rate by 4% relative on a set of ten meetings recorded in-house. This approach can be used to leverage other types of metadata.
LTI_Alexander_Rudnicky.txt
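The abstract above does not spell out the two-step adaptation itself; as a hedged illustration, the sketch below interpolates a background model first with an agenda-derived model and then with a notes-derived model. The unigram simplification, function names, and interpolation weights are assumptions for illustration; the real system presumably worked with full n-gram models.

```python
from collections import Counter

def unigram_lm(text):
    """Maximum-likelihood unigram model from a small piece of metadata text."""
    counts = Counter(text.lower().split())
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def interpolate(background, adapt, lam=0.9):
    """Linear interpolation; lam weights the background model."""
    vocab = set(background) | set(adapt)
    return {w: lam * background.get(w, 0.0) + (1 - lam) * adapt.get(w, 0.0)
            for w in vocab}

def adapt_lm(background, agenda_text, notes_text, lam1=0.95, lam2=0.9):
    """Two-step adaptation: fold in agenda items first, then participants' notes."""
    step1 = interpolate(background, unigram_lm(agenda_text), lam1)
    return interpolate(step1, unigram_lm(notes_text), lam2)
```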
- Mohit Kumar, Nikesh Garera, Alexander I. Rudnicky. 2007. Learning from the Report-writing Behavior of Individuals. Abstract: We describe a briefing system that learns to predict the contents of reports generated by users who create periodic (weekly) reports as part of their normal activity. The system observes content-selection choices that users make and builds a predictive model that could, for example, be used to generate an initial draft report. Using a feature of the interface the system also collects information about potential user-specific features. The system was evaluated under realistic conditions, by collecting data in a project-based university course where student group leaders were tasked with preparing weekly reports for the benefit of the instructors, using the material from individual student reports.
This paper addresses the question of whether data derived from the implicit supervision provided by end-users is robust enough to support not only model parameter tuning but also a form of feature discovery. Results indicate that this is the case: system performance improves based on the feedback from user activity. We find that individual learned models (and features) are user-specific, although not completely idiosyncratic. This may suggest that approaches which seek to optimize models globally (say, over a large corpus of data) may not in fact produce results acceptable to all individuals.
LTI_Alexander_Rudnicky.txt
- Giuseppe DiFabbrizio, Dilek Z. Hakkani-Tür, Oliver Lemon, M. Gilbert, Alexander I. Rudnicky. 2007. Panel on spoken dialog corpus composition and annotation for research. Abstract: The goal of this forum is to provide researchers from various institutes with the opportunity to comment on a proposed NSF-sponsored data collection plan for a spoken dialog corpus. The corpus is to be used for research in speech recognition, spoken language understanding, dialog management, machine learning, and language generation. Currently, there exists a corpus with over 600 dialog interactions, collected from users using the Discoh system (from the IEEE SLT 2006 workshop) and the Conquest system (from ICSLP 2006) to obtain general information about conference services. These systems were created as part of a joint collaboration between CMU, AT&T, Edinburgh, and ICSI.
LTI_Alexander_Rudnicky.txt
- T. Harris, Alexander I. Rudnicky. 2007. TeamTalk: A Platform for Multi-Human-Robot Dialog Research in Coherent Real and Virtual Spaces. Abstract: Performing experiments with human-robot interfaces often requires the allocation of expensive and complex hardware and large physical spaces. Those costs constrain development and research to the currently affordable resources, and they retard the testing-and-redevelopment cycle. In order to explore research free from mundane allocation constraints and speed up our platform development cycle, we have developed a platform for research of multi-human-robot spoken dialog in coherent real and virtual spaces. We describe the system, and speculate on how it will further research in this domain.
LTI_Alexander_Rudnicky.txt
- Yi Wu, Rong Zhang, Alexander I. Rudnicky. 2007. Data selection for speech recognition. Abstract: This paper presents a strategy for efficiently selecting informative data from large corpora of transcribed speech. We propose to choose data uniformly according to the distribution of some target speech unit (phoneme, word, character, etc). In our experiment, in contrast to the common belief that "there is no data like more data", we found it possible to select a highly informative subset of data that produces recognition performance comparable to a system that makes use of a much larger amount of data. At the same time, our selection process is efficient and fast.
LTI_Alexander_Rudnicky.txt
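The selection criterion above, choosing data uniformly over some target speech unit, can be approximated greedily. The sketch below uses words as the unit and a diminishing-returns gain so that under-represented units are favored; it illustrates the idea rather than reproducing the authors' exact procedure.

```python
from collections import Counter

def select_uniform(utterances, target_units, budget):
    """Greedy data selection: repeatedly pick the transcript that adds the most
    occurrences of currently under-represented units (words here; phonemes or
    characters work the same way), approximating a uniform unit distribution."""
    selected, counts = [], Counter()
    pool = list(utterances)
    for _ in range(min(budget, len(pool))):
        def gain(u):
            # each occurrence of a unit is worth less the more we already have
            return sum(1.0 / (1 + counts[w]) for w in u.split() if w in target_units)
        best = max(pool, key=gain)
        pool.remove(best)
        selected.append(best)
        counts.update(w for w in best.split() if w in target_units)
    return selected
```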
- S. Banerjee, Alexander I. Rudnicky. 2007. Segmenting meetings into agenda items by extracting implicit supervision from human note-taking. Abstract: Splitting a meeting into segments such that each segment contains discussions on exactly one agenda item is useful for tasks such as retrieval and summarization of agenda item discussions. However, accurate topic segmentation of meetings is a difficult task. In this paper, we investigate the idea of acquiring implicit supervision from human meeting participants to solve the segmentation problem. Specifically we have implemented and tested a note-taking interface that gives value to users by helping them organize and retrieve their notes easily, but that also extracts a segmentation of the meeting based on note-taking behavior. We show that the segmentation so obtained achieves a Pk value of 0.212 which improves upon an unsupervised baseline by 45% relative, and compares favorably with a current state-of-the-art algorithm. Most importantly, we achieve this performance without any features or algorithms in the classic sense.
LTI_Alexander_Rudnicky.txt
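For reference, the Pk figure cited above can be computed as in the sketch below: the standard Beeferman-style metric, written here over per-utterance segment labels (assumed distinct per segment) rather than boundary indices, which is a simplifying assumption for illustration.

```python
def pk(reference, hypothesis, k=None):
    """Pk segmentation error: the probability that two positions a fixed
    distance k apart are classified inconsistently (same vs. different
    segment) by the hypothesis relative to the reference.
    reference, hypothesis: lists of segment labels, one per utterance."""
    n = len(reference)
    if k is None:
        # conventional choice: half the mean reference segment length
        k = max(1, round(n / (2 * len(set(reference)))))
    errors = 0
    for i in range(n - k):
        same_ref = reference[i] == reference[i + k]
        same_hyp = hypothesis[i] == hypothesis[i + k]
        errors += same_ref != same_hyp
    return errors / (n - k)
```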
- D. Bohus, Alexander I. Rudnicky. 2007. Implicitly-supervised Learning in Spoken Language Interfaces: an Application to the Confidence Annotation Problem. Abstract: In this paper we propose the use of a novel learning paradigm in spoken language interfaces – implicitly-supervised learning. The central idea is to extract a supervision signal online, directly from the user, from certain patterns that occur naturally in the conversation. The approach eliminates the need for developer supervision and facilitates online learning and adaptation. As a first step towards better understanding its properties, advantages and limitations, we have applied the proposed approach to the problem of confidence annotation. Experimental results indicate that we can attain performance similar to that of a fully supervised model, without any manual labeling. In effect, the system learns from its own experiences with the users.
LTI_Alexander_Rudnicky.txt
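A minimal sketch of the paradigm follows, under the assumption that a "yes"/"no" reply to an explicit confirmation can be read as a correctness label for the confirmed hypothesis. The feature set and the scikit-learn classifier are illustrative stand-ins, not the system's actual model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def harvest_labels(turns):
    """Turn explicit-confirmation episodes into (features, label) pairs.
    A 'yes' after "Did you say X?" labels the hypothesis correct (1), a 'no'
    labels it incorrect (0); unconfirmed turns yield no training example.
    Each turn is assumed to carry (acoustic_score, lm_score, nbest_gap, reply)."""
    X, y = [], []
    for acoustic, lm, gap, reply in turns:
        reply = reply.lower()
        if reply.startswith("yes"):
            X.append([acoustic, lm, gap]); y.append(1)
        elif reply.startswith("no"):
            X.append([acoustic, lm, gap]); y.append(0)
    return np.array(X), np.array(y)

def train_confidence_model(turns):
    """Fit a confidence annotator from harvested labels (needs both classes)."""
    X, y = harvest_labels(turns)
    return LogisticRegression(max_iter=1000).fit(X, y)
```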
- J. Bongard, Derek P. Brock, S. Collins, R. Duraiswami, Timothy W. Finin, Ian Harrison, Vasant G Honavar, G. Hornby, A. Jónsson, Mike Kassoff, D. Kortenkamp, Sanjeev Kumar, Ken Murray, Alexander I. Rudnicky, G. Trajkovski. 2007. Reports on the 2006 AAAI Fall Symposia. Abstract: The American Association for Artificial Intelligence was pleased to present the AAAI 2006 Fall Symposium Series, held Friday through Sunday, October 13-15, at the Hyatt Regency Crystal City in Washington, DC. Seven symposia were held. The titles were (1) Aurally Informed Performance: Integrating Machine Listening and Auditory Presentation in Robotic Systems; (2) Capturing and Using Patterns for Evidence Detection; (3) Developmental Systems; (4) Integrating Reasoning into Everyday Applications; (5) Interaction and Emergent Phenomena in Societies of Agents; (6) Semantic Web for Collaborative Knowledge Acquisition; and (7) Spacecraft Autonomy: Using AI to Expand Human Space Exploration.
LTI_Alexander_Rudnicky.txt
- Alexander I. Rudnicky, Roni Rosenfeld, D. Bohus. 2007. Error awareness and recovery in conversational spoken language interfaces. Abstract: One of the most important and persistent problems in the development of conversational spoken language interfaces is their lack of robustness when confronted with understanding-errors. Most of these errors stem from limitations in current speech recognition technology, and, as a result, appear across all domains and interaction types. There are two approaches towards increased robustness: prevent the errors from happening, or recover from them through conversation, by interacting with the users.
In this dissertation we have engaged in a research program centered on the second approach. We argue that three capabilities are needed in order to seamlessly and efficiently recover from errors: (1) systems must be able to detect the errors, preferably as soon as they happen, (2) systems must be equipped with a rich repertoire of error recovery strategies that can be used to set the conversation back on track, and (3) systems must know how to choose optimally between different recovery strategies at run-time, i.e. they must have good error recovery policies. This work makes a number of contributions in each of these areas.
First, to provide a real-world experimental platform for this error handling research program, we developed RavenClaw, a plan-based dialog management framework for task-oriented domains. The framework has a modular architecture that decouples the error handling mechanisms from the domain-specific dialog control logic; in the process, it lessens system authoring effort, promotes portability and reusability, and ensures consistency in error handling behaviors both within and across domains. To date, RavenClaw has been used to develop and successfully deploy a number of spoken dialog systems spanning different domains and interaction types. Together with these systems, RavenClaw provides the infrastructure for the error handling work described in this dissertation.
To detect errors, spoken language interfaces typically rely on confidence scores. In this work we investigated in depth current supervised learning techniques for building error detection models. In addition, we proposed a novel, implicitly-supervised approach for this task. No developer supervision is required in this case; rather, the system obtains the supervision signal online, from naturally-occurring patterns in the interaction. We believe this learning paradigm represents an important step towards constructing autonomously self-improving systems. Furthermore, we developed a scalable, data-driven approach that allows a system to continuously monitor and update beliefs throughout the conversation; the proposed approach leads to significant improvements in both the overall effectiveness and efficiency of the interaction.
We developed and empirically investigated a large set of recovery strategies, targeting two types of understanding-errors that commonly occur in these systems: misunderstandings and non-understandings. Our results add to an existing body of knowledge about the advantages and disadvantages of these strategies, and highlight the importance of good recovery policies.
In the last part of this work, we proposed and evaluated a novel online-learning based approach for developing recovery policies. The system constructs runtime estimates for the likelihood of success of each recovery strategy, together with confidence bounds for those estimates. These estimates are then used to construct a policy online, while balancing the system's exploration and exploitation goals. Experiments with a deployed spoken dialog system showed that the system was able to learn a more effective recovery policy in a relatively short time period.
LTI_Alexander_Rudnicky.txt
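The online policy learning described in the last paragraph of the dissertation abstract pairs runtime success estimates with confidence bounds. A compact sketch in the spirit of UCB-style bandit selection is given below; it is an illustration of that balance between exploration and exploitation, not the dissertation's exact estimator.

```python
import math

class RecoveryPolicy:
    """Online selection among non-understanding recovery strategies.
    Keeps a running success estimate and an upper confidence bound per
    strategy and picks the most promising one (UCB1-style), trading off
    exploring untried strategies against exploiting good ones."""

    def __init__(self, strategies):
        self.stats = {s: [0, 0] for s in strategies}   # strategy -> [successes, tries]

    def choose(self):
        total = sum(t for _, t in self.stats.values()) + 1
        def ucb(item):
            wins, tries = item[1]
            if tries == 0:
                return float("inf")                    # try every strategy once
            return wins / tries + math.sqrt(2 * math.log(total) / tries)
        return max(self.stats.items(), key=ucb)[0]

    def update(self, strategy, succeeded):
        self.stats[strategy][0] += int(succeeded)
        self.stats[strategy][1] += 1
```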
- D. Bohus, A. Raux, T. Harris, M. Eskénazi, Alexander I. Rudnicky. 2007. Olympus: an open-source framework for conversational spoken language interface research. Abstract: We introduce Olympus, a freely available framework for research in conversational interfaces. Olympus' open, transparent, flexible, modular and scalable nature facilitates the development of large-scale, real-world systems, and enables research leading to technological and scientific advances in conversational spoken language interfaces. In this paper, we describe the overall architecture, several systems spanning different domains, and a number of current research efforts supported by Olympus.
LTI_Alexander_Rudnicky.txt
- Mohit Kumar, Dipanjan Das, Alexander I. Rudnicky. 2007. Summarizing non-textual events with 'Briefing' focus. Abstract: We describe a learning-based system for generating reports based on a mix of text and event data. The system incorporates several stages of processing, including aggregation, template-filling and importance ranking. Aggregators and templates were based on a corpus of reports evaluated by human judges. Importance and granularity were learned from this corpus as well. We find that high-scoring reports (with a recall of 0.89) can be reliably produced using this procedure given a set of oracle features. The report drafting system is part of RADAR, a learning cognitive assistant, and is used to describe its performance.
LTI_Alexander_Rudnicky.txt
- Mohit Kumar, Nikesh Garera, Alexander I. Rudnicky. 2006. A Briefing Tool that Learns Individual Report-Writing Behavior. Abstract: We describe a briefing system that learns to predict the contents of reports generated by users who create periodic (weekly) reports as part of their normal activity. We address the question of whether data derived from the implicit supervision provided by end-users is robust enough to support not only model parameter tuning but also a form of feature discovery. The system was evaluated under realistic conditions, by collecting data in a project-based university course where student group leaders were tasked with preparing weekly reports for the benefit of the instructors, using the material from individual student reports.
LTI_Alexander_Rudnicky.txt
- S. Banerjee, Alexander I. Rudnicky. 2006. A texttiling based approach to topic boundary detection in meetings. Abstract: Our goal is to automatically detect boundaries between discussions of different topics in meetings. Towards this end we adapt the TextTiling algorithm [1] to the context of meetings. Our features include not only the overlapped words between adjacent windows, but also overlaps in the amount of speech contributed by each meeting participant. We evaluate our algorithm by comparing the automatically detected boundaries with the true ones, and computing precision, recall and f-measure. We report average precision of 0.85 and recall of 0.59 when segmenting unseen test meetings. Error analysis of our results shows that although the basic idea of our algorithm is sound, it breaks down when participants stray from typical behavior (such as when they monopolize the conversation for too long).
LTI_Alexander_Rudnicky.txt
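To make the adapted-TextTiling idea in the entry above concrete, a rough sketch follows: each gap between utterances is scored by how dissimilar the lexical-plus-speaker-contribution profiles of the adjacent windows are, and the deepest dissimilarities are proposed as agenda-item boundaries. The window size and feature details are illustrative assumptions, not the published configuration.

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two sparse feature vectors (Counters)."""
    num = sum(a[k] * b.get(k, 0) for k in a)
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def window_features(utts):
    """Bag of words plus per-speaker speech contribution for a window of
    (speaker, text) pairs - the two overlap features named in the abstract."""
    feats = Counter()
    for spk, text in utts:
        toks = text.lower().split()
        feats.update(toks)                        # lexical overlap
        feats[("SPK", spk)] += len(toks)          # speaker-contribution overlap
    return feats

def boundary_scores(utterances, w=25):
    """Score every gap: low similarity between the windows before and after
    a gap suggests a topic (agenda item) boundary; returns gaps by depth."""
    scores = []
    for i in range(w, len(utterances) - w):
        left = window_features(utterances[i - w:i])
        right = window_features(utterances[i:i + w])
        scores.append((i, 1.0 - cosine(left, right)))
    return sorted(scores, key=lambda x: x[1], reverse=True)
```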