- Rong Zhang, Alexander I. Rudnicky. 2006. Investigations of issues for using multiple acoustic models to improve continuous speech recognition. Abstract: This paper investigates two important issues in constructing and combining ensembles of acoustic models for reducing recognition errors. First, we investigate the applicability of the AnyBoost algorithm for acoustic model training. AnyBoost is a generalized Boosting method that allows the use of an arbitrary loss function as the training criterion to construct an ensemble of classifiers. We choose the MCE discriminative objective function for our experiments. Initial test results on a real-world meeting recognition corpus show that AnyBoost is a competitive alternative to the standard AdaBoost algorithm. Second, we investigate ROVER-based combination, focusing on the technique for selecting correct hypothesized words from the aligned word transition network (WTN). We propose a neural network based insertion detection and word scoring scheme for this purpose. Our approach consistently outperforms the voting technique currently used by ROVER in our experiments.
LTI_Alexander_Rudnicky.txt
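The voting step described in the entry above is easy to make concrete. Below is a minimal Python sketch of ROVER-style word-level voting, assuming the hypotheses have already been aligned into slots of a word transition network and each word carries a confidence score; the alpha weighting and the null-word handling are illustrative assumptions, not the paper's neural-network scoring scheme.

```python
# Hedged sketch of ROVER-style word-level voting over aligned hypotheses.
from collections import defaultdict

def rover_vote(aligned_slots, alpha=0.7):
    """aligned_slots: list of slots; each slot holds one (word, confidence)
    pair per system, with word=None marking an alignment placeholder."""
    output = []
    for slot in aligned_slots:
        scores = defaultdict(float)
        n = len(slot)
        for word, conf in slot:
            # Combine vote frequency with acoustic confidence.
            scores[word] += alpha * (1.0 / n) + (1 - alpha) * conf
        best = max(scores, key=scores.get)
        if best is not None:          # skip slots won by the null word
            output.append(best)
    return output

slots = [[("the", 0.9), ("the", 0.8), ("a", 0.4)],
         [("cat", 0.7), ("cap", 0.6), ("cat", 0.9)],
         [(None, 0.5), ("sat", 0.3), (None, 0.6)]]
print(rover_vote(slots))  # -> ['the', 'cat']
```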
- A. Nanavati, Nitendra Rajput, Alexander I. Rudnicky, Roberto Sicconi. 2006. SiMPE: speech in mobile and pervasive environments. Abstract: Traditionally, voice-based applications have been accessed using unintelligent telephone devices through Voice Browsers that reside on the server. With the proliferation of pervasive devices and the increase in their processing capabilities, client-side speech processing is emerging as a viable alternative. This workshop will explore the various possibilities and issues that arise while enabling speech processing on resource-constrained, possibly mobile devices. The workshop will highlight the many open areas that require research attention, identify key problems that need to be addressed, and also discuss a few approaches for solving some of them, with the aim of building the next generation of conversational systems.
LTI_Alexander_Rudnicky.txt
- J. Bongard, Derek P. Brock, S. Collins, R. Duraiswami, Timothy W. Finin, Ian Harrison, Vasant G Honavar, G. Hornby, A. Jónsson, Mike Kassoff, D. Kortenkamp, Sanjeev Kumar, Ken Murray, Alexander I. Rudnicky, G. Trajkovski. 2006. Aurally Informed Performance: Integrating Machine Listening and Auditory Presentation in Robotic Systems, Papers from the 2006 AAAI Fall Symposium, Washington, DC, USA, October 13-15, 2006. Abstract: This symposium brought together a number of researchers who are concerned with performance issues that robots face that depend, in some way, on sound. Many commercially marketed robotic platforms, as well as others that are moving from the laboratory into specialized public settings, already have rudimentary speech communication interfaces, and some are even being engineered for specific types of auditory tasks. In general, though, the ability of robots to monitor the auditory scene before them and to execute interactive behaviors informed by the interpretation or production of sound information remains far behind the broad and mostly transparent skills of human beings. It is an easy thing, for instance, for people to discern on the basis of audition alone who a familiar voice is, where the voice is located in the environment, and to act on other aspects of the auditory scene before them, such as noise, informative sounds, or the need for proximity or loudness to facilitate verbal communication. When these auditory skills are integrated with people’s other perceptual and reasoning abilities, substantial capacities for performance and interaction arise. The design goals for robot audition and utterance of information by sound, though, are not just those that correspond to human skills. Machine auditory sensing can be designed, in certain ways, to be more capable and acute than human hearing, and going beyond speech, robotic auditory displays can be engineered to render nonspeech auditory information in and for a variety of manners and purposes. Thus, a substantial interaction design space arises when different modes of human-robot interaction are augmented by conventional and enhanced auditory functions. Since the idea of “aurally informed performance” can be thought of as a two-sided proposition, involving both listening and presentation behaviors, the symposium was organized to focus on these themes separately and then conclude with a session on integrated systems. Much of the contributed research fit easily into this division. On each of the two main days, we began with an outline of recent research trends in the day’s topic, “Listening” on the first day and “Presentation” on the second, and then heard contributed talks from participants. Afternoons were devoted to critical discussions of talks given in the morning and examination of a relevant philosophical question about the nature and role of sound as information in the design context of robotics. The outline of trends in machine listening covered developments in sound localization and techniques for recognizing and extracting information from sound. Papers given under this day’s theme made several contributions in these areas. Novel methods for classifying acoustical environments, localizing sounds with head-related transfer functions, organizing sounds with similar meanings, and perceiving and synthesizing speech were presented, as was a chip being engineered for real-time classification.
LTI_Alexander_Rudnicky.txt
- Rong Zhang, Alexander I. Rudnicky, Tanja Schultz, R. Stern, Karthik Venkat Ramanan. 2006. Improving the Performance of LVCSR Using Ensemble of Acoustic Models. Abstract: Recent advances in Machine Learning have brought to attention new theories of learning as well as new approaches. Among these, the ensemble method has received wide attention and has been shown to be a promising method for classification problems. Simply speaking, the ensemble method is a learning algorithm that constructs a set of “weak” classifiers and then combines their predictions to produce a more accurate classification. The underlying idea of the ensemble method is that the combination of diversified classifiers that have uncorrelated, and ideally complementary, error patterns can offer improved performance and a robust generalization capability. Given its successes on many classification problems, we began investigating the problem of adapting ensemble techniques to continuous speech recognition. Continuous speech recognition has been acknowledged as one of the most challenging tasks in classification. The performance of an ASR system is negatively impacted by a number of issues, such as corruption by noise, variability of speaker and speaking mode, changing environmental conditions, channel transmission effects, inaccuracy of model assumptions, and the complexity of language. The primary goal of our research is to discover methods for ensemble construction and combination that meet these special requirements of continuous speech recognition. We propose several novel ensemble-based acoustic model training and combination schemes, and test their effectiveness using real-world speech corpora. Preliminary results are described in this proposal, in particular: an utterance-level Boosting training algorithm for large-scale acoustic modeling; a frame-level Boosting training algorithm using a word error rate reduction criterion; and N-best list re-ranking and ROVER combination to generate a better hypothesis. Encouraging experimental results convince us that the ensemble technique is a promising method with the potential to substantially improve the performance of an LVCSR system. However, research on ensemble methods for speech recognition is still at an early stage, and unsolved questions on ensemble generation and hypothesis combination remain to be addressed. This proposal sets out several key research topics that, if successfully addressed, have the potential to significantly increase the accuracy of ensemble-based speech recognition systems. These include: training criteria targeted at reducing word error rate rather than sentence error rate; integrating data manipulation and feature manipulation methods for continuous speech recognition; combination methods working on different objects, at different levels, and at different decoding stages; and an ensemble-based semi-supervised acoustic model training algorithm using labeled and unlabeled data.
LTI_Alexander_Rudnicky.txt
- David Huggins-Daines, Mohit Kumar, Arthur Chan, A. Black, M. Ravishankar, Alexander I. Rudnicky. 2006. Pocketsphinx: A Free, Real-Time Continuous Speech Recognition System for Hand-Held Devices. Abstract: The availability of real-time continuous speech recognition on mobile and embedded devices has opened up a wide range of research opportunities in human-computer interactive applications. Unfortunately, most of the work in this area to date has been confined to proprietary software, or has focused on limited domains with constrained grammars. In this paper, we present a preliminary case study on the porting and optimization of CMU Sphinx-II, a popular open source large vocabulary continuous speech recognition (LVCSR) system, to hand-held devices. The resulting system operates at an average of 0.87 times real-time on a 206 MHz device, 8.03 times faster than the baseline system. To our knowledge, this is the first hand-held LVCSR system available under an open-source license.
LTI_Alexander_Rudnicky.txt
- S. Banerjee, Alexander I. Rudnicky. 2006. SmartNotes: Implicit Labeling of Meeting Data through User Note-Taking and Browsing. Abstract: We have implemented SmartNotes, a system that automatically acquires labeled meeting data as users take notes during meetings and browse the notes afterwards. Such data can enable meeting understanding components such as topic and action item detectors to automatically improve their performance over a sequence of meetings. The SmartNotes system consists of a laptop based note taking application, and a web based note retrieval system. We shall demonstrate the functionalities of this system, and will also demonstrate the labeled data obtained during typical meetings and browsing sessions.
LTI_Alexander_Rudnicky.txt
- D. Bohus, B. Langner, A. Raux, A. Black, M. Eskénazi, Alexander I. Rudnicky. 2006. ONLINE SUPERVISED LEARNING OF NON-UNDERSTANDING RECOVERY POLICIES. Abstract: Spoken dialog systems typically use a limited number of non-understanding recovery strategies and simple heuristic policies to engage them (e.g. first ask the user to repeat, then give help, then transfer to an operator). We propose a supervised, online method for learning a non-understanding recovery policy over a large set of recovery strategies. The approach consists of two steps: first, we construct runtime estimates of the likelihood of success of each recovery strategy, and then we use these estimates to construct a policy. An experiment with a publicly available spoken dialog system shows that the learned policy produced a 12.5% relative improvement in the non-understanding recovery rate.
LTI_Alexander_Rudnicky.txt
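The two-step method in the entry above (estimate each strategy's likelihood of success at runtime, then act on the estimates) can be sketched as follows. The strategy names, the feature handling, and the use of an online logistic learner are assumptions for illustration; the paper's own feature set and regression model may differ.

```python
# Hedged sketch: online supervised learning of a recovery policy.
from sklearn.linear_model import SGDClassifier

STRATEGIES = ["ask_repeat", "give_help", "move_on"]  # hypothetical strategy set

models = {s: SGDClassifier(loss="log_loss") for s in STRATEGIES}
fitted = {s: False for s in STRATEGIES}

def choose_strategy(features):
    """Greedy policy: pick the strategy with the highest estimated
    likelihood of successful recovery given the current dialog features."""
    def estimate(s):
        # Before any observations, fall back to a neutral prior.
        return models[s].predict_proba([features])[0, 1] if fitted[s] else 0.5
    return max(STRATEGIES, key=estimate)

def observe(strategy, features, recovered):
    """Online supervised update once the outcome of the strategy is known."""
    models[strategy].partial_fit([features], [int(recovered)], classes=[0, 1])
    fitted[strategy] = True
```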
- Rong Zhang, Alexander I. Rudnicky. 2006. A New Data Selection Approach for Semi-Supervised Acoustic Modeling. Abstract: Current approaches to semi-supervised incremental learning prefer to select unlabeled examples predicted with high confidence for model re-training. However, this strategy can degrade classification performance rather than improve it. We present an analysis of the reasons for this phenomenon, showing that relying only on high confidence for data selection can lead to an erroneous estimate of the true distribution when the confidence annotator is highly correlated with the classifier in the information they use. We propose a new data selection approach to address this problem and apply it to a variety of applications, including machine learning and speech recognition. Encouraging improvements in recognition accuracy are observed in our experiments.
LTI_Alexander_Rudnicky.txt
- S. Banerjee, Alexander I. Rudnicky. 2006. You Are What You Say: Using Meeting Participants’ Speech to Detect their Roles and Expertise. Abstract: Our goal is to automatically detect the functional roles that meeting participants play, as well as the expertise they bring to meetings. To perform this task, we build decision tree classifiers that use a combination of simple speech features (speech lengths and spoken keywords) extracted from the participants' speech in meetings. We show that this algorithm results in a role detection accuracy of 83% on unseen test data, where the random baseline is 33.3%. We also introduce a simple aggregation mechanism that combines evidence of the participants' expertise from multiple meetings. We show that this aggregation mechanism improves the role detection accuracy from 66.7% (when aggregating over a single meeting) to 83% (when aggregating over 5 meetings).
LTI_Alexander_Rudnicky.txt
- Rong Zhang, Alexander I. Rudnicky. 2006. A New Data Selection Principle for Semi-Supervised Incremental Learning. Abstract: Current semi-supervised incremental learning approaches select unlabeled examples with high predicted confidence for model re-training. We show that for many applications this data selection strategy is not correct. This is because the confidence score is primarily a measure of classification correctness on a particular example, rather than of the example's contribution to the training of an improved model, especially when the information used by the confidence annotator is correlated with that generated by the classifier. To address this problem, we propose a performance-driven principle for unlabeled data selection in which only the unlabeled examples that help to improve classification accuracy are selected for semi-supervised learning. Encouraging results are presented for a variety of public benchmark datasets.
LTI_Alexander_Rudnicky.txt
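A minimal sketch of the performance-driven selection principle from the two data-selection entries above: a pseudo-labeled batch is kept only if it actually improves accuracy on a held-out probe set, rather than merely because it was decoded with high confidence. The scikit-learn classifier and the batch interface are illustrative assumptions.

```python
# Hedged sketch of performance-driven data selection for semi-supervised learning.
import numpy as np
from sklearn.base import clone

def select_and_retrain(model, X_lab, y_lab, X_unlab, X_dev, y_dev, batches):
    """model: a fitted classifier; batches: iterable of index arrays into
    X_unlab (e.g. ordered most-confident first)."""
    base_acc = model.score(X_dev, y_dev)
    X_cur, y_cur = X_lab, y_lab
    for idx in batches:
        pseudo = model.predict(X_unlab[idx])       # pseudo-labels for the batch
        X_try = np.vstack([X_cur, X_unlab[idx]])
        y_try = np.concatenate([y_cur, pseudo])
        cand = clone(model).fit(X_try, y_try)
        acc = cand.score(X_dev, y_dev)
        if acc > base_acc:          # performance-driven: keep only if it helps
            model, base_acc = cand, acc
            X_cur, y_cur = X_try, y_try
    return model
```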
- D. Bohus, Alexander I. Rudnicky. 2006. A “K Hypotheses + Other” Belief Updating Model. Abstract: Spoken dialog systems typically rely on recognition confidence scores to guard against potential misunderstandings. While confidence scores can provide an initial assessment for the reliability of the information obtained from the user, ideally systems should leverage information that is available in subsequent user responses to update and improve the accuracy of their beliefs. We present a machine-learning based solution for this problem. We use a compressed representation of beliefs that tracks up to k hypotheses for each concept at any given time. We train a generalized linear model to perform the updates. Experimental results show that the proposed approach significantly outperforms heuristic rules used for this task in current systems. Furthermore, a user study with a mixed-initiative spoken dialog system shows that the approach leads to significant gains in task success and in the efficiency of the interaction, across a wide range of recognition error rates.
LTI_Alexander_Rudnicky.txt
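The compressed belief representation in the entry above can be illustrated with a hand-rolled generalized linear update: each tracked hypothesis is rescored through a logistic link from features of the user's response to a confirmation, with the residual probability kept in an explicit "other" bucket. The features and weights below are invented for illustration; the paper learns the model from dialog data.

```python
# Hedged sketch of a "k hypotheses + other" belief update.
import math

def update_belief(hyps, other_mass, feats_per_hyp, weights, bias):
    """hyps: list of (value, prob) for the top-k hypotheses; other_mass:
    probability reserved for everything outside the top k; feats_per_hyp:
    one feature vector per hypothesis, derived from the user's response
    to a confirmation (e.g. yes/no lexical cues) -- all illustrative."""
    scores = []
    for (value, prob), feats in zip(hyps, feats_per_hyp):
        z = bias + weights[0] * prob + sum(w * f for w, f in zip(weights[1:], feats))
        scores.append(1.0 / (1.0 + math.exp(-z)))   # logistic link
    total = sum(scores) + other_mass
    updated = [(value, s / total) for (value, _), s in zip(hyps, scores)]
    return updated, other_mass / total

# Two hypotheses for a "place" concept after an apparent "yes" to "Boston?":
print(update_belief([("Boston", 0.6), ("Austin", 0.3)], 0.1,
                    [[1.0], [0.0]], [2.0, 3.0], -1.0))
```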
- M. Dias, T. Harris, Brett Browning, E. Jones, B. Argall, M. Veloso, A. Stentz, Alexander I. Rudnicky. 2006. Dynamically Formed Human-Robot Teams Performing Coordinated Tasks. Abstract: In this new era of space exploration where human-robot teams are envisioned maintaining a long-term presence on other planets, effective coordination of these teams is paramount. Three critical research challenges that must be solved to realize this vision are the human-robot team challenge, the pickup-team challenge, and the effective human-robot communication challenge. In this paper, we address these challenges, propose a novel approach towards solving these challenges, and situate our approach in the newly introduced treasure hunt domain.
LTI_Alexander_Rudnicky.txt
- Derek P. Brock, R. Duraiswami, Alexander I. Rudnicky. 2006. Aurally informed performance: integrating machine listening and auditory presentation in robotic systems: Papers from the AAAI Fall Symposium: Technical Report FS-06-01. Abstract: AAAI maintains compilation copyright for this technical report and retains the right of first refusal to any publication (including electronic distribution) arising from this AAAI event. Please do not make any inquiries or arrangements for hardcopy or electronic publication of all or part of the papers contained in these working notes without first exploring the options available through AAAI Press and AI Magazine (concurrent submission to AAAI and another publisher is not acceptable). A signed release of this right by AAAI is required before publication by a third party.
LTI_Alexander_Rudnicky.txt
- David Huggins-Daines, Alexander I. Rudnicky. 2006. A constrained baum-welch algorithm for improved phoneme segmentation and efficient training. Abstract: We describe an extension to the Baum-Welch algorithm for training Hidden Markov Models that uses explicit phoneme segmentation to constrain the forward and backward lattice. The HMMs trained with this algorithm can be shown to improve the accuracy of automatic phoneme segmentation. In addition, this algorithm is significantly more computationally efficient than the full Baum-Welch algorithm, while producing models that achieve equivalent accuracy on a standard phoneme recognition task.
LTI_Alexander_Rudnicky.txt
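The constraint at the heart of the entry above admits a short sketch: given an explicit phone segmentation, states belonging to a phone receive forward probability mass only inside a padded window around that phone's frame range, which both sharpens boundaries and prunes computation. The interface, the padding, and the start-in-state-0 assumption below are illustrative, not the paper's exact formulation.

```python
# Hedged sketch of a segmentation-constrained forward pass.
import numpy as np

def constrained_forward(log_b, trans, state_phone, segments, pad=2):
    """log_b: (T, S) per-frame state log-likelihoods; trans: (S, S) log
    transition matrix (from -> to); state_phone: phone index per state;
    segments: {phone: (start_frame, end_frame)} from a forced segmentation."""
    T, S = log_b.shape
    mask = np.full((T, S), -np.inf)
    for s in range(S):
        lo, hi = segments[state_phone[s]]
        mask[max(0, lo - pad):min(T, hi + pad + 1), s] = 0.0  # allowed region
    alpha = np.full((T, S), -np.inf)
    alpha[0] = log_b[0] + mask[0]
    alpha[0, 1:] = -np.inf                      # assume we start in state 0
    for t in range(1, T):
        # logsumexp over predecessors, then apply emission and the mask
        prev = alpha[t - 1][:, None] + trans    # (S, S)
        alpha[t] = np.logaddexp.reduce(prev, axis=0) + log_b[t] + mask[t]
    return alpha
```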
- T. Harris, S. Banerjee, Alexander I. Rudnicky. 2005. Heterogeneous Multi-Robot Dialogues for Search Tasks. Abstract: Dialogue agents are often designed with the tacit assumption that at any one time, there is but one agent and one human, and that their communication channel is exclusive. We are interested in examining situations in which multiple heterogeneous dialogue agents need to interact with a human interlocutor, and where the communication channel becomes necessarily shared. To this end we have constructed a multi-agent dialogue test-bed on which to study dialogue coordination issues in multi-
LTI_Alexander_Rudnicky.txt
- Arthur Chan, M. Ravishankar, Alexander I. Rudnicky. 2005. On improvements to CI-based GMM selection. Abstract: Gaussian Mixture Model (GMM) computation is known to be one of the most computation-intensive components of speech recognition. In our previous work, context-independent model based GMM selection (CIGMMS) was found to be an effective way to reduce the cost of GMM computation without significant loss in recognition accuracy. In this work, we propose three methods to further improve the performance of CIGMMS. Each method brings an additional 5-10% relative speed improvement, with a cumulative improvement up to 37% on some tasks. Detailed analysis and experimental results on three corpora are presented.
LTI_Alexander_Rudnicky.txt
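The base CIGMMS idea that the entry above builds on can be sketched as follows, with the shapes, callables, and beam width as illustrative assumptions: cheap context-independent scores decide, frame by frame, whether each context-dependent senone's full mixture is evaluated or replaced by its CI backoff score.

```python
# Hedged sketch of context-independent GMM selection (CIGMMS).
import numpy as np

def cigmms_frame(x, ci_score, cd_score, cd_to_ci, beam=8.0):
    """ci_score(x) -> (num_ci,) log scores of the cheap CI models;
    cd_score(x, i) -> full GMM log score of CD senone i; cd_to_ci maps
    each CD senone to its base CI phone model."""
    ci = ci_score(x)
    best = ci.max()
    out = np.empty(len(cd_to_ci))
    for i, base in enumerate(cd_to_ci):
        if ci[base] >= best - beam:       # promising: pay for the full GMM
            out[i] = cd_score(x, i)
        else:                             # pruned: back off to the CI score
            out[i] = ci[base]
    return out
```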
- Nikesh Garera, Alexander I. Rudnicky. 2005. Briefing Assistant: Learning Human Summarization Behavior over Time. Abstract: We describe a system intended to help report writers produce summaries of important activities based on weekly interviews with members of a project. A key element of this system is to learn different user and audience preferences in order to produce tailored summaries. The system learns desired qualities of summaries based on observation of user selection behavior, and builds a regression-based model using item features as parameters. The system’s assistance consists of presenting the writer with a successively better ordered list of items from which to choose. Our evaluation study indicates a significant improvement in average precision (and other metrics) by the end of the learning period as compared to a baseline of no learning. We also describe our ongoing work on automatic feature extraction to make this approach domain independent.
LTI_Alexander_Rudnicky.txt
- D. Bohus, Alexander I. Rudnicky. 2005. Sorry, I Didn’t Catch That! - An Investigation of Non-understanding Errors and Recovery Strategies. Abstract: We present results from an extensive empirical analysis of non-understanding errors and ten non-understanding recovery strategies, based on a corpus of dialogs collected with a spoken dialog system that handles conference room reservations. More specifically, the issues we investigate are: what are the main sources of non-understanding errors? What is the impact of these errors on global performance? How do various strategies for recovery from non-understandings compare to each other? What are the relationships between these strategies and subsequent user response types, and which response types are more likely to lead to successful recovery? Can dialog performance be improved by using a smarter policy for engaging the non-understanding recovery strategies? If so, can we learn such a policy from data? Whenever available, we compare and contrast our results with other studies in the literature. Finally, we summarize the lessons learned and present our plans for future work inspired by this analysis.
LTI_Alexander_Rudnicky.txt
- D. Bohus, Alexander I. Rudnicky. 2005. A principled approach for rejection threshold optimization in spoken dialog systems. Abstract: A common design pattern in spoken dialog systems is to reject an input when the recognition confidence score falls below a preset rejection threshold. However, this introduces a potentially non-optimal tradeoff between various types of errors such as misunderstandings and false rejections. In this paper, we propose a data-driven method for determining the relative costs of these errors, and then use these costs to optimize state-specific rejection thresholds. We illustrate the use of this approach with data from a spoken dialog system that handles conference room reservations. The results obtained confirm our intuitions about the costs of the errors, and are consistent with anecdotal evidence gathered throughout the use of the system.
LTI_Alexander_Rudnicky.txt
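The tradeoff described in the entry above lends itself to a compact worked example: given logged confidence scores and correctness labels, sweep candidate thresholds and keep the one that minimizes total cost. The two cost constants below are placeholders; the paper derives state-specific costs from data.

```python
# Hedged sketch of cost-based rejection threshold optimization.
import numpy as np

def optimize_threshold(conf, correct, c_false_reject=1.0, c_misunderstand=2.0):
    """conf: per-utterance confidence scores; correct: whether each
    utterance was actually understood correctly. Costs are illustrative."""
    conf = np.asarray(conf)
    correct = np.asarray(correct, dtype=bool)
    best_t, best_cost = 0.0, np.inf
    for t in np.unique(np.concatenate(([0.0], conf, [1.0]))):
        false_rejects = np.sum((conf < t) & correct)        # good input rejected
        misunderstandings = np.sum((conf >= t) & ~correct)  # bad input accepted
        cost = c_false_reject * false_rejects + c_misunderstand * misunderstandings
        if cost < best_cost:
            best_t, best_cost = t, cost
    return best_t
```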
- D. Bohus, Alexander I. Rudnicky. 2005. Error Handling in the RavenClaw Dialog Management Architecture. Abstract: We describe the error handling architecture underlying the RavenClaw dialog management framework. The architecture provides a robust basis for current and future research in error detection and recovery. Several objectives were pursued in its development: task-independence, ease-of-use, adaptability and scalability. We describe the key aspects of architectural design which confer these properties, and discuss the deployment of this architecture in a number of spoken dialog systems spanning several domains and interaction types. Finally, we outline current research projects supported by this architecture.
LTI_Alexander_Rudnicky.txt
- Stefanie Tomko, T. Harris, Arthur R. Toth, James Sanders, Alexander I. Rudnicky, R. Rosenfeld. 2005. Towards efficient human machine speech communication: The speech graffiti project. Abstract: This research investigates the design and performance of the Speech Graffiti interface for spoken interaction with simple machines. Speech Graffiti is a standardized interface designed to address issues inherent in the current state-of-the-art in spoken dialog systems such as high word-error rates and the difficulty of developing natural language systems. This article describes the general characteristics of Speech Graffiti, provides examples of its use, and describes other aspects of the system such as the development toolkit. We also present results from a user study comparing Speech Graffiti with a natural language dialog system. These results show that users rated Speech Graffiti significantly better in several assessment categories. Participants completed approximately the same number of tasks with both systems, and although Speech Graffiti users often took more turns to complete tasks than natural language interface users, they completed tasks in slightly less time.
LTI_Alexander_Rudnicky.txt
- D. Bohus, Alexander I. Rudnicky. 2005. Error handling in the RavenClaw dialog management framework. Abstract: We describe the error handling architecture underlying the RavenClaw dialog management framework. The architecture provides a robust basis for current and future research in error detection and recovery. Several objectives were pursued in its development: task-independence, ease-of-use, adaptability and scalability. We describe the key aspects of architectural design which confer these properties, and discuss the deployment of this architecture in a number of spoken dialog systems spanning several domains and interaction types. Finally, we outline current research projects supported by this architecture.
LTI_Alexander_Rudnicky.txt
- D. Bohus, Alexander I. Rudnicky. 2005. Constructing accurate beliefs in spoken dialog systems. Abstract: We propose a novel approach for constructing more accurate beliefs over concept values in spoken dialog systems by integrating information across multiple turns in the conversation. In particular, we focus our attention on updating the confidence score of the top hypothesis for a concept, in light of subsequent user responses to system confirmation actions. Our data-driven approach bridges previous work in confidence annotation and correction detection, providing a unified framework for belief updating. The approach significantly outperforms heuristic rules currently used in most spoken dialog systems.
LTI_Alexander_Rudnicky.txt
- Rong Zhang, Ziad Al Bawab, Arthur Chan, A. Chotimongkol, David Huggins-Daines, Alexander I. Rudnicky. 2005. Investigations on ensemble based semi-supervised acoustic model training. Abstract: Semi-supervised learning has been recognized as an effective way to improve acoustic model training in cases where sufficient transcribed data are not available. Unlike most existing approaches, which use a single acoustic model and focus on how to refine it, this paper investigates the feasibility of using ensemble methods for semi-supervised acoustic model training. Two methods are investigated here: one is a generalized Boosting algorithm, and the second is based on data partitioning. Both methods demonstrate substantial improvement over the baseline: more than a 15% relative reduction in word error rate was observed in our experiments using a large real-world meeting recognition dataset.
LTI_Alexander_Rudnicky.txt
- Alexander I. Rudnicky, P. Rybski, S. Banerjee, Francisco Veloso. 2005. Intelligently Integrating Information from Speech and Vision Processing to Perform Light-weight Meeting Understanding. Abstract: Important information is often generated at meetings, but identifying and retrieving that information after the meeting is not always simple. Automatically capturing such information and making it available for later retrieval has therefore become a topic of some interest. Most approaches to this problem have involved constructing specialized instrumented meeting rooms that allow a meeting to be captured in great detail. We propose an alternate approach that focuses on people’s information retrieval needs and makes use of a light-weight data collection system that allows data acquisition on portable equipment, such as personal laptops. Issues that arise include the integration of information from different audio and video streams and optimum use of sparse computing resources. This paper describes our current development of a light-weight portable meeting recording infrastructure, as well as the use of streams of visual and audio information to derive structure from meetings. The goal is to make meeting contents easily accessible to people.
LTI_Alexander_Rudnicky.txt
- Rong Zhang, Alexander I. Rudnicky. 2004. Apply n-best list re-ranking to acoustic model combinations of boosting training. Abstract: The objective function of the Boosting training method in acoustic modeling aims to reduce the utterance-level error rate. This differs from the most commonly used performance metric in speech recognition, word error rate. This paper proposes that the combination of N-best list re-ranking and ROVER can partly address this problem. In particular, model combination is applied to re-ranked hypotheses rather than to the original top-1 hypotheses and is carried out at the word level. Improvement in system performance is observed in our experiments. In addition, we describe and evaluate a new confidence feature that measures the correctness of frame-level decoding results.
LTI_Alexander_Rudnicky.txt
- D. Bohus, Alexander I. Rudnicky. 2004. Users’ Performance and Preferences for Online Graphic, Text and Auditory Presentation of Instructions. Abstract: Traditional technical manuals consist primarily of text supplemented by tabular and graphic presentation of information. In the past decade technical information systems have increasingly been authored for presentation on computers instead of on paper; however a stable set of standards for such manuals has yet to evolve. There are strong beliefs but little empirical evidence to guide standards development within companies producing Interactive Electronic Technical Manuals (IETMs). The current study compares three different modes of instruction presentation for mechanical assembly tasks (graphic, text, and auditory), using a Wizard of Oz paradigm. Study participants preferred graphically-presented information and they completed the tasks fastest using this presentation mode. We found no significant difference in performance or preference between text and audio conditions. Nevertheless users indicated a clear desire that graphic presentation be supplemented by other modes. Study results will be useful for designers of multi-modal interfaces for online instruction systems.
LTI_Alexander_Rudnicky.txt
- Arthur Chan, M. Ravishankar, Alexander I. Rudnicky, J. Sherwani. 2004. Four-layer categorization scheme of fast GMM computation techniques in large vocabulary continuous speech recognition systems. Abstract: Large vocabulary continuous speech recognition systems are known to be computationally intensive. A major bottleneck is the Gaussian mixture model (GMM) computation and various techniques have been proposed to address this problem. We present a systematic study of fast GMM computation techniques. As there are a large number of these and it is impractical to exhaustively evaluate all of them, we first categorized techniques into four layers and selected representative ones to evaluate in each layer. Based on this framework of study, we provide a detailed analysis and comparison of GMM computation techniques from the four-layer perspective and explore two subtle practical issues, 1) how different techniques can be combined effectively and 2) how beam pruning will affect the performance of GMM computation techniques. All techniques are evaluated in the CMU Communicator domain. We also compare their performance with others reported in the literature.
LTI_Alexander_Rudnicky.txt
- P. Rybski, S. Banerjee, F. D. L. Torre, Carlos Vallespí, Alexander I. Rudnicky, M. Veloso. 2004. Segmentation and classification of meetings using multiple information streams. Abstract: We present a meeting recorder infrastructure used to record and annotate events that occur in meetings. Multiple data streams are recorded and analyzed in order to infer a higher-level state of the group's activities. We describe the hardware and software systems used to capture people's activities as well as the methods used to characterize them.
LTI_Alexander_Rudnicky.txt
- Rong Zhang, Alexander I. Rudnicky. 2004. A frame level boosting training scheme for acoustic modeling. Abstract: Conventional Boosting algorithms for acoustic modeling have two notable weaknesses. (1) The objective function aims to minimize utterance error rate, though the goal for most speech recognition systems is to reduce word error rate. (2) During Boosting training, an utterance is treated as a unit for resampling and each frame within the same utterance is assigned equal weight. Intuitively, the frames associated with a misclassified word should be given more emphasis than others. We propose a frame-level Boosting training scheme that addresses these shortcomings and allows each frame to have a different weight. We describe a technique and provide experimental results for this approach.
LTI_Alexander_Rudnicky.txt
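A hedged sketch of the frame-level reweighting idea described above, not the paper's exact update rule: frames aligned to correctly recognized words have their weights shrunk AdaBoost-style, so frames from misclassified words receive more emphasis in the next training round.

```python
# Hedged, AdaBoost-flavoured frame weight update (illustrative only).
import numpy as np

def update_frame_weights(weights, in_wrong_word, eta=1.0):
    """weights: (T,) current frame distribution (sums to 1);
    in_wrong_word: (T,) bools marking frames aligned to misrecognized
    words. Assumes weighted frame error < 0.5, as AdaBoost does."""
    w = np.asarray(weights, dtype=float).copy()
    wrong = np.asarray(in_wrong_word, dtype=bool)
    err = w[wrong].sum()                 # weighted frame error
    beta = err / max(1.0 - err, 1e-12)
    w[~wrong] *= beta ** eta             # shrink frames the model already handles
    return w / w.sum()                   # renormalize to a distribution
```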
- Rong Zhang, Alexander I. Rudnicky. 2004. Optimizing boosting with discriminative criteria. Abstract: We describe the use of discriminative criteria to optimize Boosting based ensembles. Boosting algorithms may create hundreds of individual classifiers in order to fit the training data. However, this strategy isn’t feasible and necessary for complex classification problems, such as real-time continuous speech recognition, in which only the combination of a few of acoustic models is practical. How to improve the classification accuracy for small size of ensemble is the focus of this paper. Two discriminative criteria that attempt to minimize the true Bayes error rate are investigated. Improvements are observed over a variety of datasets including image and speech recognition, indicating the prospective utility of these two criteria.
LTI_Alexander_Rudnicky.txt
- S. Banerjee, Jason Cohen, Thomas R Quisel, Arthur Chan, Yash Patodia, Ziad Al Bawab, Rong Zhang, A. Black, R. Stern, Alexander I. Rudnicky, P. Rybski, M. Veloso. 2004. Creating Multi-Modal, User-Centric Records of Meetings with the Carnegie Mellon Meeting Recorder Architecture. Abstract: Our goal is to build conversational agents that combine information from speech, gesture, hand-writing, text and presentations to create an understanding of the ongoing conversation (e.g. by identifying the action items agreed upon), and that can make useful contributions to the meeting based on such an understanding (e.g. by confirming the details of the action items). To create a corpus of relevant data, we have implemented the Carnegie Mellon Meeting Recorder to capture detailed multi-modal recordings of meetings. This software differs somewhat from other meeting room architectures in that it focuses on instrumenting the individual rather than the room and assumes that the meeting space is not fixed in advance. Thus, most of the sensors are user-centric (close-talking microphones connected to laptop computers, instrumented note-pads, instrumented presentation software, etc), although some are indeed “room-centric” (instrumented whiteboard, distant cameras, table-top microphones, etc). This paper describes the details of our data collection environment. We report on the current status of our data collection, transcription and higher-level discourse annotation efforts. We also describe some of our initial research on conversational turn-taking based on this corpus.
LTI_Alexander_Rudnicky.txt
- S. Banerjee, Alexander I. Rudnicky. 2004. Using simple speech-based features to detect the state of a meeting and the roles of the meeting participants. Abstract: We introduce a simple taxonomy of meeting states and participant roles. Our goal is to automatically detect the state of a meeting and the role of each meeting participant and to do so concurrent with a meeting. We trained a decision tree classifier that learns to detect these states and roles from simple speech-based features that are easy to compute automatically. This classifier detects meeting states 18% absolute more accurately than a random classifier, and detects participant roles 10% absolute more accurately than a majority classifier. The results imply that simple, easy to compute features can be used for this purpose.
LTI_Alexander_Rudnicky.txt
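A toy sketch of the classifier setup in the entry above: simple per-participant speech statistics feed a decision tree that predicts a functional role. The feature names, role labels, and numbers are invented for illustration.

```python
# Hedged sketch: decision tree over simple speech features.
from sklearn.tree import DecisionTreeClassifier

# Illustrative features: [total_speech_seconds, num_turns, keyword_hits]
X = [[620, 40, 12], [180, 25, 2], [75, 10, 0],
     [540, 35, 9], [200, 30, 1], [60, 12, 1]]
y = ["presenter", "discussant", "observer",
     "presenter", "discussant", "observer"]

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(clf.predict([[500, 33, 8]]))  # -> ['presenter']
```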
- Alexander Hauptmann, Alexander I. Rudnicky. 2004. A Comparison of Speech vs Typed Input. Abstract: We conducted a series of empirical experiments in which users were asked to enter digit strings into the computer by voice or keyboard. Two different ways of verifying and correcting the spoken input were examined. Extensive timing analyses were performed to determine which aspects of the interface were critical to speedy completion of the task. The results show that speech is preferable for strings that require more than a few keystrokes. The results emphasize the need for fast and accurate speech recognition, but also demonstrate how error correction and input validation are crucial for an effective speech interface.
LTI_Alexander_Rudnicky.txt
- D. Bohus, Alexander I. Rudnicky. 2003. Ravenclaw: dialog management using hierarchical task decomposition and an expectation agenda. Abstract: We describe RavenClaw, a new dialog management framework developed as a successor to the Agenda [1] architecture used in the CMU Communicator. RavenClaw introduces a clear separation between task and discourse behavior specification, and allows rapid development of dialog management components for spoken dialog systems operating in complex, goal-oriented domains. The system development effort is focused entirely on the specification of the dialog task, while a rich set of domain-independent conversational behaviors are transparently generated by the dialog engine. To date, RavenClaw has been applied to five different domains allowing us to draw some preliminary conclusions as to the generality of the approach. We briefly describe our experience in developing these systems.
LTI_Alexander_Rudnicky.txt
- Rong Zhang, Alexander I. Rudnicky. 2003. Comparative study of boosting and non-boosting training for constructing ensembles of acoustic models. Abstract: This paper compares the performance of Boosting and non-Boosting training algorithms in large vocabulary continuous speech recognition (LVCSR) using ensembles of acoustic models. Both algorithms demonstrated significant word error rate reductions on the CMU Communicator corpus. However, the two produced comparable improvements, even though one would expect the Boosting algorithm, which has a solid theoretical foundation, to work much better than the non-Boosting algorithm. Several voting schemes for hypothesis combination were evaluated, including weighted voting, un-weighted voting and ROVER.
LTI_Alexander_Rudnicky.txt
- Rong Zhang, Alexander I. Rudnicky. 2003. Improving the performance of an LVCSR system through ensembles of acoustic models. Abstract: This paper describes our work on applying ensembles of acoustic models to the problem of large vocabulary continuous speech recognition (LVCSR). We propose three algorithms for constructing ensembles. The first two have their roots in bagging algorithms; however, instead of randomly sampling examples, our algorithms construct training sets based on the word error rate. The third is a boosting-style algorithm. Unlike other boosting methods, which demand large resources for computation and storage, our method presents a more efficient solution suitable for acoustic model training. We also investigate a method that seeks an optimal combination of models. We report experimental results on a large real-world corpus collected from the Carnegie Mellon Communicator dialog system. Significant improvements in system performance are observed, with up to a 15.56% relative reduction in word error rate.
LTI_Alexander_Rudnicky.txt
- Christina L. Bennett, A. F. Llitjós, Stefanie Shriver, Alexander I. Rudnicky, A. Black. 2002. Building voiceXML-based applications. Abstract: The Language Technologies Institute (LTI) at Carnegie Mellon University has, for the past several years, conducted a lab course in building spoken-language dialog systems. In the most recent versions of the course, we have used (commercial) web-based development environments to build systems. This paper describes our experiences and discusses the characteristics of applications that are developed within this framework.
LTI_Alexander_Rudnicky.txt
- Christina L. Bennett, Alexander I. Rudnicky. 2002. The carnegie mellon communicator corpus. Abstract: As part of the DARPA Communicator program, Carnegie Mellon has, over the past three years, collected a large corpus of speech produced by callers to its Travel Planning system. To date, a total of 180,605 utterances (90.9 hours) have been collected. The data were used for a number of purposes, including acoustic and language modeling and the development of a spoken dialog system. The collection, transcription and annotation of these data prompted us to develop a number of procedures for managing the transcription process and for ensuring accuracy. We describe these, as well as some results based on these data. A portion of this corpus, covering the years 1999-2001, is being published for research purposes.
LTI_Alexander_Rudnicky.txt
- M. Walker, Alexander I. Rudnicky, J. Aberdeen, Elizabeth Owen Bratt, J. Garofolo, H. Hastie, Audrey N. Le, B. Pellom, A. Potamianos, R. Passonneau, R. Prasad, S. Roukos, G. Sanders, S. Seneff, D. Stallard. 2002. DARPA communicator evaluation: progress from 2000 to 2001. Abstract: This paper describes the evaluation methodology and results of the DARPA Communicator spoken dialog system evaluation experiments in 2000 and 2001. Nine spoken dialog systems in the travel planning domain participated in the experiments, resulting in a total corpus of 1904 dialogs. We describe and compare the experimental design of the 2000 and 2001 DARPA evaluations. We describe how we established a performance baseline in 2001 for complex tasks. We present our overall approach to data collection, the metrics collected, and the application of PARADISE to these data sets. We compare the results we achieved in 2000 for a number of core metrics with those for 2001. These results demonstrate large performance improvements from 2000 to 2001 and show that the Communicator program goal of conversational interaction for complex tasks has been achieved. 1. INTRODUCTION. The objective of the DARPA Communicator project is to support rapid development of multi-modal speech-enabled dialog systems with advanced conversational capabilities. Figure 1 illustrates the Communicator challenge problem; a system must support complex conversational interaction to complete this task within 10 minutes. You are in Denver, Friday night at 8pm on the road to the airport after a great meeting. As a result of the meeting, you need to attend a group meeting in San Diego on Point Loma on Monday at 8:30, a meeting Tuesday morning at Miramar at 7:30, then one from 3-5 pm in Monterey; you need reservations (car, hotel, air). You pull over to the side of the road and whip out your Communicator. Through spoken dialog (augmented with a display and pointing), you make the appropriate reservations, discover a conflict, and send an e-mail message (dictated) to inform the group of the changed schedule. Do this in 10 minutes. (Fig. 1. DARPA Communicator challenge problem.) During the course of the Communicator program, we have been involved in developing methods for measuring progress towards the program goals and assessing advances in the component technologies required to achieve such goals. In previous work, we report on an exploratory data collection experiment with nine participating Communicator systems in the travel planning domain
LTI_Alexander_Rudnicky.txt
- Rong Zhang, Alexander I. Rudnicky. 2002. A large scale clustering scheme for kernel K-Means. Abstract: Kernel functions can be viewed as a non-linear transformation that increases the separability of the input data by mapping them to a new high dimensional space. The incorporation of kernel functions enables the K-Means algorithm to explore the inherent data pattern in the new space. However, the previous applications of the kernel K-Means algorithm are confined to small corpora due to its expensive computation and storage cost. To overcome these obstacles, we propose a new clustering scheme which changes the clustering order from the sequence of samples to the sequence of kernels, and employs a disk-based strategy to control data. The new clustering scheme has been demonstrated to be very efficient for a large corpus by our experiments on handwritten digits recognition, in which more than 90% of the running time was saved.
LTI_Alexander_Rudnicky.txt
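The distance computation that the entry above reorganizes can be written purely in terms of kernel entries, which is what makes a kernel-by-kernel, disk-based schedule possible. The sketch below keeps the kernel matrix in memory for brevity; the cluster-update algebra is the standard kernel K-Means identity, and the iteration cap and seeding are illustrative choices.

```python
# Hedged sketch of kernel K-Means using only kernel-matrix entries.
import numpy as np

def kernel_kmeans(K, k, iters=20, seed=0):
    """K: (n, n) precomputed kernel matrix; k: number of clusters."""
    n = K.shape[0]
    labels = np.random.default_rng(seed).integers(0, k, n)
    for _ in range(iters):
        dist = np.zeros((n, k))
        for c in range(k):
            idx = np.flatnonzero(labels == c)
            if len(idx) == 0:
                dist[:, c] = np.inf
                continue
            # ||phi(x) - mu_c||^2 = K(x,x) - (2/|c|) sum_j K(x,j)
            #                       + (1/|c|^2) sum_{j,l} K(j,l)
            dist[:, c] = (np.diag(K) - 2.0 * K[:, idx].mean(axis=1)
                          + K[np.ix_(idx, idx)].mean())
        new = dist.argmin(axis=1)
        if np.array_equal(new, labels):
            break
        labels = new
    return labels
```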
- R. Frederking, E. Steinbrecher, Ralf D. Brown, Alexander I. Rudnicky, J. Moody. 2002. Speech Translation on a Tight Budget without Enough Data. Abstract: The Tongues speech-to-speech translation system was developed for the US Army chaplains, with fairly stringent constraints on time, budget, and available data. The resulting prototype was required to undergo a quite realistic field test. We describe the development and architecture of the system, the field test, and our analysis of its results. The system performed quite well, especially given its development constraints.
LTI_Alexander_Rudnicky.txt
- D. Bohus, Alexander I. Rudnicky. 2002. Integrating Multiple Knowledge Sources for Utterance-Level Confidence Annotation in the CMU Communicator Spoken Dialog System. Abstract: In recent years, automated speech recognition has been the main drive behind the advent of spoken language interfaces, but at the same time a severe limiting factor in the development of these systems. We believe that increased robustness in the face of recognition errors can be achieved by making systems aware of their own misunderstandings, and employing appropriate recovery techniques when breakdowns in interaction occur. In this paper we address the first problem: the development of an utterance-level confidence annotator for a spoken dialog system. After a brief introduction to the CMU Communicator spoken dialog system (which provided the target platform for the developed annotator), we cast the confidence annotation problem as a machine learning classification task, and focus on selecting relevant features and on empirically identifying the best classification techniques for this task. The results indicate that significant reductions in classification error rate can be obtained using several different classifiers. Furthermore, we propose a data-driven approach to assessing the impact of the errors committed by the confidence annotator on dialog performance, with a view to optimally fine-tuning the annotator. Several models were constructed, and the resulting error costs were in accordance with our intuition. We found, surprisingly, that, at least for a mixed-initiative spoken dialog system such as the CMU Communicator, these errors trade off equally over a wide operating characteristic range.
LTI_Alexander_Rudnicky.txt
- M. Walker, Alexander I. Rudnicky, R. Prasad, J. Aberdeen, Elizabeth Owen Bratt, J. Garofolo, H. Hastie, Audrey N. Le, B. Pellom, A. Potamianos, R. Passonneau, S. Roukos, G. Sanders, S. Seneff, D. Stallard. 2002. DARPA communicator: cross-system results for the 2001 evaluation. Abstract: This paper describes the evaluation methodology and results of the 2001 DARPA Communicator evaluation. The experiment spanned 6 months of 2001 and involved eight DARPA Communicator systems in the travel planning domain. It resulted in a corpus of 1242 dialogs which include many more dialogues for complex tasks than the 2000 evaluation. We describe the experimental design, the approach to data collection, and the results. We compare the results by the type of travel plan and by system. The results demonstrate some large differences across sites and show that the complex trips are clearly more difficult.
LTI_Alexander_Rudnicky.txt
- A. Chotimongkol, Alexander I. Rudnicky. 2002. Automatic concept identification in goal-oriented conversations. Abstract: We address the problem of identifying key domain concepts automatically from an unannotated corpus of goal-oriented human-human conversations. We examine two clustering algorithms, one based on mutual information and the other based on Kullback-Leibler distance. In order to compare the results from both techniques quantitatively, we evaluate the resulting clusters against reference concept labels using precision and recall metrics adopted from the evaluation of the topic identification task. However, since our system allows more than one cluster to be associated with each concept, an additional metric, a singularity score, is added to better capture cluster quality. Based on the proposed quality metrics, the results show that Kullback-Leibler-based clustering outperforms mutual-information-based clustering both for the optimal quality and for the quality achieved using an automatic stopping criterion.
LTI_Alexander_Rudnicky.txt
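A rough sketch of the Kullback-Leibler flavour of the clustering above, assuming each word is represented by a vector of neighbouring-word counts; the smoothing, symmetrisation, and stopping rule are illustrative choices rather than the paper's exact formulation.

```python
# Hedged sketch of KL-distance agglomerative concept clustering.
import numpy as np

def sym_kl(p, q, eps=1e-9):
    """Smoothed, symmetrised KL distance between two context distributions."""
    p, q = p + eps, q + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

def agglomerate(contexts, n_clusters):
    """contexts: {word: vector of neighbouring-word counts}. Greedily
    merge the closest pair of clusters until n_clusters remain."""
    clusters = {w: (np.asarray(v, float), [w]) for w, v in contexts.items()}
    while len(clusters) > n_clusters:
        keys = list(clusters)
        _, a, b = min((sym_kl(clusters[a][0], clusters[b][0]), a, b)
                      for i, a in enumerate(keys) for b in keys[i + 1:])
        va, wa = clusters.pop(a)
        vb, wb = clusters.pop(b)
        clusters[a + "+" + b] = (va + vb, wa + wb)   # pool the counts
    return [words for _, words in clusters.values()]
```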
- Rong Zhang, Alexander I. Rudnicky. 2002. Improve latent semantic analysis based language model by integrating multiple level knowledge. Abstract: We describe an extension to the use of Latent Semantic Analysis (LSA) for language modeling. This technique makes it easier to exploit long-distance relationships in natural language for which the traditional n-gram is unsuited. However, as the history grows in length, its semantic representation may be contaminated by irrelevant information, increasing the uncertainty in predicting the next word. To address this problem, we propose a multilevel framework dividing the history into three levels corresponding to document, paragraph and sentence. To combine the three levels of information with the n-gram, a Softmax network is used. We further present a statistical scheme that dynamically determines the unit scope in the generalization stage. The combination of all the techniques leads to a 14% perplexity reduction on a subset of the Wall Street Journal corpus, compared with the trigram model.
LTI_Alexander_Rudnicky.txt
- A. Black, Ralf D. Brown, R. Frederking, K. Lenzo, J. Moody, Alexander I. Rudnicky, Rita Singh, E. Steinbrecher. 2002. RAPID DEVELOPMENT OF SPEECH-TO-SPEECH TRANSLATION SYSTEMS. Abstract: This paper describes the building of the basic components, particularly speech recognition and synthesis, of a speech-to-speech translation system. This work is described within the framework of the “Tongues: small footprint speech-to-speech translation device” developed at CMU and Lockheed Martin for use by US Army chaplains.
LTI_Alexander_Rudnicky.txt
- Alice H. Oh, Alexander I. Rudnicky. 2002. Stochastic Natural Language Generation for Spoken Dialog Systems. Abstract: We describe a corpus-based approach to natural language generation (NLG). The approach has been implemented as a component of a spoken dialog system and a series of evaluations were carried out. Our system uses n-gram language models, which have been found useful in other language technology applications, in a generative mode. It is not yet clear whether simple n-grams can adequately model human language generation in general, but we show that we can successfully apply this ubiquitous modeling technique to the task of natural language generation for spoken dialog systems. In this paper, we discuss applying corpus-based stochastic language generation at two levels: content selection and sentence planning/realization. At the content selection level, output utterances are modeled by bigrams, and the appropriate attributes are chosen using bigram statistics. In sentence planning and realization, corpus utterances are modeled by n-grams of varying length, and new utterances are generated stochastically. Through this work, we show that a simple statistical model alone can generate appropriate language for a spoken dialog system. The results describe a promising avenue for using a statistical approach in future NLG systems.
LTI_Alexander_Rudnicky.txt
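The realization idea in the entry above (and the annotation entry that follows) can be sketched with a tiny class-conditioned bigram generator; the slot-tagged corpus strings and the absence of smoothing or scoring are illustrative simplifications.

```python
# Hedged sketch of bigram-based stochastic generation for one speech act.
import random
from collections import defaultdict

def train_bigrams(utterances):
    counts = defaultdict(lambda: defaultdict(int))
    for u in utterances:
        tokens = ["<s>"] + u.split() + ["</s>"]
        for a, b in zip(tokens, tokens[1:]):
            counts[a][b] += 1
    return counts

def generate(counts, max_len=20, seed=None):
    rng, word, out = random.Random(seed), "<s>", []
    while len(out) < max_len:
        nexts = counts[word]
        word = rng.choices(list(nexts), weights=nexts.values())[0]
        if word == "</s>":
            break
        out.append(word)
    return " ".join(out)

# Hypothetical slot-tagged corpus for an "inform_flight" speech act:
inform_flight = ["there is a flight at {depart_time}",
                 "there is a {airline} flight departing at {depart_time}"]
model = train_bigrams(inform_flight)
print(generate(model, seed=1))   # e.g. "there is a flight at {depart_time}"
```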
- Alexander I. Rudnicky, Alice H. Oh. 2002. Dialog Annotation for Stochastic Generation. Abstract: Individuals who successfully make their livelihood by talking with others, for example travel agents, can be presumed to have optimized their language for the task at hand in terms of conciseness and intelligibility. It makes sense to exploit this effort for the purpose of building better generation components for a spoken dialog system. The Stochastic Generation technique, introduced by Oh and Rudnicky (2002), is one such approach. In this approach, utterances in a corpus of domain expert utterances are classified as to speech act and individual concepts tagged. Statistical n-gram models are built for each speech-act class then used generatively to create novel utterances. These have been shown to be comparable in quality to human productions. The class and tag scheme is concrete and closely tied to the domain at hand; we believe this produces a distinct advantage in speed of implementation and quality of results. The current paper describes the classification and tagging procedures used for Stochastic Generation, and discusses the advantages and limitations of the techniques.
LTI_Alexander_Rudnicky.txt
- M. Walker, Alexander I. Rudnicky, R. Prasad, J. Aberdeen, Elizabeth Owen Bratt, J. Garofolo, H. Hastie, A. Le, B. Pellom, A. Potamianos, R. Passonneau, S. Roukos, G. Sanders, S. Seneff, D. Stallard. 2002. DARPA Communicator Evaluation: Progress from 2000 to 2001. Abstract: This paper describes the evaluation methodology and results of the DARPA Communicator spoken dialog system evaluation experiments in 2000 and 2001. Nine spoken dialog systems in the travel planning domain participated in the experiments resulting in a total corpus of 1904 dialogs. We describe and compare the experimental design of the 2000 and 2001 DARPA evaluations. We describe how we established a performance baseline in 2001 for complex tasks. We present our overall approach to data collection, the metrics collected, and the application of PARADISE to these data sets. We compare the results we achieved in 2000 for a number of core metrics with those for 2001. These results demonstrate large performance improvements from 2000 to 2001 and show that the Communicator program goal of conversational interaction for complex tasks has been achieved.
LTI_Alexander_Rudnicky.txt
- M. Walker, J. Aberdeen, Julie E. Boland, Elizabeth Owen Bratt, J. Garofolo, L. Hirschman, Audrey N. Le, Sungbok Lee, Shrikanth S. Narayanan, K. Papineni, B. Pellom, J. Polifroni, A. Potamianos, P. Prabhu, Alexander I. Rudnicky, G. Sanders, S. Seneff, D. Stallard, S. Whittaker. 2001. DARPA communicator dialog travel planning systems: the june 2000 data collection. Abstract: This paper describes results of an experiment with 9 different DARPA Communicator Systems who participated in the June 2000 data collection. All systems supported travel planning and utilized some form of mixed-initiative interaction. However, they varied in several critical dimensions: (1) They targeted different back-end databases for travel information; (2) They used different modules for ASR, NLU, TTS and dialog management. We describe the experimental design, the approach to data collection, the metrics collected, and results comparing the systems.
- Stefanie Shriver, Arthur R. Toth, Xiaojin Zhu, Alexander I. Rudnicky, R. Rosenfeld. 2001. A unified design for human-machine voice interaction. Abstract: We describe a unified design for voice interaction with simple machines; discuss the motivation for and main features of the approach, include a short sample interaction, and report the results of two preliminary experiments.
- Stefanie Shriver, R. Rosenfeld, Xiaojin Zhu, Arthur R. Toth, Alexander I. Rudnicky, M. Flueckiger. 2001. Universalizing speech: notes from the USI project. Abstract: This paper discusses progress in designing a standardized interface for speech interaction with simple machines – the Universal Speech Interface (USI) project. We discuss the motivation for such a design and issues that must be addressed by such an interface. We present our current proposals for handling these issues, and comment on the usability of these approaches based on user interactions with the system. Finally, we discuss future work and plans for the USI project.
- R. Rosenfeld, D. Olsen, Alexander I. Rudnicky. 2001. Universal speech interfaces. Abstract: In recent years speech recognition has reached the point of commercial viability realizable on any off-the-shelf computer. This is a goal that has long been sought by both the research community and by prospective users. Anyone who has used these technologies understands that the recognition has many flaws and there is much still to be done. The recognition algorithms are not the whole story. There is still the question of how speech can and should actually be used. Related to this is the issue of tools for development of speech-based applications. Achieving reliable, accurate speech recognition is similar to building an inexpensive mouse and keyboard. The underlying input technology is available but the question of how to build the application interface still remains. We have been considering these problems for some time [Rosenfeld et al., 2000a]. In this paper we present some of our thoughts about the future of speech-based interaction. This paper is not a report of results we have obtained, but rather a vision of a future to be explored.
- T. M. DuBois, Alexander I. Rudnicky. 2001. Concept Metric for Assessing Dialog System Complexity. Abstract: Techniques for assessing dialog system performance commonly focus on characteristics of the interaction, using metrics such as completion, satisfaction or time on task. However, such metrics are not always capable of differentiating systems that operate on fundamentally different principles, particularly when tested on tasks that focus on common-denominator capabilities. We introduce a new metric, the open concept count, and show how it can be used to capture useful system properties of a dialog system.
- T. M. DuBois, Alexander I. Rudnicky. 2001. An open concept metric for assessing dialog system complexity. Abstract: Techniques for assessing dialog system performance commonly focus on characteristics of the interaction, using metrics such as completion, satisfaction or time on task. However, such metrics are not always capable of differentiating systems that operate on fundamentally different principles, particularly when tested on tasks that focus on common-denominator capabilities. We introduce a new metric, the open concept count, and show how it can be used to capture useful system properties of a dialog system.
- A. Chotimongkol, Alexander I. Rudnicky. 2001. N-best speech hypotheses reordering using linear regression. Abstract: We propose a hypothesis reordering technique to improve speech recognition accuracy in a dialog system. For such systems, additional information external to the decoding process itself is available, in particular features derived from the parse and the dialog. Such features can be combined with recognizer features by means of a linear regression model to predict the most likely entry in the hypothesis list. We introduce the use of concept error rate as an alternative accuracy measurement and compare it with the use of word error rate. The proposed model performs better than human subjects performing the same hypothesis reordering task.
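A rough sketch of the reordering idea, under hypothetical feature names: recognizer scores and parse/dialog features are combined linearly, and the hypothesis with the highest predicted score is selected. In the paper the weights come from fitting a linear regression against hypothesis accuracy; here they are fixed by hand.

```python
import numpy as np

def features(hyp):
    """Feature vector for one hypothesis (all fields are illustrative)."""
    return np.array([
        hyp["acoustic_score"],    # from the decoder
        hyp["lm_score"],          # from the decoder
        hyp["parse_coverage"],    # fraction of words covered by the parse
        hyp["slots_consistent"],  # 1 if parsed slots fit the dialog state
    ])

# Weights would be fit by linear regression against a correctness target
# (e.g., 1 - concept error rate) on held-out N-best lists; fixed here.
w = np.array([0.4, 0.3, 1.2, 0.9])

def rerank(nbest):
    """Return the hypothesis with the highest predicted score."""
    return max(nbest, key=lambda h: float(w @ features(h)))

nbest = [
    {"text": "fly to boston", "acoustic_score": -5.1, "lm_score": -2.0,
     "parse_coverage": 1.0, "slots_consistent": 1},
    {"text": "fly to austin", "acoustic_score": -4.9, "lm_score": -2.1,
     "parse_coverage": 1.0, "slots_consistent": 0},
]
print(rerank(nbest)["text"])
```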
- Rong Zhang, Alexander I. Rudnicky. 2001. Word level confidence annotation using combinations of features. Abstract: This paper describes the development of a word-level confidence metric suitable for use in a dialog system. Two aspects of the problem are investigated: the identification of useful features and the selection of an effective classifier. We find that two parse-level features, Parsing-Mode and SlotBackoff-Mode, provide annotation accuracy comparable to that observed for decoder-level features. However, both decoder-level and parse-level features independently contribute to confidence annotation accuracy. In comparing different classification techniques, we found that Support Vector Machines (SVMs) appear to provide the best accuracy. Overall we achieve a 39.7% reduction in annotation uncertainty for a binary confidence decision in a travel-planning domain.
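Since the paper's best classifier was an SVM over decoder- and parse-level features, here is a minimal sketch using scikit-learn; the four features and the toy training set are invented stand-ins for features like Parsing-Mode and SlotBackoff-Mode, and the real system would of course train on a large annotated corpus.

```python
from sklearn.svm import SVC

# Toy per-word feature vectors (invented):
# [acoustic confidence, LM backoff order, parsing mode, slot-backoff mode]
X = [
    [0.92, 3, 1, 1],
    [0.35, 1, 0, 0],
    [0.80, 2, 1, 0],
    [0.20, 1, 0, 1],
    [0.88, 3, 1, 1],
    [0.40, 1, 0, 0],
]
y = [1, 0, 1, 0, 1, 0]  # 1 = word was correctly recognized

clf = SVC(kernel="rbf").fit(X, y)

# Signed distance from the separating surface serves as a confidence score.
margin = clf.decision_function([[0.75, 2, 1, 1]])[0]
print("accept" if margin > 0 else "reject", f"(margin {margin:+.2f})")
```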
- D. Bohus, Alexander I. Rudnicky. 2001. Modeling the cost of misunderstanding errors in the CMU Communicator dialog system. Abstract: We describe a data-driven approach that allows us to quantify the costs of various types of errors made by the utterance-level confidence annotator in the Carnegie Mellon Communicator system. Knowing these costs we can determine the optimal tradeoff point between these errors, and tune the confidence annotator accordingly. We describe several models, based on concept transmission efficiency. The models fit our data quite well and the relative costs of errors are in accordance with our intuition. We also find, surprisingly, that for a mixed-initiative system such as the CMU Communicator, false positive and false negative errors trade off equally over a wide operating range.
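The tradeoff tuning this abstract describes can be pictured with a small threshold sweep: given estimated costs for false positives and false negatives, choose the confidence threshold that minimizes total expected cost. The scores, labels, and equal cost constants below are illustrative only.

```python
def tune_threshold(scores, labels, cost_fp=1.0, cost_fn=1.0):
    """Sweep candidate thresholds; labels: 1 = utterance actually understood."""
    best = None
    for t in sorted(set(scores)):
        # False positive: accepted (score >= t) but actually misunderstood.
        fp = sum(s >= t and l == 0 for s, l in zip(scores, labels))
        # False negative: rejected (score < t) but actually understood.
        fn = sum(s < t and l == 1 for s, l in zip(scores, labels))
        cost = cost_fp * fp + cost_fn * fn
        if best is None or cost < best[1]:
            best = (t, cost)
    return best

scores = [0.1, 0.3, 0.4, 0.55, 0.7, 0.9]
labels = [0,   0,   1,   0,    1,   1]
print(tune_threshold(scores, labels))  # equal costs, as the paper found
```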
- Paul Carpenter, Chunxiang Jin, Daniel Wilson, Rong Zhang, D. Bohus, Alexander I. Rudnicky. 2001. Is this conversation on track?. Abstract: Confidence annotation allows a spoken dialog system to accurately assess the likelihood of misunderstanding at the utterance level and to avoid breakdowns in interaction. We describe experiments that assess the utility of features from the decoder, parser and dialog levels of processing. We also investigate the effectiveness of various classifiers, including Bayesian Networks, Neural Networks, SVMs, Decision Trees, AdaBoost and Naive Bayes, to combine this information into an utterance-level confidence metric. We found that a combination of a subset of the features considered produced promising results with several of the classification algorithms considered, e.g., our Bayesian Network classifier produced a 45.7% relative reduction in confidence assessment error and a 29.6% reduction relative to a handcrafted rule.
- Wei Xu, Alexander I. Rudnicky. 2000. Task-based dialog management using an agenda. Abstract: Dialog management addresses two specific problems: (1) providing a coherent overall structure to interaction that extends beyond the single turn, (2) correctly managing mixed-initiative interaction. We propose a dialog management architecture based on the following elements: handlers that manage interaction focussed on tightly coupled sets of information, a product that reflects mutually agreed-upon information and an agenda that orders the topics relevant to task completion.
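To make the three elements named in this abstract concrete, here is a minimal sketch (all names hypothetical simplifications): handlers own tightly coupled slots, the product accumulates agreed-upon values, and the agenda orders handlers by relevance to task completion; the next prompt comes from the first unfinished handler.

```python
class Handler:
    """Manages interaction for one tightly coupled set of information."""
    def __init__(self, name, slots):
        self.name, self.slots = name, slots

    def done(self, product):
        return all(s in product for s in self.slots)

agenda = [
    Handler("flight_leg", ["origin", "destination", "depart_date"]),
    Handler("hotel", ["city", "checkin", "checkout"]),
]
product = {}  # mutually agreed-upon information

def next_prompt(agenda, product):
    """Pick the first unfinished handler; its first open slot drives the prompt."""
    for h in agenda:
        if not h.done(product):
            slot = next(s for s in h.slots if s not in product)
            return f"ask about {slot} (handler: {h.name})"
    return "task complete"

product.update({"origin": "PIT", "destination": "BOS"})
print(next_prompt(agenda, product))  # -> ask about depart_date (handler: flight_leg)
```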
- Alexander I. Rudnicky, Christina L. Bennett, A. Black, A. Chotimongkol, K. Lenzo, Alice H. Oh, Rita Singh. 2000. Task and domain specific modelling in the Carnegie Mellon Communicator system. Abstract: The Carnegie Mellon Communicator is a telephone-based dialog system that supports planning in a travel domain. The implementation of such a system requires two complementary components, an architecture capable of managing interaction and the task, as well as a knowledge base that captures the speech, language and task characteristics specific to the domain. Given a suitable architecture, the principal effort in development is taken up in the acquisition and processing of a domain knowledge base. This paper describes a variety of techniques we have applied to modeling in acoustic, language, task, generation and synthesis components of the system.
- Alice H. Oh, Alexander I. Rudnicky. 2000. Stochastic Language Generation for Spoken Dialogue Systems. Abstract: The two current approaches to language generation, template-based and rule-based (linguistic) NLG, have limitations when applied to spoken dialogue systems, in part because they were developed for text generation. In this paper, we propose a new corpus-based approach to natural language generation, specifically designed for spoken dialogue systems.
- P. Constantinides, Alexander I. Rudnicky. 1999. Dialog analysis in the Carnegie Mellon Communicator. Abstract: In this paper, we present a formative evaluation procedure that we have applied to the Communicator dialog system. In the system improvement process, we have recognized the need to identify interaction failures through passive observation of system use. By systematizing the process of dialog evaluation, we hope to gain a mechanism for effectively communicating descriptions of interaction failures, specifically for use in system improvement. Additionally, we argue that this process can be taught to and executed by an evaluator external to the system development process, with the same proficiency as someone intimately familiar with the mechanics of the system components.
- Alexander I. Rudnicky, Eric H. Thayer, P. Constantinides, C. Tchou, R. Shern, K. Lenzo, W. Xu, Alice H. Oh. 1999. Creating natural dialogs in the Carnegie Mellon Communicator system. Abstract: The Carnegie Mellon Communicator system helps users create complex travel itineraries through a conversational interface. Itineraries consist of (multi-leg) flights, hotel and car reservations and are built from actual travel information for North America, obtained from the Web. The system manages dialog using a schema-based approach. Schemas correspond to major units of task information (such as a flight leg) and define conversational topics, or foci of interaction, meaningful to the user.
- Alexander I. Rudnicky. 1999. An Agenda-Based Dialog Management Architecture for Spoken Language Systems. Abstract: Dialog management can be seen as a solution to two specific problems: (1) providing a coherent overall structure to interaction that extends beyond the single turn, (2) correctly managing mixed-initiative interaction, allowing users to guide interaction as per their (not necessarily explicitly shared) goals while allowing the system to guide interaction towards successful completion. We propose a dialog management architecture based on the following elements: handlers that manage interaction focussed on tightly coupled sets of information, a product that reflects mutually agreed-upon information and an agenda that orders the topics relevant to task completion.
- M. Eskénazi, Alexander I. Rudnicky, Karin Gregory, P. Constantinides, R. Brennan, Christina L. Bennett, Jwan Allen. 1999. Data collection and processing in the Carnegie Mellon Communicator. Abstract: In order to create a useful, gracefully functioning system for travel arrangements, we have first observed the task as it is accomplished by a human. We then imitated the human while making the user believe he was dialoguing with an automatic system. As we gradually built our system, we devised ways to assess progress and to detect errors. The following describes how the Carnegie Mellon Communicator was built, how data were collected, and how assessment was begun using these criteria.
- R. Frederking, C. Hogan, Alexander I. Rudnicky. 1999. A new approach to the translating telephone. Abstract: The Translating Telephone has been a major goal of speech translation for many years. Previous approaches have attempted to work from limited-domain, fully-automatic translation towards broad-coverage, fully-automatic translation. We are approaching the problem from a different direction: starting with a broad-coverage but not fully-automatic system, and working towards full automation. We believe that working in this direction will provide us with better feedback, by observing users and collecting language data under realistic conditions, and thus may allow more rapid progress towards the same ultimate goal. Our initial approach relies on the widespread availability of Internet connections and web browsers to provide a user interface. We describe our initial work, which is an extension of the Diplomat wearable speech translator.
- Bertrand A. Damiba, Alexander I. Rudnicky. 1998. Internationalizing Speech Technology through Language Independent Lexical Acquisition. Abstract: Software internationalization, the process of making software easier to localize for specific languages, has deep implications when applied to speech technology, where the goal of the task lies in the very essence of the particular language. A great deal of work and fine-tuning normally goes into the development of speech software for a single language, say English. This tuning complicates a port to different languages. The inherent identity of a language manifests itself in its lexicon, where its character set, phoneme set, pronunciation rules are revealed. We propose a decomposition of the lexicon building process, into four discrete and sequential steps: (a) Transliteration of code points from Unicode. (b) Orthographic standardization rules. (c) Application of grapheme to phoneme rules. (d) Application of phonological rules. In following these steps one should gain accessibility to most of the existing speech/language processing tools, thereby internationalizing one's speech technology. In addition, adhering to this decomposition should allow for a reduction of rule conflicts that often plague the phoneticizing process. Our work makes two main contributions: it proposes a systematic procedure for the internationalization of automatic speech recognition (ASR) systems. It also proposes a particular decomposition of the phoneticization process that facilitates internationalization by non-expert informants.
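The four-step decomposition reads naturally as a function pipeline. The sketch below instantiates it for a toy Spanish-like example; every rule table is invented and far smaller than a real one.

```python
def transliterate(word):
    # (a) Map Unicode code points to a working ASCII-ish alphabet (toy rules).
    return word.replace("ñ", "ny").replace("á", "a")

def standardize(word):
    # (b) Orthographic standardization: case folding, punctuation stripping.
    return word.lower().strip(".,")

def graphemes_to_phonemes(word):
    # (c) Grapheme-to-phoneme rules; longest match first (toy table).
    g2p = {"ny": "N Y", "ch": "CH", "a": "AH", "o": "OW", "s": "S", "k": "K"}
    phones, i = [], 0
    while i < len(word):
        if word[i:i + 2] in g2p:
            phones.append(g2p[word[i:i + 2]]); i += 2
        elif word[i] in g2p:
            phones.append(g2p[word[i]]); i += 1
        else:
            i += 1  # silently skip letters our toy table lacks
    return " ".join(phones)

def phonological_rules(pron):
    # (d) Post-lexical phonological adjustments (toy: degeminate S S).
    return pron.replace("S S", "S")

def phoneticize(word):
    return phonological_rules(graphemes_to_phonemes(standardize(transliterate(word))))

print(phoneticize("Años"))  # -> AH N Y OW S
```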
- P. Constantinides, Scott Hansma, C. Tchou, Alexander I. Rudnicky. 1998. A schema based approach to dialog control. Abstract: Frame-based approaches to spoken language interaction work well for limited tasks such as information access, given that the goal of the interaction is to construct a correct query then execute it. More complex tasks, however, can benefit from more active system participation. We describe two mechanisms that provide this, a modified stack that allows the system to track multiple topics, and form-specific schema that allow the system to deal with tasks that involve completion of multiple forms. Domain-dependent schema specify system behavior and are executed by a domain-independent engine. We describe implementations for a personal calendar system and for an air travel planning system.
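A minimal sketch of the stack-plus-schema idea described in this abstract, with all structures simplified guesses: topics are pushed when the user digresses and popped when resolved, while a domain-specific schema lists the forms to complete in order.

```python
schema = ["flight_form", "hotel_form", "car_form"]  # domain-specific order
topic_stack = []   # most recent digression on top
completed = set()  # forms finished so far

def user_digresses(topic):
    topic_stack.append(topic)

def current_focus():
    """Digressions take precedence; otherwise continue with the schema."""
    if topic_stack:
        return topic_stack[-1]
    return next((f for f in schema if f not in completed), "done")

completed.add("flight_form")
user_digresses("clarify_date")
print(current_focus())  # -> clarify_date
topic_stack.pop()       # digression resolved
print(current_focus())  # -> hotel_form
```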
- Bertrand A. Damiba, Alexander I. Rudnicky. 1997. Language-Independent Lexical Acquisition. Abstract: Lexicon construction is at the core of internationalizing speech systems, as it is the locus at which the correspondence between the written and spoken forms of a language is specified. For the most part, speech systems for a given language benefit from the attention of native speakers and the opportunity to tune performance, allowing the cost of lexicon development to be amortized over time. On the other hand, rapid deployment of recognition capability for new languages stresses the need for rapid availability of a usable lexicon. We propose a decomposition of the lexicon building process into four discrete and sequential steps that simplify and speed up the creation of language knowledge bases for recognition and synthesis. Results from four languages are discussed.
- Alexander I. Rudnicky. 1996. Speech Interface Guidelines. Abstract: This document provides an overview of speech interface design principles as applied to the range of applications that have been developed at Carnegie Mellon. For the most part these are workstation-based applications based on spoken language understanding technology. Nevertheless the guidelines should be applicable to a wider range of applications. Speech interfaces have two properties not normally found in more mature interface technologies:
- Alexander I. Rudnicky, Stephen Reed, Eric H. Thayer. 1996. SPEECHWEAR: a mobile speech system. Abstract: We describe a system that allows ambulating users to perform data entry and retrieval using a speech interface to a wearable computer. The interface is a speech-enabled Web browser that allows the user to access both locally stored documents as well as remote ones through a wireless link.
- Alexander I. Rudnicky. 1995. Language Modeling with Limited Domain Data. Abstract: Generic recognition systems contain language models which are representative of a broad corpus. In actual practice, however, recognition is usually on a coherent text covering a single topic, suggesting that knowledge of the topic at hand can be used to advantage. A base model can be augmented with information from a small sample of domain-specific language data to significantly improve recognition performance. Good performance may be obtained by merging in only those n-grams that include words that are out of vocabulary with respect to the base model.
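The merging heuristic in this abstract is easy to state in code: from a small domain sample, keep only the n-grams that contain base-model OOV words and add their counts to the base model. A bigram-only sketch with invented counts (a real system would renormalize and smooth):

```python
from collections import Counter

base_vocab = {"the", "show", "me", "flights", "to", "boston"}
base_bigrams = Counter({("show", "me"): 120, ("flights", "to"): 80})

domain_text = "show me fares on the redeye to boston the redeye fares".split()
domain_bigrams = Counter(zip(domain_text, domain_text[1:]))

# Merge in only the bigrams that touch an out-of-vocabulary word.
for bg, c in domain_bigrams.items():
    if any(w not in base_vocab for w in bg):
        base_bigrams[bg] += c

oov = {w for w in domain_text if w not in base_vocab}
print("OOV words:", oov)
print("added:", [bg for bg in domain_bigrams
                 if any(w not in base_vocab for w in bg)])
```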
- Alexander Hauptmann, M. Witbrock, Alexander I. Rudnicky. 1995. Speech for multimedia information retrieval. Abstract: We describe the Informedia™ News-on-Demand system. News-on-Demand is an innovative example of indexing and searching broadcast video and audio material by text content. The fully-automatic system monitors TV news and allows selective retrieval of news items based on spoken queries. The user then plays the appropriate video "paragraph". The system runs on a Pentium PC using MPEG-1 video compression and the Sphinx-II continuous speech recognition system [6].
- D. Dahl, M. Bates, Michael Brown, W. Fisher, Kate Hunicke-Smith, D. S. Pallett, Christine Pao, Alexander I. Rudnicky, Elizabeth Shriberg. 1994. Expanding the Scope of the ATIS Task: The ATIS-3 Corpus. Abstract: The Air Travel Information System (ATIS) domain serves as the common evaluation task for ARPA spoken language system developers. To support this task, the Multi-Site ATIS Data COllection Working group (MADCOW) coordinates data collection activities. This paper describes recent MADCOW activities. In particular, this paper describes the migration of the ATIS task to a richer relational database and development corpus (ATIS-3) and describes the ATIS-3 corpus. The expanded database, which includes information on 46 US and Canadian cities and 23,457 flights, was released in the fall of 1992, and data collection for the ATIS-3 corpus began shortly thereafter. The ATIS-3 corpus now consists of a total of 8297 released training utterances and 3211 utterances reserved for testing, collected at BBN, CMU, MIT, NIST and SRI. 2906 of the training utterances have been annotated with the correct information from the database. This paper describes the ATIS-3 corpus in detail, including breakdowns of data by type (e.g. context-independent, context-dependent, and unevaluable) and variations in the data collected at different sites. This paper also includes a description of the ATIS-3 database. Finally, we discuss future data collection and evaluation plans.
- Alexander I. Rudnicky, Alexander Hauptmann, Kai-Fu Lee. 1994. Survey of current speech technology. Abstract: Speech recognition and speech synthesis are technologies of particular interest for their support of direct communication between humans and computers through a communications mode humans commonly use among themselves and at which they are highly skilled. Both manipulate speech in terms of its information content; recognition transforms human speech into text to be used literally (e.g., for dictation) or interpreted as commands to control applications, and synthesis allows the generation of spoken utterances from text.
- L. Hirschman, M. Bates, D. Dahl, W. Fisher, J. Garofolo, D. S. Pallett, Kate Hunicke-Smith, P. Price, Alexander I. Rudnicky, E. Tzoukermann. 1993. Multi-Site Data Collection and Evaluation in Spoken Language Understanding. Abstract: The Air Travel Information System (ATIS) domain serves as the common task for DARPA spoken language system research and development. The approaches and results possible in this rapidly growing area are structured by available corpora, annotations of that data, and evaluation methods. Coordination of this crucial infrastructure is the charter of the Multi-Site ATIS Data COllection Working group (MADCOW). We focus here on selection of training and test data, evaluation of language understanding, and the continuing search for evaluation methods that will correlate well with expected performance of the technology in applications.
- Alexander I. Rudnicky. 1993. Session 1: Spoken Language Systems. Abstract: Without the ability to interpret natural language, speech recognition is suited only for a subset of tasks (though certainly not trivial ones), such as data entry, simple commands or dictation. Similarly, without speech recognition natural language is restricted to the interpretation of written language, a stylized form of human communication. Spoken language systems thus represent an attempt to automate speech communication. While limited in terms of the target behavior, they still represent an advance over the capabilities of the individual technologies.
- Alexander I. Rudnicky. 1993. Factors affecting choice of speech over keyboard and mouse in a simple data-retrieval task. Abstract: This paper describes some recent experiments that assess user mode selection behavior in a multi-modal environment in which actions can be performed with equivalent effect by speech, keyboard or scroller. Results indicate that users freely choose speech over other modalities, even when it is less efficient in objective terms, such as time-to-completion or input error. Additional evidence indicates that users appear to focus on simple input time in making their choice of mode, in effect minimizing the amount of personal effort expended.
- Alexander I. Rudnicky. 1993. Mode preference in a simple data-retrieval task. Abstract: This paper describes some recent experiments that assess user behavior in a multi-modal environment in which actions can be performed with equivalent effect in speech, keyboard or scroller modes. Results indicate that users freely choose speech over other modalities, even when it is less efficient in objective terms, such as time-to-completion or input error.
- S. L. Teal, Alexander I. Rudnicky. 1992. A performance model of system delay and user strategy selection. Abstract: This study lays the groundwork for a predictive, zero-parameter engineering model that characterizes the relationship between system delay and user performance. This study specifically investigates how system delay affects a user's selection of task strategy. Strategy selection is hypothesized to be based on a cost function combining two factors: (1) the effort required to synchronize input with system availability and (2) the accuracy level afforded. Results indicate that users, seeking to minimize effort and maximize accuracy, choose among three strategies – automatic performance, pacing, and monitoring. These findings provide a systematic account of the influence of system delay on user performance, based on adaptive strategy choice driven by cost.
- S. L. Teal, Alexander I. Rudnicky. 1991. Changes in User Task Strategy Due to System Response Delay. Abstract: Despite recent advances in computer technologies, system response time remains an important factor in determining system usability, especially for newer and yet unperfected technologies such as speech recognition. Previous investigations of response delay have produced contradictory results. This study attempts to systematically investigate the relationship between response delay and a user's choice of task strategy. We address two questions. First, does response delay have a significant effect on user performance? Second, if response delay does affect user performance, can we define a model that describes the relationship?
- Alexander I. Rudnicky, Alexander Hauptmann. 1991. Models for evaluating interaction protocols in speech recognition. Abstract: Recognition errors complicate the assessment of speech systems. This paper presents a new approach to modeling spoken language interaction protocols, based on finite Markov chains. An interaction protocol, prescribed by the interface design, defines a set of primitive transaction steps and the order of their execution. The efficiency of an interface depends on the interaction protocol as well as the cost of each different transaction step. Markov chains provide a simple and computationally efficient method for modeling errorful systems. They allow for detailed comparisons between different interaction protocols and between different modalities. The method is illustrated by application to example protocols.
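A one-state version of such a Markov-chain model, with illustrative step costs: each utterance is recognized correctly with probability p; otherwise a correction step is paid and the utterance is respoken. The expected cost per transaction then follows from a simple recurrence.

```python
def expected_cost(p_correct, t_speak=2.0, t_fix=3.0):
    """E = t_speak + (1 - p) * (t_fix + E)  =>  E = (t_speak + (1-p)*t_fix) / p"""
    p = p_correct
    return (t_speak + (1 - p) * t_fix) / p

# How quickly the protocol degrades as recognition accuracy falls:
for p in (0.99, 0.95, 0.90, 0.80):
    print(f"p={p:.2f}  expected seconds per transaction: {expected_cost(p):.2f}")
```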
- Jean-Michel Lunati, Alexander I. Rudnicky. 1991. Spoken language interfaces: the OM system. Abstract: The intrinsic properties of speech communication (e.g., the presence of malformed utterances) and the characteristics of current recognition technology (inaccurate recognition) pose special problems for the design of a speech interface. We are interested in understanding these problems and in identifying an interface structure that allows speech to be a useful form of computer input. Ultimately, our goal is to understand how to turn speech into a conventional input modality, well integrated into a multimodal interface that includes keyboard and mouse. To fully exploit the advantages of spoken communication, a spoken language system must afford the user the following forms of flexibility: natural production, natural language, and a natural flow of interaction. The Carnegie Mellon Spoken Language Shell (CMSLS) attempts to provide such flexibility through the use of speaker-independent continuous-speech recognition, natural language processing, as well as rudimentary “conversational skill” heuristics.
- Alexander I. Rudnicky, Jean-Michel Lunati, A. Franz. 1991. Spoken language recognition in an office management domain. Abstract: The authors highlight needs related to a voice interface and describe the implementation of a general-purpose spoken language interface, the Carnegie Mellon Spoken Language Shell (CM-SLS). CM-SLS provides voice interface services to different applications running on the same computer. CM-SLS was used to build the Office Manager, a collection of applications that includes an appointment calendar, a personal database, voice mail, and a calculator. The performance of several system components is described.
- Alexander I. Rudnicky. 1990. The design of spoken language interfaces. Abstract: This report describes how a speech application using a speaker-independent continuous speech system is designed and implemented. The topics covered include task analysis, language design and interface design. An example of such an application, a voice spreadsheet, is described. Evaluation techniques are discussed. This research was supported by the Defense Advanced Research Projects Agency (DOD) and monitored by the Space and Naval Warfare Systems Command under Contract N00039-85-C-0163, ARPA Order No. 5167. The views and conclusions contained in this document are those of the author and should not be interpreted as representing the official policies, either expressed or implied, of DARPA or the U.S. government.
- Jean-Michel Lunati, Alexander I. Rudnicky. 1990. The design of a spoken language interface. Abstract: Fast and accurate speech recognition systems bring with them the possibility of designing effective voice driven applications. Efforts to this date have involved the construction of monolithic systems, necessitating repetition of effort as each new system is implemented. In this paper, we describe an initial implementation of a general spoken language interface, the Carnegie Mellon Spoken Language Shell (CM-SLS) which provides voice interface services to a variable number of applications running on the same computer. We also present a system built using CM-SLS, the Office Manager, which provides the user with voice access to facilities such as an appointment calendar, a personal database, and voice mail.
- Alexander Hauptmann, Alexander I. Rudnicky. 1990. A Comparison of Speech and Typed Input. Abstract: Meaningful evaluation of spoken language interfaces must be based on detailed comparisons with an alternate, well-understood input modality, such as the keyboard. This paper presents an empirical study in which users were asked to enter digit strings into the computer by voice and by keyboard. Two different ways of verifying and correcting the spoken input were also examined using either voice or keyboard. Timing analyses were performed to determine which aspects of the interface were critical to speedy completion of the task. The results show that speech is preferable for strings that require more than a few keystrokes. The results emphasize the need for fast and accurate speech recognition, but also demonstrate how error correction and input validation are crucial components of a speech interface.
- Alexander I. Rudnicky, M. Sakamoto, J. Polifroni. 1990. Spoken language interaction in a goal-directed task. Abstract: To study the spoken language interface in the context of a complex problem-solving task, a group of users are asked to perform a spreadsheet task, alternating voice and keyboard input. A total of 40 tasks are performed by each participant, the first 30 in a group (over several days), the remaining ones a month later. The voice spreadsheet program is extensively instrumented to provide detailed information about the components of the interaction. These data, as well as analysis of the participants' utterances and recognizer output, provide a fairly detailed picture of spoken language interaction. Although task completion by voice takes longer than by keyboard, analysis shows that users would be able to perform the spreadsheet task faster by voice, if two key criteria could be met: recognition occurs in real-time, and the error rate is sufficiently low. This initial experience with a spoken language system also allows the identification of several metrics, beyond those traditionally associated with speech recognition, that can be used to characterize system performance.
- J. Polifroni, Alexander I. Rudnicky. 1989. Modeling lexical stress in read and spontaneous speech. Abstract: Although prosodic information has long been thought important for speech recognition, few demonstrations exist of its effective use in recognition systems. Lexical stress information has been shown to improve recognition performance by allowing the differentiation of confusable words (e.g., Rudnicky and Li, DARPA Workshop on Speech Recogn., June 1988). In this study, lexical stress modeling for a spreadsheet system with a significant number of confusable words (e.g., EIGHTY and EIGHTEEN) is examined. The models used here have been evaluated on both read and spontaneous speech. A database of over 400 spreadsheet and numeric utterances was available for training an (HMM-based) speaker-independent continuous-speech system with a 273-word vocabulary and language perplexity of about 51. Testing data used in this study were based on read utterances and data generated in a separate study examining the use of a spoken-language spreadsheet. This latter set includes: (a) a "spontaneous" set, composed of parsable utter...
- Alexander I. Rudnicky. 1989. The design of voice-driven interfaces. Abstract: This paper presents some issues that arise in building voice-driven interfaces to complex applications and describes some of the approaches that we have developed for this purpose. To test these approaches, we have implemented a voice spreadsheet and have begun observation of users interacting with it.
- Alexander I. Rudnicky, M. Sakamoto. 1989. Transcription conventions and evaluation techniques for spoken language system research. Abstract: We describe the transcription conventions currently in use for spontaneous speech at Carnegie Mellon University. Two sets of conventions are described, a detail-rich system for wizard experiments, and a more rigid evaluation system designed for purposes of SLS evaluation. The latter is suitable for automatic scoring using the existing NBS (now NIST) scoring software. A sample wizard transcription is included as well as a sample of live-system transcription together with system output. Transcripts can be used to generate a number of diagnostic metrics useful for system evaluation. The research described in this paper was sponsored by the Defense Advanced Research Projects Agency (DOD), ARPA Order No. 5167, monitored by SPAWAR under contract N00039-85-C-0163. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency or of the US Government.
- Alexander I. Rudnicky, M. Sakamoto, J. Polifroni. 1989. Evaluating spoken language interaction. Abstract: To study the spoken language interface in the context of a complex problem-solving task, a group of users were asked to perform a spreadsheet task, alternating voice and keyboard input. A total of 40 tasks were performed by each participant, the first thirty in a group (over several days), the remaining ones a month later. The voice spreadsheet program used in this study was extensively instrumented to provide detailed information about the components of the interaction. These data, as well as analysis of the participants' utterances and recognizer output, provide a fairly detailed picture of spoken language interaction. Although task completion by voice took longer than by keyboard, analysis shows that users would be able to perform the spreadsheet task faster by voice, if two key criteria could be met: recognition occurs in real-time, and the error rate is sufficiently low. This initial experience with a spoken language system also allows us to identify several metrics, beyond those traditionally associated with speech recognition, that can be used to characterize system performance.
- Alexander I. Rudnicky. 1989. Goal-directed Speech in a Spoken Language System. Abstract: The advent of reliable speaker-independent continuous speech recognition systems has made it possible to design systems that use speech as a replacement for keyboard input. To understand the nature of a system that accepts spontaneous goal-directed speech (as opposed to the current standard of read speech), a spoken-language spreadsheet was implemented and users performing a series of tasks using this system were studied. The system was instrumented to allow the collection of detailed timing information about the components of the interaction cycle. The (HMM-based) recognition system incorporates a lexicon of 273 words and a language of perplexity 51. Four users performed a series of 40 tasks (involving the entry of personal financial information) alternating voice and keyboard input. Users completed 30 tasks in one block of sessions, then returned a month later to complete the remainder. The utterances spoken into the system (over 7500) were stored for later analysis. The data collected provide a compreh...
- Alexander I. Rudnicky, Alexander Hauptmann. 1989. Conversational interaction with speech systems. Abstract: This paper discusses design principles for spoken language applications. We focus on those aspects of interface design that are separate from basic speech recognition and that are concerned with the global process of performing a task using a speech interface. Six basic speech user interface design principles are discussed: 1. User plasticity. This property describes how much users can adapt to speech interfaces. 2. Interaction protocol styles. We explain how different interaction protocols for speech interfaces impact on basic task throughput. 3. Error correction. Alternate ways to correct recognition errors are examined. 4. Response Time. The response time requirements of a speech user interface are presented based on experimental results. 5. Task structure. The use of task structure to reduce the complexity of the speech recognition problem is discussed and the resulting benefits are demonstrated. 6. Multi-Modality. The opportunity for integration of several modalities into the interface is evaluated. Since these design principles are different from others for standard applications with typing or pointing, we present experimental support for the importance of these principles as well as perspectives towards solutions and further research. The research described in this paper was sponsored by the Defense Advanced Research Projects Agency. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of DARPA or the U.S. Government.
- Alexander I. Rudnicky, Zong-ge Li, J. Polifroni, Eric H. Thayer, J. L. Gale. 1988. An unanchored matching algorithm for lexical access. Abstract: Describes the lexical access component of the Carnegie-Mellon University (CMU) continuous speech recognition system. The word recognition algorithm operates in a left to right fashion, building words as it traverses an input network. Search is initiated at each node in the input network. The score assigned to a word is a function of both arc phone probabilities assigned by the acoustic phonetic module and knowledge of expected phone duration and frequency of occurrence of different word pronunciations. The algorithm also incorporates knowledge-based strategies to control the number of hypotheses generated by the matcher. These strategies use criteria external to the search. Performance characteristics are reported using a 1029 word lexicon built automatically from standard pronunciation base forms by context-dependent phonetic rules. Lexical rules are independent of specific lexicons and are derived by examination of transcribed speech data. The lexical representation now includes juncture rules that model specific inter-word phenomena. A junction validation module is also described, whose task is to evaluate the connectivity of words in the word hypotheses lattice.
- Alexander I. Rudnicky, R. Brennan, J. Polifroni. 1988. Interactive problem solving with speech. Abstract: Until recently, systems offering high-performance speaker-independent continuous speech recognition were not available, making it difficult to understand how speech should be used in interactive systems. The advent of the SPHINX system developed at Carnegie-Mellon University [K.-F. Lee and H.-W. Hon, Proc. IEEE ICASSP-88, 123–126 (1988)] has made it practical to address the issue of designing systems that integrate speech into “real-world” tasks. This paper describes experience with several tasks that use tightly coupled speech interaction: a programmable voice calculator, a personal scheduler, and a spreadsheet. The goal of this work is to create an environment that allows for the rapid prototyping of “spoken language” systems and allows the study of human factors issues that these entail. The environment that was developed includes the ability to rapidly configure and refine recognition systems using declarative specifications, and the ability to define control structures suitable to particular tasks. [...