diff --git "a/SubCiteseer.json" "b/SubCiteseer.json" new file mode 100644--- /dev/null +++ "b/SubCiteseer.json" @@ -0,0 +1,12591 @@ +[ + { + "node_id": 0, + "label": 1, + "text": "Decomposition in Data Mining: An Industrial Case Study Data mining offers tools for discovery of relationships, patterns, and knowledge in large databases. The knowledge extraction process is computationally complex and therefore a subset of all data is normally considered for mining. In this paper, numerous methods for decomposition of data sets are discussed. Decomposition enhances the quality of knowledge extracted from large databases by simplification of the data mining task. The ideas presented are illustrated with examples and an industrial case study. In the case study reported in this paper, a data mining approach is applied to extract knowledge from a data set. The extracted knowledge is used for the prediction and prevention of manufacturing faults in wafers.", + "neighbors": [ + 442 + ], + "mask": "Test" + }, + { + "node_id": 1, + "label": 2, + "text": "Exploration versus Exploitation in Topic Driven Crawlers Topic driven crawlers are increasingly seen as a way to address the scalability limitations of universal search engines, by distributing the crawling process across users, queries, or even client computers. The context available to a topic driven crawler allows for informed decisions about how to prioritize the links to be explored, given time and bandwidth constraints. We have developed a framework and a number of methods to evaluate the performance of topic driven crawler algorithms in a fair way, under limited memory resources. Quality metrics are derived from lexical features, link analysis, and a hybrid combination of the two. In this paper we focus on the issue of how greedy a crawler should be. Given noisy quality estimates of links in a frontier, we investigate what is an appropriate balance between a crawler's need to exploit this information to focus on the most promising links, and the need to explore links that appear suboptimal but might lead to more relevant pages. We show that exploration is essential to locate the most relevant pages under a number of quality measures, in spite of a penalty in the early stage of the crawl.", + "neighbors": [ + 53, + 281, + 457, + 649, + 662, + 774, + 968, + 1000, + 1017, + 1264 + ], + "mask": "Train" + }, + { + "node_id": 2, + "label": 3, + "text": "Software Engineering and Middleware: A Roadmap The construction of a large class of distributed systems can be simplified by leveraging middleware, which is layered between network operating systems and application components. Middleware resolves heterogeneity, and facilitates communication and coordination of distributed components. State of-the-practice middleware products enable software engineers to build systems that are distributed across a localarea network. State-of-the-art middleware research aims to push this boundary towards Internet-scale distribution, adaptive systems, middleware for dependable and wireless systems. 
The challenge for software engineering research is to devise notations, techniques, methods and tools for distributed system construction that systematically build and exploit the capabilities that middleware products deliver, now and in the future.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 3, + "label": 0, + "text": "Dynamic-Agents for Dynamic Service Provisioning We claim that a dynamic-agent infrastructure can provide a shift from static distributed computing to dynamic distributed computing, and we have developed such an infrastructure to realize such a shift. We shall show its impact on software engineering through a comparison with other distributed object-oriented systems such as CORBA and DCOM, and demonstrate its value in highly dynamic system integration and service provisioning. The infrastructure is Java-based, light-weight, and extensible. It differs from other agent platforms and client/server infrastructures in its support of dynamic behavior modification of agents. A dynamic-agent is not designed to have a fixed set of predefined functions but instead, to carry application-specific actions, which can be loaded and modified on the fly. This allows a dynamic-agent to adjust its capability for accommodating environment and requirement changes, and play different roles across multiple applications. The above features are supported b...", + "neighbors": [], + "mask": "Validation" + }, + { + "node_id": 4, + "label": 0, + "text": "A Hybrid Mobile Robot Architecture with Integrated Planning and Control Research in the planning and control of mobile robots has received much attention in the past two decades. Two basic approaches have emerged from these research efforts: deliberative vs. reactive. These two approaches can be distinguished by their different usage of sensed data and global knowledge, speed of response, reasoning capability, and complexity of computation. Their strengths are complementary and their weaknesses can be mitigated by combining the two approaches in a hybrid architecture. This paper describes a method for goal-directed, collision-free navigation in unpredictable environments that employs a behavior-based hybrid architecture with asynchronously operating behavioral modules. It differs from existing hybrid architectures in two important ways: (1) the planning module produces a sequence of checkpoints instead of a conventional complete path, and (2) in addition to obstacle avoidance, the reactive module also performs target reaching under the control of a self-organizing neural network. The neural network is trained to perform fine, smooth motor control that moves the robot through the checkpoints. These two aspects facilitate a tight integration between high-level planning and low-level control, which permits real-time performance and easy path modification even when the robot is en route to the goal position.", + "neighbors": [ + 1194 + ], + "mask": "Test" + }, + { + "node_id": 5, + "label": 3, + "text": "XM2VTSDB: The Extended M2VTS Database In this paper we describe the acquisition and content of a large multi-modal database intended for training and testing of multi-modal verification systems. The XM2VTSDB database offers synchronised video and speech data as well as image sequences allowing multiple views of the face. It consists of digital video recordings taken of 295 subjects at one month intervals over a period of five months. We also describe a protocol for evaluating verification algorithms on the database. 
The database has been made available to anyone on request to the University of Surrey through http://www.ee.surrey.ac.uk/Research/VSSP/xm2vtsdb.", + "neighbors": [ + 1060 + ], + "mask": "Validation" + }, + { + "node_id": 6, + "label": 1, + "text": "Use of Satellite Image Referencing Algorithms to Characterize Asphaltic Concrete Mixtures A natural way to test the structural integrity of a pavement is to send signals with different frequencies through the pavement and compare the results with the signals passing through an ideal pavement. For this comparison, we must determine how, for the corresponding mixture, the elasticity E depends on the frequency f in the range from 0.1 to 10^5 Hz. It is very expensive to perform measurements in the high-frequency range (above 20 Hz). To avoid these measurements, we can use the fact that for most of these mixtures, when we change the temperature, the new dependence changes simply by scaling. Thus, instead of performing expensive measurements for different frequencies, we can measure the dependence of E on moderate frequencies f for different temperatures, and then combine the resulting curves into a single "master" curve. In this paper, we show how fuzzy techniques can help to automate this "combination".", + "neighbors": [], + "mask": "Test" + }, + { + "node_id": 7, + "label": 3, + "text": "A Case for Parallelism in Data Warehousing and OLAP In recent years the database community has experienced a tremendous increase in the availability of new technologies to support efficient storage and retrieval of large volumes of data, namely data warehousing and On-Line Analytical Processing (OLAP) products. Efficient query processing is critical in such an environment, yet achieving quick response times with OLAP queries is still largely an open issue. In this paper we propose a solution approach to this problem by applying parallel processing techniques to a warehouse environment. We suggest an efficient partitioning strategy based on the relational representation of a data warehouse (i.e., star schema). Furthermore, we incorporate a particular indexing strategy, DataIndexes, to further improve query processing times and parallel resource utilization, and propose a preliminary parallel star-join strategy. 1 Introduction In recent years, there has been an explosive growth in the use of databases for decision support. This phenome...", + "neighbors": [ + 248, + 389 + ], + "mask": "Validation" + }, + { + "node_id": 8, + "label": 2, + "text": "A State-of-the-art Review on Multimodal Video Indexing Efficient and effective handling of video documents depends on the availability of indexes. Manual indexing is unfeasible for large video collections. Effective indexing requires a multimodal approach in which either the most appropriate modality is selected or the different modalities are used in collaborative fashion. In this paper we focus on the similarities and differences between the modalities, and survey several methods aiming at automating the time and resource consuming process of video indexing. Furthermore, we put forward a unifying and multimodal framework, which views a video document from the perspective of its author. This framework forms the guiding principle for identifying index types, for which automatic methods are found in literature. 
It furthermore forms the basis for categorizing these different methods.", + "neighbors": [ + 550, + 605 + ], + "mask": "Train" + }, + { + "node_id": 9, + "label": 3, + "text": "Discovering Web Access Patterns and Trends by Applying OLAP and Data Mining Technology on Web Logs As a confluence of data mining and WWW technologies, it is now possible to perform data mining on web log records collected from the Internet web page access history. The behaviour of the web page readers is imprinted in the web server log files. Analyzing and exploring regularities in this behaviour can improve system performance, enhance the quality and delivery of Internet information services to the end user, and identify populations of potential customers for electronic commerce. Thus, by observing people using collections of data, data mining can bring considerable contribution to digital library designers. In a joint effort between the TeleLearning-NCE project on Virtual University and NCE-IRIS project on data mining, we have been developing the knowledge discovery tool, WebLogMiner, for mining web server log files. This paper presents the design of the WebLogMiner, reports the current progress, and outlines the future work in this direction.", + "neighbors": [ + 447, + 1120 + ], + "mask": "Train" + }, + { + "node_id": 10, + "label": 3, + "text": "Managing the Operator Ordering Problem in Parallel Databases This paper focuses on parallel query optimization. We consider the operator ordering problem and introduce a new class of execution strategies called Linear-Oriented Bushy Trees (LBT). Compared to the related approach of the General Bushy Trees (GBT), a significant complexity reduction of the operator ordering problem can be derived theoretically and demonstrated experimentally (e.g. compared with GBTs, LBTs allow optimization-time improvements that can reach up to 49%) without losing quality. Finally we demonstrate that existing commercial parallel query optimizers need only minor extensions in order to handle LBTs. Key words: Parallel databases, parallel query optimization, linear-oriented bushy trees, extending existing optimizers. 1 Introduction Modern database applications, such as data mining and decision support, pose several new challenges to query optimization and processing [1]. One of the main issues concerns the processing of complex queries (e.g. recent Teradata rela...", + "neighbors": [ + 788 + ], + "mask": "Train" + }, + { + "node_id": 11, + "label": 2, + "text": "SpeechBot: a Speech Recognition based Audio Indexing System for the Web We have developed an audio search engine incorporating speech recognition technology. This allows indexing of spoken documents from the World Wide Web when no transcription is available. This site indexes several talk and news radio shows covering a wide range of topics and speaking styles from a selection of public Web sites with multimedia archives. Our Web site is similar in spirit to normal Web search sites; it contains an index, not the actual multimedia content. The audio from these shows suffers in acoustic quality due to bandwidth limitations, coding, compression, and poor acoustic conditions. The shows are typically sampled at 8 kHz and transmitted, RealAudio compressed, at 6.5 kbps. Our word-error rate results using appropriately trained acoustic models show remarkable resilience to the high compression, though many factors combine to increase the average word-error rates over standard broadcast news benchmarks. 
We show that, even if the transcription is inaccurate, we can st...", + "neighbors": [ + 763, + 796, + 1161 + ], + "mask": "Test" + }, + { + "node_id": 12, + "label": 4, + "text": "Context Awareness by Analysing Accelerometer Data In this paper we describe continuing work being carried out as part of the Bristol Wearable Computing Initiative. We are researching processing techniques for data from accelerometers which enable the wearable computer to determine the user's activity. We have experimented with, and review, techniques already employed by others; and then propose new methods for analysing the data delivered by these devices. We try to minimise the number of devices needed, and use a single X-Y accelerometer device. Using our techniques we have adapted our GPS based Tourist Guide wearable Computer application to include a multimedia presentation which gives the user information using different media depending on the user's activity as well as location. 1 Introduction and Background This is a condensed version of a technical report. [1] Our interests in wearable computing are centred around determining the context of the user and developing applications which make use of this information. We are expl...", + "neighbors": [ + 124, + 1192 + ], + "mask": "Validation" + }, + { + "node_id": 13, + "label": 1, + "text": "Actor-Critic Algorithms We propose and analyze a class of actor-critic algorithms for simulation-based optimization of a Markov decision process over a parameterized family of randomized stationary policies. These are two-time-scale algorithms in which the critic uses TD learning with a linear approximation architecture and the actor is updated in an approximate gradient direction based on information provided by the critic. We show that the features for the critic should span a subspace prescribed by the choice of parameterization of the actor. We conclude by discussing convergence properties and some open problems. 1 Introduction The vast majority of Reinforcement Learning (RL) [9] and Neuro-Dynamic Programming (NDP) [1] methods fall into one of the following two categories: (a) Actor-only methods work with a parameterized family of policies. The gradient of the performance, with respect to the actor parameters, is directly estimated by simulation, and the parameters are updated in a direction o...", + "neighbors": [ + 97 + ], + "mask": "Validation" + }, + { + "node_id": 14, + "label": 3, + "text": "Computing and Comparing Semantics of Programs in Four-valued Logics The different semantics that can be assigned to a logic program correspond to different assumptions made concerning the atoms whose logical values cannot be inferred from the rules. Thus, the well founded semantics corresponds to the assumption that every such atom is false, while the Kripke-Kleene semantics corresponds to the assumption that every such atom is unknown. In this paper, we propose to unify and extend this assumption-based approach by introducing parameterized semantics for logic programs. The parameter holds the value that one assumes for all atoms whose logical values cannot be inferred from the rules. We work within Belnap's four-valued logic, and we consider the class of logic programs defined by Fitting. Following Fitting's approach, we define a simple operator that allows us to compute the parameterized semantics, and to compare and combine semantics obtained for different values of the parameter. The semantics proposed by Fitting corresponds to the value false. 
We also show that our approach captures and extends the usual semantics of conventional logic programs thereby unifying their computation.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 15, + "label": 4, + "text": "A Pattern Approach to Interaction Design To create successful interactive systems, user interface designers need to cooperate with developers and application domain experts in an interdisciplinary team. These groups, however, usually miss a common terminology to exchange ideas, opinions, and values. This paper presents an approach that uses pattern languages to capture this knowledge in software development, HCI, and the application domain. A formal, domain-independent definition of design patterns allows for computer support without sacrificing readability, and pattern use is integrated into the usability engineering lifecycle. As an example, experience from building an award-winning interactive music exhibit was turned into a pattern language, which was then used to inform follow-up projects and support HCI education.", + "neighbors": [ + 240 + ], + "mask": "Train" + }, + { + "node_id": 16, + "label": 0, + "text": "Communication Primitives for Ubiquitous Systems or RPC Considered Harmful RPC is widely used to access and modify remote state. Its procedural call semantics are argued as an efficient unifying paradigm for both local and remote access. Our experience with ubiquitous device control systems has shown otherwise. RPC semantics of a synchronous, blocking invocation on a statically typed interface are overly restrictive, inflexible, and fail to provide an efficient unifying abstraction for accessing and modifying state in ubiquitous systems. This position paper considers other alternatives and proposes the use of comvets (conditional, mobility aware events) as the unifying generic communication paradigm for such systems. Keywords: RPC, RMI, Events, Comvets, CORBA, Jini 1 Introduction Ubiquitous environments or active spaces are the next generation of device control networks. A user interacts with an active space by using novel interfaces like speech and gesture input [1] to control her environment, and the system interacts with the user using audio/video outpu...", + "neighbors": [], + "mask": "Test" + }, + { + "node_id": 17, + "label": 3, + "text": "Benchmarking XML Management Systems: The XOO7 Way The effectiveness of existing XML query languages has been studied by many who focused on the comparison of linguistic features, implicitly reflecting the fact that most XML tools exist only on paper. In this paper, with a focus on efficiency and concreteness, we propose a pragmatic first step toward the systematic benchmarking of XML query processing platforms. We begin by identifying the necessary functionalities an XML data management system should support. We review existing approaches for managing XML data and the query processing capabilities of these approaches. We then compare three XML query benchmarks XMach-1, XMark and XOO7 and discuss the applicability, strengths and limitations of these benchmarks. We highlight the bias of these benchmarks towards the data centric view of XML and motivate our selection of XOO7 to extend with document centric queries. We complete XOO7 to capture the information retrieval capabilities of XML management systems. 
Finally we summarize our contributions and discuss future directions.", + "neighbors": [ + 364, + 585, + 1162 + ], + "mask": "Test" + }, + { + "node_id": 18, + "label": 3, + "text": "dQUOB: Managing Large Data Flows Using Dynamic Embedded Queries The dQUOB system satisfies client need for specific information from high-volume data streams. The data streams we speak of are the flow of data existing during large-scale visualizations, video streaming to large numbers of distributed users, and high volume business transactions. We introduce the notion of conceptualizing a data stream as a set of relational database tables so that a scientist can request information with an SQL-like query. Transformation or computation that often needs to be performed on the data en-route can be conceptualized as computation performed on consecutive views of the data, with computation associated with each view. The dQUOB system moves the query code into the data stream as a quoblet; as compiled code. The relational database data model has the significant advantage of presenting opportunities for efficient reoptimizations of queries and sets of queries. Using examples from global atmospheric modeling, we illustrate the usefulness of the dQUOB system. We carry the examples through the experiments to establish the viability of the approach for high performance computing with a baseline benchmark. We define a cost-metric of end-to-end latency that can be used to determine realistic cases where optimization should be applied. Finally, we show that end-to-end latency can be controlled through a probability assigned to a query that the query will evaluate to true.", + "neighbors": [ + 309 + ], + "mask": "Train" + }, + { + "node_id": 19, + "label": 3, + "text": "Use Case Maps as a Feature Description Notation We propose Use Case Maps (UCMs) as a notation for describing features. UCMs capture functional requirements in terms of causal scenarios bound to underlying abstract components. This particular view proved very useful in the description of a wide range of reactive and telecommunications systems. This paper presents some of the most interesting constructs and benefits of the notation in relation to a question on a User Requirements Notation recently approved by ITU-T Study Group 10, which will lead to a new Recommendation by 2003. Tool support, current research on UCMs, and related notations are also discussed. 1 Introduction The modeling of reactive systems requires an early emphasis on behavioral aspects such as interactions between the system and the external world (including the users), on the cause-to-effect relationships among these interactions, and on intermediate activities performed by the system. Scenarios are particularly good at representing such aspects so that various ...", + "neighbors": [ + 179 + ], + "mask": "Train" + }, + { + "node_id": 20, + "label": 2, + "text": "Hermes - A Notification Service for Digital Libraries The high publication rate of scholarly material makes searching and browsing an inconvenient way to keep oneself up-to-date. 
Instead of being the active part in information access, researchers want to be notified whenever a new paper in one's research area is published.", + "neighbors": [ + 475 + ], + "mask": "Train" + }, + { + "node_id": 21, + "label": 4, + "text": "Towards Group Communication for Mobile Participants (Extended Abstract) Group communication will undoubtedly be a useful paradigm for many applications of wireless networking in which reliability and timeliness are requirements. Moreover, location awareness is clearly central to mobile applications such as traffic management and smart spaces. In this paper, we introduce our definition of proximity groups in which group membership depends on location and then discuss some requirements for a group membership management service suitable for proximity groups. We describe a novel approach to efficient coverage estimation, giving applications feedback on the proportion of the area of interest covered by a proximity group, and also discuss our approach to partition anticipation.", + "neighbors": [ + 919, + 1029 + ], + "mask": "Train" + }, + { + "node_id": 22, + "label": 1, + "text": "Distinctive Features Should Be Learned Most existing machine vision systems perform recognition based on a fixed set of hand-crafted features, geometric models, or eigen-subspace decomposition. Drawing from psychology, neuroscience and intuition, we show that certain aspects of human performance in visual discrimination cannot be explained by any of these techniques. We argue that many practical recognition tasks for artificial vision systems operating under uncontrolled conditions critically depend on incremental learning. Loosely motivated by visuocortical processing, we present feature representations and learning methods that perform biologically plausible functions. The paper concludes with experimental results generated by our method. 1 Introduction How flexible are the representations for visual recognition, encoded by the neurons of the human visual cortex? Are they predetermined by a fixed developmental schedule, or does their development depend on their stimulation? Does their development cease at some poin...", + "neighbors": [], + "mask": "Test" + }, + { + "node_id": 23, + "label": 3, + "text": "Interaction between Path and Type Constraints XML [7], which is emerging as an important standard for data exchange on the World-Wide Web, highlights the importance of semistructured data. Although the XML standard itself does not require any schema or type system, a number of proposals [6, 17, 19] have been developed that roughly correspond to data definition languages. These allow one to constrain the structure of XML data by imposing a schema on it. These and other proposals also advocate the need for integrity constraints, another form of constraints that should, for example, be capable of expressing inclusion constraints and inverse relationships. The latter have recently been studied as path constraints in the context of semistructured data [4, 9]. It is likely that future XML proposals will involve both forms of constraints, and it is therefore appropriate to understand the interaction between them. This paper investigates that interaction. 
In particular it studies constraint implication problems, which are important both i...", + "neighbors": [ + 585, + 681 + ], + "mask": "Test" + }, + { + "node_id": 24, + "label": 1, + "text": "Robustness of Case-Initialized Genetic Algorithms We investigate the robustness of Case Initialized Genetic AlgoRithm (CIGAR) systems with respect to problem indexing. When confronted with a series of similar problems CIGAR stores potential solutions in a case-base or an associative memory and retrieves and uses these solutions to help improve a genetic algorithm's performance over time. Defining similarity among the problems, or indexing, is key to performance improvement. We study four indexing schemes on a class of simple problems and provide empirical evidence of CIGAR's robustness to imperfect indexing.", + "neighbors": [ + 837 + ], + "mask": "Train" + }, + { + "node_id": 25, + "label": 0, + "text": "Rapid Concurrent Software Engineering in Competitive Situations This article is an experience report on the evolutionary development process of AT Humboldt, a multi-agent system which has become World Champion 1997 and Vice World Champion 1998 of the RoboCup simulator league. It details why the artificial soccer initiative RoboCup is a tempting domain for rapid concurrent software engineering. Both the development processes in 1997 and 1998 are described, compared and evaluated. Lessons learned for development projects in distributed control conclude this report. 1 Introduction In this article the project managers describe the evolutionary development process of the software project AT (AgentTeam) Humboldt, which has become World Champion 1997 and Vice World Champion 1998 in the simulator league of the artificial soccer contest RoboCup ([10]). The RoboCup initiative has recently been gaining popularity among scientists in robotics, distributed systems and distributed artificial intelligence because of its strong competitive character and tight resource b...", + "neighbors": [ + 141, + 155, + 396, + 964 + ], + "mask": "Test" + }, + { + "node_id": 26, + "label": 0, + "text": "Graphical Models for Recognizing Human Interactions We describe a real-time computer vision and machine learning system for modeling and recognizing human actions and interactions. Two different domains are explored: recognition of two-handed motions in the martial art 'Tai Chi', and multiple-person interactions in a visual surveillance task. Our system combines top-down with bottom-up information using a feedback loop, and is formulated with a Bayesian framework. Two different graphical models (HMMs and Coupled HMMs) are used for modeling both individual actions and multiple-agent interactions, and CHMMs are shown to work more efficiently and accurately for a given amount of training. Finally, to overcome the limited amounts of training data, we demonstrate that `synthetic agents' (Alife-style agents) can be used to develop flexible prior models of the person-to-person interactions. 1 INTRODUCTION We describe a real-time computer vision and machine learning system for modeling and recognizing human behaviors in two different scenari...", + "neighbors": [ + 319 + ], + "mask": "Train" + }, + { + "node_id": 27, + "label": 3, + "text": "A Geometric Framework for Specifying Spatiotemporal Objects We present a framework for specifying spatiotemporal objects using spatial and temporal objects, and a geometric transformation. We define a number of classes of spatiotemporal objects and study their closure properties. 
1 Introduction Many natural or man-made phenomena have both a spatial and a temporal extent. Consider, for example, a forest fire or property histories in a city. To store information about such phenomena in a database one needs appropriate data modeling constructs. We claim that a new concept, spatiotemporal object, is necessary. In this paper, we introduce a very general framework for specifying spatiotemporal objects. To define a spatiotemporal object we need a spatial object, a temporal object, and a continuous geometric transformation (specified using a parametric representation) that determines the image of the spatial object at different time instants belonging to the temporal object. In this framework, a number of classes of spatiotemporal objects arise quite ...", + "neighbors": [ + 56, + 129, + 147, + 929, + 1012 + ], + "mask": "Train" + }, + { + "node_id": 28, + "label": 3, + "text": "SI-Designer: a tool for intelligent integration of information SI-Designer (Source Integrator Designer) is a designer support tool for semi-automatic integration of heterogeneous source schemata (relational, object and semi-structured sources); it has been implemented within the MOMIS project and it carries out integration following a semantic approach which uses intelligent Description Logics-based techniques, clustering techniques and an extended ODMG-ODL language, ODL_I3, to represent schemata, extracted, integrated information. Starting from the sources' ODL_I3 descriptions (local schemata) SI-Designer supports the designer in the creation of an integrated view of all the sources (global schema) which is expressed in the same ODL_I3 language. We propose SI-Designer as a tool to build virtual catalogs in the E-Commerce environment. 1.", + "neighbors": [ + 294, + 766 + ], + "mask": "Test" + }, + { + "node_id": 29, + "label": 0, + "text": "Error-Tolerant Agents The use of agents in today's Internet world is expanding rapidly. Yet, agent developers", + "neighbors": [ + 342, + 659 + ], + "mask": "Train" + }, + { + "node_id": 30, + "label": 1, + "text": "DirectSVM: A Fast And Simple Support Vector Machine Perceptron We propose a simple implementation of the Support Vector Machine (SVM) for pattern recognition, that is not based on solving a complex quadratic optimization problem. Instead we propose a simple, iterative algorithm that is based on a few simple heuristics. The proposed algorithm finds high-quality solutions in a fast and intuitively-simple way. In experiments on the COIL database, on the extended COIL database and on the Sonar database of the UCI Irvine repository, DirectSVM is able to find solutions that are similar to those found by the original SVM. However, DirectSVM is able to find these solutions substantially faster, while requiring less computational resources than the original SVM. INTRODUCTION Support Vector Machines (SVMs) belong to the best-performing learning algorithms available. They have produced remarkable performance in a number of difficult learning tasks without requiring prior knowledge. 
We mention amongst others the following examples in pattern recognition: handwr...", + "neighbors": [ + 973 + ], + "mask": "Train" + }, + { + "node_id": 31, + "label": 3, + "text": "Schema Evolution in Heterogeneous Database Architectures, A Schema Transformation Approach In previous work we have developed a general framework to support schema transformation and integration in heterogeneous database architectures. The framework consists of a hypergraph-based common data model and a set of primitive schema transformations defined for this model. Higher-level common data models and primitive schema transformations for them can be defined in terms of this lower-level model. A key feature of the framework is that both primitive and composite schema transformations are automatically reversible. We have shown in earlier work how this allows automatic query translation from a global schema to a set of source schemas. In this paper we show how our framework also readily supports evolution of source schemas, allowing the global schema and the query translation pathways to be easily repaired, as opposed to having to be regenerated, after changes to source schemas. 1", + "neighbors": [ + 844 + ], + "mask": "Train" + }, + { + "node_id": 32, + "label": 4, + "text": "Dynamic Connection of Wearable Computers to Companion Devices Using Near-Field Radio Hewlett-Packard Laboratories, Bristol and the University of Bristol Department of Computer Science are engaged in an initiative to explore the design, technology and use of wearable computers. We describe a way of connecting a wearable computer to companion devices such as displays or cameras using near-field radio technology. The short-range nature of near-field radio allows relatively high data rates (300 kbps -- 1 Mbit), low power consumption and the interpretation of gestures as configuration requests. Keywords: Near-field radio, dynamic connectivity. INTRODUCTION We are particularly interested in communication technologies that exhibit low-power, short range (up to 1 foot) and modest data rates (300 kbps -- 1 Mbps). The action of picking up a companion device (such as a display) establishes the communication link due to the very short range. An important aspect of a suitable communication technology is that the user is not required to touch an electrode and therefore handling of ...", + "neighbors": [ + 1192 + ], + "mask": "Validation" + }, + { + "node_id": 33, + "label": 4, + "text": "The CyberShoe: A Wireless Multisensor Interface for a Dancer's Feet As a bridge between our interest in Wearable Computer systems and new performance interfaces for digital music, we have built a highly instrumented pair of sneakers for interactive dance. These shoes each measure 16 different, continuous parameters expressed by each foot and are able to transmit them wirelessly to a base station placed well over 30 meters away, updating all values up to 60 times per second. This paper describes our system, illustrates its performance, and outlines a few musical mappings that we have created for demonstrations in computer-augmented dance. Electronic sensors have been incorporated into footwear for several different applications over the last several years. Employing force-sensing resistor arrays or pixelated capacitive sensing, insoles with very dense pressure sampling have been developed for research at the laboratories of footwear manufacturers and pediatric treatment facilities (Cavanaugh et al., 1992). 
Alth...", + "neighbors": [ + 979 + ], + "mask": "Train" + }, + { + "node_id": 34, + "label": 0, + "text": "An Overview of Active Information Gathering in InfoSleuth InfoSleuth is a system of collaborating software agents that can be configured to perform many different information management activities in a distributed environment. InfoSleuth agents advertise semantic constraints about themselves to InfoSleuth brokers using a global domain ontology. When queried, a broker reasons over these constraints to determine the minimal set of agents that can provide a solution to the query. InfoSleuth's architecture is based on a generic agent shell that provides basic agent communication behaviors over a subset of the Knowledge Query and Manipulation Language (KQML). Individual agents are subclasses of this generic shell that provide specific kinds of functionality. InfoSleuth agents perform a number of complex query activities that require resolving ontology-based queries over dynamically changing, distributed, heterogeneous resources, including distributed query, location-independent single-resource updates, event monitoring by means of subscription/notification servi...", + "neighbors": [ + 132, + 663 + ], + "mask": "Train" + }, + { + "node_id": 35, + "label": 1, + "text": "Inference and Learning in Hybrid Bayesian Networks We survey the literature on methods for inference and learning in Bayesian Networks composed of discrete and continuous nodes, in which the continuous nodes have a multivariate Gaussian distribution, whose mean and variance depend on the values of the discrete nodes. We also briefly consider hybrid Dynamic Bayesian Networks, an extension of switching Kalman filters. This report is meant to summarize what is known at a sufficient level of detail to enable someone to implement the algorithms, but without dwelling on formalities. 1 Introduction We discuss Bayesian networks (BNs [Jen96]) in which each node is either discrete or continuous, scalar or vector-valued, and in which the joint distribution over all the nodes is Conditional Gaussian (CG) [LW89, Lau92], i.e., for each instantiation i of the discrete nodes Y, the distribution over the continuous nodes X has the form f(x|Y = i) = N(x; \\mu(i), \\Sigma(i)), where N() represents a multivariate Gaussian (MVG) or Normal density. (Note...", + "neighbors": [ + 791 + ], + "mask": "Test" + }, + { + "node_id": 36, + "label": 5, + "text": "Optimising Propositional Modal Satisfiability for Description Logic Subsumption Effective optimisation techniques can make a dramatic difference in the performance of knowledge representation systems based on expressive description logics. Because of the correspondence between description logics and propositional modal logic, many of these techniques carry over into propositional modal logic satisfiability checking. Currently-implemented representation systems that employ these techniques, such as FaCT and DLP, make effective satisfiability checkers for various propositional modal logics. 1 Introduction Description logics are a logical formalism for the representation of knowledge about individuals and descriptions of individuals. Description logics represent and reason with descriptions similar to "all people whose friends are both doctors and lawyers" or "all people whose children are doctors or lawyers or who have a child who has a spouse". 
The computations performed by systems that implement description logics are based around determining whether one descriptio...", + "neighbors": [], + "mask": "Test" + }, + { + "node_id": 37, + "label": 3, + "text": "Improving Data Cleaning Quality using a Data Lineage Facility The problem of data cleaning, which consists of removing inconsistencies and errors from original data sets, is well known in the area of decision support systems and data warehouses. However, for some applications, existing ETL (Extraction Transformation Loading) and data cleaning tools for writing data cleaning programs are insufficient. One important challenge with them is the design of a data flow graph that effectively generates clean data. A general difficulty is the lack of explanation of cleaning results and user interaction facilities to tune a data cleaning program. This paper presents a solution to handle this problem by enabling users to express user interactions declaratively and tune data cleaning programs. 1", + "neighbors": [ + 546 + ], + "mask": "Train" + }, + { + "node_id": 38, + "label": 0, + "text": "Modeling And Simulation Of Mobile Agents Agent-oriented software implies the realization of software components, which are mobile, autonomous, and solve problems by creating new software components during run-time, moving between locations, initiating or joining groups of other software components. Modeling and simulating those multiagent systems requires specific mechanisms for variable structure modeling. JAMES, a Java-Based Agent Modeling Environment for Simulation, realizes variable structure models including mobility from the perspective of single autonomous agents. JAMES itself is based on parallel DEVS and adopts its abstract simulator model. Simulation takes place as a sending of messages between concurrently active and locally distributed entities which reflect the model's current structure. Thus, modeling and simulation are equally shaped by an agent-based perspective. 1 Introduction The definition of agents subsumes a multitude of different facets [30]. Agents are reactive, deliberative or combine reactive with ...", + "neighbors": [ + 121, + 1038 + ], + "mask": "Validation" + }, + { + "node_id": 39, + "label": 1, + "text": "Parameterized Logic Programs where Computing Meets Learning Abstract. In this paper, we describe recent attempts to incorporate learning into logic programs as a step toward adaptive software that can learn from an environment. Although there are a variety of types of learning, we focus on parameter learning of logic programs, one for statistical learning by the EM algorithm and the other for reinforcement learning by learning automata. Both attempts are not full-fledged yet, but in the former case, thanks to the general framework and an efficient EM learning algorithm combined with a tabulated search, we have obtained very promising results that open up the prospect of modeling complex symbolic-statistical phenomena. 1", + "neighbors": [ + 559 + ], + "mask": "Validation" + }, + { + "node_id": 40, + "label": 0, + "text": "Verification within the KARO Agent Theory Abstract. This paper discusses automated reasoning in the KARO framework. The KARO framework accommodates a range of expressive modal logics for describing the behaviour of intelligent agents. 
We concentrate on a core logic within this framework; in particular, we describe two new methods for providing proof methods for this core logic, discuss some of the problems we have encountered in their design, and present an extended example of the use of the KARO framework and the two proof methods. 1", + "neighbors": [ + 687 + ], + "mask": "Test" + }, + { + "node_id": 41, + "label": 2, + "text": "Automatically Analyzing and Organizing Music Archives We are experiencing a tremendous increase in the amount of music being made available in digital form. With the creation of large multimedia collections, however, we need to devise ways to make those collections accessible to the users. While music repositories exist today, they mostly limit access to their content to query-based retrieval of their items based on textual meta-information, with some advanced systems supporting acoustic queries. What we would like to have additionally, is a way to facilitate exploration of musical libraries. We thus need to automatically organize music according to its sound characteristics in such a way that we find similar pieces of music grouped together, allowing us to find a classical section, or a hard-rock section etc. in a music repository. In this paper we present an approach to obtain such an organization of music data based on an extension to our SOMLib digital library system for text documents. Particularly, we employ the Self-Organizing Map to create a map of a musical archive, where pieces of music with similar sound characteristics are organized next to each other on the two-dimensional map display. Locating a piece of music on the map then leaves you with related music next to it, allowing intuitive exploration of a music archive. Keywords: Multimedia, Music Library, Self-Organizing Map (SOM), Exploration of Information Spaces, User Interface, MP3 1", + "neighbors": [ + 298 + ], + "mask": "Train" + }, + { + "node_id": 42, + "label": 0, + "text": "Extending a Multi-Agent System for Genomic Annotation The explosive growth in genomic (and soon, expression and proteomic) data, exemplified by the Human Genome Project, is a fertile domain for the application of multi-agent information gathering technologies. Furthermore, hundreds of smaller-profile, yet still economically important organisms are being studied that require the efficient and inexpensive automated analysis tools that multiagent approaches can provide. In this paper we give a progress report on the use of the DECAF multi-agent toolkit to build reusable information gathering systems for bioinformatics. We will briefly summarize why bioinformatics is a classic application for information gathering, how DECAF supports it, and recent extensions underway to support new analysis paths for genomic information. 1", + "neighbors": [ + 964 + ], + "mask": "Train" + }, + { + "node_id": 43, + "label": 2, + "text": "WebSail: From On-line Learning to Web Search In this paper we investigate the applicability of on-line learning algorithms to the real-world problem of web search. Consider that web documents are indexed using n Boolean features. We first present a practically efficient on-line learning algorithm TW2 to search for web documents represented by a disjunction of at most k relevant features. We then design and implement WebSail, a real-time adaptive web search learner, with TW2 as its learning component. WebSail learns from the user's relevance feedback in real-time and helps the user to search for the desired web documents. 
The architecture and performance of WebSail are also discussed.", + "neighbors": [ + 70, + 216, + 453, + 561, + 596, + 599, + 1000, + 1017 + ], + "mask": "Train" + }, + { + "node_id": 44, + "label": 4, + "text": "User Interface Modelling with UML The Unified Modeling Language (UML) is a natural candidate for user interface (UI) modelling since it is the standard notation for object oriented modelling of applications. However, it is by no means clear how to model UIs using UML. This paper presents a user interface modelling case study using UML. This case study identifies some aspects of UIs that cannot be modelled using UML notation, and a set of UML constructors that may be used to model UIs. The modelling problems indicate some weaknesses of UML for modelling UIs, while the constructors exploited indicate some strengths. The identification of such strengths and weaknesses can be used in the formulation of a strategy for extending UML to provide greater support for user interface design.", + "neighbors": [ + 744 + ], + "mask": "Train" + }, + { + "node_id": 45, + "label": 4, + "text": "Using a Room Metaphor to Ease Transitions in Groupware Many groupware systems contain gaps that hinder or block natural social interaction or that do not let people easily move between different styles of work. We believe that the adoption of a room metaphor can ease people's transitions across these gaps, allowing them to work together more naturally. Using the TeamWave Workplace system as an example, we show how particular gaps are removed. First, we ease a person's transition between single user and groupware applications by making rooms suitable for both individual and group activity. Second, people can move fluidly between asynchronous and synchronous work because room artifacts persist. People can leave messages, documents and annotations for others, or work on them together when occupying the room at the same time. Third, we ease the difficulty of initiating real time work by providing people with awareness of others who may be available for real-time interactions, and by automatically establishing connections as users enter a commo...", + "neighbors": [ + 388, + 836, + 860 + ], + "mask": "Train" + }, + { + "node_id": 46, + "label": 1, + "text": "A Connectionist Approach for Learning Search-Control Heuristics for Automated Deduction Systems The central problem in automated deduction is the explosive growth of search spaces when proof length increases. In this paper, a connectionist approach for learning search-control heuristics for automated deduction systems is presented. In particular, we show how folding architecture networks, a new type of neural networks capable of solving supervised learning tasks on structured data, can be used for learning heuristics evaluation functions for algebraic (logical) expressions and how these evaluation functions can then be used to control the search process for new proof problems. Experimental results with the automated deduction system Setheo in an algebraic domain show a considerable performance improvement. Controlled by heuristics which had been learned from simple problems in this domain the system is able to solve several problems from the same domain which had been out of reach for the original system. 1 Introduction The goal in automated deduction (AD) is to automatically...", + "neighbors": [ + 91 + ], + "mask": "Validation" + }, + { + "node_id": 47, + "label": 0, + "text": "Ruling Agent Motion in Structured Environments . 
The design and development of cooperative Internet applications based on mobile agents require appropriate modelling of both the physical space where agents roam and the conceptual space of mobile agent interaction. The paper discusses how an open, Internet-based, organisation network can be modelled as a hierarchical collection of locality domains, where agents can dynamically acquire information about resource location and availability according to their permissions. It also analyses the issue of how agent motion can be ruled and constrained within a structured environment by means of an appropriate coordination infrastructure. 1 Introduction Mobile agents are a promising technology for the design and development of cooperative applications on the Internet [3, 5, 12, 13]. Due to their capability of autonomously roaming the Internet, mobile agents can move locally to the resources they need -- let them be users, data, or services -- and there interact with them. This can p...", + "neighbors": [ + 269, + 416 + ], + "mask": "Train" + }, + { + "node_id": 48, + "label": 3, + "text": "Dynamic Function Placement for Data-intensive Cluster Computing Optimally partitioning application and filesystem functionality within a cluster of clients and servers is a difficult problem due to dynamic variations in application behavior, resource availability, and workload mixes. This paper presents ABACUS, a run-time system that monitors and dynamically changes function placement for applications that manipulate large data sets. Several examples of data-intensive workloads are used to show the importance of proper function placement and its dependence on dynamic run-time characteristics, with performance differences frequently reaching 2-10X. We evaluate how well the ABACUS prototype adapts to run-time system behavior, including both long-term variation (e.g., filter selectivity) and short-term variation (e.g., multi-phase applications and inter-application resource contention). Our experiments with ABACUS indicate that it is possible to adapt in all of these situations and that the adaptation converges most quickly in those cases where the performance impact is most significant. 1", + "neighbors": [ + 131 + ], + "mask": "Test" + }, + { + "node_id": 49, + "label": 4, + "text": "Using Plan Recognition in Human-Computer Collaboration Human-computer collaboration provides a practical and useful application for plan recognition techniques. We describe a plan recognition algorithm which is tractable by virtue of exploiting properties of the collaborative setting, namely: the focus of attention, the use of partially elaborated hierarchical plans, and the possibility of asking for clarification. We demonstrate how the addition of our plan recognition algorithm to an implemented collaborative system reduces the amount of communication required from the user. 1 Introduction An important trend in recent work on human-computer interaction and user modeling has been to view human-computer interaction as a kind of collaboration (e.g., Ferguson and Allen, 1998, Guinn, 1996, Rich and Sidner, 1998, Rickel and Johnson, 1998). In this approach, the human user and the computer (often personified as an "agent") coordinate their actions toward achieving shared goals. 
A common setting for collaboration, illustrated in Figure 1...", + "neighbors": [ + 312, + 775, + 1191 + ], + "mask": "Test" + }, + { + "node_id": 50, + "label": 1, + "text": "Empirical Risk Approximation: An Induction Principle for Unsupervised Learning Unsupervised learning algorithms are designed to extract structure from data without reference to explicit teacher information. The quality of the learned structure is determined by a cost function which guides the learning process. This paper proposes Empirical Risk Approximation as a new induction principle for unsupervised learning. The complexity of the unsupervised learning models is automatically controlled by the two conditions for learning: (i) the empirical risk of learning should uniformly converge towards the expected risk; (ii) the hypothesis class should retain a minimal variety for consistent inference. The maximal entropy principle with deterministic annealing as an efficient search strategy arises from the Empirical Risk Approximation principle as the optimal inference strategy for large learning problems. Parameter selection of learnable data structures is demonstrated for the case of k-means clustering. 1 What is unsupervised learning? Learning algorithms are desi...", + "neighbors": [ + 505, + 615 + ], + "mask": "Test" + }, + { + "node_id": 51, + "label": 3, + "text": "A Control Architecture for Flexible Internet Auction Servers The flexibility to support both high activity and low activity auctions is required by any system that allows bidding by both humans and software agents. We present the control architecture of the Michigan Internet AuctionBot, and discuss some of the system engineering issues that arose in its design. 1 Introduction The Michigan Internet AuctionBot is a highly configurable auction server built to support research on electronic commerce and multiagent negotiation [3]. The first generation architecture was simple and robust, and allowed us to concentrate on other aspects of the system. However, several inefficiencies made it problematic to run auctions with very fast interactions. We have redesigned the core AuctionBot architecture in order to improve overall performance, while still meeting the original goal: a system that is configurable, maintainable, and capable of conducting a large number of simultaneous auctions. In AuctionBot architecture nomenclature, we say an auction is open ...", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 52, + "label": 4, + "text": "A Semiotic Communication Model for Interface Design This research aims to contribute to the creation of a semiotic framework for interface design. Using Jakobson's communication model to analyse the HCI approach to interface development, we explain how some central factors of communication are not sufficiently considered by designers.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 53, + "label": 2, + "text": "Web Crawling Agents for Retrieving Biomedical Information Autonomous agents for topic driven retrieval of information from the Web are currently a very active area of research. The ability to conduct real time searches for information is important for many users including biomedical scientists, health care professionals and the general public. We present preliminary research on different retrieval agents tested on their ability to retrieve biomedical information, whose relevance is assessed using both genetic and ontological expertise. 
In particular, the agents are judged on their performance in fetching information about diseases when given information about genes. We discuss several key insights into the particular challenges of agent based retrieval learned from our initial experience in the biomedical domain.", "neighbors": [ 1, 281, 457, 662, 774, 968 ], "mask": "Train" }, { "node_id": 54, "label": 4, "text": "A Two-stage Scheme for Dynamic Hand Gesture Recognition In this paper a scheme is presented for recognizing hand gestures using the output of a hand tracker which tracks a rectangular window bounding the hand region. A hierarchical scheme for dynamic hand gesture recognition is proposed based on state representation of the dominant feature trajectories using a priori knowledge of the way in which each gesture is performed.", "neighbors": [ 334, 635 ], "mask": "Train" }, { "node_id": 55, "label": 4, "text": "EventScope: Amplifying Human Knowledge and Experience via Intelligent Robotic Systems and Information Interaction The EventScope program develops publicly accessible \"reality browsers\" that display both archived and updating representations of remote environments derived from on-site robotic sensors. The interface encourages collaborative work within a community of users. Public exploration of real remote sites presents a variety of interface issues addressed by EventScope, including time delay, public exploration via a single robot and communication between geographically separate users from diverse backgrounds. Merging public interface with educational and contextual information extends the notion of \"interface\" to \"remote reality library.\" EventScope is a NASA and private foundation-funded project based at Carnegie Mellon University. 1. Introduction Publicly funded Earth and planetary exploration is conducted to increase knowledge of our universe. The public traditionally accesses this knowledge passively, through the media. However, the development of the Web and of robotic remote-sensing tech...", "neighbors": [ 1157 ], "mask": "Train" }, { "node_id": 56, "label": 3, "text": "Nearest Neighbor Queries in a Mobile Environment Nearest neighbor queries have received much interest in recent years due to their increased importance in advanced database applications. However, past work", "neighbors": [ 27, 1012 ], "mask": "Train" }, { "node_id": 57, "label": 3, "text": "E-DEVICE: An Extensible Active Knowledge Base System with Multiple Rule Type Support This paper describes E-DEVICE, an extensible active knowledge base system (KBS) that supports the processing of event-driven, production, and deductive rules into the same active OODB system. E-DEVICE provides the infrastructure for the smooth integration of various declarative rule types, such as production and deductive rules, into an active OODB system that supports low-level event-driven rules only by a) mapping each declarative rule into one event-driven rule, offering centralized rule selection control for correct run-time behavior and conflict resolution, and b) using complex events to map the conditions of declarative rules and monitor the database to incrementally match those conditions. E-DEVICE provides the infrastructure for easily extending the system by adding a) new rule types as subtypes of existing ones and b) transparent optimizations to the rule matching network.
The resulting system is a flexible, yet efficient, KBS that gives the user the ability to express knowledge in a variety of high-level forms for advanced problem solving in data intensive applications.", + "neighbors": [ + 814 + ], + "mask": "Train" + }, + { + "node_id": 58, + "label": 1, + "text": "EWA Learning in Bilateral Call Markets This chapter extends the EWA learning model to bilateral call market games (also known as the \"sealed-bid mechanism\" in two-person bargaining). In these games, a buyer and seller independently draw private values from commonly-known distributions and submit bids. If the buyer's bid is above the seller's, they trade at the midpoint of the two bids; otherwise they don't trade. We apply EWA by assuming that players have value-dependent bidding strategies, and they partially generalize experience from one value/cost condition to another in response to the incentives from nonlinear optimal bid functions. The same learning model can be applied to other market institutions where subjects economize on learning by taking into consideration similarity between past experience and a new environment while still recognizing the difference in market incentives between them. The chapter also presents a new application of EWA to a \"continental divide\" coordination game, and reviews 32 earlier studies comparing EWA, reinforcement, and belief learning. The application shows the advantages of a generalized adaptive model of behavior that includes elements of reinforcement, belief-based and direction learning as special cases at some cost of complexity for the benefit of generality and psychological appeal. It is a good foundation to build upon to extend our understanding of adaptive behavior in more general games and market institutions. In future work, we should investigate the similarity parameters, y and w, to better characterize their magnitude and significance in different market institutions. Keywords: Experimental economics, call markets, sealed-bid mechanism, learning JEL Classification: D44, D83, C92 August 2, 2000. Thanks to Terry Daniel for supplying data. This research has been...", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 59, + "label": 3, + "text": "Extending the ODMG Object Model with Time Although many temporal extensions of the relational data model have been proposed, there is no comparable amount of work in the context of object-oriented data models. Moreover, extensions to the relational model have been proposed in the framework of SQL standards, whereas no attempts have been made to extend the standard for object-oriented databases, defined by ODMG. This paper presents T ODMG, a temporal extension of the ODMG-93 standard data model. The main contributions of this work are, thus, the formalization of the ODMG standard data model and its extension with time. Another contribution of this work is the investigation, on a formal basis, of the main issues arising from the introduction of time in an object-oriented model.", + "neighbors": [ + 397, + 535 + ], + "mask": "Train" + }, + { + "node_id": 60, + "label": 1, + "text": "Reinforcement Learning for Visual Servoing of a Mobile Robot A novel reinforcement learning algorithm is applied to a visual servoing task on a real mobile robot. There is no requirement for camera calibration, an actuator model or a knowledgeable teacher. The controller learns from a critic which gives a scalar reward. 
The learning algorithm handles continuously valued states and actions and can learn from good and bad experiences including data gathered while performing unrelated behaviours and from historical data. Experimental results are presented. 1 Introduction Visual servoing consists of moving some part of a robot to a desired position using visual feedback [Hutchinson et al., 1996]. It is a basic building block for purposeful robot behaviours such as foraging, target pursuit and landmark based navigation. Some degree of calibration is generally required to achieve visual servoing. This calibration can be a time-consuming and error-prone process. In this work we show that reinforcement-based learning can eliminate the ca...", "neighbors": [ 621 ], "mask": "Train" }, { "node_id": 61, "label": 1, "text": "Optimization and Interpretation of Rule-Based Classifiers Abstract. Machine learning methods are frequently used to create rule-based classifiers. For continuous features, linguistic variables used in conditions of the rules are defined by membership functions. These linguistic variables should be optimized at the level of single rules or sets of rules. Assuming Gaussian uncertainty of the input values makes it possible to increase the accuracy of predictions and to estimate probabilities of different classes. Detailed interpretation of relevant rules is possible using (probabilistic) confidence intervals. A real-life example of such interpretation is given for personality disorders. The approach to optimization and interpretation described here is applicable to any rule-based system. 1 Introduction. In many applications rule-based classifiers are created starting from machine learning, fuzzy logic or neural network methods [1]\u2013[3]. If the number of rules is relatively small and accuracy is sufficiently high, such classifiers are an optimal choice, because the reasons for their decisions are easily verified. Crisp logical rules are desirable", "neighbors": [], "mask": "Test" }, { "node_id": 62, "label": 1, "text": "Introducing a New Advantage of Crossover: Commonality-Based Selection The Commonality-Based Crossover Framework defines crossover as a two-step process: 1) preserve the maximal common schema of two parents, and 2) complete the solution with a construction heuristic. In these \u201cheuristic\u201d operators, the first step is a form of selection. This commonality-based form of selection has been isolated in GENIE. Using random parent selection and a non-elitist generational replacement scheme, GENIE does not include fitness-based selection. However, a theoretical analysis shows that \u201cideal\u201d construction heuristics in GENIE can potentially converge to optimal solutions. Experimentally, results show that the effectiveness of practical construction heuristics can be amplified by commonality-based restarts. Overall, it is shown that the commonality hypothesis is valid--schemata common to above-average solutions are indeed above average. Since common schemata can only be identified by multi-parent operators, commonality-based selection is a unique advantage that crossover can enjoy over mutation. 1", "neighbors": [ 866 ], "mask": "Train" }, { "node_id": 63, "label": 3, "text": "CAT: the Copying Approach to Tabling The SLG-WAM is an abstract machine that can be characterized as a sharing approach to implementing tabling: The execution environments of suspended computations are interspersed in the WAM stacks.
Stacks are frozen using a set of freeze registers, and the WAM trail mechanism is extended so that the suspended computations can be resumed. This technique has a reasonably small execution overhead, but it is not easy to implement on top of an existing Prolog system. It is also quite difficult to understand. We propose a new technique for the implementation of tabling: the copying approach to tabling. CAT does not impose any overhead on the execution of Prolog code and can be introduced into an existing Prolog system orthogonally. Also, CAT is easier to understand. We have implemented CAT in the XSB system by taking out the SLG-WAM and adding CAT. We describe the additions needed for adopting CAT in a WAM implementation. We show a case in which CAT performs arbitrarily worse than the SLG-WAM, but on the other hand we present empirical evidence that CAT is competitive and often faster than the SLG-WAM. We also briefly discuss issues related to memory management and scheduling.", "neighbors": [], "mask": "Train" }, { "node_id": 64, "label": 0, "text": "PMS: a PVC Management System for ATM Networks. Reported in this paper is PMS, a PVC management system developed for ATM networks. PMS provides a scalable, end-to-end path management solution required for managing today's complex ATM networks. It aims to assist the network operators in performing PVC operations with simplified procedures and automatic optimum route selection. It also aims to provide effective decision-making support for PVC fault identification and prevention to the network operators. 1 Introduction The ATM communication network is playing an increasingly important role in today's telecommunication networks. It has been widely used in backbone networks, transmission networks, access networks, and even enterprise networks. Such emerging large heterogeneous ATM networks have raised many new challenges for researchers and developers in the area of network management. In the management of ATM communication networks that have increased dramatically in size and complexity, the PVC (Permanent Virtual Circuit) managemen...", "neighbors": [], "mask": "Validation" }, { "node_id": 65, "label": 1, "text": "Direct value-approximation for factored MDPs We present a simple approach for computing reasonable policies for factored Markov decision processes (MDPs), when the optimal value function can be approximated by a compact linear form.", "neighbors": [ 820 ], "mask": "Validation" }, { "node_id": 66, "label": 5, "text": "A Web-Based ITS Controlled by an Expert System Intelligent Tutoring System (ITS) for teaching high school teachers how to use new technologies. It offers course units covering the needs of users with different knowledge levels and characteristics. It tailors the presentation of the educational material to the users' diverse needs by using AI techniques to specify each user's model as well as to make pedagogical decisions. This is achieved via an expert system that uses a hybrid knowledge representation formalism integrating symbolic rules with neurocomputing.", "neighbors": [ 1257 ], "mask": "Train" }, { "node_id": 67, "label": 0, "text": "Emergent Cooperative Goal-Satisfaction in Large Scale Automated-Agent Systems Cooperation among autonomous agents has been discussed in the DAI community for several years.
Papers about cooperation [6, 45], negotiation [33], distributed planning [5], and coalition formation [28, 48], have provided a variety of approaches and several algorithms and solutions to situations wherein cooperation is possible. However, the case of cooperation in large-scale multi-agent systems (MAS) has not been thoroughly examined. Therefore, in this paper we present a framework for cooperative goal-satisfaction in large-scale environments focusing on a low complexity physics-oriented approach. The multi-agent systems with which we deal are modeled by a physics-oriented model. According to the model, MAS inherit physical properties, and therefore the evolution of the computational systems is similar to the evolution of physical systems. To enable implementation of the model, we provide a detailed algorithm to be used by a single agent within the system. The model and the algorithm are a...", "neighbors": [], "mask": "Train" }, { "node_id": 68, "label": 3, "text": "Improving the Performance of High-Energy Physics Analysis through Bitmap Indices Bitmap indices are popular multi-dimensional structures for accessing read-mostly data such as data warehouse (DW) applications, decision support systems (DSS) and on-line analytical processing (OLAP). One of their main strengths is that they provide good performance characteristics for complex ad-hoc queries and an efficient combination of multiple indices in one query. Considerable research work has been done in the area of finite (and low) attribute cardinalities. However, additional complexity is imposed on the design of bitmap indices for high cardinality or even non-discrete attributes, where optimisation techniques different from the ones proposed so far have to be applied. In this paper we discuss the design and implementation of bitmap indices for High-Energy Physics (HEP) analysis, where the potential search space consists of hundreds of independent dimensions. A single HEP query typically covers 10 to 100 dimensions out of the whole search space. In this context we evaluated two different bitmap encoding techniques, namely equality encoding and range encoding. For both methods the number of bit slices (or bitmap vectors) per attribute is a central optimisation parameter. The paper presents some (first) results for choosing the optimal number of bit slices for multi-dimensional indices with attributes of different value distribution and query selectivity. We believe that this discussion is not only applicable to HEP but also to DW, DSS and OLAP type problems in general.", "neighbors": [ 1019 ], "mask": "Train" }, { "node_id": 69, "label": 2, "text": "Logical Structure Detection for Heterogeneous Document Classes We present a fully implemented system based on generic document knowledge for detecting the logical structure of documents for which only general layout information is assumed. In particular, we focus on detecting the reading order. Our system integrates components based on computer vision, artificial intelligence, and natural language processing techniques. The prominent feature of our framework is its ability to handle documents from heterogeneous collections. The system has been evaluated on a standard collection of documents to measure the quality of the reading order detection. Experimental results for each component and the system as a whole are presented and discussed in detail. The performance of the system is promising, especially when considering the diversity of the document collection.
Keywords: Document Analysis, Logical Structure Detection, Reading Order Detection, Natural Language Processing, Spatial Reasoning. 1. INTRODUCTION The goal of document analysis is to automa...", + "neighbors": [], + "mask": "Test" + }, + { + "node_id": 70, + "label": 2, + "text": "Yarrow: A Real-Time Client Side Meta-Search Learner In this paper we report our research on building Yarrow - an intelligent web meta-search engine. The predominant feature of Yarrow is that in contrast to the lack of adaptive learning features in existing metasearch engines, Yarrow is equipped with a practically efficient on-line learning algorithm so that it is capable of helping the user to search for the desired documents with as little feedback as possible. Currently, Yarrow can query eight of the most popular search engines and is able to perform document parsing and indexing, and learning in real-time on client side. Its architecture and performance are also discussed. 1. Introduction As the world wide web evolves and grows so rapidly, web search, an interface between the human users and the vast information gold mine of the web, is becoming a necessary part of people's daily life. Designing and implementing practically effective web search tools is a challenging task. It calls for innovative methods and strategies f...", + "neighbors": [ + 43, + 453, + 561, + 596, + 1000, + 1017 + ], + "mask": "Train" + }, + { + "node_id": 71, + "label": 1, + "text": "Hallucinating Faces In most surveillance scenarios there is a large distance between the camera and the objects of interest in the scene. Surveillance cameras are also usually set up with wide fields of view in order to image as much of the scene as possible. The end result is that the objects in the scene normally appear very small in surveillance imagery. It is generally possible to detect and track the objects in the scene, however, for tasks such as automatic face recognition and license plate reading, resolution enhancement techniques are often needed. Although numerous resolution enhancement algorithms have been proposed in the literature, most of them are limited by the fact that they make weak, if any, assumptions about the scene. We propose an algorithm that can be used to learn a prior on the spatial distribution of the image gradient for frontal images of faces. We proceed to show how such a prior can be incorporated into a super-resolution algorithm to yield 4-8 fold improvements in resolution #...", + "neighbors": [ + 1218 + ], + "mask": "Validation" + }, + { + "node_id": 72, + "label": 1, + "text": "Two Views of Classifier Systems This work suggests two ways of looking at Michigan classifier systems; as Genetic Algorithm-based systems, and as Reinforcement Learning-based systems, and argues that the former is more suitable for traditional strength-based systems while the latter is more suitable for accuracy-based XCS. The dissociation of the Genetic Algorithm from policy determination in XCS is noted, and the two types of Michigan classifier system are contrasted with Pittsburgh systems.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 73, + "label": 0, + "text": "Trust Relationships in a Mobile Agent System . The notion of trust is presented as an important component in a security infrastructure for mobile agents. A trust model that can be used in tackling the aspect of protecting mobile agents from hostile platforms is proposed. 
We define several trust relationships in our model, and present a trust derivation algorithm that can be used to infer new relationships from existing ones. An example of how such a model can be utilized in a practical system is provided. 1", "neighbors": [ 925 ], "mask": "Train" }, { "node_id": 74, "label": 0, "text": "Improving the Scalability of Multi-agent Systems. There is an increasing demand for designers and developers to construct ever larger multi-agent systems. Such systems will be composed of hundreds or even thousands of autonomous agents. Moreover, in open and dynamic environments, the number of agents in the system at any one time will fluctuate significantly. To cope with these twin issues of scalability and variable numbers, we hypothesize that multiagent systems need to be both self-building (able to determine the most appropriate organizational structure for the system by themselves at runtime) and adaptive (able to change this structure as their environment changes). To evaluate this hypothesis we have implemented such a multiagent system and have applied it to the domain of automated trading. Preliminary results supporting the first part of this hypothesis are presented: adaptation and self-organization do indeed make the system better able to cope with large numbers of agents. 1 Introduction When designing or buildin...", "neighbors": [ 942 ], "mask": "Train" }, { "node_id": 75, "label": 4, "text": "A Wearable Spatial Conferencing Space Wearable computers provide constant access to computing and communications resources. In this paper we describe how the computing power of wearables can be used to provide spatialized 3D graphics and audio cues to aid communication. The result is a wearable augmented reality communication space with audio-enabled avatars of the remote collaborators surrounding the user. The user can use natural head motions to attend to the remote collaborators, can communicate freely while being aware of other side conversations and can move through the communication space. In this way the conferencing space can support dozens of simultaneous users. Informal user studies suggest that wearable communication spaces may offer several advantages, both through the increase in the amount of information it is possible to access and the naturalness of the interface. 1: Introduction One of the broad trends emerging in human-computer interaction is the increasing portability of computing and communication fac...", "neighbors": [ 738, 886, 965, 1043, 1192 ], "mask": "Test" }, { "node_id": 76, "label": 0, "text": "A Software Fault Tree Approach to Requirements Analysis of an Intrusion Detection System Requirements analysis for an Intrusion Detection System (IDS) involves deriving requirements for the IDS from analysis of the intrusion domain. When the IDS is, as here, a collection of mobile agents that detect, classify, and correlate system and network activities, the derived requirements include what activities the agent software should monitor, what intrusion characteristics the agents should correlate, where the IDS agents should be placed to feasibly detect the intrusions, and what countermeasures the software should initiate. This paper describes the use of software fault trees for requirements identification and analysis in an IDS. Intrusions are divided into seven stages (following Ruiu), and a fault subtree is developed to model each of the seven stages (reconnaissance, penetration, etc.).
Two examples are provided. This approach was found to support requirements evolution (as new intrusions were identified), incremental development of the IDS, and prioritization of countermeasures.", "neighbors": [], "mask": "Train" }, { "node_id": 77, "label": 3, "text": "Constraints in Object-Oriented Databases Normal forms in relational database theory, like 3NF or BCNF, are defined by means of semantic constraints. Since for these constraints sound and complete axiomatisations exist and, additionally, for some of these constraints the implication problem is decidable, computer aided database design is possible for relational data models. Object-oriented database theory lacks such normal forms, partly because neither a classification of semantic constraints nor sound and complete axiomatisations exist. In this work we present three classes of semantic constraints for object-oriented data models and show that these constraints have a sound and complete axiomatisation. Thus we prepare the grounds for normal forms in object-oriented data models and subsequently for computer aided object-oriented database design. 1 Introduction The theory of database design for relational data models identifies a number of properties to characterise good database schemas. These properties then lead to no...", "neighbors": [ 773, 1073 ], "mask": "Train" }, { "node_id": 78, "label": 3, "text": "Flattening an Object Algebra to Provide Performance Algebraic transformation and optimization techniques have been the method of choice in relational query execution, but applying them in OODBMS has been difficult due to the complexity of object-oriented query languages. This paper demonstrates that the problem can be simplified by mapping a complex storage model to the flat binary model implemented by Monet, a state-of-the-art database kernel. We present a generic mapping scheme to flatten data models and study the case of a straightforward object-oriented model. We show how flattening enabled us to implement a full-fledged query algebra on it, using only a very limited set of simple operations. The required primitives and query execution strategies are discussed, and their performance is evaluated on the 1GB TPC-D benchmark, showing that our divide-and-conquer approach yields excellent results. 1 Introduction During the last decade, relational database technology has grown towards industrial maturity, and the attention of the research...", "neighbors": [ 364, 452, 1057, 1128 ], "mask": "Train" }, { "node_id": 79, "label": 1, "text": "A Neuro-Fuzzy Solution for Integrated Visual and Force Control In this paper the use of a B-spline neuro-fuzzy model for different tasks such as vision-based fine-positioning using uncalibrated cameras and force control is presented. It is shown that neuro-fuzzy controllers can be used not only for low-dimensional problems like force control but also for high-dimensional problems like vision-based sensorimotor control and for fusing input from different sensors. Controllers of this type can be modularly combined to solve a given assembly problem. 1 Introduction It is well-known that general fuzzy rule descriptions of systems with a large number of input variables suffer from the problem of the \"curse of dimensionality.\" In many real-world applications it is difficult to identify the decisive input parameters and thus to reduce the number of input variables to the minimum.
A general solution to building fuzzy models is not only interesting from a theoretical point of view, it may also extend the range of applications of fuzzy control to more complex intel...", "neighbors": [ 1121 ], "mask": "Train" }, { "node_id": 80, "label": 2, "text": "A Non-obtrusive User Interface for Increasing Social Awareness on the World Wide Web Arguing for the need to increase social awareness on the World Wide Web, we describe a user interface based on the metaphor of windows bridging electronic and physical spaces. We present a system that, with the aim of making on-line activity perceptible in the physical world, makes it possible to hear people visiting one's Web site. The system takes advantage of the seamless and continuous network connection offered by handheld Web-appliances such as PDA's.", "neighbors": [ 1214 ], "mask": "Train" }, { "node_id": 81, "label": 4, "text": "The Morph Node We discuss potential and limitations of a Morph Node, inspired by the corresponding construct in Java3D. A Morph Node in Java3D interpolates vertex attributes among several homeomorphic geometries. This node is a promising candidate for the delivery of 3D animation in a very compact form. We review the state-of-the-art in Web 3D techniques, allowing for the possibility of interpolating among several geometries. This review leads to a simple extension for VRML-97 as well as a recommendation for necessary changes in Java3D. Furthermore, we discuss various optimization issues for Morph Nodes. CR Categories and Subject Descriptors. I.3.6 [Computer Graphics] Methodology and Techniques: Standards - VRML; I.3.7 [Computer Graphics] Three Dimensional Graphics and Realism: Animation; I.3.8 [Computer Graphics] Applications. Additional Keywords. Animation, Avatars, Morphing, Virtual Humans, VRML. INTRODUCTION Animation of three-dimensional shapes involves the change of vertex attributes over ...", "neighbors": [], "mask": "Train" }, { "node_id": 82, "label": 0, "text": "Combinations of Modal Logics Combining logics for modelling purposes has become a rapidly expanding enterprise that is inspired mainly by concerns about modularity and the wish to join together different kinds of information. As any interesting real world system is a complex, composite entity, decomposing its descriptive requirements (for design, verification, or maintenance purposes) into simpler, more restricted, reasoning tasks is not only appealing but is often the only plausible way forward. It would be an exaggeration to claim that we currently have a thorough understanding of `combined methods.' However, a core body of notions, questions and results has emerged for an important class of combined logics, and we are beginning to understand how this core theory behaves when it is applied outside this particular class. In this paper we will consider the combination of modal (including temporal) logics, identifying leading edge research that we, and others, have carried out. Such combined sys", "neighbors": [ 159, 327, 644 ], "mask": "Train" }, { "node_id": 83, "label": 1, "text": "A Rule Induction Approach to Modeling Regional Pronunciation Variation. This paper describes the use of rule induction techniques for the automatic extraction of phonemic knowledge and rules from pairs of pronunciation lexica. This extracted knowledge allows the adaptation of speech processing systems to regional variants of a language.
As a case study, we apply the approach to Northern Dutch and Flemish (the variant of Dutch spoken in Flanders, a part of Belgium), based on Celex and Fonilex, pronunciation lexica for Northern Dutch and Flemish, respectively. In our study, we compare two rule induction techniques, Transformation-Based Error-Driven Learning (TBEDL) (Brill, 1995) and C5.0 (Quinlan, 1993), and evaluate the extracted knowledge quantitatively (accuracy) and qualitatively (linguistic relevance of the rules). We conclude that, whereas classification-based rule induction with C5.0 is more accurate, the transformation rules learned with TBEDL can be more easily interpreted. 1.", "neighbors": [ 1168 ], "mask": "Train" }, { "node_id": 84, "label": 2, "text": "A World Wide Web Meta Search Engine Using an Automatic Query Routing Algorithm CONTENTS 1. INTRODUCTION ... 6 2. LITERATURE REVIEW ... 9 2.1 Overview of conventional search techniques ... 9 2.2 Conventional query routing systems ... 11 2.2.1 Manual query routing services ... 11 2.2.2 Automated query routing systems based on centroids ... 12 2.2.3 Automated query routing systems without centroids ... 12 3. SYSTEM STRUCTURE ... 14 3.1 System overview ... 14 3.2 Off-line operations ...", "neighbors": [ 410, 1032 ], "mask": "Train" }, { "node_id": 85, "label": 1, "text": "Advanced DeInterlacing techniques with the use of Zonal Based Algorithms This paper describes a new highly efficient deinterlacing approach based on motion estimation and compensation techniques. The proposed technique mainly benefits from the motion vector properties of zonal based algorithms, such as the Advanced Predictive Diamond Zonal Search (APDZS) and the Predictive Motion Vector Field Adaptive Search Technique (PMVFAST), and from multihypothesis motion compensation, but also from an additional motion classification phase where, depending on the motion of a pixel, additional spatial and temporal information is considered to further improve performance. Extensive simulations demonstrate the efficacy of these algorithms, especially when compared to standard deinterlacing techniques such as the line doubling and line averaging algorithms.", "neighbors": [], "mask": "Test" }, { "node_id": 86, "label": 4, "text": "LivingLab: A white paper The LivingLab is a planned research infrastructure that is pivotal for user-system interaction research in the next decade. This article presents the concept and outlines a research programme that will be served by this facility. These future plans are motivated by a vision of future developments concerning interaction with intelligent environments.", "neighbors": [ 684 ], "mask": "Train" }, { "node_id": 87, "label": 4, "text": "Let's Browse: A Collaborative Browsing Agent Web browsing, like most of today's desktop applications, is usually a solitary activity.
Other forms of media, such as watching television, are often enjoyed by groups of people, such as families or friends. What would it be like to do collaborative Web browsing? Could the computer provide assistance to group browsing by trying to help find mutual interests among the participants? Let's Browse is an experiment in building an agent to assist a group of people in browsing, by suggesting new material likely to be of common interest. It is built as an extension to the single-user Web browsing agent Letizia. Let's Browse features automatic detection of the presence of users, automated \"channel surfing\" browsing, and dynamic display of the user profiles and explanation of recommendations. \u00a9 1999 Elsevier Science B.V. All rights reserved. Keywords: Browsing; Collaboration; Agents; User profiles 1. Collaborative browsing Increasingly, Web browsing will be performed in collaborative settings, ...", "neighbors": [], "mask": "Train" }, { "node_id": 88, "label": 3, "text": "Typechecking for XML Transformers We study the typechecking problem for XML transformers: given an XML transformation program and a DTD for the input XML documents, check whether every result of the program conforms to a specified output DTD. We model XML transformers using a novel device called a k-pebble transducer, which can express most queries without data-value joins in XML-QL, XSLT, and other XML query languages. Types are modeled by regular tree languages, a robust extension of DTDs. The main result of the paper is that typechecking for k-pebble transducers is decidable. Consequently, typechecking can be performed for a broad range of XML transformation languages, including XML-QL and a fragment of XSLT. 1. INTRODUCTION Traditionally, database query languages have focused on data retrieval, with complex data transformations left to applications. The new XML data exchange standard for the Web, and emerging applications requiring data wrapping and integration, have shifted the focus towards data transformations....", "neighbors": [ 585 ], "mask": "Test" }, { "node_id": 89, "label": 5, "text": "BISMARC: A Biologically Inspired System for Map-based Autonomous Rover Control As the complexity of the missions to planetary surfaces increases, so too does the need for autonomous rover systems. This need is complicated by the power, mass and computer storage restrictions on such systems (Miller, 1992). To address these problems, we have recently developed a system called BISMARC (Biologically Inspired System for Map-based Autonomous Rover Control) for planetary missions involving multiple small, lightweight surface rovers (Huntsberger, 1997). BISMARC is capable of cooperative planetary surface retrieval operations such as a multiple cache recovery mission to Mars. The system employs autonomous navigation techniques, behavior-based control for surface retrieval operations, and an action selection mechanism based on a modified form of free flow hierarchy (Rosenblatt and Payton, 1989). This paper primarily describes the navigation and mapping subsystems of BISMARC. They are inspired by some recent studies of London taxi drivers indicating that the right hippo...", "neighbors": [], "mask": "Train" }, { "node_id": 90, "label": 4, "text": "Contract-Net-Based Learning in a User-Adaptive Interface Agency. This paper describes a multi-agent learning approach to adaptation to users' preferences realized by an interface agency.
Using a contract-net-based negotiation technique, agents as contractors as well as managers negotiate with each other to pursue the overall goal of dynamic user adaptation. By learning from indirect user feedback, the adjustment of internal credit vectors and the assignment of contractors that gained maximal credit with respect to the user's current preferences, the preceding session, and current situational circumstances can be realized. In this way, user adaptation is achieved without accumulating explicit user models but by the use of implicit, distributed user models. 1 Introduction Interface agents are computer programs that enhance the human-computer interaction by mediating a relationship between technical systems and users [Lau90]. On the one hand, they provide assistance to users by acting on his/her behalf and automating his/her actions [Nor94...", + "neighbors": [ + 432 + ], + "mask": "Test" + }, + { + "node_id": 91, + "label": 1, + "text": "On the Correspondence between Neural Folding Architectures and Tree Automata The folding architecture together with adequate supervised training algorithms is a special recurrent neural network model designed to solve inductive inference tasks on structured domains. Recently, the generic architecture has been proven as a universal approximator of mappings from rooted labeled ordered trees to real vector spaces. In this article we explore formal correspondences to the automata (language) theory in order to characterize the computational power (representational capabilities) of different instances of the generic folding architecture. As the main result we prove that simple instances of the folding architecture have the computational power of at least the class of deterministic bottom-up tree automata. It is shown how architectural constraints like the number of layers, the type of the activation functions (first-order vs. higher-order) and the transfer functions (threshold vs. sigmoid) influence the representational capabilities. All proofs are carried out in a c...", + "neighbors": [ + 46, + 1223 + ], + "mask": "Validation" + }, + { + "node_id": 92, + "label": 3, + "text": "On Computational Representations of Herbrand Models . Finding computationally valuable representations of models of predicate logic formulas is an important issue in the field of automated theorem proving, e.g. for automated model building or semantic resolution. In this article we treat the problem of representing single models independently of building them and discuss the power of different mechanisms for this purpose. We start with investigating context-free languages for representing single Herbrand models. We show their computational feasibility and prove their expressive power to be exactly the finite models. We show an equivalence with \"ground atoms and ground equations\" concluding equal expressive power. Finally we indicate how various other well known techniques could be used for representing essentially infinite models (i.e. models of not finitely controllable formulas), thus motivating our interest in relating model properties with syntactical properties of corresponding Herbrand models and in investigating connections betwe...", + "neighbors": [], + "mask": "Test" + }, + { + "node_id": 93, + "label": 3, + "text": "Integrating Spatial Information And Image Analysis - One Plus One Makes Ten Photogrammetry and remote sensing have proven their efficiency for spatial data collection in many ways. 
Interactive mapping at digital workstations is performed by skilled operators, which guarantees excellent quality, in particular of the geometric data. In this way, worldwide acquisition of a large number of national GIS databases has been supported, and still a lot of production effort is devoted to this task. In the field of image analysis, it has become evident that algorithms for scene interpretation and 3D reconstruction of topographic objects, which rely on a single data source, cannot function efficiently. Research in two directions promises to be more successful. Multiple, largely complementary, sensor data like range data from laser scanners, SAR and panchromatic or multi-/hyper-spectral aerial images have been used to achieve robustness and better performance in image analysis. On the other hand, given GIS databases, e.g. layers from topographic maps, can be considered as vi...", "neighbors": [ 339 ], "mask": "Validation" }, { "node_id": 94, "label": 1, "text": "Naive Bayes for Regression Abstract. Despite its simplicity, the naive Bayes learning scheme performs well on most classification tasks, and is often significantly more accurate than more sophisticated methods. Although the probability estimates that it produces can be inaccurate, it often assigns maximum probability to the correct class. This suggests that its good performance might be restricted to situations where the output is categorical. It is therefore interesting to see how it performs in domains where the predicted value is numeric, because in this case, predictions are more sensitive to inaccurate probability estimates. This paper shows how to apply the naive Bayes methodology to numeric prediction (i.e., regression) tasks by modeling the probability distribution of the target value with kernel density estimators, and compares it to linear regression, locally weighted linear regression, and a method that produces \u201cmodel trees\u201d\u2014decision trees with linear regression functions at the leaves. Although we exhibit an artificial dataset for which naive Bayes is the method of choice, on real-world datasets it is almost uniformly worse than locally weighted linear regression and model trees. The comparison with linear regression depends on the error measure: for one measure naive Bayes performs similarly, while for another it is worse. We also show that standard naive Bayes applied to regression problems by discretizing the target value performs similarly badly. We then present empirical evidence that isolates naive Bayes\u2019 independence assumption as the culprit for its poor performance in the regression setting. These results indicate that the simplistic statistical assumption that naive Bayes makes is indeed more restrictive for regression than for classification.", "neighbors": [], "mask": "Validation" }, { "node_id": 95, "label": 1, "text": "Applying Parallelism to Improve Genetic Algorithm-based Design Optimization Introduction The abundance of powerful workstations makes coarse-grained parallelization an obvious enhancement to many optimization techniques, including genetic algorithms [Gol89, DM97]. While initial modifications have been made to GADO (Genetic Algorithm for Design Optimization [Ras98, RHG97]), such changes have not been carefully analyzed for potential impacts on quality. More generally, parallelization has the potential to improve GA performance through the use of alternative models of computation.
Parallelism can certainly reduce the total elapsed clock-time for a solution, but as a change in model of computation (either real or simulated), it can change the number of simulator calls and even make new solutions achievable. The effects of parallelization on GADO were investigated during my summer internship at the Center for Computational Design. 2 Objectives Since a straightforward parallelized implementation already existed, my first tasks were to ana", "neighbors": [], "mask": "Validation" }, { "node_id": 96, "label": 0, "text": "An Intelligent Agent Framework In VRML Worlds actions, e.g. move to the next room, are received by it and consequently sent to the EAC. Finally, the abstract action arrives at the Virtual Reality Management Unit that specifies in detail the received actions. It provides specific values concerning the orientation and position of the avatar, e.g. it specifies the coordinates, orientation and path so that it can successfully move to the next room, and sends them as commands to the Virtual Reality World Browser. The browser executes the command by altering the virtual environment appropriately. When the changes have been performed, the AEC unit notifies the logical core that the action has been successfully executed and the logical core goes on by updating its internal and external state. Consequently, the agent looks around the virtual space, gathers any additional information and decides the next step it should take to satisfy its goals.", "neighbors": [], "mask": "Test" }, { "node_id": 97, "label": 1, "text": "Model-Free Least-Squares Policy Iteration We propose a new approach to reinforcement learning which combines least squares function approximation with policy iteration. Our method is model-free and completely off-policy. We are motivated by the least squares temporal difference learning algorithm (LSTD), which is known for its efficient use of sample experiences compared to pure temporal difference algorithms. LSTD is ideal for prediction problems, however it heretofore has not had a straightforward application to control problems. Moreover, approximations learned by LSTD are strongly influenced by the visitation distribution over states. Our new algorithm, Least-Squares Policy Iteration (LSPI), addresses these issues. The result is an off-policy method which can use (or reuse) data collected from any source. We test LSPI on several problems, including a bicycle simulator in which it learns to guide the bicycle to a goal efficiently by merely observing a relatively small number of completely random trials.", "neighbors": [ 13, 1238 ], "mask": "Train" }, { "node_id": 98, "label": 3, "text": "The BASIS System: a Benchmarking Approach for Spatial Index Structures This paper describes the design of the BASIS prototype system, which is currently under implementation. BASIS stands for Benchmarking Approach for Spatial Index Structures. It is a prototype system aiming at performance evaluation of spatial access methods and query processing strategies, under different data sets, various query types, and different workloads. BASIS is based on a modular architecture, composed of a simple storage manager, a query processor, and a set of algorithmic techniques to facilitate benchmarking.
The main objective of BASIS is twofold: (i) to provide a benchmarking environment for spatial access methods and related query evaluation techniques, and (ii) to allow comparative studies of spatial access methods in different cases but under a common framework. We are currently extending it to support the fundamental features of spatiotemporal data management and access methods.", "neighbors": [ 1058 ], "mask": "Train" }, { "node_id": 99, "label": 4, "text": "Exploration of Perceptual Computing for Smart-Its The future success of ubiquitous computing depends in large part on how well applications can adapt to their environment and act accordingly. This thesis has set itself the goal of exploring perceptual computing for Smart-Its, which is one such ubiquitous computing vision.", "neighbors": [ 1006 ], "mask": "Validation" }, { "node_id": 100, "label": 2, "text": "Image Retrieval: Current Techniques, Promising Directions And Open Issues This paper provides a comprehensive survey of the technical achievements in the research area of image retrieval, especially content-based image retrieval, an area that has been so active and prosperous in the past few years. The survey includes 100+ papers covering the research aspects of image feature representation and extraction, multidimensional indexing, and system design, three of the fundamental bases of content-based image retrieval. Furthermore, based on the state-of-the-art technology available now and the demand from real-world applications, open research issues are identified and future promising research directions are suggested. \u00a9 1999 Academic Press 1.", "neighbors": [ 118, 523, 1203 ], "mask": "Validation" }, { "node_id": 101, "label": 3, "text": "Scalable Trigger Processing Current database trigger systems have extremely limited scalability. This paper proposes a way to develop a truly scalable trigger system. Scalability to large numbers of triggers is achieved with a trigger cache to use main memory effectively, and a memory-conserving selection predicate index based on the use of unique expression formats called expression signatures. A key observation is that if a very large number of triggers are created, many will have the same structure, except for the appearance of different constant values. When a trigger is created, tuples are added to special relations created for expression signatures to hold the trigger's constants. These tables can be augmented with a database index or main-memory index structure to serve as a predicate index. The design presented also uses a number of types of concurrency to achieve scalability, including token (tuple)-level, condition-level, rule action-level, and data-level concurrency. 1. Introduction Trigger feature...", "neighbors": [ 475, 651 ], "mask": "Train" }, { "node_id": 102, "label": 1, "text": "Robust Entropy Estimation Strategies Based on Edge Weighted Random Graphs (with corrections) In this paper we treat the problem of robust entropy estimation given a multidimensional random sample from an unknown distribution. In particular, we consider estimation of the Renyi entropy of fractional order which is insensitive to outliers, e.g. high variance contaminating distributions, using the k-point minimal spanning tree (kMST). A greedy algorithm for approximating the NP-hard problem of computing the k-minimal spanning tree is given which is a generalization of the potential function partitioning method of Ravi et al.
1 The basis for our approach is an asymptotic theorem establishing that the log of the overall length or weight of the greedy approximation is a strongly consistent estimator of the Renyi entropy. Quantitative robustness of the estimator to outliers is established using Hampel's method of influence functions. 2 The structure of the influence function indicates that the k-MST is a natural extension of the one-dimensional \u03b1-trimmed mean for multi-dimensional...", "neighbors": [], "mask": "Test" }, { "node_id": 103, "label": 3, "text": "Using Java and CORBA for Implementing Internet Databases We describe an architecture called WebFINDIT that allows dynamic couplings of Web accessible databases based on their content and interest. We propose an implementation using WWW, Java, JDBC, and CORBA's ORBs that communicate via CORBA's IIOP protocol. The combination of these technologies offers a compelling middleware infrastructure to implement wide-area enterprise applications. In addition to a discussion of WebFINDIT's core concepts and implementation architecture, we also discuss an experience of using WebFINDIT in a healthcare application. 1 Introduction The growth of the Internet and the Web has dramatically increased the need for data sharing. The Web has brought a wave of new users and service providers to the Internet. It contains a huge quantity of heterogeneous information and services (e.g., home pages, online digital libraries, product catalogs, and so on) (Bouguettaya et al. 1998). The result is that the Web is now accepted as the de facto support in all domains of li...", "neighbors": [], "mask": "Train" }, { "node_id": 104, "label": 0, "text": "Cross Entropy Guided Ant-like Agents Finding Dependable Primary/Backup Path Patterns in Networks Telecommunication network owners and operators have for half a century been well aware of the potential loss of revenue if a major trunk is damaged; thus dependability has been implemented at high cost. A simple, effective and common dependability scheme is 1:1 protection with 100% capacity redundancy in the network. A growing number of applications in need of dependable connections with specific requirements to bandwidth and delay have started using the internet (which only provides best effort transport) as their base communication service. In this paper we adopt the 1:1 protection scheme and incorporate it as part of a routing system applicable for internet infrastructures. 100% capacity redundancy is no longer required. A distributed stochastic path finding (routing) algorithm based on Rubinstein's Cross Entropy method for combinatorial optimisation is presented. Early results from Monte Carlo simulations indeed indicate that the algorithm is capable of finding pairs of independent primary and backup paths satisfying specific bandwidth constraints.", "neighbors": [ 967 ], "mask": "Train" }, { "node_id": 105, "label": 1, "text": "Bayesian Representations and Learning Mechanisms for Content-Based Image Retrieval We have previously introduced a Bayesian framework for content-based image retrieval (CBIR) that relies on a generative model for feature representation based on embedded mixtures. This is a truly generic image representation that can jointly model color and texture and has been shown to perform well across a broad spectrum of image databases. In this paper, we expand the Bayesian framework along two directions.
First, we show that the formulation of CBIR as a problem of Bayesian inference leads to a natural criterion for evaluating local image similarity without requiring any image segmentation. This allows the practical implementation of retrieval systems where users can provide image regions, or objects, as queries. Region-based queries are significantly less ambiguous than queries based on entire images, leading to significant improvements in retrieval precision. Second, we present a Bayesian learning algorithm that relies on belief propagation to integrate feedback provided by the...", "neighbors": [ 961 ], "mask": "Train" }, { "node_id": 106, "label": 0, "text": "Specification of Heterogeneous Agent Architectures. Agent-based software applications need to incorporate agents having heterogeneous architectures in order for each agent to optimally perform its task. HEMASL is a simple meta-language used to specify intelligent agents and multi-agent systems when different and heterogeneous agent architectures must be used. HEMASL specifications are based on an agent model that abstracts several existing agent architectures. The paper describes some of the features of the language, presents examples of its use and outlines its operational semantics. We argue that adding HEMASL to CaseLP, a specification and prototyping environment for MAS, can enhance its flexibility and usability. 1 Introduction Intelligent agents and multi-agent systems (MAS) are increasingly being acknowledged as the \"new\" modelling techniques to be used to engineer complex and distributed software applications [17, 9]. Agent-based software development is concerned with the realization of software applications modelled ...", "neighbors": [ 485, 884, 964, 1222 ], "mask": "Test" }, { "node_id": 107, "label": 1, "text": "Comparing Evolutionary Programs and Evolutionary Pattern Search Algorithms: A Drug Docking Application Evolutionary programs (EPs) and evolutionary pattern search algorithms (EPSAs) are two general classes of evolutionary methods for optimizing on continuous domains. The relative performance of these methods has been evaluated on standard global optimization test functions, and these results suggest that EPSAs more robustly converge to near-optimal solutions than EPs. In this paper we evaluate the relative performance of EPSAs and EPs on a real-world application: flexible ligand binding in the Autodock docking software. We compare the performance of these methods on a suite of docking test problems. Our results confirm that EPSAs and EPs have comparable performance, and they suggest that EPSAs may be more robust on larger, more complex problems. 1 Introduction Evolutionary programs (EPs) and evolutionary pattern search algorithms (EPSAs) are two classes of evolutionary algorithms (EAs) that have been specifically developed for solving problems of the form min_{x \u2208 R^n} f(x). In particula...", "neighbors": [], "mask": "Train" }, { "node_id": 108, "label": 1, "text": "Making Use of Population Information in Evolutionary Artificial Neural Networks This paper is concerned with the simultaneous evolution of artificial neural network (ANN) architectures and weights. The current practice in evolving ANN's is to choose the best ANN in the last generation as the final result.
This paper proposes a different approach to form the final result by combining all the individuals in the last generation in order to make best use of all the information contained in the whole population. This approach regards a population of ANN's as an ensemble and uses a combination method to integrate them. Although there has been some work on integrating ANN modules [2], [3], little has been done in evolutionary learning to make best use of its population information. Four linear combination methods have been investigated in this paper to illustrate our ideas. Three real-world data sets have been used in our experimental studies, which show that the recursive least-square (RLS) algorithm always produces an integrated system that outperforms the best individual. The results confirm that a population contains more information than a single individual. Evolutionary learning should exploit such information to improve generalization of learned systems.", "neighbors": [ 504 ], "mask": "Train" }, { "node_id": 109, "label": 4, "text": "An Agent-Based Approach to the Construction of Floristic Digital Libraries This paper describes an agent-assisted approach to the construction of floristic digital libraries, which consist of very large botanical data repositories and related services. We propose an environment, termed Chrysalis, in which authors of plant morphologic descriptions can enter data into a digital library via a web-based editor. An agent that runs concurrently with the editor suggests potentially useful morphologic descriptions based on similar documents existing in the library. Benefits derived from the introduction of Chrysalis include reduced potential for errors and data inconsistencies, increased parallelism among descriptions, and considerable savings in the time regularly spent in visually checking for parallelism and manually editing data. KEYWORDS: agents, agent-based interfaces, floristic digital libraries, FNA, Chrysalis. INTRODUCTION Constructing the vast data repositories that will support knowledge-intensive activities in digital libraries poses problems of enormo...", "neighbors": [], "mask": "Train" }, { "node_id": 110, "label": 0, "text": "Response Generation in Collaborative Negotiation In collaborative planning activities, since the agents are autonomous and heterogeneous, it is inevitable that conflicts arise in their beliefs during the planning process. In cases where such conflicts are relevant to the task at hand, the agents should engage in collaborative negotiation as an attempt to square away the discrepancies in their beliefs. This paper presents a computational strategy for detecting conflicts regarding proposed beliefs and for engaging in collaborative negotiation to resolve the conflicts that warrant resolution. Our model is capable of selecting the most effective aspect to address in its pursuit of conflict resolution in cases where multiple conflicts arise, and of selecting appropriate evidence to justify the need for such modification. Furthermore, by capturing the negotiation process in a recursive Propose-Evaluate-Modify cycle of actions, our model can successfully handle embedded negotiation subdialogues. 1 Introduction In collaborative consultat...", "neighbors": [ 1251 ], "mask": "Validation" }, { "node_id": 111, "label": 3, "text": "View Security as the Basis for Data Warehouse Security Access
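Node 108's abstract above treats the final evolved population as an ensemble and reports that a recursive least-square (RLS) combination beats the best single network. Below is a minimal sketch of the idea with a batch least-squares fit standing in for the recursive update; the random tanh "networks" and the toy regression data are illustrative assumptions, not the paper's setup.

```python
# Population-as-ensemble sketch (cf. node 108): combine every individual's
# output with linear weights instead of keeping only the best individual.
# Batch least squares stands in for the paper's recursive (RLS) update.
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * x[:, 0])                      # toy target function

# Stand-ins for the final population: imperfect random-feature "networks".
population = [lambda x, w=rng.normal(size=(1,)), b=rng.normal(): np.tanh(x @ w + b)
              for _ in range(10)]
outputs = np.column_stack([net(x) for net in population])  # (samples, nets)

# Linear combination weights minimizing squared error on the data.
weights, *_ = np.linalg.lstsq(outputs, y, rcond=None)
ensemble = outputs @ weights

best = min(range(10), key=lambda i: np.mean((outputs[:, i] - y) ** 2))
print("best individual MSE:", np.mean((outputs[:, best] - y) ** 2))
print("ensemble MSE:      ", np.mean((ensemble - y) ** 2))  # never worse on the fit data
```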
permissions in a data warehouse are currently managed in a separate world from the sources' policies. The consequences are inconsistencies, slow response to change, and wasted administrative work. We present a different approach, which treats the sources' exported tables and the warehouse as part of the same distributed database. Our main result is a way to control derived products by extending SQL grants rather than creating entirely new mechanisms. We provide a powerful, sound inference theory that derives permissions on warehouse tables (both materialized and virtual), making the system easier to administer and its applications more robust. We also propose a new permission construct suitable for views that filter data from mutually-suspicious parties. 1 Introduction A key challenge for data warehouse security is how to manage the entire system coherently -- from sources and their export tables, to warehouse stored tables (conventional and cubes) and vi...", "neighbors": [], "mask": "Train" }, { "node_id": 112, "label": 2, "text": "Web Mining Research: A Survey With the huge amount of information available online, the World Wide Web is a fertile area for data mining research. The Web mining research is at the crossroads of research from several research communities, such as database, information retrieval, and within AI, especially the sub-areas of machine learning and natural language processing. However, there is a lot of confusion when comparing research efforts from different points of view. In this paper, we survey the research in the area of Web mining, point out some confusion regarding the usage of the term Web mining and suggest three Web mining categories. Then we situate some of the research with respect to these three categories. We also explore the connection between the Web mining categories and the related agent paradigm. For the survey, we focus on representation issues, on the process, on the learning algorithm, and on the application of the recent works as the criteria. We conclude the paper with some research issues.", "neighbors": [ 216, 411, 774, 855, 933, 1017, 1120 ], "mask": "Train" }, { "node_id": 113, "label": 2, "text": "Digital Library Resources as a Basis for Collaborative Work The creation of large, networked, digital document resources has greatly facilitated information access and dissemination. We suggest that such resources can further enhance how we work with information, namely, that they can provide a substrate that supports collaborative work. We focus on one form of collaboration, annotation, by which we mean any of an open-ended number of creative document manipulations which are useful to record and to share with others. Widespread digital document dissemination required technological enablers, such as web clients and servers. The resulting infrastructure is one in which information may be widely shared by individuals across administrative boundaries. To achieve the same ubiquitous availability for annotation requires providing support for spontaneous collaboration, that is, for collaboration across administrative boundaries without significant prior agreements.
Annotation is not more commonplace, we suggest, because the technological needs of sp...", + "neighbors": [], + "mask": "Validation" + }, + { + "node_id": 114, + "label": 0, + "text": "Compiling for Fast State Capture of Mobile Agents Saving, transporting, and restoring the state of a mobile agent is one of the main problems in implementing a mobile agents system. We present an approach, implemented as part of our Messengers system, that represents a trade-off between the unrestricted use of pointers and the ability to perform fully transparent state capture. When writing the code for an agent, the programmer has a choice between two types of functions. C functions are fully general and may use unrestricted pointers, but they are not allowed to invoke any migration commands. Messengers functions may cause migration but their use of pointers is restricted to only a special type of a dynamic array structure. Under these restrictions, the local variables, the program counter, and the calling stack of an agent can all be made machine-independent and can be captured/restored transparently during migration. 1 Introduction Saving, transporting, and restoring the state of a mobile agent is one of the main problem in implem...", + "neighbors": [], + "mask": "Test" + }, + { + "node_id": 115, + "label": 2, + "text": "Text Database Selection for Longer Queries A metasearch engine is a system that supports unified access to multiple local search engines. One of the main challenges in building a large-scale metasearch engine is to solve the database (search engine) selection problem, which is to efficiently and accurately determine a small number of potentially useful local search engines to invoke for each user query. For the database of each search engine, a representative which indicates approximately the contents of the database is created in advance to enable database selection. The representatives of all databases can be integrated into a single representative to make the selection process more scalable. While an integrated representative with high scalability has just been proposed and has been found to be effective for short queries, its effectiveness for longer queries is significantly lower. In the Internet environment, most queries initially submitted by users are short queries. However, it has been found that better search effectiveness can often be achieved when additional terms are added to the initial queries through query expansion or relevance feedback. The resulting queries are usually longer than the initial queries. In this paper, we propose a new method to construct database representatives and to decide which databases to select for longer queries. Experimental results are given to compare the performance of the new method with that of a previous method.", + "neighbors": [ + 216, + 224, + 241, + 510, + 696, + 792, + 931, + 1003, + 1017 + ], + "mask": "Train" + }, + { + "node_id": 116, + "label": 1, + "text": "Equivalence in Knowledge Representation: Automata, Recurrent Neural Networks, and Dynamical Fuzzy Systems Neurofuzzy systems-the combination of artificial neural networks with fuzzy logic-have become useful in many application domains. However, conventional neurofuzzy models usually need enhanced representation power for applications that require context and state (e.g., speech, time series prediction, control). Some of these applications can be readily modeled as finite state automata. 
Previously, it was proved that deterministic finite state automata (DFA) can be synthesized by or mapped into recurrent neural networks by directly programming the DFA structure into the weights of the neural network. Based on those results, a synthesis method is proposed for mapping fuzzy finite state automata (FFA) into recurrent neural networks. Furthermore, this mapping is suitable for direct implementation in very large scale integration (VLSI), i.e., the encoding of FFA as a generalization of the encoding of DFA in VLSI systems. The synthesis method requires FFA to undergo a transformation prior to being mapped into recurrent networks. The neurons are provided with an enriched functionality in order to accommodate a fuzzy representation of FFA states. This enriched neuron functionality also permits fuzzy parameters of FFA to be directly represented as parameters of the neural network. We also prove the stability of fuzzy finite state dynamics of the constructed neural networks for finite values of network weight and, through simulations, give empirical validation of the proofs. Hence, we prove various knowledge equivalence representations between neural and fuzzy systems and models of automata.", + "neighbors": [ + 740 + ], + "mask": "Train" + }, + { + "node_id": 117, + "label": 4, + "text": "SIDE Surfer: a Spontaneous Information Discovery and Exchange System Development of wireless communications enables the rise of networking applications in embedded systems. Web interactions, which are the most spread, are nowadays available on wireless PDAs. Moreover, we can observe a development of ubiquitous computing. Based on this concept, many works aim to consider user's context as part of the parameters of the applications. The context notion can include the user's location, his social activity . . . Taking part from emerging technologies enabling short range and direct wireless communications (which allow to define a proximity context), the aim of our study is to design a new kind of application, extending the Web paradigm: spontaneous and proximate Web interactions.", + "neighbors": [], + "mask": "Validation" + }, + { + "node_id": 118, + "label": 2, + "text": "MetaSEEk: A Content-Based Meta-Search Engine for Images Search engines are the most powerful resources for finding information on the rapidly expanding World Wide Web (WWW). Finding the desired search engines and learning how to use them, however, can be very time consuming. The integration of such search tools enables the users to access information across the world in a transparent and efficient manner. These systems are called meta-search engines. The recent emergence of visual information retrieval (VIR) search engines on the web is leading to the same efficiency problem. This paper describes and evaluates MetaSEEk, a content-based meta-search engine used for finding images on the Web based on their visual information. MetaSEEk is designed to intelligently select and interface with multiple on-line image search engines by ranking their performance for different classes of user queries. User feedback is also integrated in the ranking refinement. 
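Node 116's abstract above builds on results showing that a DFA can be programmed directly into the weights of a recurrent network. The sketch below encodes a two-state DFA (even parity of 1s) with one-hot state/input neurons and a bilinear ("second-order") update; the weight strength H and the bias are illustrative constants, and the fuzzy (FFA) generalization the paper develops is omitted.

```python
# Minimal sketch of programming a DFA into recurrent-network weights, in the
# spirit of the DFA-to-RNN encodings cited in node 116 (constants illustrative).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Example DFA over {0,1}: state 0 accepts strings with an even number of 1s.
n_states, n_symbols = 2, 2
delta = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

H = 8.0                                   # "programming" gain for the weights
W = -H * np.ones((n_states, n_states, n_symbols))
for (s, a), s_next in delta.items():
    W[s_next, s, a] = H                   # excite exactly the successor state

def run(string):
    state = np.zeros(n_states); state[0] = 1.0       # one-hot start state
    for ch in string:
        a = np.zeros(n_symbols); a[int(ch)] = 1.0    # one-hot input symbol
        # Second-order update: pre-activation is bilinear in state and input.
        state = sigmoid(np.einsum('jka,k,a->j', W, state, a) - H / 2)
    return state[0] > 0.5                 # accept if the accepting neuron is on

for s in ["", "1", "11", "1011"]:
    print(repr(s), run(s))                # True for an even number of 1s
```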
We compare MetaSEEk with a baseline version of the meta-search engine, which does not use the past performance of the different search engines in recommending target search engines for future queries.", "neighbors": [ 100, 768, 781, 931, 1203 ], "mask": "Train" }, { "node_id": 119, "label": 0, "text": "MARS: a Programmable Coordination Architecture for Mobile Agents Mobile agents represent a promising technology for the development of Internet applications. However, mobile computational entities introduce peculiar problems w.r.t. the coordination of the application components. The paper outlines the advantages of Linda-like coordination models, and shows how a programmable coordination model based on reactive tuple spaces can provide further desirable features for Internet applications based on mobile agents. Accordingly, the paper presents the design and the implementation of the MARS coordination architecture for Java-based mobile agents. MARS defines Linda-like tuple spaces, which can be programmed to react with specific actions to the accesses made by mobile agents.", "neighbors": [ 246, 308, 416, 757 ], "mask": "Train" }, { "node_id": 120, "label": 1, "text": "Projective Rotations applied to a Pan-Tilt Stereo Head A non-metric pan-tilt stereo-head consists of a weakly calibrated stereo rig mounted on a pan-tilt mechanism. It is called non-metric since neither the kinematics of the mechanism, nor camera calibration are required. The Lie group of \"projective rotations\" -- homographies of projective space corresponding to pure rotations -- is an original formalism to model the geometry of such a pan-tilt system. A Rodrigues-like formula as well as a minimal parameterization of projective rotations are introduced. Based on this, the practical part devises a numerical optimization technique for accurately estimating projective rotations from point correspondences only. This procedure recovers sufficient geometry to operate the system. The experiments validate and evaluate the proposed approach on real image data. They show the weak calibration, image prediction, and homing of a non-metric pan-tilt head. 1 Introduction One of the most useful sensors in computer vision is a pan and tilt stereo head. ...", "neighbors": [], "mask": "Train" }, { "node_id": 121, "label": 0, "text": "Cooperating Mobile Agents for Dynamic Network Routing In this paper we present a contrasting model, a dynamic, wireless, peer to peer network with routing tasks performed in a decentralized and distributed fashion by mobile software agents that cooperate to accumulate and distribute connectivity information. Our agents determine system topology by exploring the network, then store this information in the nodes on the network. Other agents use this stored information to derive multi-hop routes across the network. We study these algorithms in simulation as an example of using populations of mobile agents to manage networks", "neighbors": [ 38 ], "mask": "Validation" }, { "node_id": 122, "label": 3, "text": "Functional Join Processing . Inter-object references are one of the key concepts of object-relational and object-oriented database systems. In this work, we investigate alternative techniques to implement inter-object references and make the best use of them in query processing, i.e., in evaluating functional joins.
We will give a comprehensive overview and performance evaluation of all known techniques for simple (single-valued) as well as multi-valued functional joins. Furthermore, we will describe special order-preserving functional-join techniques that are particularly attractive for decision support queries that require ordered results. While most of the presentation of this paper is focused on object-relational and object-oriented database systems, some of the results can also be applied to plain relational databases because index nested-loop joins along key/foreign-key relationships, as they are frequently found in relational databases, are just one particular way to execute a functional join. Key words: O...", "neighbors": [], "mask": "Train" }, { "node_id": 123, "label": 5, "text": "Managing Robot Autonomy and Interactivity Using Motives and Visual Communication An autonomous mobile robot operating in everyday life conditions will have to face a huge variety of situations and to interact with other agents (living or artificial). Such a robot needs flexible and robust methods for managing its goals and for adapting its control mechanisms to face the contingencies of the world. It also needs to communicate with others in order to get useful information about the world. This paper describes an approach based on a general architecture and on internal variables called `motives' to manage the goals of an autonomous robot. These variables are also used as a basis for communication using a visual communication system. Experiments using a vision- and sonar-based Pioneer I robot, equipped with a visual signaling device, are presented. 1 Introduction Designing an autonomous mobile robot to operate in unmodified environments, i.e., environments that have not been specifically engineered for the robot, is a very challenging problem. Dynamic and unpredic...", "neighbors": [ 1137, 1231 ], "mask": "Train" }, { "node_id": 124, "label": 4, "text": "The Cub-e, a Novel Virtual 3D Display Device We have designed, and are in the process of building, a visualisation device, the Cub-e. The Cub-e consists of six TFT screens, arranged in a perspex cube, with a StrongARM processor and batteries inside. It is a multipurpose device with applications including teleconferencing, interaction with virtual worlds, and games. 1", "neighbors": [ 12 ], "mask": "Validation" }, { "node_id": 125, "label": 2, "text": "Automatic Text Representation, Classification and Labeling in European Law The huge text archives and retrieval systems of legal information have not yet achieved the representation in the well-known subject-oriented structure of legal commentaries. Content-based classification and text analysis remains a high priority research topic. In the joint KONTERM, SOM and LabelSOM projects, learning techniques of neural networks are used to achieve similar high compression rates of classification and analysis like in manual legal indexing. The produced maps of legal text corpora cluster related documents in units that are described with automatically selected descriptors. Extensive tests with text corpora in European case law have shown the feasibility of this approach. Classification and labeling proved very helpful for legal research. The Growing Hierarchical Self-Organizing Map represents very interesting generalities and specialties of legal text corpora. The segmentation into document parts improved very much the quality of labeling.
The next challenge would be a change from tf\u00d7idf vector representation to a modified vector representation taking into account thesauri or ontologies considering learned properties of legal text corpora.", "neighbors": [], "mask": "Test" }, { "node_id": 126, "label": 0, "text": "Supporting Conflict Resolution in Cooperative Design Systems Complex modern-day artifacts are designed cooperatively by groups of experts, each with their own areas of expertise. The interaction of such experts inevitably involves conflict. This paper presents an implemented computational model, based on studies of human cooperative design, for supporting the resolution of such conflicts. This model is based centrally on the insights that general conflict resolution expertise exists separately from domain-level design expertise, and that this expertise can be instantiated in the context of particular conflicts into specific advice for resolving those conflicts. Conflict resolution expertise consists of a taxonomy of design conflict classes in addition to associated general advice suitable for resolving conflicts in these classes. The abstract nature of conflict resolution expertise makes it applicable to a wide variety of design domains. This paper describes this conflict resolution model and provides examples of its operation from an implemente...", "neighbors": [ 724, 943 ], "mask": "Train" }, { "node_id": 127, "label": 5, "text": "CMUnited-97: RoboCup-97 Small-Robot World Champion Team Robotic soccer is a challenging research domain which involves multiple agents that need to collaborate in an adversarial environment to achieve specific objectives. In this paper, we describe CMUnited, the team of small robotic agents that we developed to enter the RoboCup-97 competition. We designed and built the robotic agents, devised the appropriate vision algorithm, and developed and implemented algorithms for strategic collaboration between the robots in an uncertain and dynamic environment. The robots can organize themselves in formations, hold specific roles, and pursue their goals. In game situations, they have demonstrated their collaborative behaviors on multiple occasions. We present an overview of the vision processing algorithm which successfully tracks multiple moving objects and predicts trajectories. The paper then focusses on the agent behaviors ranging from low-level individual behaviors to coordinated, strategic team behaviors. CMUnited won the RoboCup-97 small-robot competition at IJCAI-97 in Nagoya, Japan.", "neighbors": [ 155, 430, 927, 1266 ], "mask": "Train" }, { "node_id": 128, "label": 1, "text": "Clustering Large Datasets in Arbitrary Metric Spaces Clustering partitions a collection of objects into groups called clusters, such that similar objects fall into the same group. Similarity between objects is defined by a distance function satisfying the triangle inequality; this distance function along with the collection of objects describes a distance space. In a distance space, the only operation possible on data objects is the computation of distance between them. All scalable algorithms in the literature assume a special type of distance space, namely a k-dimensional vector space, which allows vector operations on objects. We present two scalable algorithms designed for clustering very large datasets in distance spaces. Our first algorithm BUBBLE is, to our knowledge, the first scalable clustering algorithm for data in a distance space.
Our second algorithm BUBBLE-FM improves upon BUBBLE by reducing the number of calls to the distance function, which may be computationally very expensive. Both algorithms make only a single scan ov...", + "neighbors": [ + 616, + 1183 + ], + "mask": "Test" + }, + { + "node_id": 129, + "label": 3, + "text": "Manipulating Interpolated Data is Easier than You Thought Data defined by interpolation is frequently found in new applications involving geographical entities, moving objects, or spatiotemporal data. These data lead to potentially infinite collections of items, (e.g., the elevation of any point in a map), whose definitions are based on the association of a collection of samples with an interpolation function. The naive manipulation of the data through direct access to both the samples and the interpolation functions leads to cumbersome or inaccurate queries. It is desirable to hide the samples and the interpolation functions from the logical level, while their manipulation is performed automatically. We propose to model such data using infinite relations (e.g., the map with elevation yields an infinite ternary relation) which can be manipulated through standard relational query languages (e.g., SQL), with no mention of the interpolated definition. The clear separation between logical and physical levels ensures the accu...", + "neighbors": [ + 27, + 147 + ], + "mask": "Train" + }, + { + "node_id": 130, + "label": 4, + "text": "Developing a Context-aware Electronic Tourist Guide: Some Issues and Experiences In this paper, we describe our experiences of developing and evaluating GUIDE, an intelligent electronic tourist guide. The GUIDE system has been built to overcome many of the limitations of the traditional information and navigation tools available to city visitors. For example, group-based tours are inherently inflexible with fixed starting times and fixed durations and (like most guidebooks) are constrained by the need to satisfy the interests of the majority rather than the specific interests of individuals. Following a period of requirements capture, involving experts in the field of tourism, we developed and installed a system for use by visitors to Lancaster. The system combines mobile computing technologies with a wireless infrastructure to present city visitors with information tailored to both their personal and environmental contexts. In this paper we present an evaluation of GUIDE, focusing on the quality of the visitors experience when using the system. Keywords Mobile c...", + "neighbors": [ + 1252 + ], + "mask": "Train" + }, + { + "node_id": 131, + "label": 3, + "text": "Highly Concurrent Shared Storage . Shared storage arrays enable thousands of storage devices to be shared and directly accessed by end hosts over switched system-area networks, promising databases and filesystems highly scalable, reliable storage. In such systems, hosts perform access tasks (read and write) and management tasks (migration and reconstruction of data on failed devices.) Each task translates into multiple phases of low-level device I/Os, so that concurrent host tasks can span multiple shared devices and access overlapping ranges potentially leading to inconsistencies for redundancy codes and for data read by end hosts. Highly scalable concurrency control and recovery protocols are required to coordinate on-line storage management and access tasks. 
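Node 128's BUBBLE/BUBBLE-FM above cluster data that may only be touched through a distance function, in a single scan. The real algorithms maintain CF-trees of sample objects; the sketch below is only a minimal single-scan "leader" clusterer that honors the same access restriction (distance calls only), shown with a toy string distance.

```python
# Distance-only, single-scan clustering sketch (cf. node 128's setting).
# Not the actual BUBBLE algorithm; a minimal "leader" clusterer illustrating
# that only dist() is ever applied to the data objects.
from typing import Callable, List, TypeVar

T = TypeVar("T")

def leader_cluster(items: List[T], dist: Callable[[T, T], float],
                   radius: float) -> List[List[T]]:
    """One pass; each item joins the first cluster whose leader is within radius."""
    leaders: List[T] = []
    clusters: List[List[T]] = []
    for x in items:                            # single scan over the data
        for leader, members in zip(leaders, clusters):
            if dist(x, leader) <= radius:      # the only allowed operation
                members.append(x)
                break
        else:                                  # no leader close enough
            leaders.append(x)
            clusters.append([x])
    return clusters

# Works in any distance space, e.g. equal-length strings under Hamming distance.
words = ["cat", "cap", "can", "dog", "dot", "zebra"]
hamming = lambda a, b: 3 if len(a) != len(b) else sum(c != d for c, d in zip(a, b))
print(leader_cluster(words, hamming, radius=1))
```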
While expressing storage-level tasks as ACID transactions ensures proper concurrency control and recovery, such an approach imposes high performance overhead, results in replication of work and does not exploit the available knowledge about storage le...", + "neighbors": [ + 48 + ], + "mask": "Validation" + }, + { + "node_id": 132, + "label": 5, + "text": "Dynamic Service Matchmaking Among Agents in Open Information Environments Introduction The amount of services and deployed software agents in the most famous offspring of the Internet, the World Wide Web, is exponentially increasing. In addition, the Internet is an open environment, where information sources, communication links and agents themselves may appear and disappear unpredictably. Thus, an effective, automated search and selection of relevant services or agents is essential for human users and agents as well. We distinguish three general agent categories in the Cyberspace, service providers, service requester, and middle agents. Service providers provide some type of service, such as finding information, or performing some particular domain specific problem solving. Requester agents need provider agents to perform some service for them. Agents that help locate others are called middle agents[2]. Matchmaking is the process of finding an appropriate provider for a requester thr", + "neighbors": [ + 34, + 675, + 765 + ], + "mask": "Train" + }, + { + "node_id": 133, + "label": 2, + "text": "A Study of Approaches to Hypertext Categorization Hypertext poses new research challenges for text classification. Hyperlinks, HTML tags, category labels distributed over linked documents, and meta data extracted from related Web sites all provide rich information for classifying hypertext documents. How to appropriately represent that information and automatically learn statistical patterns for solving hypertext classification problems is an open question. This paper seeks a principled approach to providing the answers. Specifically, we define five hypertext regularities which may (or may not) hold in a particular application domain, and whose presence (or absence) may significantly influence the optimal design of a classifier. Using three hypertext datasets and three well-known learning algorithms (Naive Bayes, Nearest Neighbor, and First Order Inductive Learner), we examine these regularities in different domains, and compare alternative ways to exploit them. Our results show that the identification of hypertext regularities in the data and the selection of appropriate representations for hypertext in particular domains are crucial, but seldom obvious, in real-world problems. We find that adding the words in the linked neighborhood to the page having those links (both inlinks and outlinks) were helpful for all our classifiers on one data set, but more harmful than helpful for two out of the three classifiers on the remaining datasets. We also observed that extracting meta data from related Web sites was extremely useful for improving classification accuracy in some of those domains. 
Finally, the relative performance of the classifiers being tested provided insights into their strengths and limitations for solving classification problems involving diverse and often noisy Web pages.", + "neighbors": [ + 142, + 216, + 280, + 379, + 892, + 1178 + ], + "mask": "Validation" + }, + { + "node_id": 134, + "label": 1, + "text": "Comparison of Learning Approaches to Appearance-based 3D Object Recognition with and without cluttered background We re-evaluate the application of Support Vector Machines (SVM) to appearance-based 3D object recognition, by comparing it to two other learning approaches: the system developed at Columbia University (\"Columbia\") and a simple image matching system using a nearest neighbor classifier (\"NNC\"). In a first set of experiments, we compare correct recognition rates of the segmented 3D object images of the COIL database. We show that the performance of the simple \"NNC\" system compares to the more elaborated \"Columbia\" and \"SVM\" systems. Only when the experimental setting is more demanding, i.e. when we reduce the number of views during the training phase, some difference in performance can be observed. In a second set of experiments, we consider the more realistic task of 3D object recognition with cluttered background. Also in this case, we obtain that the performance of the three systems are comparable. Only with the recently proposed black/white background training scheme (\"BW\") applied t...", + "neighbors": [ + 973 + ], + "mask": "Train" + }, + { + "node_id": 135, + "label": 3, + "text": "Optional Locking Integrated with Operational Transformation in Distributed Real-Time Group Editors Locking is a standard technique in traditional distributed computing and database systems to ensure data integrity by prohibiting concurrent conflicting updates on shared data objects. Operational transformation is an innovative technique invented by groupware research for consistency maintenance in real-time group editors. In this paper, we will examine and explore the complementary roles of locking and operational transformation in consistency maintenance. A novel optional locking scheme is proposed and integrated with operation transformation to maintain both generic and context-specific consistency in a distributed, interactive, and collaborative environment. The integrated optional locking and operational transformation technique is fully distributed, highly responsive, non-blocking, and capable of avoiding locking overhead in the most common case of collaborative editing. Keywords: Locking, operational transformation, consistency maintenance, group editors, groupware, distribute...", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 136, + "label": 4, + "text": "Learning and Tracking Cyclic Human Motion We present methods for learning and tracking human motion in video. We estimate a statistical model of typical activities from a large set of 3D periodic human motion data by segmenting these data automatically into \"cycles\". Then the mean and the principal components of the cycles are computed using a new algorithm that accounts for missing information and enforces smooth transitions between cycles. 
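One finding in node 133's abstract above is that adding words from a page's linked neighborhood (inlinks and outlinks) to the page's own representation sometimes helps hypertext classifiers. A minimal sketch of that feature construction follows; the toy pages, the link graph, and the 0.5 neighbor down-weighting are illustrative assumptions, not the paper's setup.

```python
# Neighborhood-augmented bag-of-words sketch (cf. node 133). All data is toy.
from collections import Counter

pages = {
    "a": "soccer robots league",
    "b": "neural networks training",
    "c": "robot soccer team agents",
}
outlinks = {"a": ["c"], "b": [], "c": ["a"]}
inlinks = {p: [q for q, outs in outlinks.items() if p in outs] for p in pages}

def features(page: str, neighbor_weight: float = 0.5) -> Counter:
    """Bag of words of the page plus down-weighted linked-neighborhood words."""
    bag = Counter(pages[page].split())
    for nb in outlinks[page] + inlinks[page]:
        for word in pages[nb].split():
            bag[word] += neighbor_weight   # neighbor terms count, but less
    return bag

print(features("a"))
```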
The learned temporal model provides a prior probability distribution over human motions that can be used in a Bayesian framework for tracking human subjects in complex monocular video sequences and recovering their 3D motion.", "neighbors": [ 245 ], "mask": "Validation" }, { "node_id": 137, "label": 1, "text": "Learning Strategy Knowledge Incrementally Modern industrial processes require advanced computer tools that should adapt to the user requirements and to the tasks being solved. Strategy learning consists of automating the acquisition of patterns of actions used while solving particular tasks. Current intelligent strategy learning systems acquire operational knowledge to improve the efficiency of a particular problem solver. However, these strategy learning tools should also provide a way of achieving low-cost solutions according to user-specific criteria. In this paper, we present a learning system, hamlet, which is integrated in a planning architecture, prodigy, and acquires control knowledge to guide prodigy to efficiently produce cost-effective plans. hamlet learns from planning episodes, by explaining why the correct decisions were made, and later refines the learned strategy knowledge to make it incrementally correct with experience.", "neighbors": [ 626, 988 ], "mask": "Train" }, { "node_id": 138, "label": 1, "text": "Automatic Discrimination Among Languages Based on Prosody Alone The development of methods for the automatic identification of languages is motivated both by speech-based applications intended for use in a multi-lingual environment, and by theoretical questions of cross-linguistic variation and similarity. We evaluate the potential utility of two prosodic variables, F0 and amplitude envelope modulation, in a pairwise language discrimination task. Discrimination is done using a novel neural network which can successfully attend to temporal information at a range of timescales. Both variables are found to be useful in discriminating among languages, and confusion patterns, in general, reflect traditional intonational and rhythmic language classes. The methods employed allow empirical determination of prosodic similarity across languages.", "neighbors": [], "mask": "Test" }, { "node_id": 139, "label": 1, "text": "Class Representation and Image Retrieval with Non-Metric Distances One of the key problems in appearance-based vision is understanding how to use a set of labeled images to classify new images. Classification systems that can model human performance, or that use robust image matching methods, often make use of similarity judgments that are non-metric; but when the triangle inequality is not obeyed, most existing pattern recognition techniques are not applicable. We note that exemplar-based (or nearest-neighbor) methods can be applied naturally when using a wide class of non-metric similarity functions. The key issue, however, is to find methods for choosing good representatives of a class that accurately characterize it. We show that existing condensing techniques for finding class representatives are ill-suited to deal with non-metric dataspaces.
We then focus on developing techniques for solving this problem, emphasizing two points: First, we show that the distance between two images is not a good measure of how well one image can represent another in non-metric spaces. Instead, we use the vector correlation between the distances from each image to other previously seen images. Second, we show that in non-metric spaces, boundary points are less significant for capturing the structure of a class", "neighbors": [ 1040 ], "mask": "Train" }, { "node_id": 140, "label": 0, "text": "Organisational Abstractions for the Analysis and Design of Multi-Agent Systems Abstract. The architecture of a multi-agent system can naturally be viewed as a computational organisation. For this reason, we believe organisational abstractions should play a central role in the analysis and design of such systems. To this end, the concepts of agent roles and role models are increasingly being used to specify and design multi-agent systems. However, this is not the full picture. In this paper we introduce three additional organisational concepts \u2014 organisational rules, organisational structures, and organisational patterns \u2014 that we believe are necessary for the complete specification of computational organisations. We view the introduction of these concepts as a step towards a comprehensive methodology for agent-oriented systems. 1", "neighbors": [ 451, 573, 957 ], "mask": "Train" }, { "node_id": 141, "label": 0, "text": "Dynamic Reconfiguration in Collaborative Problem Solving In this article we will describe our research efforts in coping with a trade-off that can often be found in the control and optimization of today's business processes. Though centralized control may achieve near-to-optimum results in optimizing the system behavior, there are usually social, technical and security restrictions on applying centralized control. Distributed control on the other hand may cope with these restrictions but also entails sub-optimality and communicational overhead. Our concept of composable agents tries to allow a dynamic and fluent transition between globalization and localization in business process control by adapting to the current real-world system structure. We are currently evaluating this concept in the framework of patient flow control at Charit\u00e9 Berlin. Introduction Research in Distributed Artificial Intelligence (DAI, (Bond & Gasser 1988)) has been traditionally divided into Distributed Problem Solving (DPS) and Multi Agent Systems (MAS). However, r...", "neighbors": [ 25, 964 ], "mask": "Train" }, { "node_id": 142, "label": 2, "text": "Combining Multiple Learning Strategies for Effective Cross Validation Parameter tuning through cross-validation becomes very difficult when the validation set contains no or only a few examples of the classes in the evaluation set. We address this open challenge by using a combination of classifiers with different performance characteristics to effectively reduce the performance variance on average of the overall system across all classes, including those not seen before. This approach allows us to tune the combination system on available but less-representative validation data and obtain smaller performance degradation of this system on the evaluation data than using a single-method classifier alone.
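Node 139 above proposes picking class representatives in non-metric spaces by comparing the correlation of images' distance vectors (distances to previously seen images) rather than raw pairwise distances. A small sketch of that selection rule under a deliberately non-metric toy distance; the data and the squared-difference distance are illustrative assumptions.

```python
# Representative selection via distance-vector correlation (cf. node 139).
# The "distance" below is deliberately non-metric (squared difference).
import numpy as np

rng = np.random.default_rng(2)
class_items = rng.normal(0.0, 1.0, size=(6, 3))    # one class, 6 toy items
reference = rng.normal(0.0, 1.0, size=(10, 3))     # previously seen items

dist = lambda a, b: float(((a - b) ** 2).sum())    # violates triangle inequality

# Each row: distances from one class item to every reference item.
D = np.array([[dist(x, r) for r in reference] for x in class_items])

def avg_correlation(i: int) -> float:
    """Mean correlation of item i's distance vector with its classmates'."""
    others = [j for j in range(len(D)) if j != i]
    return float(np.mean([np.corrcoef(D[i], D[j])[0, 1] for j in others]))

best = max(range(len(D)), key=avg_correlation)
print("representative index:", best, "avg corr:", round(avg_correlation(best), 3))
```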
We tested this approach by applying k-Nearest Neighbor, Rocchio and Language Modeling classifiers and their combination to the event tracking problem in the Topic Detection and Tracking (TDT) domain, where new classes (events) are created constantly over time, and representative validation sets for new classes are often difficult to ob...", "neighbors": [ 133, 341, 1178 ], "mask": "Train" }, { "node_id": 143, "label": 2, "text": "Web Metasearch as Belief Aggregation Web metasearch requires a mechanism for combining rank-ordered lists of ratings returned by multiple search engines in response to a given user query. We view this as being analogous to the need for combining degrees of belief in probabilistic and uncertain reasoning in artificial intelligence. This paper describes a practical method for performing web metasearch based on a novel transformation-based theory of belief aggregation. The consensus ratings produced by this method take into account the item ratings/rankings output by individual search engines as well as the user's preferences. Copyright \u00a9 2000, American Association for Artificial Intelligence (www.aaai.org). All rights reserved. Introduction Web search engines (WSE) use tools ranging from simple text-based search to more sophisticated methods that attempt to understand the intended meanings of both queries and data items. There has been much work in this area in recent years. The link structure of the web has...", "neighbors": [ 224, 933, 1007 ], "mask": "Train" }, { "node_id": 144, "label": 1, "text": "Analysis of Approximate Nearest Neighbor Searching with Clustered Point Sets In this paper we study the performance of two other splitting methods, and compare them against the kd-tree splitting method. The first, called sliding-midpoint, is a splitting method that was introduced by Mount and Arya in the ANN library for approximate nearest neighbor searching [30]. This method was introduced into the library in order to better handle highly clustered data sets. We know of no analysis (empirical or theoretical) of this method. This method was designed as a simple technique for addressing one of the most serious flaws in the standard kd-tree splitting method. The flaw is that when the data points are highly clustered in low dimensional subspaces, then the standard kd-tree splitting method may produce highly elongated cells, and these can lead to slow query times. This splitting method starts with a simple midpoint split of the longest side of the cell, but if this split results in either subcell containing no data points, it translates (or \"slides\") the splitting plane in the direction of the points until hitting the first data point. In Section 3.1 we describe this splitting method and analyze some of its properties. The second splitting method, called minimum-ambiguity, is a query-based technique. The tree is given not only the data points, but also a collection of sample query points, called the training points. The algorithm applies a greedy heuristic to build the tree in an attempt to minimize the expected query time on the training points. We model query processing as the problem of eliminating data points from consideration as the possible candidates for the nearest neighbor.
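Node 142 above combines classifiers with different performance characteristics so the combined system degrades less on classes unseen in validation. The sketch below averages normalized scores of a Rocchio-style centroid scorer and a nearest-neighbor scorer; the toy data, the softmax normalization, and the equal weight w=0.5 are illustrative assumptions (the paper tunes the combination on validation data).

```python
# Score-combination sketch in the spirit of node 142 (toy, two classifiers).
import numpy as np

rng = np.random.default_rng(3)
X0, X1 = rng.normal(-1, 1, (30, 2)), rng.normal(+1, 1, (30, 2))
Xtr, ytr = np.vstack([X0, X1]), np.array([0] * 30 + [1] * 30)

def rocchio_scores(x):
    """Higher score = closer to that class centroid."""
    return np.array([-np.linalg.norm(x - Xtr[ytr == c].mean(axis=0))
                     for c in (0, 1)])

def knn_scores(x):
    """Higher score = nearer nearest neighbor of that class."""
    return np.array([-min(np.linalg.norm(x - t) for t in Xtr[ytr == c])
                     for c in (0, 1)])

def combined(x, w=0.5):
    # Average of per-classifier softmax-normalized scores.
    soft = lambda s: np.exp(s) / np.exp(s).sum()
    return w * soft(rocchio_scores(x)) + (1 - w) * soft(knn_scores(x))

x = np.array([0.3, 0.2])
print("combined posterior-like scores:", combined(x))
```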
Given a collection of query points, we can model any stage of the nearest neighbor algorithm as a bipartite graph, called the candidate graph, whose vertices correspond t...", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 145, + "label": 0, + "text": "Agents Supporting Information Integration: The Miks Framework During past years we have developed the MOMIS (Mediator envirOnment for Multiple Information Sources) system for the integration of data from structured and semi-structured data sources. In this paper we propose a new system, MIKS (Mediator agent for Integration of Knowledge Sources), which enriches the MOMIS architecture exploiting the intelligent and mobile agent features. 1. Motivation The web explosion, both at Internet and intranet level, has transformed the electronic information system from single isolated node to an entry point into a worldwide network of information exchange and business transactions. One of the main challenges for the designers of the e-commerce infrastructures is the information sharing, retrieving data located in different sources thus obtaining an integrated view to overcome any contradiction or redundancy. During past years we have developed the MOMIS (Mediator envirOnment for Multiple Information Sources) system for the integration of data from struc...", + "neighbors": [ + 766 + ], + "mask": "Train" + }, + { + "node_id": 146, + "label": 3, + "text": "<bigwig> -- A language for developing interactive Web services <bigwig> is a high-level programming language and a compiler for developing interactive Web services. The overall goal of the language design is to remove many of the obstacles that face current developers of Web services in order to lower cost while increasing functionality and reliability. The compiler translates programs into a conglomerate of lower-level standard technologies such as CGI-scripts, HTML, JavaScript, and HTTP Authentication. This paper describes the major facets of the language design and the techniques used in their implementation, and compares the design with alternative Web service technologies.", + "neighbors": [ + 203 + ], + "mask": "Train" + }, + { + "node_id": 147, + "label": 3, + "text": "A Data Model and Data Structures for Moving Objects Databases We consider spatio-temporal databases supporting spatial objects with continuously changing position and extent, termed moving objects databases. We formally define a data model for such databases that includes complex evolving spatial structures such as line networks or multi-component regions with holes. The data model is given as a collection of data types and operations which can be plugged as attribute types into any DBMS data model (e.g. relational, or object-oriented) to obtain a complete model and query language. A particular novel concept is the sliced representation which represents a temporal development as a set of units, where unit types for spatial and other data types represent certain \"simple\" functions of time. We also show how the model can be mapped into concrete physical data structures in a DBMS environment. 
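Node 144 above defines the sliding-midpoint rule: split the cell's longest side at its midpoint, and if either subcell would be empty, slide the cutting plane toward the points until it hits the first data point. A sketch of a single split under that rule; the cell bounds and points are toy inputs, and the surrounding tree construction is omitted.

```python
# One sliding-midpoint split, as described in node 144's abstract.
import numpy as np

def sliding_midpoint_split(points, lo, hi):
    """Return (dim, cut, left_mask) for one node of the tree."""
    dim = int(np.argmax(hi - lo))             # longest side of the cell
    cut = 0.5 * (lo[dim] + hi[dim])           # plain midpoint first
    coords = points[:, dim]
    left = coords < cut
    if not left.any():                        # left empty: slide right until
        cut = coords.min()                    # the first point, which goes left
        left = coords <= cut
    elif left.all():                          # right empty: slide left; the
        cut = coords.max()                    # boundary point goes right
        left = coords < cut
    return dim, cut, left

pts = np.array([[0.9, 0.1], [0.95, 0.4], [0.8, 0.9]])   # clustered to the right
dim, cut, left = sliding_midpoint_split(pts, np.zeros(2), np.ones(2))
print("split dim", dim, "at", cut, "left cell:", pts[left])  # both sides non-empty
```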
1 Introduction A wide and increasing range of database applications has to deal with spatial objects whose position and/or extent changes over time...", + "neighbors": [ + 27, + 129, + 481, + 1158 + ], + "mask": "Train" + }, + { + "node_id": 148, + "label": 4, + "text": "Time Series Segmentation for Context Recognition in Mobile Devices Recognizing the context of use is important in making mobile devices as simple to use as possible. Finding out what the user's situation is can help the device and underlying service in providing an adaptive and personalized user interface. The device can infer parts of the context of the user from sensor data: the mobile device can include sensors for acceleration, noise level, luminosity, humidity, etc. In this paper we consider context recognition by unsupervised segmentation of time series produced by sensors. Dynamic programming can be used to find segments that minimize the intra-segment variances. While this method produces optimal solutions, it is too slow for long sequences of data. We present and analyze randomized variations of the algorithm. One of them, Global Iterative Replacement or GIR, gives approximately optimal results in a fraction of the time required by dynamic programming. We demonstrate the use of time series segmentation in context recognition for mobile phone applications. 1", + "neighbors": [], + "mask": "Validation" + }, + { + "node_id": 149, + "label": 2, + "text": "Stable Algorithms for Link Analysis The Kleinberg HITS and the Google PageRank algorithms are eigenvector methods for identifying \"authoritative\" or \"influential\" articles, given hyperlink or citation information. That such algorithms should give reliable or consistent answers is surely a desideratum, and in [10], we analyzed when they can be expected to give stable rankings under small perturbations to the linkage patterns. In this paper, we extend the analysis and show how it gives insight into ways of designing stable link analysis methods. This in turn motivates two new algorithms, whose performance we study empirically using citation data and web hyperlink data. 1.", + "neighbors": [ + 774, + 1189 + ], + "mask": "Train" + }, + { + "node_id": 150, + "label": 2, + "text": "Memory Hierarchies as a Metaphor for Academic Library Collections Research libraries and their collections are a cornerstone of the academic tradition, representing 2000 years of development of the Western Civilization; they make written history widely accessible at low cost. Computer memories are a range of physical devices used for storing digital information that have undergone much formal study over 40 years and are well understood. This paper draws parallels between the organisation of research collections and computer memories, in particular examining their hierarchical structure, and examines the implication for digital libraries.", + "neighbors": [], + "mask": "Test" + }, + { + "node_id": 151, + "label": 3, + "text": "Developments in Spatio-Temporal Query Languages In contrast to the field view of spatial data that basically views spatial data as a mapping from points into some features, the object view clusters points by features and their values into spatial objects of type point, line, or region. When embedding these objects into a data model, such as the relational model, an additional clustering according to conceptually identified objects takes place. 
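Node 148 above segments sensor time series into k pieces minimizing the intra-segment variances, noting that dynamic programming finds the optimal segmentation but is slow (their randomized GIR method approximates it). A direct sketch of the exact O(k n^2) dynamic program on toy data; the randomized variants are omitted.

```python
# Exact DP time-series segmentation (cf. node 148): minimize summed
# within-segment squared error around each segment's mean.
import numpy as np

def segment(x, k):
    """Return the k (start, end) index pairs minimizing total within-segment SSE."""
    n = len(x)
    p1 = np.concatenate([[0.0], np.cumsum(x)])
    p2 = np.concatenate([[0.0], np.cumsum(x ** 2)])
    def sse(i, j):                       # squared error of x[i:j] around its mean
        s, m = p1[j] - p1[i], j - i
        return p2[j] - p2[i] - s * s / m
    dp = np.full((k + 1, n + 1), np.inf)
    dp[0, 0] = 0.0
    back = np.zeros((k + 1, n + 1), dtype=int)
    for seg in range(1, k + 1):
        for j in range(seg, n + 1):
            for i in range(seg - 1, j):
                cand = dp[seg - 1, i] + sse(i, j)
                if cand < dp[seg, j]:
                    dp[seg, j], back[seg, j] = cand, i
    segs, j = [], n
    for seg in range(k, 0, -1):          # walk backpointers to recover the cuts
        i = back[seg, j]
        segs.append((int(i), int(j)))
        j = i
    return segs[::-1]

rng = np.random.default_rng(5)
x = np.concatenate([np.full(20, 0.0), np.full(20, 5.0), np.full(20, 2.0)])
print(segment(x + 0.1 * rng.normal(size=60), k=3))   # expect cuts near 20 and 40
```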
For example, we could define a relation City(name: string,center: point,area: region) that combines different features for cities in one relation. An important aspect of this kind of modeling is that clustering happens on two different levels: (i) points are grouped into spatial objects like regions and (ii) different attributes/features are grouped into a perceived object. When talking about data modeling there is no reason why this grouping should be limited to two levels. For example, we can consider storing regions of different population densities for each city in an attribute density: num \u2192 region. Although then the relation is not in first normal form anymore, we can \u201crecover\u201d the first normal form by encapsulating the function num \u2192 region in an abstract data type. The important aspect is that all the required operations on such a type as well as on regions and other complex types can be defined to a large degree independently from the data model. 1 The most important point about the preceding discussion is the way in which complex types can be easily", "neighbors": [ 487, 1012 ], "mask": "Train" }, { "node_id": 152, "label": 1, "text": "A Methodology to Improve Ad Hoc Data-Driven Linguistic Rule Learning Methods by Inducing Cooperation Among Rules Within the Linguistic Modeling field \u2014one of the most important applications of Fuzzy Rule-Based Systems\u2014, a family of efficient and simple methods guided by covering criteria of the data in the example set, called \"ad hoc data-driven methods\", has been proposed in the literature in the last few years. Their high performance, in addition to their quickness and easy understanding, make them very suitable for learning tasks. In this paper we are going to perform a double task analyzing these kinds of learning methods and introducing a new methodology to significantly improve their accuracy keeping their descriptive power unalterable. On the one hand, a taxonomy of ad hoc data-driven learning methods based on the way in which the available data is used to guide the learning will be made. In this sense, we will distinguish between two approaches: the example-based and the fuzzy-grid-based one. Whilst in the former each rule is obtained from a specific example, in the latter the e...", "neighbors": [ 834 ], "mask": "Train" }, { "node_id": 153, "label": 3, "text": "OMS/Java: Model Extensibility of OODBMS for Advanced Application Domains . We show how model extensibility of object-oriented data management systems can be achieved through the combination of a high-level core object data model and an architecture designed with model extensibility in mind. The resulting system, OMS/Java, is both a general data management system and a framework for the development of advanced database application systems. All aspects of the core model \u2014 constructs, query language and constraints \u2014 can easily be generalised to support, for example, the management of temporal, spatial and versioned data. Specifically, we show how the framework was used to extend the core system to a temporal object-oriented database management system. 1 Introduction Extensibility has often been considered a purely architectural issue in database management systems (DBMS).
In the 1980s, there was an increase in the various forms of DBMS that appeared --- many of which were tailored to specific application domains such as Geographical Information Systems or ...", "neighbors": [ 199 ], "mask": "Train" }, { "node_id": 154, "label": 3, "text": "A Holistic Process Performance Analysis through a Performance Data Warehouse This paper describes how a performance data warehouse can be used to facilitate business process improvement that is based on holistic performance measurement. The feasibility study shows how management and analysis of performance data can be facilitated by a data warehouse approach. It is argued that many of the shortcomings of traditional measurement systems can be overcome with this performance data warehouse approach.", "neighbors": [], "mask": "Train" }, { "node_id": 155, "label": 5, "text": "A Layered Approach to Learning Client Behaviors in the RoboCup Soccer Server In the past few years, Multiagent Systems (MAS) has emerged as an active subfield of Artificial Intelligence (AI). Because of the inherent complexity of MAS, there is much interest in using Machine Learning (ML) techniques to help build multiagent systems. Robotic soccer is a particularly good domain for studying MAS and Multiagent Learning. Our approach to using ML as a tool for building Soccer Server clients involves layering increasingly complex learned behaviors. In this article, we describe two levels of learned behaviors. First, the clients learn a low-level individual skill that allows them to control the ball effectively. Then, using this learned skill, they learn a higher-level skill that involves multiple players. For both skills, we describe the learning method in detail and report on our extensive empirical testing. We also verify empirically that the learned skills are applicable to game situations. 1 Introduction In the past few years, Multiagent Systems (MAS) has emerge...", "neighbors": [ 25, 127, 394, 430, 927, 1266 ], "mask": "Validation" }, { "node_id": 156, "label": 2, "text": "Iterative Information Retrieval Using Fast Clustering and Usage-Specific Genres This paper describes how collection specific empirically defined stylistics based genre prediction can be brought together with rapid topical clustering to build an interactive information retrieval interface with multi-dimensional presentation of search results. The prototype presented addresses two specific problems of information retrieval: how to enrich the information seeking dialog by encouraging and supporting iterative refinement of queries, and how to enrich the document representation past the shallow semantics allowed by term frequencies. Searching For More Than Words Today's tools for searching information in a document database are based on term occurrence in texts. The searcher enters a number of terms and a number of documents where those terms or closely related terms appear comparatively frequently are retrieved and presented by the system in list form. This method works well up to a point. It is intuitively understandable, and for competent users and well e...", "neighbors": [ 298, 903 ], "mask": "Train" }, { "node_id": 157, "label": 1, "text": "Modified Gath-Geva Fuzzy Clustering for Identification of Takagi-Sugeno Fuzzy Models The construction of interpretable Takagi--Sugeno (TS) fuzzy models by means of clustering is addressed.
First, it is shown how the antecedent fuzzy sets and the corresponding consequent parameters of the TS model can be derived from clusters obtained by the Gath--Geva algorithm. To preserve the partitioning of the antecedent space, linearly transformed input variables can be used in the model. This may, however, complicate the interpretation of the rules. To form an easily interpretable model that does not use the transformed input variables, a new clustering algorithm is proposed, based on the Expectation Maximization (EM) identification of Gaussian mixture models. This new technique is applied to two well-known benchmark problems: the MPG (miles per gallon) prediction and a simulated second-order nonlinear process. The obtained results are compared with results from the literature.", "neighbors": [ 828 ], "mask": "Train" }, { "node_id": 158, "label": 3, "text": "Executing Query Packs in ILP Inductive logic programming systems usually send large numbers of queries to a database. The lattice structure from which these queries are typically selected causes many of these queries to be highly similar. As a consequence, independent execution of all queries may involve a lot of redundant computation. We propose a mechanism for executing a hierarchically structured set of queries (a \"query pack\") through which a lot of redundancy in the computation is removed. We have incorporated our query pack execution mechanism in the ILP systems Tilde and Warmr by implementing a new Prolog engine ilProlog which provides support for pack execution at a lower level. Experimental results demonstrate significant efficiency gains. Our query pack execution mechanism is very general in nature and could be incorporated in most other ILP systems, with similar efficiency improvements to be expected.", "neighbors": [ 639, 1261 ], "mask": "Train" }, { "node_id": 159, "label": 0, "text": "Resolution-Based Proof for Multi-Modal Temporal Logics of Knowledge Temporal logics of knowledge are useful in order to specify complex systems in which agents are both dynamic and have information about their surroundings. We present a resolution method for propositional temporal logic combined with multi-modal S5 and illustrate its use on examples. This paper corrects a previous proposal for resolution in multi-modal temporal logics of knowledge. Keywords: temporal and modal logics, non-classical resolution, theorem-proving 1 Introduction Combinations of logics have been useful for specifying and reasoning about complex situations, for example multi-agent systems [21, 24], accident analysis [15], and security protocols [18]. For example, logics to formalise multi-agent systems often incorporate a dynamic component representing change over time; an informational component to capture the agent's knowledge or beliefs; and a motivational component for notions such as goals, wishes, desires or intentions.
Often temporal or dynamic logic is used for...", "neighbors": [ 82, 327, 953 ], "mask": "Validation" }, { "node_id": 160, "label": 1, "text": "Advances in Analogy-Based Learning: False Friends and Exceptional Items in Pronunciation By Paradigm-Driven Analogy When looked at from a multilingual perspective, grapheme-to-phoneme conversion is a challenging task, fraught with most of the classical NLP \"vexed questions\": the bottle-neck problem of data acquisition, pervasiveness of exceptions, difficulty of stating the range and order of rule application, proper treatment of context-sensitive phenomena and long-distance dependencies, and so on. The hand-crafting of transcription rules by a human expert is onerous and time-consuming, and yet, for some European languages, still stops short of a level of correctness and accuracy acceptable for practical applications. We illustrate here a self-learning multilingual system for analogy-based pronunciation which was tested on Italian, English and French, and whose performances are assessed against the output of both statistically based and rule-based transcribers. The general point is made that analogy-based self-learning techniques are no longer just psycholinguistically-plausible models, but competitive tools, combining the advantages of using language-independent, self-learning, tractable algorithms, with the welcome bonus of being more reliable for applications than traditional text-to-speech systems.", "neighbors": [], "mask": "Train" }, { "node_id": 161, "label": 2, "text": "Ontobroker: The Very High Idea The World Wide Web (WWW) is currently one of the most important electronic information sources. However, its query interfaces and the provided reasoning services are rather limited. Ontobroker consists of a number of languages and tools that enhance query access and inference service of the WWW. The technique is based on the use of ontologies. Ontologies are applied to annotate web documents and to provide query access and inference service that deal with the semantics of the presented information. In consequence, intelligent brokering services for web documents can be achieved without requiring changes to the semiformal nature of web documents. Introduction The World Wide Web (WWW) contains huge amounts of knowledge about almost all subjects you can think of. HTML documents enriched by multi-media applications provide knowledge in different representations (i.e., text, graphics, animated pictures, video, sound, virtual reality, etc.). Hypertext links between web documents represent r...", "neighbors": [ 452, 1190, 1233 ], "mask": "Validation" }, { "node_id": 162, "label": 3, "text": "A Theorem Prover-Based Analysis Tool for Object-Oriented Databases We present a theorem-prover based analysis tool for object-oriented database systems with integrity constraints. Object-oriented database specifications are mapped to higher-order logic (HOL). This allows us to reason about the semantics of database operations using a mechanical theorem prover such as Isabelle or PVS. The tool can be used to verify various semantic requirements of the schema (such as transaction safety, compensation, and commutativity) to support the advanced transaction models used in workflow and cooperative work. We give an example of method safety analysis for the generic structure editing operations of a cooperative authoring system.
1 Introduction Object-oriented specification methodologies and object-oriented programming have become increasingly important in the past ten years. Not surprisingly, this has recently led to an interest in object-oriented program verification in the theorem prover community, mainly using higher-order logic (HOL). Several dif...", "neighbors": [], "mask": "Train" }, { "node_id": 163, "label": 5, "text": "Embodied Evolution: Embodying an Evolutionary Algorithm in a Population of Robots We introduce Embodied Evolution (EE) as a methodology for the automatic design of robotic controllers. EE is an evolutionary robotics (ER) technique that avoids the pitfalls of the simulate-and-transfer method, allows the speed-up of evaluation time by utilizing parallelism, and is particularly suited to future work on multi-agent behaviors. In EE, an evolutionary algorithm is distributed amongst and embodied within a population of physical robots that reproduce with one another while situated in the task environment. We have built a population of eight robots and successfully implemented our first experiments. The controllers evolved by EE compare favorably to hand-designed solutions for a simple task. We detail our methodology, report our initial results, and discuss the application of EE to more advanced and distributed robotics tasks. 1. Introduction Our work is inspired by the following vision. A large number of robots freely interact with each other in a shared environment, atte...", "neighbors": [ 741 ], "mask": "Validation" }, { "node_id": 164, "label": 3, "text": "CHIME: Customizable Hyperlink Insertion and Maintenance Engine for Software Engineering Environments Source code browsing is an important part of program comprehension. Browsers expose semantic and syntactic relationships (such as between object references and definitions) in GUI-accessible forms. These relationships are derived using tools which perform static analysis on the original software documents. Implementing such browsers is tricky. Program comprehension strategies vary, and it is necessary to provide the right browsing support. Analysis tools to derive the relevant cross-reference relationships are often difficult to build. Tools to browse distributed documents require extensive coding for the GUI, as well as for data communications. Therefore, there are powerful motivations for using existing static analysis tools in conjunction with WWW technology to implement browsers for distributed software projects. The chime framework provides a flexible, customizable platform for inserting HTML links into software documents using information generated by existing software analysis tools. Using the chime specification language, and a simple, retargetable database interface, it is possible to quickly incorporate a range of different link insertion tools for software documents into an existing, legacy software development environment. This enables tool builders to offer customized browsing support with a well-known GUI. This paper describes the chime architecture, and describes our experience with several re-targeting efforts of this system.", "neighbors": [], "mask": "Train" }, { "node_id": 165, "label": 5, "text": "Word Sense Disambiguation based on Semantic Density This paper presents a Word Sense Disambiguation method based on the idea of semantic density between words. The disambiguation is done in the context of WordNet.
The Internet is used as a raw corpus to provide statistical information for word associations. A metric is introduced and used to measure the semantic density and to rank all possible combinations of the senses of two words. This method provides a precision of 58% in indicating the correct sense for both words at the same time. The precision increases as we consider more choices: 70% for top two ranked and 73% for top three ranked. 1 Introduction Word Sense Disambiguation (WSD) is an open problem in Natural Language Processing. Its solution impacts other tasks such as discourse, reference resolution, coherence, inference and others. WSD methods can be broadly classified into three types: 1. WSD that make use of the information provided by machine readable dictionaries (Cowie et al. 1992), (Miller et al. 1994), (Agirre and Rig...", "neighbors": [], "mask": "Validation" }, { "node_id": 166, "label": 1, "text": "Learning-Based Vision and Its Application to Autonomous Indoor Navigation By Shaoyun Chen Adaptation is critical to autonomous navigation of mobile robots. Many adaptive mechanisms have been implemented, ranging from simple color thresholding to complicated learning with artificial neural networks (ANN). The major focus of this thesis lies in machine learning for vision-based navigation. Two well known vision-based navigation systems are ALVINN and ROBIN developed by Carnegie-Mellon University and University of Maryland, respectively. ALVINN uses a two-layer feedforward neural network while ROBIN relies on a radial basis function network (RBFN). Although current ANN-based methods have achieved great success in vision-based navigation, they have two major disadvantages: (1) Local minimum problem: The training of either multilayer perceptron or radial basis function network can get stuck at poor local minima. (2) The flexibility problem: After the system has been trained in certain r...", "neighbors": [ 244, 463 ], "mask": "Train" }, { "node_id": 167, "label": 2, "text": "View-independent Recognition of Hand Postures Since the human hand is highly articulated and deformable, hand posture recognition is a challenging example in the research of view-independent object recognition. Due to the difficulties of the model-based approach, the appearance-based learning approach is promising to handle large variation in visual inputs. However, the generalization of many proposed supervised learning methods to this problem often suffers from the insufficiency of labeled training data. This paper describes an approach to alleviate this difficulty by adding a large unlabeled training set. Combining supervised and unsupervised learning paradigms, a novel and powerful learning approach, the Discriminant-EM (D-EM) algorithm, is proposed in this paper to handle the case of a small labeled training set. Experiments show that D-EM outperforms many other learning methods. Based on this approach, we implement a gesture interface to recognize a set o...", "neighbors": [ 234, 609, 1153 ], "mask": "Validation" }, { "node_id": 168, "label": 2, "text": "Exploiting Structure for Intelligent Web Search Together with the rapidly growing amount of online data we register an immense need for intelligent search engines that access a restricted amount of data as found in intranets or other limited domains.
Such search engines must go beyond simple keyword indexing/matching, but they also have to be easily adaptable to new domains without huge costs. This paper presents a mechanism that addresses both of these points: first of all, the internal document structure is used to extract concepts which impose a directory-like structure on the documents similar to those found in classified directories. Furthermore, this is done in an efficient way which is largely language independent and does not make assumptions about the document structure.", "neighbors": [ 1000, 1017, 1032 ], "mask": "Train" }, { "node_id": 169, "label": 2, "text": "Accurately and Reliably Extracting Data from the Web: A Machine Learning Approach A critical problem in developing information agents for the Web is accessing data that is formatted for human use. We have developed a set of tools for extracting data from web sites and transforming it into a structured data format, such as XML. The resulting data can then be used to build new applications without having to deal with unstructured data. The advantages of our wrapping technology over previous work are the ability to learn highly accurate extraction rules, to verify the wrapper to ensure that the correct data continues to be extracted, and to automatically adapt to changes in the sites from which the data is being extracted. 1 Introduction There is a tremendous amount of information available on the Web, but much of this information is not in a form that can be easily used by other applications. There are hopes that XML will solve this problem, but XML is not yet in widespread use and even in the best case it will only address the problem within application domains...", "neighbors": [ 243, 859, 1233 ], "mask": "Validation" }, { "node_id": 170, "label": 4, "text": "The CLEF 2003 Interactive Track The CLEF 2003 Interactive Track (iCLEF) was the third year of a shared experiment design to compare strategies for cross-language search assistance. Two kinds of experiments were performed: a) experiments in Cross-Language Document Selection, where the user task is to scan a ranked list of documents written in a foreign language, selecting those which seem relevant to a given query. The aim here is to compare different translation strategies for an \"indicative\" purpose; and b) Full Cross-Language Search experiments, where the user task is to maximize the number of relevant documents that can be found in a foreign-language collection with the help of an end-to-end cross-language search system. Participating teams could choose to focus on any aspects of the search task (e.g., query formulation, query translation and/or relevance feedback). This paper describes the shared experiment design and briefly summarizes the experiments run by the five teams that participated.", "neighbors": [], "mask": "Train" }, { "node_id": 171, "label": 3, "text": "Rotational Polygon Containment and Minimum Enclosure An algorithm and a robust floating point implementation is given for rotational polygon containment: given polygons P1, P2, P3, ..., Pk and a container polygon C, find rotations and translations for the k polygons that place them into the container without overlapping. A version of the algorithm and implementation also solves rotational minimum enclosure: given a class C of container polygons, find a container C in C of minimum area for which containment has a solution.
The minimum enclosure is approximate: it bounds the minimum area between (1-epsilon)A and A. Experiments indicate that finding the minimum enclosure is practical for k = 2, 3 but not larger unless optimality is sacrificed or angle ranges are limited (although these solutions can still be useful). Important applications for these algorithms to industrial problems are discussed. The paper also gives practical algorithms and numerical techniques for robustly calculating polygon set intersection, Minkowski sum, and range in...", "neighbors": [], "mask": "Validation" }, { "node_id": 172, "label": 1, "text": "A Quantification Of Distance-Bias Between Evaluation Metrics In Classification This paper provides a characterization of bias for evaluation metrics in classification (e.g., Information Gain, Gini, \u03c7\u00b2, etc.). Our characterization provides a uniform representation for all traditional evaluation metrics. Such a representation leads naturally to a measure for the distance between the bias of two evaluation metrics. We give a practical value to our measure by observing if the distance between the bias of two evaluation metrics correlates with differences in predictive accuracy when we compare two versions of the same learning algorithm that differ in the evaluation metric only. Experiments on real-world domains show how the expectations on accuracy differences generated by the distance-bias measure correlate with actual differences when the learning algorithm is simple (e.g., search for the best single-feature or the best single-rule). The correlation, however, weakens with more complex algorithms (e.g., learning decision trees). Our results sh...", "neighbors": [ 1140 ], "mask": "Validation" }, { "node_id": 173, "label": 1, "text": "Reasoning within Fuzzy Description Logics Description Logics (DLs) are suitable, well-known, logics for managing structured knowledge. They allow reasoning about individuals and well defined concepts, i.e. sets of individuals with common properties. The experience in using DLs in applications has shown that in many cases we would like to extend their capabilities. In particular, their use in the context of Multimedia Information Retrieval (MIR) leads to the conviction that such DLs should allow the treatment of the inherent imprecision in multimedia object content representation and retrieval. In this paper we will present a fuzzy extension of ALC, combining Zadeh's fuzzy logic with a classical DL. In particular, concepts become fuzzy and, thus, reasoning about imprecise concepts is supported. We will define its syntax, its semantics, describe its properties and present a constraint propagation calculus for reasoning in it.", "neighbors": [ 311, 644 ], "mask": "Validation" }, { "node_id": 174, "label": 3, "text": "Scalable Processing of Read-Only Transactions in Broadcast Push Recently, push-based delivery has attracted considerable attention as a means of disseminating information to large client populations in both wired and wireless settings. In this paper, we address the problem of ensuring the consistency and currency of client read-only transactions in the presence of updates. To this end, additional control information is broadcast. A suite of methods is proposed that vary in the complexity and volume of the control information transmitted and subsequently differ in response times, degrees of concurrency, and space and processing overheads.
The proposed methods are combined with caching to improve query latency. The relative advantages of each method are demonstrated through both simulation results and qualitative arguments. Read-only transactions are processed locally at the client without contacting the server and thus the proposed approaches are scalable, i.e., their performance is independent of the number of clients. 1. Introduction In traditio...", "neighbors": [ 354, 373, 470 ], "mask": "Train" }, { "node_id": 175, "label": 4, "text": "Interpretation of Shape-related Iconic Gestures in Virtual Environments The interpretation of iconic gestures in spatial domains is a promising idea to improve the communicative capabilities of human-computer interfaces. So far, approaches towards gesture recognition focused mainly on deictic and emblematic gestures. Iconics, viewed as iconic signs in the sense of Peirce, are different from deictics and emblems, for their relation to the referent is based on similarity. In the work reported here, the breakdown of the complex notion of similarity provides the key idea towards a computational model of gesture semantics for iconic gestures. Based on an empirical study, we describe first steps towards a recognition model for shape-related iconic gestures and its implementation in a prototype gesture recognition system. Observations are focused on spatial concepts and their relation to features of iconic gestural expressions. The recognition model is based on a graph-matching method which compares the decomposed geometrical structures of gesture and object.", "neighbors": [ 635, 921 ], "mask": "Train" }, { "node_id": 176, "label": 3, "text": "A Formal Approach to Detecting Security Flaws in Object-Oriented Databases This paper adopts the method-based authorization model and assumes the following database management policies. Let (m, (c1, c2, ..., cn)) be in an authorization for a user u.", "neighbors": [], "mask": "Train" }, { "node_id": 177, "label": 2, "text": "A Maximum Entropy Language Model Integrating N-Grams And Topic Dependencies For Conversational Speech Recognition A compact language model which incorporates local dependencies in the form of N-grams and long distance dependencies through dynamic topic conditional constraints is presented. These constraints are integrated using the maximum entropy principle. Issues in assigning a topic to a test utterance are investigated. Recognition results on the Switchboard corpus are presented showing that with a very small increase in the number of model parameters, reduction in word error rate and language model perplexity are achieved over trigram models. Some analysis follows, demonstrating that the gains are even larger on content-bearing words. The results are compared with those obtained by interpolating topic-independent and topic-specific N-gram models. The framework presented here extends easily to incorporate other forms of statistical dependencies such as syntactic word-pair relationships or hierarchical topic constraints. 1. INTRODUCTION Language modeling is a crucial component of systems that c...", "neighbors": [], "mask": "Test" }, { "node_id": 178, "label": 0, "text": "Speech Acts for Dialogue Agents this paper by the U.S. Army Research Office under contract/grant number DAAH 04 95 10628 and the U.S. National Science Foundation under grant IRI-9311988.
Some of the work described above was developed in collaboration with James Allen and supported by ONR/DARPA under grant number N00014-92-J-1512, by ONR under research grant number N00014-90-J-1811, and by NSF under grant number IRI-9003841.", "neighbors": [ 359, 981 ], "mask": "Validation" }, { "node_id": 179, "label": 3, "text": "On the Extension of UML with Use Case Maps Concepts . Descriptions of reactive systems focus heavily on behavioral aspects, often in terms of scenarios. To cope with the increasing complexity of services provided by these systems, behavioral aspects need to be handled early in the design process with flexible and concise notations as well as expressive concepts. UML offers different notations and concepts that can help describe such services. However, several necessary concepts appear to be absent from UML, but present in the Use Case Map (UCM) scenario notation. In particular, Use Case Maps allow scenarios to be mapped to different architectures composed of various component types. The notation supports structured and incremental development of complex scenarios at a high level of abstraction, as well as their integration. UCMs specify variations of run-time behavior and scenario structures through sub-maps \"pluggable\" into placeholders called stubs. This paper presents how UCM concepts could be used to extend the semantics...", "neighbors": [ 19 ], "mask": "Train" }, { "node_id": 180, "label": 3, "text": "Repeating History beyond ARIES In this paper, I describe first the background behind the development of the original ARIES recovery method, and its significant impact on the commercial world and the research community. Next, I provide a brief introduction to the various concurrency control and recovery methods in the ARIES family of algorithms. Subsequently, I discuss some of the recent developments affecting the transaction management area and what these mean for the future. In ARIES, the concept of repeating history turned out to be an important paradigm. As I examine where transaction management is headed in the world of the internet, I observe history repeating itself in the sense of requirements that used to be considered significant in the mainframe world (e.g., performance, availability and reliability) now becoming important requirements of the broader information technology community as well. 1. Introduction Transaction management is one of the most important functionalities provided by a...", "neighbors": [], "mask": "Train" }, { "node_id": 181, "label": 1, "text": "Generating Accurate Rule Sets Without Global Optimization The two dominant schemes for rule-learning, C4.5 and RIPPER, both operate in two stages. First they induce an initial rule set and then they refine it using a rather complex optimization stage that discards (C4.5) or adjusts (RIPPER) individual rules to make them work better together. In contrast, this paper shows how good rule sets can be learned one rule at a time, without any need for global optimization. We present an algorithm for inferring rules by repeatedly generating partial decision trees, thus combining the two major paradigms for rule generation\u2014creating rules from decision trees and the separate-and-conquer rule-learning technique.
The algorithm is straightforward and elegant: despite this, experiments on standard datasets show that it produces rule sets that are as accurate as and of similar size to those generated by C4.5, and more accurate than RIPPER\u2019s. Moreover, it operates efficiently, and because it avoids postprocessing, does not suffer the extremely slow performance on pathological example sets for which the C4.5 method has been criticized.", "neighbors": [], "mask": "Test" }, { "node_id": 182, "label": 0, "text": "Desiderata for Agent Oriented Programming Languages Multiagent system designers need programming languages in order to develop agents and multiagent systems. Current approaches consist of using classical programming languages like C or C++ and, above all, Java, which is the agent community's preferred language thanks to its rich library of functions. Java is not aimed at designing multiagent systems, so it does not encompass multiagent features. The aim of this paper is to present a set of characteristics which could be present in an agent-oriented programming language. This paper also describes what kind of multiagent systems could be developed with this set of characteristics.", "neighbors": [ 472 ], "mask": "Train" }, { "node_id": 183, "label": 1, "text": "Primitive-Based Movement Classification for Humanoid Imitation. Motor control is a complex problem and imitation is a powerful mechanism for acquiring new motor skills. In this paper, we describe perceptuo-motor primitives, a biologically-inspired notion for a basis set of perceptual and motor routines. Primitives serve as a vocabulary for classifying and imitating observed human movements, and are derived from the imitator's motor repertoire. We describe a model of imitation based on such primitives and demonstrate the feasibility of the model in a constrained implementation. We present approximate motion reconstruction generated from visually captured data of typically imitated tasks taken from aerobics, dancing, and athletics. 1 Introduction Imitation is a powerful mechanism for acquiring new skills. It involves an intricate interaction between perceptual and motor mechanisms, both of which are complex in themselves. Research into vision and motor control has explored the role of \"subroutines\", schemas [1], and other variants based on ...", "neighbors": [ 398, 1143 ], "mask": "Train" }, { "node_id": 184, "label": 0, "text": "Specifying Agents with UML in Robotic Soccer The use of agents and multiagent systems is widespread in computer science nowadays. Thus the need for methods to specify agents in a clear and simple manner arises. In this paper we propose an approach to specifying agents with the help of UML statecharts. Agents are specified on different levels of abstraction. In addition a method for specifying multiagent plans with explicit cooperation is shown. As an example domain we chose robotic soccer, which forms the basis of the annual RoboCup competitions. Robotic soccer is an ideal testbed for research in the fields of robotics and multiagent systems. In the RoboCup Simulation League the research focus is on agents and multiagent systems, and we will demonstrate our approach by using examples from this domain.
Keywords: Multiagent Systems, Unified Modeling Language (UML), Specification, RoboCup, Robotic Soccer", "neighbors": [ 363, 508, 602 ], "mask": "Train" }, { "node_id": 185, "label": 4, "text": "Value-added Mediation in Large-Scale Information Systems Many information-processing tasks can be part of multiple customer applications, such as summarizing stock prices, integrating catalog information from several companies in the same line of business, predicting the weather, and checking on transportation resources. We assign such sharable services to an active middleware layer, interposed between clients and servers. We define domain-specific mediator modules to populate this layer. Such mediating services must be of value to the customers, so that it will benefit their client applications to access mediators rather than the server sources directly. Several categories of value can be considered: improvement in access and coverage, improvement of content, and delegation of maintenance. We will define criteria for mediating modules: ownership by a party who assumes responsibility for the results of the services, domain-specificity to delimit the scope of such a responsibility, and, of course, conformance with interface standards that ...", "neighbors": [ 859, 1079 ], "mask": "Validation" }, { "node_id": 186, "label": 2, "text": "Information Extraction via Heuristics for a Movie Showtime Query System Semantic interpretation for limited-domain spoken dialogue systems often amounts to extracting information from utterances. For a system that provides movie showtime information, queries are classified along four dimensions: question type, and movie titles, towns and theaters that were mentioned. Simple heuristics suffice for constructing highly accurate classifiers for the latter three attributes; classifiers for the question type attribute are induced from data using features tailored to spoken language phenomena. Since separate classifiers are used for the four attributes, which are not independent, certain errors can be detected and corrected, thus increasing robustness.", "neighbors": [], "mask": "Validation" }, { "node_id": 187, "label": 5, "text": "Geometric Foundations for Interval-Based Probabilities The need to reason with imprecise probabilities arises in a wealth of situations ranging from pooling of knowledge from multiple experts to abstraction-based probabilistic planning. Researchers have typically represented imprecise probabilities using intervals and have developed a wide array of different techniques to suit their particular requirements. In this paper we provide an analysis of some of the central issues in representing and reasoning with interval probabilities. At the focus of our analysis is the probability cross-product operator and its interval generalization, the cc-operator. We perform an extensive study of these operators relative to manipulation of sets of probability distributions. This study provides insight into the sources of the strengths and weaknesses of various approaches to handling probability intervals. We demonstrate the application of our results to the problems of inference in interval Bayesian networks and projection and evaluation of abstract pro...", "neighbors": [], "mask": "Train" }, { "node_id": 188, "label": 2, "text": "Authoring and Annotation of Web Pages in CREAM Richly interlinked, machine-understandable data constitute the basis for the Semantic Web.
We provide a framework, CREAM, that allows for the creation of metadata. While the annotation mode of CREAM allows the creation of metadata for existing web pages, the authoring mode lets authors create metadata --- almost for free --- while putting together the content of a page. As a particularity of our framework, CREAM allows the creation of relational metadata, i.e. metadata that instantiate interrelated definitions of classes in a domain ontology rather than a comparatively rigid template-like schema such as Dublin Core. We discuss some of the requirements one has to meet when developing such an ontology-based framework, e.g. the integration of a metadata crawler, inference services, document management and a meta-ontology, and describe its implementation, viz. Ont-O-Mat, a component-based, ontology-driven Web page authoring and annotation tool.", "neighbors": [ 239, 644, 835, 934, 1106, 1233 ], "mask": "Train" }, { "node_id": 189, "label": 4, "text": "Principles of Mixed-Initiative User Interfaces Recent debate has centered on the relative promise of focusing user-interface research on developing new metaphors and tools that enhance users' abilities to directly manipulate objects versus directing effort toward developing interface agents that provide automation. In this paper, we review principles that show promise for allowing engineers to enhance human-computer interaction through an elegant coupling of automated services with direct manipulation. Key ideas will be highlighted in terms of the LookOut system for scheduling and meeting management. Keywords Intelligent agents, direct manipulation, user modeling, probability, decision theory, UI design INTRODUCTION There has been debate among researchers about where great opportunities lie for innovating in the realm of human-computer interaction [10]. One group of researchers has expressed enthusiasm for the development and application of new kinds of automated services, often referred to as interface \"agents.\" The effo...", "neighbors": [ 460, 1160 ], "mask": "Train" }, { "node_id": 190, "label": 3, "text": "Achieving Workflow Adaptability by means of Reflection Belief in the importance of business processes has triggered considerable interest in the workflow systems that automate these processes. However, of the two competing management philosophies that promulgate business processes, Business Process Reengineering proposes radical change, whereas Continuous Process Improvement places much greater emphasis on adaptability. The former school is somewhat discredited, whereas the latter school seems more likely to endure, thus making the flexibility and evolution of workflows an issue of increasing importance. In this paper, we present a programmable object-oriented metalevel framework which aims to reveal the processes of assembling and coordinating the tasks that make up business processes. This is achieved by isolating four key facets -- state, behaviour, location and coordination. In particular, we open up the general process of task coordination and specification, allowing for extensions in a planned way. By suitable manipulation of coordin...", "neighbors": [], "mask": "Train" }, { "node_id": 191, "label": 5, "text": "Optimal Anytime Search For Constrained Nonlinear Programming In this thesis, we study optimal anytime stochastic search algorithms (SSAs) for solving general constrained nonlinear programming problems (NLPs) in discrete, continuous and mixed-integer space.
The algorithms are general in the sense that they do not assume differentiability or convexity of functions. Based on the search algorithms, we develop the theory of SSAs and propose optimal SSAs with iterative deepening in order to minimize their expected search time. Based on the optimal SSAs, we then develop optimal anytime SSAs that generate improved solutions as more search time is allowed. Our SSAs for solving general constrained NLPs are based on the theory of discrete constrained optimization using Lagrange multipliers that shows the equivalence between the set of constrained local minima (CLMdn) and the set of discrete-neighborhood saddle points (SPdn). To implement this theory, we propose a general procedural framework for locating an SPdn. By incorporating genetic algorithms in the framework, we evaluate new constrained search algorithms: constrained genetic algorithm (CGA) and combined constrained simulated annealing and genetic algorithm (CSAGA).", "neighbors": [], "mask": "Test" }, { "node_id": 192, "label": 3, "text": "Design and Implementation of the OLOG Deductive Object-Oriented Database Management System. OLOG is a novel deductive database system for advanced intelligent information system applications. It directly supports effective storage, efficient access and inference of large amounts of persistent data with complex structures. It provides a SQL-like data definition language and data manipulation language, and a declarative rule-based query language. It combines the best of the deductive, object-oriented, and object-relational approaches in a uniform framework. This paper describes the design and implementation of the OLOG system. 1 Introduction Deductive, object-oriented, and object-relational databases are three important extensions of the traditional relational database technology. Deductive databases stem from the integration of logic programming and relational databases. They offer representational and operational uniformity, reasoning capabilities, recursion, declarative querying, efficient secondary storage access, etc. However, deductive databases based on relational databas...", "neighbors": [ 211, 501, 861 ], "mask": "Train" }, { "node_id": 193, "label": 3, "text": "Time Series Classification by Boosting Interval Based Literals A supervised classification method for temporal series, even multivariate, is presented. It is based on boosting very simple classifiers: clauses with one literal in the body. The background predicates are based on temporal intervals. Two types of predicates are used: i) relative predicates, such as \"increases\" and \"stays\", and ii) region predicates, such as \"always\" and \"sometime\", which operate over regions in the domain of the variable. Experiments on different data sets, several of them obtained from the UCI repositories, show that the proposed method is highly competitive with previous approaches. Keywords: time series classification, interval based literals, boosting, machine learning.", "neighbors": [ 719 ], "mask": "Test" }, { "node_id": 194, "label": 4, "text": "LegORB and Ubiquitous CORBA The increasing popularity of ubiquitous computing and the new approaches for low-consumption, short-range wireless connectivity will enable a future with hundreds of heterogeneous devices interconnected to achieve a common task. However, communication among those network-enabled heterogeneous devices requires standard protocols and well defined interfaces.
While existing middleware architectures already offer standard mechanisms (DCOM, CORBA, JINI), they are, in most cases, not suitable for most of the heterogeneous devices. The resources required by those middleware solutions normally exceed the computational limits of the heterogeneous devices. We present in this paper a minimalist component-based Object Request Broker (ORB) that can be dynamically reconfigured and requires, for the smallest configuration, 6Kb of memory. Introduction The incoming ubiquitous computing trend allows the existence of collections of network-enabled devices attached to rooms, people and buildings....", "neighbors": [ 385 ], "mask": "Train" }, { "node_id": 195, "label": 1, "text": "Using Case-Based Reasoning to Acquire User Scheduling Preferences that Change over Time Production/Manufacturing scheduling typically involves the acquisition of user optimization preferences. The ill-structuredness of both the problem space and the desired objectives makes practical scheduling problems difficult to formalize and costly to solve, especially when problem configurations and user optimization preferences change over time. This paper advocates an incremental revision framework for improving schedule quality and incorporating users' dynamically changing preferences through Case-Based Reasoning. Our implemented system, called CABINS, records situation-dependent tradeoffs and consequences that result from schedule revision to guide schedule improvement. The preliminary experimental results show that CABINS is able to effectively capture both static and dynamic user preferences which are not known to the system and only exist implicitly in an extensional manner in the case base. 1 Introduction Scheduling deals with allocation of a limited set of resources to a nu...", "neighbors": [ 476 ], "mask": "Train" }, { "node_id": 196, "label": 0, "text": "The Adaptive Agent Architecture: Achieving Fault-Tolerance Using Persistent Broker Teams Brokers are used in many multi-agent systems for locating agents, for routing and sharing information, for managing the system, and for legal purposes, as independent third parties. However, these multi-agent systems can be incapacitated and rendered non-functional when the brokers become inaccessible due to failures such as machine crashes, network breakdowns, and process failures that can occur in any distributed software system. We propose that the theory of teamwork can be used to create robust brokered architectures that can recover from broker failures, and we present the Adaptive Agent Architecture (AAA) to show the feasibility of this approach. The AAA brokers form a team with a joint commitment to serve any agent that registers with the broker team as long as the agent remains registered with the team. This commitment enables the brokers to substitute for each other when needed. A multiagent system based on the AAA can continue to work despite broker failures as long...", "neighbors": [ 200, 312, 724, 943 ], "mask": "Validation" }, { "node_id": 197, "label": 1, "text": "Feature Subset Selection Using A Genetic Algorithm Practical pattern classification and knowledge discovery problems require selection of a subset of attributes or features (from a much larger set) to represent the patterns to be classified. This paper presents an approach to the multi-criteria optimization problem of feature subset selection using a genetic algorithm.
Our experiments demonstrate the feasibility of this approach for feature subset selection in the automated design of neural networks for pattern classification and knowledge discovery. 1 Introduction Many practical pattern classification tasks (e.g., medical diagnosis) require learning of an appropriate classification function that assigns a given input pattern (typically represented using a vector of attribute or feature values) to one of a finite set of classes. The choice of features, attributes, or measurements used to represent patterns that are presented to a classifier affects (among other things): the accuracy of the classification function that can be learn...", "neighbors": [ 620, 872 ], "mask": "Train" }, { "node_id": 198, "label": 3, "text": "Automatic I/O Hint Generation through Speculative Execution Aggressive prefetching is an effective technique for reducing the execution times of disk-bound applications; that is, applications that manipulate data too large or too infrequently used to be found in file or disk caches. While automatic prefetching approaches based on static analysis or historical access patterns are effective for some workloads, they are not as effective as manually-driven (programmer-inserted) prefetching for applications with irregular or input-dependent access patterns. In this paper, we propose to exploit whatever processor cycles are left idle while an application is stalled on I/O by using these cycles to dynamically analyze the application and predict its future I/O accesses. Our approach is to speculatively pre-execute the application's code in order to discover and issue hints for its future read accesses. Coupled with an aggressive hint-driven prefetching system, this automatic approach could be applied to arbitrary applications, and should be particularl...", "neighbors": [], "mask": "Train" }, { "node_id": 199, "label": 3, "text": "An Extended Entity-Relationship Approach to Data Management in Object-Oriented Systems Database programming in object-oriented systems can be supported by combining data modelling and programming technologies such that a data model supports the management of collections of objects where those objects are as specified by the underlying object-oriented programming language. This approach is the basis of the object data management services (ODMS) of the Comandos system. The ODMS data model provides constructs for the representation of both entities and their relationships and further supports rich classification structures. To complement the structural model, there is an operational model based on an algebra over collections of objects. 1 Introduction Object-oriented technologies are gaining in popularity as the basis for software development platforms. Meanwhile the family of entity-relationship data models retains its wide-spread use and popularity for conceptual modelling. How then can these two successful technologies be combined to support the development of ...", "neighbors": [ 153 ], "mask": "Train" }, { "node_id": 200, "label": 0, "text": "Multi-Agent Architectures as Organizational Structures A Multi-Agent System (MAS) is an organization of coordinated autonomous agents that interact in order to achieve particular, possibly common goals. Considering real-world organizations as an analogy, this paper proposes architectural styles for MAS which adopt concepts from organizational theories.
The styles are modeled using the i* framework which offers the notions of actor, goal and actor dependency and are specified in Formal Tropos. They are evaluated with respect to a set of software quality attributes, such as predictability or adaptability. In addition, we conduct a comparative study of organizational and conventional software architectures using the mobile robot control example from the Software Engineering literature. The research is conducted in the context of Tropos, a comprehensive software system development methodology.", "neighbors": [ 196, 495, 945, 963 ], "mask": "Validation" }, { "node_id": 201, "label": 2, "text": "A Unifying Approach to HTML Wrapper Representation and Learning . The number, the size, and the dynamics of Internet information sources bear abundant evidence of the need for automation in information extraction. This calls for representation formalisms that match the World Wide Web reality and for learning approaches and learnability results that apply to these formalisms. The concept of elementary formal systems is appropriately generalized to allow for the representation of wrapper classes which are relevant to the description of Internet sources in HTML format. Related learning results prove that those wrappers are automatically learnable from examples. This is setting the stage for information extraction from the Internet by exploitation of inductive learning techniques. 1 Motivation Today's online access to millions or even billions of documents in the World Wide Web is a great challenge to research areas related to knowledge discovery and information extraction (IE). The general task of IE is to locate specific pieces of text i...", "neighbors": [ 875, 1122, 1232 ], "mask": "Test" }, { "node_id": 202, "label": 1, "text": "Is Machine Colour Constancy Good Enough? . This paper presents a negative result: current machine colour constancy algorithms are not good enough for colour-based object recognition. This result has surprised us since we have previously used the better of these algorithms successfully to correct the colour balance of images for display. Colour balancing has been the typical application of colour constancy; rarely has it been actually put to use in a computer vision system, so our goal was to show how well the various methods would do on an obvious machine colour vision task, namely, object recognition. Although all the colour constancy methods we tested proved insufficient for the task, we consider this an important finding in itself. In addition we present results showing the correlation between colour constancy performance and object recognition performance, and as one might expect, the better the colour constancy the better the recognition rate. 1 Introduction We set out to show that machine colour constancy had matured to...", "neighbors": [], "mask": "Train" }, { "node_id": 203, "label": 3, "text": "A Runtime System for Interactive Web Services Interactive web services are increasingly replacing traditional static web pages. Producing web services seems to require a tremendous amount of laborious low-level coding due to the primitive nature of CGI programming. We present ideas for an improved runtime system for interactive web services built on top of CGI running on virtually every combination of browser and HTTP/CGI server. The runtime system has been implemented and used extensively in , a tool for producing interactive web services.
Keywords: CGI, Interactive Web Service, Web Document Management, Runtime System, Session Model. 1 Introduction An interactive web service consists of a global shared state (typically a database) and a number of distinct sessions that each contain some local private state and a sequential, imperative action. A web client may invoke an individual thread of one of the given session kinds. The execution of this thread may interact with the client and inspect or modify the global state. One...", "neighbors": [ 146 ], "mask": "Train" }, { "node_id": 204, "label": 0, "text": "Intelligent Agents -- A New Technology for Future Distributed Sensor Systems? This master's thesis deals with intelligent agents and the possibility of using the intelligent agent technology in future distributed sensor systems. The term future distributed sensor system refers to a system based on several sensors that will be developed within a period of five to ten years. Since researchers have not agreed on a more precise definition of intelligent agents, we first examined what constitutes an intelligent agent and made a definition suited for our application domain. We used our definition as a base for investigating if and how intelligent agents can be used in future distributed sensor systems. We argue that it is not interesting to come up with a general agent definition applicable to every agent; instead, one should make a foundation for a definition. When this is done we can decide on more specific features depending on the task the agent will perform and the domain the agent will work in. Finally, we conclude that it is possible to use the agent technology i...", "neighbors": [ 854 ], "mask": "Test" }, { "node_id": 205, "label": 3, "text": "Efficient Computation of Temporal Aggregates with Range Predicates A temporal aggregation query is an important but costly operation for applications that maintain time-evolving data (data warehouses, temporal databases, etc.). Due to the large volume of such data, performance improvements for temporal aggregation queries are critical. In this paper we examine techniques to compute temporal aggregates that include key-range predicates (range temporal aggregates). In particular we concentrate on SUM, COUNT and AVG aggregates. This problem is novel; to handle arbitrary key ranges, previous methods would need to keep a separate index for every possible key range. We propose an approach based on a new index structure called the Multiversion SB-Tree, which incorporates features from both the SB-Tree and the Multiversion B-Tree, to handle arbitrary key-range temporal SUM, COUNT and AVG queries. We analyze the performance of our approach and present experimental results that show its efficiency.", "neighbors": [ 706 ], "mask": "Train" }, { "node_id": 206, "label": 4, "text": "The PLAY Research Group: Entertainment and Innovation in Sweden In a short time the research group PLAY has established an unorthodox but effective work style, where a creative approach to research in information technology is combined with a strong focus on achieving high-quality results. Being a young research group (both regarding the time it has existed and the average age of its members) has presented PLAY with both challenges and opportunities.
We face the challenge of building a credible basis for research in the academic community, but also think that we have the opportunity to contribute innovative results to the research community and our industrial partners. Keywords HCI research groups, future HCI, European HCI, IT design INTRODUCTION How can one perform exciting and unorthodox research in information technology, while still assuring that results are useful and of good quality? How can a small group, consisting mostly of relatively inexperienced students, in a small country with very little tradition in groundbreaking IT research, ...", "neighbors": [ 854, 869 ], "mask": "Test" }, { "node_id": 207, "label": 5, "text": "A Survey on Knowledge Compilation In this paper we survey recent results in knowledge compilation of propositional knowledge bases. We first define and limit the scope of such a technique; then we survey exact and approximate knowledge compilation methods. We include a discussion of compilation for non-monotonic knowledge bases. Keywords: Knowledge Representation, Efficiency of Reasoning", "neighbors": [ 980 ], "mask": "Test" }, { "node_id": 208, "label": 0, "text": "More Realistic Human Behavior Models for Agents in Virtual Worlds: Emotion, Stress, and Value Ontologies This paper focuses on challenges to improving the behavioral realism of computer-generated agents and attempts to reflect the state of the art in human behavior modeling with particular attention to value ontologies, emotion, and stress in game-theoretic settings. The goal is to help those interested in constructing more realistic software agents for use in simulations, in virtual reality environments, and in training and performance aiding settings such as on the web or in embedded applications. This paper pursues this goal by providing a framework for better integrating the theories and models contained in the diverse human behavior modeling literatures, such as those that straddle physiological, cognitive and emotive processes; individual differences; emergent group and crowd behavior; and (punctuated) equilibria in social settings. The framework is based on widely available ontologies of world values and how these and physiological factors might be construed emotively into subjective expected utilities to guide the reactions and deliberations of agents. For example, what makes one set of opponent groups differ from another? This framework serves as an extension of Markov decision processes appropriate for iterative play in game-theoretic settings, with particular emphasis on agent capabilities for redefining drama and for finding meta-games to counter the human player. This article presents the derivation of the framework and some initial results and lessons learned about integrating behavioral models into interactive dramas and meta-games that stimulate (systemic) thought and training doctrine.", "neighbors": [ 267, 1067 ], "mask": "Train" }, { "node_id": 209, "label": 3, "text": "Temporal Statement Modifiers A wide range of database applications manage time-varying data. Many temporal query languages have been proposed, each one the result of many carefully made yet subtly interacting design decisions. In this article we advocate a different approach to articulating a set of requirements, or desiderata, that directly imply the syntactic structure and core semantics of a temporal extension of an (arbitrary) nontemporal query language.
These desiderata facilitate transitioning applications from a nontemporal query language and data model, which has received only scant attention thus far. The paper then introduces the notion of statement modifiers that provide a means of systematically adding temporal support to an existing query language. Statement modifiers apply to all query language statements, for example, queries, cursor definitions, integrity constraints, assertions, views, and data manipulation statements. We also provide a way to systematically add temporal support to an existing implementation. The result is a temporal query language syntax, semantics, and implementation that derives from first principles. We exemplify this approach by extending SQL-92 with statement modifiers. This extended language, termed ATSQL, is formally defined via a denotational-semantics-style mapping of", "neighbors": [ 397 ], "mask": "Validation" }, { "node_id": 210, "label": 3, "text": "Selecting and Materializing Horizontally Partitioned Warehouse Views Data warehouse views typically store large aggregate tables based on a subset of dimension attributes of the main data warehouse fact table. Aggregate views can be stored as 2^n subviews of a data cube with n attributes. Methods have been proposed for selecting only some of the data cube views to materialize in order to speed up query response time, accommodate a storage space constraint and reduce warehouse maintenance cost. This paper proposes a method for selecting and materializing views, which selects and horizontally fragments a view, recomputes the size of the stored partitioned view while deciding further views to select. © 2001 Elsevier Science B.V. All rights reserved. Keywords: Data warehouse; Views; Fragmentation; Performance benefit 1. Introduction Decision support systems (DSS) used by business executives require analyzing snapshots of departmental databases over several periods of time. Departmental databases of the same organization (e.g., a bank) may be stored on differ...", "neighbors": [], "mask": "Validation" }, { "node_id": 211, "label": 3, "text": "Deductive Database Languages: Problems and Solutions In this paper, we discuss these problems from four different aspects: complex values, object orientation, higher-orderness, and updates. In each case, we examine four typical languages that address the corresponding issues.", "neighbors": [ 192 ], "mask": "Train" }, { "node_id": 212, "label": 2, "text": "Regression Models for Ordinal Data: A Machine Learning Approach In contrast to the standard machine learning tasks of classification and metric regression we investigate the problem of predicting variables of ordinal scale, a setting referred to as ordinal regression. The task of ordinal regression arises frequently in the social sciences and in information retrieval where human preferences play a major role. Also many multi-class problems are really problems of ordinal regression due to an ordering of the classes. Although the problem is rather novel to the Machine Learning Community it has been widely considered in Statistics before. All the statistical methods rely on a probability model of a latent (unobserved) variable and on the condition of stochastic ordering. In this paper we develop a distribution independent formulation of the problem and give uniform bounds for our risk functional. The main difference to classification is the restriction that the mapping of objects to ranks must be transitive and asymmetric.
Combining our theoretical framework with results from measurement theory we present an approach that is based on a mapping from objects to scalar utility values and thus guarantees transitivity and asymmetry. Applying the principle of Structural Risk Minimization as employed in Support Vector Machines we derive a new learning algorithm based on large margin rank boundaries for the task of ordinal regression. Our method is easily extended to nonlinear utility functions. We give experimental results for an Information Retrieval task of learning the order of documents with respect to an initial query. Moreover, we show that our algorithm outperforms more naive approaches to ordinal regression such as Support Vector Classification and Support Vector Regression in the case of more than two ranks.", + "neighbors": [], + "mask": "Test" + }, + { + "node_id": 213, + "label": 2, + "text": "Designing a Digital Library for Young Children: An Intergenerational Partnership As more information resources become accessible using computers, our digital interfaces to those resources need to be appropriate for all people. However when it comes to digital libraries, the interfaces have typically been designed for older children or adults. Therefore, we have begun to develop a digital library interface developmentally appropriate for young children (ages 5-10 years old). Our prototype system we now call \"QueryKids\" offers a graphical interface for querying, browsing and reviewing search results. This paper describes our motivation for the research, the design partnership we established between children and adults, our design process, the technology outcomes of our current work, and the lessons we have learned. Keywords Children, digital libraries, information retrieval design techniques, education applications, participatory design, cooperative inquiry, intergenerational design team, zoomable user interfaces (ZUIs). THE NEED FOR RESEARCH A growing body of k...", + "neighbors": [ + 450, + 648 + ], + "mask": "Train" + }, + { + "node_id": 214, + "label": 1, + "text": "A Sound Algorithm for Region-Based Image Retrieval Using an Index Region-based image retrieval systems aim to improve the effectiveness of content-based search by decomposing each image into a set of \"homogeneous\" regions. Thus, similarity between images is assessed by computing similarity between pairs of regions and then combining the results at the image level. In this paper we propose the first provably sound algorithm for performing region-based similarity search when regions are accessed through an index. Experimental results demonstrate the effectiveness of our approach, as also compared to alternative retrieval strategies. 1. Introduction Many real world applications, in the field of medicine, weather prediction, and communications, to name a few, require efficient access to image databases based on content. To this end, the goal of content-based image retrieval (CBIR) systems is to define a set of properties (features) able to effectively characterize the content of images and then to use such features during retrieval. Users accessing a CB...", + "neighbors": [ + 1155 + ], + "mask": "Validation" + }, + { + "node_id": 215, + "label": 4, + "text": "A Wearable Spatial Conferencing Space Wearable computers provide constant access to computing and communications resources. In this paper we describe how the computing power of wearables can be used to provide spatialized 3D graphics and audio cues to aid communication. 
The result is a wearable augmented reality communication space with audio-enabled avatars of the remote collaborators surrounding the user. The user can use natural head motions to attend to the remote collaborators, can communicate freely while being aware of other side conversations and can move through the communication space. In this way the conferencing space can support dozens of simultaneous users. Informal user studies suggest that wearable communication spaces may offer several advantages, both through the increase in the amount of information it is possible to access and the naturalness of the interface. 1: Introduction One of the broad trends emerging in human-computer interaction is the increasing portability of computing and communication fac...", "neighbors": [ 738, 886, 965, 1043, 1192 ], "mask": "Train" }, { "node_id": 216, "label": 2, "text": "Authoritative Sources in a Hyperlinked Environment The link structure of a hypermedia environment can be a rich source of information about the content of the environment, provided we have effective means for understanding it. Versions of this principle have been studied in the hypertext research community and (in a context predating hypermedia) through journal citation analysis in the field of bibliometrics. But for the problem of searching in hyperlinked environments such as the World Wide Web, it is clear from the prevalent techniques that the information inherent in the links has yet to be fully exploited. In this work we develop a new method for automatically extracting certain types of information about a hypermedia environment from its link structure, and we report on experiments that demonstrate its effectiveness for a variety of search problems on the www. The central problem we consider is that of determining the relative \"authority\" of pages in such environments. This issue is central to a number of basic hypertext search t...", "neighbors": [ 43, 112, 115, 133, 247, 322, 427, 482, 526, 612, 649, 845, 867, 893, 1005, 1165, 1198 ], "mask": "Validation" }, { "node_id": 217, "label": 1, "text": "Hybrid Neural Systems This chapter provides an introduction to the field of hybrid neural systems. Hybrid neural systems are computational systems which are based mainly on artificial neural networks but also allow a symbolic interpretation or interaction with symbolic components. In this overview, we will describe recent results of hybrid neural systems. We will give a brief overview of the main methods used, outline the work that is presented here, and provide additional references. We will also highlight some important general issues and trends.", "neighbors": [ 654 ], "mask": "Test" }, { "node_id": 218, "label": 3, "text": "Query Optimization in the Presence of Limited Access Patterns 1 Introduction The goal of a query optimizer of a database system is to translate a declarative query expressed on a logical schema into an imperative query execution plan that accesses the physical storage of the data, and applies a sequence of relational operators. In building query execution plans, traditional relational query optimizers try to find the most efficient method for accessing the necessary data. When possible, a query optimizer will use auxiliary data structures such as an index on a file in order to efficiently retrieve a certain set of tuples in a relation.
However, when such structures do not exist or are not useful for the given query, the alternative of scanning the entire relation always exists. The existence of the fallback option to perform a complete scan is an important assumption in traditional query optimization. Several recent query processing applications have the common characteristic that it is not always possible to perform complete scans on the data. Instead, the query optimization problem is complicated by the fact that there are only limited access patterns to the data. One such", "neighbors": [ 419, 420, 879, 1219 ], "mask": "Train" }, { "node_id": 219, "label": 2, "text": "Text and Image Metasearch on the Web As the Web continues to increase in size, the relative coverage of Web search engines is decreasing, and search tools that combine the results of multiple search engines are becoming more valuable. This paper provides details of the text and image metasearch functions of the Inquirus search engine developed at the NEC Research Institute. For text metasearch, we describe features including the use of link information in metasearch, and provide statistics on the usage and performance of Inquirus and the Web search engines. For image metasearch, Inquirus queries multiple image search engines on the Web, downloads the actual images, and creates image thumbnails for display to the user. Inquirus handles image search engines that return direct links to images, and engines that return links to HTML pages. For the engines that return HTML pages, Inquirus analyzes the text on the pages in order to predict which images are most likely to correspond to the query. The individual image search engin...", "neighbors": [ 224, 587, 774, 1017, 1031, 1189 ], "mask": "Train" }, { "node_id": 220, "label": 4, "text": "Control Law Design for Haptic Interfaces to Virtual Reality The goal of control law design for haptic displays is to provide a safe and stable user interface while maximizing the operator's sense of kinesthetic immersion in a virtual environment. This paper outlines a control design approach which stabilizes a haptic interface when coupled to a broad class of human operators and virtual environments. Two-port absolute stability criteria are used to develop explicit control law design bounds for two different haptic display implementations: impedance display and admittance display. The strengths and weaknesses of each approach are illustrated through numerical and experimental results for a three degree-of-freedom device. The example highlights the ability of the proposed design procedure to handle some of the more difficult problems in control law synthesis for haptics, including structural flexibility and non-collocation of sensors and actuators. The authors are with the Department of Electrical Engineering, University of Washington, Box 352500, Seattle, WA 98195-2500.", "neighbors": [], "mask": "Train" }, { "node_id": 221, "label": 3, "text": "Improving Business Process Quality through Exception Understanding, Prediction, and Prevention Business process automation technologies are being increasingly used by many companies to improve the efficiency of both internal processes as well as of e-services offered to customers. In order to satisfy customers and employees, business processes need to be executed with a high and predictable quality.
In particular, it is crucial for organizations to meet the Service Level Agreements (SLAs) stipulated with the customers and to foresee as early as possible the risk of missing SLAs, in order to set the right expectations and to allow for corrective actions. In this paper we focus on a critical issue in business process quality: that of analyzing, predicting and preventing the occurrence of exceptions, i.e., of deviations from the desired or acceptable behavior. We characterize the problem and propose a solution, based on data warehousing and mining techniques. We then describe the architecture and implementation of a tool suite that enables exception analysis, prediction, and prevention. Finally, we show experimental results obtained by using the tool suite to analyze internal HP processes. 1.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 222, + "label": 0, + "text": "Formalizing Collaborative Decision-making and Practical Reasoning in Multi-agent Systems In this paper, we present an abstract formal model of decision-making in a social setting that covers all aspects of the process, from recognition of a potential for cooperation through to joint decision. In a multi-agent environment, where self-motivated autonomous agents try to pursue their own goals, a joint decision cannot be taken for granted. In order to decide effectively, agents need the ability to (a) represent and maintain a model of their own mental attitudes, (b) reason about other agents' mental attitudes, and (c) influence other agents' mental states. Social mental shaping is advocated as a general mechanism for attempting to have an impact on agents' mental states in order to increase their cooperativeness towards a joint decision. Our approach is to specify a novel, high-level architecture for collaborative decision-making in which the mentalistic notions of belief, desire, goal, intention, preference and commitment play a central role in guiding the individual agent's and the group's decision-making behaviour. We identify preconditions that must be fulfilled before collaborative decision-making can commence and prescribe how cooperating agents should behave, in terms of their own decision-making apparatus and their interactions with others, when the decision-making process is progressing satisfactorily. The model is formalized through a new, many-sorted, multi-modal logic.", + "neighbors": [ + 557, + 724 + ], + "mask": "Validation" + }, + { + "node_id": 223, + "label": 0, + "text": "Plan Recognition in Military Simulation: Incorporating Machine Learning with Intelligent Agents A view of plan recognition shaped by both operational and computational requirements is presented. Operational requirements governing the level of fidelity and nature of the reasoning process combine with computational requirements including performance speed and software engineering effort to constrain the types of solutions available to the software developer. By adopting machine learning to provide spatio-temporal recognition of environmental events and relationships, an agent can be provided with a mechanism for mental state recognition qualitatively different from previous research. An architecture for integrating machine learning into a BDI agent is suggested and the results from the development of a prototype provide proof-of-concept. 1 Introduction This paper proposes machine-learning as a tool to assist in the construction of agents capable of plan recognition. 
This paper focuses on the beliefs-desires-intentions (BDI) class of agents. These agents have been ...", "neighbors": [ 277, 303, 816 ], "mask": "Train" }, { "node_id": 224, "label": 2, "text": "Searching the world wide web The coverage and recency of the major World Wide Web search engines was analyzed, yielding some surprising results. The coverage of any one engine is significantly limited: No single engine indexes more than about one-third of the \u201cindexable Web,\u201d the coverage of the six engines investigated varies by an order of magnitude, and combining the results of the six engines yields about 3.5 times as many documents on average as compared with the results from only one engine. Analysis of the overlap between pairs of engines gives an estimated lower bound on the size of the indexable Web of 320 million pages. The Internet has grown rapidly since its inception in December 1969 (1) and is anticipated to expand 1000% over the next few years (2). The amount of scientific information and the number of electronic journals on the Internet continue to increase [about 1000 journals as of 1996 (2, 3)]. The Internet and the World Wide Web (the Web) represent significant advancements for the retrieval and dissemination of scientific and other literature and for the advancement of education (2, 4). With the introduction of full-text search engines such as AltaVista (www.", "neighbors": [ 115, 143, 219, 271, 382, 433, 466, 496, 526, 649, 696, 792, 1005, 1031, 1066, 1124, 1183, 1253 ], "mask": "Test" }, { "node_id": 225, "label": 3, "text": "DB-Prism: Integrated Data Warehouses and Knowledge Networks for Bank Controlling DB-Prism is an integrated data warehouse system", "neighbors": [ 1022 ], "mask": "Train" }, { "node_id": 226, "label": 3, "text": "Efficiently Computing Weighted Proximity Relationships in Spatial Databases Spatial data mining has recently emerged from a number of real applications, such as real-estate marketing, urban planning, weather forecasting, medical image analysis, road traffic accident analysis, etc. It demands efficient solutions for many new, expensive, and complicated problems. In this paper, we investigate the problem of evaluating the top k distinguished \"features\" for a \"cluster\" based on weighted proximity relationships between the cluster and features. We measure proximity in an average fashion to address possible nonuniform data distribution in a cluster. Combining a standard multi-step paradigm with new lower and upper proximity bounds, we present an efficient algorithm to solve the problem. The algorithm is implemented in several different modes. Our experiment results not only give a comparison among them but also illustrate the efficiency of the algorithm.", "neighbors": [], "mask": "Test" }, { "node_id": 227, "label": 0, "text": "TravelPlan: A MultiAgent System to Solve Web Electronic Travel Problems This paper presents TravelPlan, a multiagent architecture for co-operative work between different elements (human and/or software) whose main goal is to recommend useful solutions in the electronic tourism domain to system users. The system uses different types of intelligent autonomous agents whose main characteristics are cooperation, negotiation, learning, planning and knowledge sharing. The information used by the intelligent agents is heterogeneous and geographically distributed.
The main information source of the system is the Internet (the web). Other information sources are distributed knowledge bases within the system itself. The process to obtain, filter, and store the information is performed automatically by agents. This information is translated into a homogeneous format for high-level reasoning in order to obtain different partial solutions. Partial solutions are reconstructed into a general solution (or solutions) to be presented to the user. The system will recommend different solution...", "neighbors": [ 261 ], "mask": "Test" }, { "node_id": 228, "label": 2, "text": "Partitioning-Based Clustering for Web Document Categorization Clustering techniques have been used by many intelligent software agents in order to retrieve, filter, and categorize documents available on the World Wide Web. Clustering is also useful in extracting salient features of related web documents to automatically formulate queries and search for other similar documents on the Web. Traditional clustering algorithms either use a priori knowledge of document structures to define a distance or similarity among these documents, or use probabilistic techniques such as Bayesian classification. Many of these traditional algorithms, however, falter when the dimensionality of the feature space becomes high relative to the size of the document space. In this paper, we introduce two new clustering algorithms that can effectively cluster documents, even in the presence of a very high dimensional feature space. These clustering techniques, which are based on generalizations of graph partitioning, do not require pre-specified ad hoc distance functions, and are capable of automatically discovering document similarities or associations. We conduct several experiments on real Web data using various feature selection heuristics, and compare our clustering schemes to standard distance-based techniques, such as hierarchical agglomeration clustering, and Bayesian classification methods, such as AutoClass.", "neighbors": [ 291, 616, 780 ], "mask": "Test" }, { "node_id": 229, "label": 4, "text": "LART: flexible, low-power building blocks for wearable computers To ease the implementation of different wearable computers, we developed a low-power processor board (named LART) with a rich set of interfaces. The LART supports dynamic voltage scaling, so performance (and power consumption) can be scaled to match demands: 59-221 MHz, 106-640 mW. High-end wearables can be configured from multiple LARTs operating in parallel; alternatively, FPGA boards can be used for dedicated data-processing, which reduces power consumption significantly.", "neighbors": [], "mask": "Train" }, { "node_id": 230, "label": 1, "text": "Case-based Learning for Knowledge-based Design Support. We present a general approach to combine methods of interactive knowledge acquisition with methods for machine learning. The approach has been developed in order to deliver knowledge required by support-systems for design-tasks. Learning rests upon a knowledge representation scheme for cases that distinguishes between knowledge needed for subgoaling and knowledge needed for design. We employ traces, i.e., protocols of the user's actions when tackling design-tasks as the initial input for incremental knowledge acquisition. This allows us to learn task structures to be used for subgoaling and case-bases plus similarity relations applicable to particular case-bases.
1 INTRODUCTION Integrating incremental learning into a knowledge-based system seems to be a promising way to lessen the burden of knowledge elicitation in system development [9]. The goal of this paper is to point out how learning can be used in an interactive design-support system that uses CBR [8] as the main problem solvin...", "neighbors": [ 902 ], "mask": "Train" }, { "node_id": 231, "label": 1, "text": "A Genetic Algorithm-Based Solution for the Problem of Small Disjuncts. In essence, small disjuncts are rules covering a small number of examples. Hence, these rules are usually error-prone, which contributes to a decrease in predictive accuracy. The problem is particularly serious because, although each small disjunct covers few examples, the set of small disjuncts can cover a large number of examples. This paper proposes a solution to the problem of discovering accurate small-disjunct rules based on genetic algorithms. The basic idea of our method is to use a hybrid decision tree / genetic algorithm approach for classification. More precisely, examples belonging to large disjuncts are classified by rules produced by a decision-tree algorithm, while examples belonging to small disjuncts are classified by a new genetic algorithm, particularly designed for discovering small-disjunct rules. 1 Introduction In the context of the well-known classification task of data mining, the discovered knowledge is often expressed as a set of IF-THEN predict...", "neighbors": [ 832 ], "mask": "Train" }, { "node_id": 232, "label": 3, "text": "Building a Large Location Table to Find Replicas of Physics Objects The problem of building a large location table for physics objects occurs within a number of planned physics data management systems, like those that control reclustering and wide-area replication. To satisfy their efficiency goals, these systems have to make local or remote replicas of individual physics objects, which contain raw or reconstructed data for a single event, rather than replicas of large run or ntuple files. This replication implies the use of a table to resolve the logical, location independent object descriptor into a physical location where an object replica can be found. For modern physics experiments the table needs to scale to at least some 10^10 objects. We argue that such a table can be efficiently implemented by limiting the freedom of lookup operations, and by exploiting some specific properties of the physics data model. One specific viable implementation is discussed. Key words: Object location table, object-oriented databases, object clustering, object re-cl...", "neighbors": [ 1019 ], "mask": "Train" }, { "node_id": 233, "label": 2, "text": "Generating a Topically Focused Virtual Reality Internet Surveys highlight that Internet users are frequently frustrated by failing to locate useful information, and by difficulty in browsing anarchically linked web-structures. We present a new Internet browsing application (called VR-net) that addresses these problems. It first identifies semantic domains consisting of tightly interconnected web-page groupings. The second part populates a 3D virtual world with these information sources, representing all relevant pages plus appropriate structural relations. Users can then easily browse around a semantically focused virtual library.
1 Introduction The Internet is probably the most significant global information resource ever created, allowing access to an almost unlimited amount of information. In this paper we describe two inter-related difficulties suffered by Internet users, and their combined influence on web use. We then introduce an integrated \"search and browse\" solution tool that directly tackles both issues. We also examin...", "neighbors": [ 587, 923, 1017 ], "mask": "Train" }, { "node_id": 234, "label": 4, "text": "An Adaptive Self-Organizing Color Segmentation Algorithm with Application to Robust Real-time Human Hand Localization In Proc. Asian Conf. on Computer Vision, Taiwan, 2000 This paper describes an adaptive self-organizing color segmentation algorithm and a transductive learning algorithm used to localize human hands in video sequences. The color distribution at each time frame is approximated by the proposed 1-D self-organizing map (SOM), in which schemes of growing, pruning and merging are facilitated to find an appropriate number of color clusters automatically. Due to the dynamic backgrounds and changing lighting conditions, the distribution of color over time may not be stationary. An algorithm of SOM transduction is proposed to learn the nonstationary color distribution in HSI color space by combining supervised and unsupervised learning paradigms. Color cue and motion cue are integrated in the localization system, in which motion cue is employed to focus the attention of the system. This approach is also applied to other tasks such as human face tracking and color indexing. Our localization system...", "neighbors": [ 167, 635 ], "mask": "Test" }, { "node_id": 235, "label": 2, "text": "Collection Synthesis The invention of the hyperlink and the HTTP transmission protocol caused an amazing new structure to appear on the Internet -- the World Wide Web. With the Web, there came spiders, robots, and Web crawlers, which go from one link to the next checking Web health, ferreting out information and resources, and imposing organization on the huge collection of information (and dross) residing on the net. This paper reports on the use of one such crawler to synthesize document collections on various topics in science, mathematics, engineering and technology. Such collections could be part of a digital library.", "neighbors": [ 291, 410, 536, 662, 774, 968, 1000, 1017, 1099 ], "mask": "Train" }, { "node_id": 236, "label": 0, "text": "A Multiagent Architecture For Fuzzy Modeling In this paper a hybrid learning system that combines different fuzzy modeling techniques is investigated. In order to implement the different methods, we propose the use of intelligent agents, which collaborate by means of a multiagent architecture.", "neighbors": [], "mask": "Test" }, { "node_id": 237, "label": 5, "text": "Probabilistic Deduction with Conditional Constraints over Basic Events We study the problem of probabilistic deduction with conditional constraints over basic events. We show that globally complete probabilistic deduction with conditional constraints over basic events is NP-hard. We then concentrate on the special case of probabilistic deduction in conditional constraint trees. We elaborate very efficient techniques for globally complete probabilistic deduction.
In detail, for conditional constraint trees with point probabilities, we present a local approach to globally complete probabilistic deduction, which runs in linear time in the size of the conditional constraint trees. For conditional constraint trees with interval probabilities, we show that globally complete probabilistic deduction can be done in a global approach by solving nonlinear programs. We show how these nonlinear programs can be transformed into equivalent linear programs, which are solvable in polynomial time in the size of the conditional constraint trees. 1. Introduction Dealing wit...", + "neighbors": [ + 803, + 1015 + ], + "mask": "Train" + }, + { + "node_id": 238, + "label": 4, + "text": "Exploring Brick-Based Navigation and Composition in an Augmented Reality . BUILD-IT is a planning tool based on computer vision technology, supporting complex planning and composition tasks. A group of people, seated around a table, interact with objects in a virtual scene using real bricks. A plan view of the scene is projected onto the table, where object manipulation takes place. A perspective view is projected on the wall. The views are set by virtual cameras, having spatial attributes like shift, rotation and zoom. However, planar interaction with bricks provides only position and rotation information. Object height control is equally constrained by planar interaction. The aim of this paper is to suggest methods and tools bridging the gap between planar interaction and three-dimensional control. To control camera attributes, active objects, with intelligent behaviour are introduced. To control object height, several real and virtual tools are suggested. Some of the solutions are based on metaphors, like window, sliding-ruler and floor. 1 I...", + "neighbors": [ + 467 + ], + "mask": "Train" + }, + { + "node_id": 239, + "label": 3, + "text": "On2broker: Semantic-Based Access to Information Sources at the WWW On2broker provides brokering services to improve access to heterogeneous, distributed and semistructured information sources as they are presented in the World Wide Web. It relies on the use of ontologies to make explicit the semantics of web pages. In the paper we will discuss the general architecture and main components of On2broker and provide some application scenarios. 1. Introduction In the paper we describe a tool environment called On2broker 1 that processes information sources and content descriptions in HTML, XML, and RDF and that provides intelligent information retrieval, query answering and maintenance support. Central for our approach is the use of ontologies to describe background knowledge and to make explicit the semantics of web documents. Ontologies have been developed in the area of knowledge-based systems for structuring and reusing large bodies of knowledge (cf. CYC [Lenat, 1995], (KA)2 [Benjamins et al., 1998]). Ontologies are consensual and formal specificat...", + "neighbors": [ + 188, + 835 + ], + "mask": "Test" + }, + { + "node_id": 240, + "label": 4, + "text": "A Pattern-Supported Approach to the User Interface Design Process Patterns describe generic solutions to common problems in context. Originating from the world of architecture, patterns have been used mostly in object-oriented programming and data analysis. The goal of HCI patterns is to create an inventory of solutions to help designers (and usability engineers) to resolve UI development problems that are common, difficult and frequently encountered. 
In this paper, we present our pattern-supported approach to user interface design in the context of information visualization. Using a concrete example from the telecommunications domain, we will focus on a task/subtask pattern to illustrate how knowledge about a task and an appropriate interaction design solution can be captured and communicated. 1", "neighbors": [ 15, 287 ], "mask": "Train" }, { "node_id": 241, "label": 2, "text": "Automatic Discovery of Language Models for Text Databases The proliferation of text databases within large organizations and on the Internet makes it difficult for a person to know which databases to search. Given language models that describe the contents of each database, a database selection algorithm such as GlOSS can provide assistance by automatically selecting appropriate databases for an information need. Current practice is that each database provides its language model upon request, but this cooperative approach has important limitations. This paper demonstrates that cooperation is not required. Instead, the database selection service can construct its own language models by sampling database contents via the normal process of running queries and retrieving documents. Although random sampling is not possible, it can be approximated with carefully selected queries. This sampling approach avoids the limitations that characterize the cooperative approach, and also enables additional capabilities. Experimental results demonstrate th...", "neighbors": [ 115, 433, 502, 510, 526, 579, 599, 627, 792, 1253 ], "mask": "Train" }, { "node_id": 242, "label": 2, "text": "Combining Labeled and Unlabeled Data for Text Classification With a Large Number of Categories A major concern with supervised learning techniques for text classification is that they often require a large number of labeled examples to learn accurately. One way to reduce the amount of labeled data required is to develop algorithms that can learn effectively from a small number of labeled examples augmented with a large number of unlabeled examples. In this paper, we develop a framework to incorporate unlabeled data in the Error-Correcting Output Coding (ECOC) setup by decomposing multiclass problems into multiple binary problems and then use Co-Training to learn the individual binary classification problems. We show that our method is especially useful for classification tasks involving a large number of categories where Co-training doesn't perform very well by itself and, when combined with ECOC, outperforms several other algorithms that combine labeled and unlabeled data for text classification. 1", "neighbors": [ 439, 609, 1133, 1153 ], "mask": "Train" }, { "node_id": 243, "label": 2, "text": "Hierarchical Wrapper Induction for Semistructured Information Sources With the tremendous amount of information that becomes available on the Web on a daily basis, the ability to quickly develop information agents has become a crucial problem. A vital component of any Web-based information agent is a set of wrappers that can extract the relevant data from semistructured information sources. Our novel approach to wrapper induction is based on the idea of hierarchical information extraction, which turns the hard problem of extracting data from an arbitrarily complex document into a series of simpler extraction tasks.
We introduce an inductive algorithm, stalker, that generates high accuracy extraction rules based on user-labeled training examples. Labeling the training data represents the major bottleneck in using wrapper induction techniques, and our experimental results show that stalker requires up to two orders of magnitude fewer examples than other algorithms. Furthermore, stalker can wrap information sources that could not be wrapped by existing inductive techniques.", "neighbors": [ 169, 855, 859 ], "mask": "Train" }, { "node_id": 244, "label": 1, "text": "Vision-Guided Navigation Using SHOSLIF. This paper presents an unconventional approach to vision-guided autonomous navigation. The system recalls information about scenes and navigational experience using content-based retrieval from a visual database. To achieve a high applicability to various road types, we do not impose a priori scene features, such as road edges, that the system must use. Rather, the system automatically derives features from images during supervised learning. To accomplish this, the system uses principal component analysis and linear discriminant analysis to automatically derive the most expressive features (MEF) for scene reconstruction or the most discriminating features (MDF) for scene classification. These features best describe or classify the population of the scenes and approximate complex decision regions using piecewise linear boundaries up to a desired accuracy. A new self-organizing scheme called recursive partition tree (RPT) is used for automatic construction of a vision-and-control da...", "neighbors": [ 166, 652 ], "mask": "Train" }, { "node_id": 245, "label": 4, "text": "Learning and Tracking Human Motion Using Functional Analysis We present a method for the modeling and tracking of human motion using a sequence of 2D video images. Our analysis is divided into two parts: statistical learning and Bayesian tracking. First, we estimate a statistical model of typical activities from a large set of 3D human motion data. For this purpose, the human body is represented as a set of articulated cylinders and the evolution of a particular joint angle is described by a time-series. Specifically, we consider periodic motion such as \u201cwalking\u201d in this work, and we develop a new set of tools that allows for the automatic segmentation of the training data into a sequence of identical \u201cmotion cycles\u201d. Then we compute the mean and the principal components of these cycles using a new algorithm to account for missing information and to enforce smooth transitions between different cycles. The learned temporal model provides a prior probability distribution over human motions which is used for tracking. We adopt a Bayesian perspective and approximate the posterior distribution of the body parameters using a particle filter. The resulting algorithm is able to track human subjects in monocular video sequences and to recover their 3D motion in complex unknown environments. 1", "neighbors": [ 136 ], "mask": "Validation" }, { "node_id": 246, "label": 0, "text": "Engineering Mobile-agent Applications via Context-dependent Coordination The design and development of Internet applications can take advantage of a paradigm based on autonomous and mobile agents. However, mobility introduces peculiar coordination problems in agent-based Internet applications. First, it suggests the exploitation of an infrastructure based on a multiplicity of local interaction spaces.
Second, it may require coordination activities to be adapted both to the characteristics of the execution environment where they occur and to the needs of the application to which the coordinating agents belong. In this context, this paper introduces the concept of context-dependent coordination based on programmable interaction spaces. On the one hand, interaction spaces associated with different execution environments may be independently programmed so as to lead to differentiated, environment-dependent behaviors. On the other hand, agents can program the interaction spaces of the visited execution environments to obtain an application-dependent behavior of the interaction spaces themselves. Several examples show how an infrastructure enforcing context-dependent coordination can be effectively exploited to simplify and make more modular the design of Internet applications based on mobile agents. In addition, the MARS coordination infrastructure is presented as an example of a system in which the concept of context-dependent coordination has found a clean and efficient implementation.", "neighbors": [ 119, 269, 421, 1065 ], "mask": "Train" }, { "node_id": 247, "label": 2, "text": "The Intelligent Surfer: Probabilistic Combination of Link and Content Information in PageRank The PageRank algorithm, used in the Google search engine, greatly improves the results of Web search by taking into account the link structure of the Web. PageRank assigns to a page a score proportional to the number of times a random surfer would visit that page, if it surfed indefinitely from page to page, following all outlinks from a page with equal probability. We propose to improve PageRank by using a more intelligent surfer, one that is guided by a probabilistic model of the relevance of a page to a query. Efficient execution of our algorithm at query time is made possible by precomputing at crawl time (and thus once for all queries) the necessary terms. Experiments on two large subsets of the Web indicate that our algorithm significantly outperforms PageRank in the (human-rated) quality of the pages returned, while remaining efficient enough to be used in today's large search engines.", "neighbors": [ 216, 427, 774, 1000, 1017, 1189 ], "mask": "Validation" }, { "node_id": 248, "label": 3, "text": "Recent Advances and Research Problems in Data Warehousing. In recent years, the database community has witnessed the emergence of a new technology, namely data warehousing. A data warehouse is a global repository that stores pre-processed queries on data which resides in multiple, possibly heterogeneous, operational or legacy sources. The information stored in the data warehouse can be easily and efficiently accessed for making effective decisions. The On-Line Analytical Processing (OLAP) tools access data from the data warehouse for complex data analysis, such as multidimensional data analysis, and decision support activities. Current research has led to new developments in all aspects of data warehousing; however, there are still a number of problems that need to be solved for making data warehousing effective. In this paper, we discuss recent developments in data warehouse modelling, view maintenance, and parallel query processing.
A number of technical issues for exploratory research are presented and possible solutions are discusse...", "neighbors": [ 7 ], "mask": "Train" }, { "node_id": 249, "label": 0, "text": "Learning Environmental Features for Pose Estimation We present a method for learning a set of environmental features which are useful for pose estimation. The landmark learning mechanism is designed to be applicable to a wide range of environments, and generalized for different sensing modalities. In the context of computer vision, each landmark is detected as a local extremum of a measure of distinctiveness and represented by an appearance-based encoding which is exploited for matching. The set of obtained landmarks can be parameterized and then evaluated in terms of their utility for the task at hand. The method is used to motivate a general approach to task-oriented sensor fusion. We present experimental evidence that demonstrates the utility of the method. 1 Introduction In this paper, we develop an approach to sensor-based robot localization by learning a set of recognizable features in the robot's environment. In particular, we consider the problem of learning a set of image-domain landmarks from a set of different views of a scene. ...", "neighbors": [], "mask": "Train" }, { "node_id": 250, "label": 0, "text": "Team Formation by Self-Interested Mobile Agents. A process of team formation by autonomous agents in a distributed environment is presented. Since the environment is distributed, there are serious problems with communication and consistent decision making inside a team. To deal with these problems, the standard technique of token passing in a computer network is applied. The passing cycle of the token serves as the communication route. It assures consistent decision making inside the team, maintaining its organizational integrity. On the other hand it constitutes a component of the plan of the cooperative work performed by a complete team. Two algorithms for team formation are given. The first one is based on simple self-interested agents that still can be viewed as reactive agents (see [14]) although augmented with knowledge, goal, and cooperation mechanisms. The second one is based on sophisticated self-interested agents. Moreover, the algorithm based on fully cooperative agents, which is an adaptation of the static ...", "neighbors": [ 724, 1156, 1266 ], "mask": "Train" }, { "node_id": 251, "label": 1, "text": "Data Mining At The Interface Of Computer Science And Statistics This chapter is written for computer scientists, engineers, mathematicians, and scientists who wish to gain a better understanding of the role of statistical thinking in modern data mining. Data mining has attracted considerable attention both in the research and commercial arenas in recent years, involving the application of a variety of techniques from both computer science and statistics. The chapter discusses how computer scientists and statisticians approach data from different but complementary viewpoints and highlights the fundamental differences between statistical and computational views of data mining. In doing so we review the historical importance of statistical contributions to machine learning and data mining, including neural networks, graphical models, and flexible predictive modeling.
The primary conclusion is that closer integration of computational methods with statistical thinking is likely to become increasingly important in data mining applications. Keywords: Data mining, statistics, pattern recognition, transaction data, correlation. 1.", "neighbors": [ 443, 614 ], "mask": "Train" }, { "node_id": 252, "label": 0, "text": "On the Robustness of some Cryptographic Protocols for Mobile Agent Protection Mobile agent security is still a young discipline and, most naturally, the focus up to the time of writing was on inventing new cryptographic protocols for securing various aspects of mobile agents. However, past experience shows that protocols can be flawed, and flaws in protocols can remain unnoticed for a long period of time. The game of breaking and fixing protocols is a necessary evolutionary process that leads to a better understanding of the underlying problems and ultimately to more robust and secure systems. Although, to the best of our knowledge, little work has been published on breaking protocols for mobile agents, it is inconceivable that the multitude of protocols proposed so far are all flawless. As it turns out, the opposite is true. We identify flaws in protocols proposed by Corradi et al., Karjoth et al., and Karnik et al., including protocols based on secure co-processors.", "neighbors": [ 593 ], "mask": "Train" }, { "node_id": 253, "label": 3, "text": "Indexing the Distance: An Efficient Method to KNN Processing In this paper, we present an efficient method, called iDistance, for K-nearest neighbor (KNN) search in a high-dimensional space. iDistance partitions the data and selects a reference point for each partition. The data in each cluster are transformed into a single dimensional space based on their similarity with respect to a reference point. This allows the points to be indexed using a B+-tree structure and KNN search to be performed using one-dimensional range search. The choice of partition and reference point provides the iDistance technique with degrees of freedom most other techniques do not have. We describe how appropriate choices here can effectively adapt the index structure to the data distribution. We conducted extensive experiments to evaluate the iDistance technique, and report results demonstrating its effectiveness.", "neighbors": [], "mask": "Train" }, { "node_id": 254, "label": 4, "text": "A Practical Approach for Recovery of Evicted Variables SRC\u2019s charter is to advance the state of the art in computer systems by doing basic and applied research in support of our company\u2019s business objectives. Our interests and projects span scalable systems (including hardware, networking, distributed systems, and programming-language technology), the Internet (including the Web, e-commerce, and information retrieval), and human/computer interaction (including user-interface technology, computer-based appliances, and mobile computing). SRC was established in 1984 by Digital Equipment Corporation. We test the value of our ideas by building hardware and software prototypes and assessing their utility in realistic settings. Interesting systems are too complex to be evaluated solely in the abstract; practical use enables us to investigate their properties in depth. This experience is useful in the short term in refining our designs and invaluable in the long term in advancing our knowledge.
Most of the major advances in information systems have come through this approach, including personal computing, distributed systems, and the Internet. We also perform complementary work of a more mathematical character. Some of", "neighbors": [], "mask": "Train" }, { "node_id": 255, "label": 2, "text": "Target Seeking Crawlers and their Topical Performance Topic driven crawlers can complement search engines by targeting relevant portions of the Web. A topic driven crawler must exploit the information available about the topic and its underlying context. In this paper we extend our previous research on the design and evaluation of topic driven crawlers by comparing seven different crawlers on a harder problem, namely, seeking highly relevant target pages. We find that exploration is an important aspect of a crawling strategy. We also study how the performance of crawler strategies depends on a number of topical characteristics based on notions of topic generality, cohesiveness, and authoritativeness. Our results reveal that topic generality is an obstacle for most crawlers, that three crawlers tend to perform better when the target pages are clustered together, and that two of these also display better performance when topic targets are highly authoritative.", "neighbors": [ 281, 662, 774, 1017 ], "mask": "Train" }, { "node_id": 256, "label": 4, "text": "Nexus - An Open Global Infrastructure for Spatial-Aware Applications Due to the lack of a generic platform for location- and spatial-aware systems, many basic services have to be reimplemented in each application that uses spatial-awareness. A cooperation among different applications is also difficult to achieve without a common platform. In this paper we present a platform that solves these problems. It provides an infrastructure that is based on computer models of regions of the physical world, which are augmented by virtual objects. We show how virtual objects make the integration of existing information systems and services in spatial-aware systems easier. Furthermore, our platform supports interactions between the computer models and the real world and integrates single models in a global \"Augmented World\". Contents: 1 Introduction; 2 General Idea (2.1 Augmented Areas, 2.2 Augmented World); 3 Example Scenario; 4 Require...", "neighbors": [ 438 ], "mask": "Test" }, { "node_id": 257, "label": 0, "text": "Compositional Design and Reuse of a Generic Agent Model This paper introduces a formally specified design of a compositional generic agent model (GAM). This agent model abstracts from specific application domains; it provides a unified formal definition of a model for weak agenthood. It can be (re)used as a template or pattern for a large variety of agent types and application domain types. The model was designed on the basis of experiences in a number of application domains. The compositional development method DESIRE was used to design the agent model GAM at a conceptual and logical level. It serves as a unified, precisely defined conceptual structure which can be refined by specialisation and instantiation to a large variety of other, more specific agents. To illustrate reuse of this agent model, specialisation and instantiation to model co-operative information gathering agents is described in depth.
Moreover, it is shown how GAM can be used to describe in a unified and hence more comparable manner a large number of agent architectures from the literature.", "neighbors": [ 625, 942 ], "mask": "Train" }, { "node_id": 258, "label": 2, "text": "JRes: A Resource Accounting Interface for Java With the spread of the Internet the computing model on server systems is undergoing several important changes. Recent research ideas concerning dynamic operating system extensibility are finding their way into the commercial domain, resulting in designs of extensible databases and Web servers. In addition, both ordinary users and service providers must deal with untrusted downloadable executable code of unknown origin and intentions. Across the board, Java has emerged as the language of choice for Internet-oriented software. We argue that, in order to realize its full potential in applications dealing with untrusted code, Java needs a flexible resource accounting interface. The design and prototype implementation of such an interface --- JRes --- is presented in this paper. The interface allows accounting for heap memory, CPU time, and network resources consumed by individual threads or groups of threads. JRes allows limits to be set on resources available to threads and it can invoke...", "neighbors": [ 465, 800 ], "mask": "Test" }, { "node_id": 259, "label": 0, "text": "An Agent Based Framework for Mobile Users User mobility together with easy access to distributed resources is one of the greatest challenges to be faced in the coming years. At the same time, agent technology is seen as a very promising approach to deal with distributed computing and user mobility. In this paper an agent-based strategy for support of mobile users is presented. It is based on a mobile agent platform developed at the University of Catania, which has been enhanced in order to allow the user to access network services in a mobile environment. Main functionalities and architecture of the above platform are described. 1 Introduction The quick expansion of wireless communication technologies and of portable computing devices has made mobile computing more and more important. The user wishes to access the information he/she needs at any moment, independently of the place where he/she is. The ever increasing computing power available in notebooks makes them a valid working tool for the user who needs to move from ...", "neighbors": [ 317 ], "mask": "Train" }, { "node_id": 260, "label": 4, "text": "System Lag Tests for Augmented and Virtual Environments We describe a simple technique for accurately calibrating the temporal lag in augmented and virtual environments within the Enhanced Virtual Hand Lab (EVHL), a collection of hardware and software to support research on goal-directed human hand motion. Lag is the sum of various delays in the data pipeline associated with sensing, processing, and displaying information from the physical world to produce an augmented or virtual world. Our main calibration technique uses a modified phonograph turntable to provide easily tracked periodic motion, reminiscent of the pendulum-based calibration technique of Liang, Shaw and Green. Measurements show a three-frame (50 ms) lag for the EVHL. A second technique, which uses a specialized analog sensor that is part of the EVHL, provides a \"closed loop\" calibration capable of sub-frame accuracy.
Knowing the lag to sub-frame accuracy enables a predictive tracking scheme to compensate for the end-to-end lag in the data pipeline. We describe both techniques and the EVHL environment in which they are used.", "neighbors": [ 444 ], "mask": "Train" }, { "node_id": 261, "label": 0, "text": "MAPWEB: Cooperation between Planning Agents and Web Agents This paper presents MAPWeb (MultiAgent Planning in the Web), a multiagent system for cooperative work among different intelligent software agents whose main goal is to solve user planning problems using the information stored in the World Wide Web (Web). MAPWeb is made of a heterogeneous mixture of intelligent agents whose main characteristics are cooperation, reasoning, and knowledge sharing. The architecture of MAPWeb uses four types of agents: UserAgents that are the bridge between the users and the system; ControlAgents (Manager and Coach Agents) that are responsible for managing the rest of the agents; PlannerAgents that are able to solve planning problems; and finally WebAgents whose aim is to retrieve, represent and share information obtained from the Web. MAPWeb solves planning problems by means of cooperation between PlannerAgents and WebAgents. Instead of having the PlannerAgent solve the whole planning problem, the PlannerAgent focuses on a less restricted (and therefore easier to solve) problem (what we call an abstract problem) and cooperates with the WebAgents to validate and complete abstract solutions. In order for cooperation to take place, a common language and data structures have also been defined. Categories and Subject Descriptors H.3.5 [Online Information Services]: Data sharing, Web-based services; I.2 [Artificial Intelligence]; I.2.6 [Learning]: Knowledge acquisition; I.2.8 [Problem Solving]: Planning; I.2.11 [Distributed Artificial Intelligence]: Intelligent agents, Multi-Agent Systems, Web agents Keywords Information System, Agent Architecture, Multi-Agent Systems, Web Agents, Intelligent Agents, Planning.", "neighbors": [ 227 ], "mask": "Test" }, { "node_id": 262, "label": 5, "text": "Audio-Visual Speaker Detection using Dynamic Bayesian Networks The development of human-computer interfaces poses a challenging problem: actions and intentions of different users have to be inferred from sequences of noisy and ambiguous sensory data. Temporal fusion of multiple sensors can be efficiently formulated using dynamic Bayesian networks (DBNs). The DBN framework allows the power of statistical inference and learning to be combined with contextual knowledge of the problem. We demonstrate the use of DBNs in tackling the problem of audio/visual speaker detection. \"Off-the-shelf\" visual and audio sensors (face, skin, texture, mouth motion, and silence detectors) are optimally fused along with contextual information in a DBN architecture that infers instances when an individual is speaking. Results obtained in the setup of an actual human-machine interaction system (Genie Casino Kiosk) demonstrate the superiority of our approach over that of a static, context-free fusion architecture. 1. Introduction Advanced human-computer interfaces increasingly r...", "neighbors": [], "mask": "Train" }, { "node_id": 263, "label": 0, "text": "An approach to using degrees of belief in BDI agents: The past few years have seen a rise in the popularity of the use of mentalistic attitudes such as beliefs, desires and intentions to describe intelligent agents.
Many of the models which formalise such attitudes do not admit degrees of belief, desire and intention. We see this as an understandable simplification, but as a simplification which means that the resulting systems cannot take account of much of the useful information which helps to guide human reasoning about the world. This paper starts to develop a more sophisticated system based upon an existing formal model of these mental attributes. 1 Introduction In the past few years there has been a lot of attention given to building formal models of autonomous software agents; pieces of software which operate to some extent independently of human intervention and which therefore may be considered to have their own goals and the ability to determine how to achieve those goals. Many of these formal models are based on the use of ...", "neighbors": [ 557, 852, 953, 964, 1051 ], "mask": "Train" }, { "node_id": 264, "label": 0, "text": "Model Checking Agent UML Protocol Diagrams Agents in multiagent systems use protocols in order to exchange messages and to coordinate together. Since agents and objects are not exactly the same, designers do not directly use the communication protocols used in distributed systems but a new type called interaction protocols, encompassing agent features such as richer messages and the ability to cooperate and to coordinate. Obviously, designers consider formal description techniques used for communication protocols. New graphical modeling languages based on UML appeared several years ago. Agent UML is certainly the best known. Until now, no validation has been given for Agent UML. The aim of this paper is to present how to model check Agent UML protocol diagrams.", "neighbors": [ 522, 619, 874 ], "mask": "Train" }, { "node_id": 265, "label": 0, "text": "Some Considerations about Embodied Agents As computers are becoming more and more part of our world, we feel the urgent need for proper user interfaces to interact with. The use of command lines typed on a keyboard is more and more obsolete, especially as computers are receiving so much attention from a large audience. The metaphor of face-to-face communication applied to human-computer interaction is attracting a lot of interest. Humans are used, from the moment they are born, to communicating with others. Seeing faces, interpreting their expressions, and understanding speech are all part of our development and growth. But face-to-face conversation is very complex, as it involves a huge number of factors. We speak with our voice, but also with our hands, eyes, face and body. Our gestures modify, emphasize, or contradict what we say in words. The production of speech and nonverbal behaviors works in parallel and not in antithesis. They seem to be two different forms (voice and body gestures) of the same process (speech). They add info", "neighbors": [ 1191, 1267 ], "mask": "Train" }, { "node_id": 266, "label": 1, "text": "Rewriting Logic as a Metalogical Framework A metalogical framework is a logic with an associated methodology that is used to represent other logics and to reason about their metalogical properties. We propose that logical frameworks can be good metalogical frameworks when their logics support reflective reasoning and their theories always have initial models. We present a concrete realization of this idea in rewriting logic. Theories in rewriting logic always have initial models and this logic supports reflective reasoning.
This implies that inductive reasoning is valid when proving properties about the initial models of theories in rewriting logic, and that we can use reflection to reason at the metalevel about these properties. In fact, we can uniformly reflect induction principles for proving metatheorems about rewriting logic theories and their parameterized extensions. We show that this reflective methodology provides an effective framework for different, non-trivial, kinds of formal metatheoretic reasoning; one can, for examp...", "neighbors": [], "mask": "Validation" }, { "node_id": 267, "label": 0, "text": "How Emotions and Personality Effect the Utility of Alternative Decisions: A Terrorist Target Selection Case Study The role of emotion modeling in the development of computerized agents has long been unclear. This is partially due to instability in the philosophical issues of the problem as psychologists struggle to build models for their own purposes, and partially due to the often-wide gap between these theories and that which can be implemented by an agent author. This paper describes an effort to use emotion models in part as a deep model of utility for use in decision theoretic agents. This allows for the creation of simulated forces capable of balancing a great many competing goals, and in doing so they behave, for better or for worse, in a more realistic manner.", "neighbors": [ 208 ], "mask": "Train" }, { "node_id": 268, "label": 3, "text": "qRTDB: QoS-Sensitive Real-Time Database Introduction Recently, the demand for real-time database services has been exploding. The applications requiring such services include sensor data fusion, decision support, web information service, e-commerce, online trading, and data-intensive smart space applications. Furthermore, the information system is being globalized due to the fast growth of the Internet. Despite the importance and wide applicability, the performance and predictability of a database system, the core component of global information systems, are relatively limited compared to other real-time systems such as real-time operating systems. It cannot be easily replicated due to the consistency problem. In addition, the database system has relatively low predictability compared to other real-time systems due to data dependence of the transaction execution, data and resource conflicts, dynamic paging and I/O, and transaction aborts and the resulting rollbacks and restarts [36]. Because of the limited perfo", "neighbors": [], "mask": "Train" }, { "node_id": 269, "label": 0, "text": "LIME: Linda Meets Mobility Lime is a system designed to assist in the rapid development of dependable mobile applications over both wired and ad hoc networks. Mobile agents reside on mobile hosts and all communication takes place via transiently shared tuple spaces distributed across the mobile hosts. The decoupled style of computing characterizing the Linda model is extended to the mobile environment. At the application level, both agents and hosts perceive movement as a sudden change of context. The set of tuples accessible by a particular agent residing on a given host is altered transparently in response to changes in the connectivity pattern among the mobile hosts. In this paper we present the key design concepts behind the Lime system. 1 INTRODUCTION Today's users demand ubiquitous network access independent of their physical location.
This style of computation, often referred to as mobile computing, is enabled by rapid advances in wireless communication technology. The networking scenarios enabled ...", "neighbors": [ 47, 246, 318, 757, 880, 1065 ], "mask": "Train" }, { "node_id": 270, "label": 2, "text": "Evolution of the Walden's Paths Authoring Tools: Changing user skills, available infrastructure, and work practices have caused many differences in the authoring support provided by the Walden's Paths project since its conception. In this paper we trace these changes and the transition from the earlier authoring tools that supported an integrated authoring process, to the more recent tools designed to work with the Web applications that teachers have become accustomed to. 1. Introduction Hypertext has come a long way from being found only in research systems to being a part of our everyday lives in the form of the World-Wide Web (WWW or the Web). We use the Web for browsing academic information, for furthering business interests, for entertainment and a variety of other purposes. There is an immense amount of information on the Web that can be used for a variety of reasons. Web-based information can be harnessed to supplement classroom teaching for K-12 students. K-12 teachers can use Web-based information in the curriculum t...", "neighbors": [], "mask": "Validation" }, { "node_id": 271, "label": 2, "text": "Finding the Most Similar Documents across Multiple Text Databases In this paper, we present a methodology for finding the n most similar documents across multiple text databases for any given query and for any positive integer n. This methodology consists of two steps. First, databases are ranked in a certain order. Next, documents are retrieved from the databases according to the order and in a particular way. If the databases containing the n most similar documents for a given query can be ranked ahead of other databases, the methodology will guarantee the retrieval of the n most similar documents for the query. A statistical method is provided to identify databases, each of which is estimated to contain at least one of the n most similar documents. Then, a number of strategies are presented to retrieve documents from the identified databases. Experimental results are given to illustrate the relative performance of different strategies. 1 Introduction The Internet has become a vast information source in recent years and can be considered as the w...", "neighbors": [ 224, 433, 435, 477, 502, 510, 526, 696, 792, 897, 931, 1124, 1165, 1253 ], "mask": "Train" }, { "node_id": 272, "label": 5, "text": "Learning to Perceive the World as Articulated: An Approach for Hierarchical Learning in Sensory-Motor Systems This paper describes how agents can learn an internal model of the world structurally by focusing on the problem of behavior-based articulation. We develop an on-line learning scheme -- the so-called mixture of recurrent neural net (RNN) experts -- in which a set of RNN modules becomes self-organized as experts on multiple levels in order to account for the different categories of sensory-motor flow which the robot experiences. Autonomous switching of activated modules in the lower level actually represents the articulation of the sensory-motor flow.
Meanwhile, a set of RNNs in the higher level competes to learn the sequences of module switching in the lower level, by which articulation at an even more abstract level can be achieved. The proposed scheme was examined through simulation experiments involving the navigation learning problem. Our dynamical systems analysis clarified the mechanism of the articulation; the possible correspondence between the articulation...", "neighbors": [], "mask": "Validation" }, { "node_id": 273, "label": 0, "text": "Integrating Mobile Agents into Off-the-Shelf Web Servers: The M&M Approach The mobile agent paradigm provides a new approach for developing distributed systems. During the last two years, we have been working on a project that tries to overcome some of the limitations found in terms of programmability and usability of the mobile agent paradigm in real applications. In the M&M framework there are no agent platforms. Instead, applications become agent-enabled by using simple JavaBeans components. In our approach the agents arrive at and depart directly from the applications, interacting with them from the inside.", "neighbors": [ 873, 909 ], "mask": "Test" }, { "node_id": 274, "label": 2, "text": "Flexible Queries to Semi-structured Datasources: the WG-log Approach A line of research is presented aimed at specifying both logical and navigational aspects of semi-structured data sources such as Web sites through the unifying notion of schema. Gracefully supporting schemata that are huge or subject to change, the WG-Log language allows for a uniform representation of queries and views, the latter expressing customized access structures to site information. A survey of related work and some directions for future research involving fuzzy query techniques are also outlined. 1 Introduction and Motivations Modern network-oriented information systems often have to deal with data that are semi-structured, i.e. lack the strict, regular, and complete structure required by traditional database management systems (see [Abi97] and [Suc97] for a survey on semi-structured data and related research). Information is semi-structured also when the structure of data varies w.r.t. time, rather than w.r.t. space: even if data is fairly well structured, such struc...", "neighbors": [], "mask": "Validation" }, { "node_id": 275, "label": 3, "text": "Flow Analysis for Verifying Specifications of Concurrent and Distributed Software This paper presents FLAVERS, a finite state verification approach that analyzes whether concurrent or sequential programs satisfy user-defined correctness properties. In contrast to other finite-state verification techniques, FLAVERS is based on algorithms with low-order polynomial bounds on the running time. FLAVERS achieves this efficiency at the cost of precision. Users, however, can improve the precision of the results by selectively and judiciously incorporating additional semantic information into the analysis problem. The FLAVERS analysis approach has been implemented for programs written in Ada. We report on an empirical study of the performance of applying the FLAVERS/Ada tool set to a collection of multi-tasking Ada programs. This study indicates that sufficient precision for proving program properties can be achieved and that the cost for such analysis grows as a low-order polynomial in the size of the program.
1 Introduction The application of distributed and concurrent pr...", "neighbors": [], "mask": "Train" }, { "node_id": 276, "label": 4, "text": "Audio Driven Facial Animation For Audio-Visual Reality In this paper, we demonstrate a morphing-based, automated, audio-driven facial animation system. Based on an incoming audio stream, a face image is animated with full lip synchronization and expression. An animation sequence using optical flow between visemes is constructed, given an incoming audio stream and still pictures of a face speaking different visemes. Rules are formulated based on coarticulation and the duration of a viseme to control the continuity in terms of shape and extent of lip opening. In addition to this, new viseme-expression combinations are synthesized to be able to generate animations with new facial expressions. Finally, various applications of this system are discussed in the context of creating audio-visual reality.", "neighbors": [ 898 ], "mask": "Train" }, { "node_id": 277, "label": 0, "text": "Meeting Plan Recognition Requirements for Real-Time Air-Mission Simulations In this paper, the potential synergy between instance-based pattern recognition and means-end (possible world) reasoning is explored, for supporting plan recognition in multi-aeroplane air-mission simulations. A combination of graph matching, induction, probabilistic principles and dynamic programming is applied to traces of aeroplane behaviour during flight manoeuvres. These satisfy the real-time constraints of the simulation. This enables the agents to recognise what other agents are doing and to abstract about their activity, at the instrumentation level. A means-end-reasoning model is then used to deliberate about and invoke standard operating procedures, based on recognised activity. The reasoning model constrains the recognition process by framing queries according to what a pilot would expect during the execution of the current plan(s). Results from experiments involving the dMARS procedural reasoning system and the CLARET pattern matching and induction system are described for ...", "neighbors": [ 223, 405, 964 ], "mask": "Test" }, { "node_id": 278, "label": 3, "text": "Alternative Correctness Criteria for Multiversion Concurrency Control and a Locking Protocol via Freezing Concurrency control protocols based on multiversions have been used in some commercial transaction processing systems in order to provide serializable executions of transactions. In the existing protocols, transactions are allowed to read only the most recent version of each data item in order to ensure the correct execution of transactions. However, this feature is not desirable in some advanced database systems which have more requirements, such as timing or security constraints, besides serializability. In this paper, we propose a new correctness criterion, called F-serializability, for multiversion concurrency control protocols. It is the extended definition of `1-serial' and relaxes the condition so that a protocol provides not only the most recent version but also the previous one to transactions, if necessary. We prove that whenever a multiversion schedule is F-serializable, the schedule is also one-copy serializable. This is the first contribution of our paper.
Next, we propos...", "neighbors": [ 470 ], "mask": "Train" }, { "node_id": 279, "label": 1, "text": "Learning Hidden Markov Model Structure for Information Extraction Statistical machine learning techniques, while well proven in fields such as speech recognition, are just beginning to be applied to the information extraction domain. We explore the use of hidden Markov models for information extraction tasks, specifically focusing on how to learn model structure from data and how to make the best use of labeled and unlabeled data. We show that a manually-constructed model that contains multiple states per extraction field outperforms a model with one state per field, and discuss strategies for learning the model structure automatically from data. We also demonstrate that the use of distantly-labeled data to set model parameters provides a significant improvement in extraction accuracy. Our models are applied to the task of extracting important fields from the headers of computer science research papers, and achieve an extraction accuracy of 92.9%. Introduction Hidden Markov modeling is a powerful statistical machine learning technique that is just ...", "neighbors": [ 496, 570, 643, 875, 956, 1122, 1232 ], "mask": "Validation" }, { "node_id": 280, "label": 2, "text": "Data Mining on Symbolic Knowledge Extracted from the Web Information extractors and classifiers operating on unrestricted, unstructured texts are an errorful source of large amounts of potentially useful information, especially when combined with a crawler which automatically augments the knowledge base from the world-wide web. At the same time, there is much structured information on the World Wide Web. Wrapping the web-sites which provide this kind of information provides us with a second source of information; possibly less up-to-date, but reliable as facts. We give a case study of combining information from these two kinds of sources in the context of learning facts about companies. We provide results of association rules, propositional and relational learning, which demonstrate that data-mining can help us improve our extractors, and that using information from two kinds of sources improves the reliability of data-mined rules. 1. INTRODUCTION The World Wide Web has become a significant source of information. Most of this computer-retri...", "neighbors": [ 133, 347 ], "mask": "Train" }, { "node_id": 281, "label": 2, "text": "Evaluating Topic-Driven Web Crawlers Due to limited bandwidth, storage, and computational resources, and to the dynamic nature of the Web, search engines cannot index every Web page, and even the covered portion of the Web cannot be monitored continuously for changes. Therefore it is essential to develop effective crawling strategies to prioritize the pages to be indexed. The issue is even more important for topic-specific search engines, where crawlers must make additional decisions based on the relevance of visited pages. However, it is difficult to evaluate alternative crawling strategies because relevant sets are unknown and the search space is changing. We propose three different methods to evaluate crawling strategies.
We apply the proposed metrics to compare three topic-driven crawling algorithms based on similarity ranking, link analysis, and adaptive agents.", "neighbors": [ 1, 53, 255, 382, 649, 774, 1005, 1059, 1264 ], "mask": "Train" }, { "node_id": 282, "label": 3, "text": "Scalable Consistency Protocols for Distributed Services Abstract: A common way to address scalability requirements of distributed services is to employ server replication and client caching of objects that encapsulate the service state. The performance of such a system could depend very much on the protocol implemented by the system to maintain consistency among object copies. We explore scalable consistency protocols that never require synchronization and communication between all nodes that have copies of related objects. We achieve this by developing a novel approach called local consistency (LC). LC-based protocols can provide increased flexibility and efficiency by allowing nodes control over how and when they become aware of updates to cached objects. We develop two protocols for implementing strong consistency using this approach and demonstrate that they scale better than a traditional invalidation-based consistency protocol along the system load and geographic distribution dimensions of scale. Index Terms: Scalable services, distributed objects, replication, caching, consistency protocols.", "neighbors": [], "mask": "Train" }, { "node_id": 283, "label": 3, "text": "Query Optimization for Selections using Bitmaps Bitmaps are popular indexes for data warehouse (DW) applications and most database management systems offer them today. This paper proposes query optimization strategies for selections using bitmaps. Both continuous and discrete selection criteria are considered. Query optimization strategies are categorized into static and dynamic. Static optimization strategies discussed are the optimal design of bitmaps, and algorithms based on tree and logical reduction. The dynamic optimization discussed is the approach of inclusion and exclusion for both bit-sliced indexes and encoded bitmap indexes. 1 Introduction Bitmap indexing has become a promising technique for query processing in DWs. Variations of bitmap indexes include bit-sliced indexes [14, 3], encoded bitmap indexes (EBI) [18], bitmapped join indexes [13], range-based bitmap indexes [20], and others [16]. For query operations, such as selections, aggregates, and joins, query evaluation algorithms using bitmaps have been proposed in ...", "neighbors": [], "mask": "Train" }, { "node_id": 284, "label": 0, "text": "Socialware: Multiagent Systems for Supporting Network Communities Figure 1: A general architecture of socialware as a multiagent system. Socialware as Multiagent Systems There are several characteristics specific to network communities, which make a multiagent architecture attractive to use.
The first characteristic is that the participants of a network community are widely distributed and the number of potential participants is large. Hence, no solid, centralized, or monolithic system would be adequate: A distributed system would be required, in which perso...", "neighbors": [], "mask": "Train" }, { "node_id": 285, "label": 0, "text": "Updating Mental States from Communication. In order to perform effective communication, agents must be able to foresee the effects of their utterances on the addressee's mental state. In this paper we investigate the update of the mental state of a hearer agent as a consequence of the utterance performed by a speaker agent. Given an agent communication language with a STRIPS-like semantics, we propose a set of criteria that allow us to bind the speaker's mental state to its uttering of a certain sentence. On the basis of these criteria, we give an abductive procedure that the hearer can adopt to partially recognize the speaker's mental state that led to a specific utterance. This procedure can be adopted by the hearer to update its own mental state and its image of the speaker's mental state. 1 Introduction In multi-agent systems, communication is necessary for the agents to cooperate and coordinate their activities or simply to avoid interfering with one another. If agents are not designed with embedded pre-compiled...", "neighbors": [ 808, 964 ], "mask": "Train" }, { "node_id": 286, "label": 0, "text": "A Conceptual Framework for Agent Definition and Development The use of agents of many different kinds in a variety of fields of computer science and artificial intelligence is increasing rapidly and is due, in part, to their wide applicability. The richness of the agent metaphor that leads to many different uses of the term is, however, both a strength and a weakness: its strength lies in the fact that it can be applied in very many different ways in many situations for different purposes; the weakness is that the term agent is now used so frequently that there is no commonly accepted notion of what it is that constitutes an agent. This paper addresses this issue by applying formal methods to provide a defining framework for agent systems. The Z specification language is used to provide an accessible and unified formal account of agent systems, allowing us to escape from the terminological chaos that surrounds agents. In particular, the framework precisely and unambiguously provides meanings for common concepts and terms, enables alternative models of particular classes of system to be described within it, and provides a foundation for subsequent development of increasingly more refined concepts.", "neighbors": [ 957 ], "mask": "Train" }, { "node_id": 287, "label": 4, "text": "Principles for a Usability-Oriented Pattern Language High-level usability principles (e.g. \"Familiarity\") are difficult to apply to specific projects, and style guides providing more detailed instructions are often misinterpreted and inaccessible. An approach to usability based on design patterns enables designers to learn how certain recurring problems can be solved according to high-level principles. This paper summarises a review of the desirable properties advocated by five popular style guides, and discusses how this list has been modified to provide an underlying philosophy which is appropriate for a usability-oriented pattern language.
A sample pattern which exemplifies this philosophy, involving iteration through selectable objects, is described. KEYWORDS Usability engineering, Design techniques, Style guides. 1. Introduction There has been considerable discussion about how to reconcile the gaps between software engineering (SE) and human-computer interaction (HCI). One of the primary ways to smoothly integrate the disciplines ...", "neighbors": [ 240, 1273 ], "mask": "Test" }, { "node_id": 288, "label": 0, "text": "Controlling Speculative Computation in Multi-Agent Environments In this paper, we propose a multi-agent system which performs speculative computation under incomplete communication environments. In a master-slave style multi-agent system with speculative computation, a master agent sends queries to slave agents in problem solving, and proceeds with computation using default answers when answers from slave agents are delayed. We first provide a semantics for speculative computation using default logic. Then, in the proposed system, we use the consequence-finding procedure SOL, written in the Java language, to perform data-driven deductive reasoning. The use of a consequence-finding procedure is convenient for updating agents' beliefs according to situation changes in the world. In our system, slave agents can change their answers frequently, yet a master agent can avoid duplicate computation. As long as actual answers from slave agents do not conflict with any previously encountered situation, the obtained conclusions are never recomputed. We applied the proposed system to the meeting-room reservation problem to see the usefulness of the framework.", "neighbors": [ 532 ], "mask": "Train" }, { "node_id": 289, "label": 3, "text": "Binary Decision Diagram Representations Of Firewall And Router Access Lists Network firewalls and routers can use a rule database to decide which packets will be allowed from one network onto another. By filtering packets the firewalls and routers can improve security and performance -- by excluding packets which may pose a security risk to a network or are not relevant to it. However, as the size of the rule list increases, it becomes difficult to maintain and validate the rules, and the cost of rule lookup may add significantly to latency. Ordered binary decision diagrams (BDDs) -- a compact method of representing and manipulating boolean expressions -- are a potential method of representing the rules. This paper explores how BDDs can be used to develop methods that help validate rules and changes to them, improve performance, and facilitate hardware support. 1 Introduction The growth of network and internet communication creates several challenges for network design. Two important issues are security and performance. As the volume o...", "neighbors": [], "mask": "Validation" }, { "node_id": 290, "label": 4, "text": "Collision Avoidance and Resolution Multiple Access for Multichannel Wireless Networks The CARMA-NTG protocol is presented and analyzed. CARMA-NTG dynamically divides the channel into cycles of variable length; each cycle consists of a contention period and a group-transmission period.
During the contention period, a station with one or more packets to send competes for the right to be added to the group of stations allowed to transmit data without collisions; this is done using a collision resolution splitting algorithm based on a request-to-send/clear-to-send (RTS/CTS) message exchange with non-persistent carrier sensing. CARMA-NTG ensures that one station is added to the group transmission period if one or more stations send requests to be added in the previous contention period. The group-transmission period is a variable-length train of packets, which are transmitted by stations that have been added to the group by successfully completing an RTS/CTS message exchange in previous contention periods. As long as a station maintains its position in the group, it is able to transmit data packets without collision. An upper bound is derived for the average costs of obtaining the first success in the splitting algorithm. This bound is then applied to the computation of the average channel utilization in a fully connected network with a large number of stations. These results indicate that collision resolution is a powerful mechanism in combination with floor acquisition and group allocation multiple access.", "neighbors": [], "mask": "Train" }, { "node_id": 291, "label": 2, "text": "Centroid-Based Document Classification: Analysis & Experimental Results. In this paper we present a simple linear-time centroid-based document classification algorithm that, despite its simplicity and robust performance, has not been extensively studied and analyzed. Our experiments show that this centroid-based classifier consistently and substantially outperforms other algorithms such as Naive Bayesian, k-nearest-neighbors, and C4.5, on a wide range of datasets. Our analysis shows that the similarity measure used by the centroid-based scheme allows it to classify a new document based on how closely its behavior matches the behavior of the documents belonging to different classes. This matching allows it to dynamically adjust for classes with different densities and accounts for dependencies between the terms in the different classes. 1 Introduction We have seen a tremendous growth in the volume of online text documents available on the Internet, digital libraries, news sources, and company-wide intranets. It has been forecasted that these docu...", "neighbors": [ 228, 235, 545, 726, 1005, 1049 ], "mask": "Test" }, { "node_id": 292, "label": 1, "text": "Learning Languages by Collecting Cases and Tuning Parameters. We investigate the problem of case-based learning of formal languages. Case-based reasoning and learning is a currently booming area of artificial intelligence. The formal framework for case-based learning of languages has recently been developed by [JL93] in an inductive inference manner. In this paper, we first show that any indexed class of recursive languages in which finiteness is decidable is case-based representable, but many classes of languages including the class of all regular languages are not case-based learnable with a fixed universal similarity measure, even if both positive and negative examples are presented. Next we consider a framework of case-based learning where the learning algorithm is allowed to learn similarity measures, too. To avoid trivial encoding tricks, we carefully examine to what extent the similarity measure is going to be learned.
Then, by allowing only a few parameters in the similarity measures to be learned, we show that any indexed class of recursive ...", "neighbors": [ 1259 ], "mask": "Train" }, { "node_id": 293, "label": 0, "text": "Systems Directions for Pervasive Computing Pervasive computing, with its focus on users and their tasks rather than on computing devices and technology, provides an attractive vision for the future of computing. But, while hardware and networking infrastructure to realize this vision are becoming a reality, precious few applications run in this infrastructure. We believe that this lack of applications stems largely from the fact that it is currently too hard to design, build, and deploy applications in the pervasive computing space. In this paper, we argue that existing approaches to distributed computing are flawed along three axes when applied to pervasive computing; we sketch out alternatives that are better suited for this space. First, application data and functionality need to be kept separate, so that they can evolve gracefully in a global computing infrastructure. Second, applications need to be able to acquire any resource they need at any time, so that they can continuously provide their services in a highly dynamic environment. Third, pervasive computing requires a common system platform, allowing applications to be run across the range of devices and to be automatically distributed and installed.", "neighbors": [], "mask": "Train" }, { "node_id": 294, "label": 3, "text": "SI-Designer: an Integration Framework for E-Commerce Electronic commerce lets people purchase goods and exchange information on business transactions on-line. Therefore one of the main challenges for the designers of e-commerce infrastructures is information sharing: retrieving data located in different sources and obtaining an integrated view that overcomes any contradiction or redundancy. Virtual Catalogs synthesize this approach, as they are conceived as instruments to dynamically retrieve information from multiple catalogs and present product data in a unified manner, without directly storing product data from catalogs. In this paper we propose SI-Designer, a support tool for the integration of data from structured and semi-structured data sources, developed within the MOMIS (Mediator environment for Multiple Information Sources) project.", "neighbors": [ 28, 766 ], "mask": "Test" }, { "node_id": 295, "label": 5, "text": "Coastal Navigation - Mobile Robot Navigation with Uncertainty in Dynamic Environments Ships often use the coasts of continents for navigation in the absence of better tools such as GPS, since being close to land allows sailors to determine with high accuracy where they are. Similarly for mobile robots, in many environments global and accurate localization is not always feasible. Environments can lack features, and dynamic obstacles such as people can confuse and block sensors. In this paper, we demonstrate a technique for generating trajectories that take into account both the information content of the environment and the density of the people in the environment. These trajectories reduce the average positional uncertainty as the robot moves, reducing the likelihood the robot will become lost at any point. Our method was successfully implemented and used by the mobile robot Minerva, a museum tourguide robot, for a two-week period in the Smithsonian National Museum of American History.
1 Introduction One essential component of any operational mobile robot system is the ab...", "neighbors": [ 369 ], "mask": "Train" }, { "node_id": 296, "label": 0, "text": "The KRAFT Architecture for Knowledge Fusion and Transformation This paper describes the KRAFT architecture which supports the fusion of knowledge from multiple, distributed, heterogeneous sources. The architecture uses constraints as a common knowledge interchange format, expressed against a common ontology. Knowledge held in local sources can be transformed into the common constraint language, and fused with knowledge from other sources. The fused knowledge is then used to solve some problem or deliver some information to a user. Problem-solving in KRAFT typically exploits pre-existing constraint solvers. KRAFT uses an open and flexible agent architecture in which knowledge sources, knowledge fusing entities, and users are all represented by independent KRAFT agents, communicating using a messaging protocol. Facilitator agents perform matchmaking and brokerage services between the various kinds of agent. KRAFT is being applied to an example application in the domain of network data services design. 1 Introduction and Motivation Most ...", "neighbors": [ 663, 1051, 1190 ], "mask": "Train" }, { "node_id": 297, "label": 0, "text": "Planning and Motion Control in Lifelike Gesture: A Refined Approach In this paper an operational model for the automatic generation of lifelike gestures of an anthropomorphic virtual agent is described. The biologically motivated approach to controlling the movements of a highly articulated figure provides a transformation of spatiotemporal gesture specifications into an analog representation of the movement from which the animations are directly rendered. To this end, knowledge-based computer animation techniques are combined with appropriate methods for trajectory formation and articulated figure animation. 1. Introduction The inclusion of nonverbal modalities into the communicative behaviors of virtual agents has moved into the focus of human-computer interface researchers. Humans are more likely to consider computer-generated figures lifelike when appropriate nonverbal behaviors are displayed in addition to speech. This enables the evocation of social communicative attributions to the artificial agent, which are supposed to be advantageous for a natu...", "neighbors": [ 921 ], "mask": "Train" }, { "node_id": 298, "label": 2, "text": "Integrating Automatic Genre Analysis into Digital Libraries With the number and types of documents in digital library systems increasing, tools for automatically organizing and presenting the content have to be found. While many approaches focus on topic-based organization and structuring, hardly any system incorporates automatic structural analysis and representation. Yet, genre information (unconsciously) forms one of the most distinguishing features in conventional libraries and in information searches. In this paper we present an approach to automatically analyze the structure of documents and to integrate this information into an automatically created content-based organization. In the resulting visualization, documents on similar topics, yet representing different genres, are depicted as books in differing colors. This representation supports users intuitively in locating relevant information presented in a relevant form.
Keywords Genre Analysis, Self-Organizing Map (SOM), SOMLib, Document Clustering, Visualization, Metaphor Graphics", "neighbors": [ 41, 156, 903, 1010 ], "mask": "Validation" }, { "node_id": 299, "label": 0, "text": "Localisation using Automatically Selected Landmarks from Panoramic Images The use of visual landmarks for robot localisation is a promising field. It is apparent that the success of localisation by visual landmarks depends on the landmarks chosen. Good landmarks are those which remain reliable over time and through changes in position and orientation. This paper describes a system which learns places by automatically selecting landmarks from panoramic images and uses them for localisation tasks. An adaptation of the biologically inspired Turn Back and Look behaviour is used to evaluate potential landmarks. Normalised correlation is used to overcome the effects of changes in illumination in the environment. Results from real robot experiments are reported, showing successful localisation from up to one meter away from the learnt position. 1 Introduction Visual localisation is one of the key problems in making successful autonomous robots. Vision as a sensor is the richest source of information about a mobile agent's environment and as such con...", "neighbors": [], "mask": "Test" }, { "node_id": 300, "label": 5, "text": "Adaptation Techniques for Intrusion Detection and Intrusion Response Systems This paper examines techniques for providing adaptation in intrusion detection and intrusion response systems. As attacks on computer systems are becoming increasingly numerous and sophisticated, there is a growing need for intrusion detection and response systems to dynamically adapt to better detect and respond to attacks. The Adaptive Hierarchical Agent-based Intrusion Detection System (AHA! IDS) provides detection adaptation by adjusting the amount of system resources devoted to the task of detecting intrusive activities. This is accomplished by dynamically invoking new combinations of lower-level detection agents in response to changing circumstances and by adjusting the confidence associated with these lower-level agents. The Adaptive Agent-based Intrusion Response System (AAIRS) provides response adaptation by weighting those responses that have been successful in the past over those techniques that have not been as successful. As a result, the more successful responses are used...", "neighbors": [ 547 ], "mask": "Train" }, { "node_id": 301, "label": 1, "text": "Evolutionary Approaches to Off-Line Routing in Backbone Communications Networks Off-line routing in backbone communications networks is an important combinatorial optimisation problem. It has three main uses: first, off-line routing provides reference benchmark results for dynamic (on-line) routing strategies. Second, and more interestingly, off-line routing is becoming more and more investigated and employed in its own right as a way of quickly finding significantly improved routings for live networks which can then be imposed on the network to offer a net improvement in quality of service. Third, it can be used in networks where bandwidth may be booked in advance. In this paper we introduce and investigate a number of heuristic techniques applicable to the routing problem for use in stochastic, iterative search.
Results are presented which indicate that these heuristics significantly improve the search for solutions, particularly when on-line performance is considered. We also investigate how computation time can be further reduced by the use of delta-evaluation...", "neighbors": [ 598 ], "mask": "Train" }, { "node_id": 302, "label": 3, "text": "Optimizing Object Queries Using an Effective Calculus Object-oriented databases (OODBs) provide powerful data abstractions and modeling facilities but they generally lack a suitable framework for query processing and optimization. One of the key factors for OODB systems to successfully compete with relational systems as well as to meet the performance requirements of many non-traditional applications is the development of an effective query optimizer. We propose an effective framework with a solid theoretical basis for optimizing OODB query languages. Our calculus, called the monoid comprehension calculus, captures most features of ODMG OQL and is a good basis for expressing various optimization algorithms concisely. This paper concentrates on query unnesting, an optimization that, even though it improves performance considerably, is not treated properly (if at all) by most OODB systems. Our framework generalizes many unnesting techniques proposed recently in the literature and is capable of removing any form of query nesting using a very si...", "neighbors": [ 397, 493, 1047 ], "mask": "Train" }, { "node_id": 303, "label": 0, "text": "A Unified View of Plans as Recipes Plans as recipes or abstract structures, as well as plans as mental attitudes that guide an agent in its planning process, have been enthusiastically embraced by both philosophers and AI practitioners. They play a central role in a class of rational agents, called Belief-Desire-Intention (BDI) agents. This dual view of plans can not only be used for efficient planning, but can also be used for recognizing the plans of other agents, coordinating one's actions and achieving joint intentions with other members of a larger collective or team, and finally recognizing the collective plans and intentions of other teams. In this paper, we start with a simple notion of execution plans and discuss its operational semantics. We progressively extend this notion of plans to recognition plans, joint execution plans, and joint recognition plans. The primary contribution of this paper is in providing an integrated view of plans that facilitates individual and collective planning and recognition. 1 Int...", "neighbors": [ 223, 816, 839 ], "mask": "Train" }, { "node_id": 304, "label": 3, "text": "Making LDAP Active with the LTAP Gateway: Case Study in Providing Telecom Integration and Enhanced Services LDAP (Lightweight Directory Access Protocol) directories are being rapidly deployed on the Web. They are currently used to store data like white pages information, user profiles, and network device descriptions. These directories offer a number of advantages over current database technology in that they provide better support for heterogeneity and scalability. However, they lack some basic database functionality (e.g., triggers, transactions) that is crucial for Directory Enabled Networking (DEN) tasks like provisioning network services, allocating resources, reporting, managing end-to-end security, and offering mobile users customized features that follow them.
In order to address these limitations while keeping the simplicity and performance features of LDAP directories, unbundled and portable solutions are needed. In this paper we discuss LDAP limitations we faced while building an LDAP meta-directory that integrates data from legacy telecom systems, and how LTAP (Lightweight Trigger Access Process), a portable gateway that adds active functionality to LDAP directories, overcomes these limitations.", "neighbors": [ 711, 752, 1201 ], "mask": "Test" }, { "node_id": 305, "label": 3, "text": "Textural Features for Image Database Retrieval This paper presents two feature extraction methods and two decision methods to retrieve images having some section in them that is like the user input image. The features used are variances of gray level co-occurrences and line-angle-ratio statistics constituted by a 2-D histogram of angles between two intersecting lines and the ratio of mean gray levels inside and outside the regions spanned by those angles. The decision method involves associating with any pair of images either the class “relevant” or “irrelevant”. A Gaussian classifier and a nearest neighbor classifier are used. A protocol that translates a frame throughout every image to automatically define for any pair of images whether they are in the relevance class or the irrelevance class is discussed. Experiments on a database of 300 gray scale images with 9,600 ground-truth image pairs showed that the classifier correctly assigned 80% of the image pairs we were sure were relevant to the relevance class. The actual retrieval accuracy is greater than this lower bound of 80%.", "neighbors": [], "mask": "Train" }, { "node_id": 306, "label": 3, "text": "Indexing Semistructured Data This paper describes techniques for building and exploiting indexes on semistructured data: data that may not have a fixed schema and that may be irregular or incomplete. We first present a general framework for indexing values in the presence of automatic type coercion. Then based on Lore, a DBMS for semistructured data, we introduce four types of indexes and illustrate how they are used during query processing. Our techniques and indexing structures are fully implemented and integrated into the Lore prototype. 1 Introduction We call data that is irregular or that exhibits type and structural heterogeneity semistructured, since it may not conform to a rigid, predefined schema. Such data arises frequently on the Web, or when integrating information from heterogeneous sources. In general, semistructured data can be neither stored nor queried in relational or object-oriented database management systems easily and efficiently. We are developing Lore, a database management system d...", "neighbors": [ 364, 488, 634 ], "mask": "Validation" }, { "node_id": 307, "label": 4, "text": "Visual Contextual Awareness in Wearable Computing Small, body-mounted video cameras enable a different style of wearable computing interface. As processing power increases, a wearable computer can spend more time observing its user to provide serendipitous information, manage interruptions and tasks, and predict future needs without being directly commanded by the user. This paper introduces an assistant for playing the real-space game Patrol. This assistant tracks the wearer's location and current task through computer vision techniques and without off-body infrastructure.
In addition, this paper continues augmented reality research, started in 1995, for binding virtual data to physical locations. 1. Introduction For most computer systems, even virtual reality systems, sensing techniques are a means of getting input directly from the user. However, wearable computers offer a unique opportunity to re-direct sensing technology towards recovering more general user context. Wearable computers have the potential to \"see\" as the user sees...", "neighbors": [ 334, 497, 531, 665, 680, 728, 738, 910, 1006 ], "mask": "Validation" }, { "node_id": 308, "label": 0, "text": "Knowledge Modeling - State of the Art A major characteristic of developments in the broad field of Artificial Intelligence (AI) during the 1990s has been an increasing integration of AI with other disciplines. A number of other computer science fields and technologies have been used in developing intelligent systems, ranging from traditional information systems and databases to modern distributed systems and the Internet. This paper surveys knowledge modeling techniques that have received the most attention in recent years among developers of intelligent systems, AI practitioners and researchers. The techniques are described from two perspectives, theoretical and practical. Hence the first part of the paper presents major theoretical and architectural concepts, design approaches, and research issues. The second part discusses several practical systems, applications, and ongoing projects that use and implement the techniques described in the first part. Finally, the paper briefly covers some of the most recent results in the fields of intelligent manufacturing systems, intelligent tutoring systems, and ontologies.", "neighbors": [ 119, 1067 ], "mask": "Train" }, { "node_id": 309, "label": 3, "text": "Run-time Detection in Parallel and Distributed Systems: An Application to Safety-Critical Applications As systems are becoming more complex, there is increasing interest in their runtime analysis, understanding their dynamic behavior, and possibly controlling it as well. This paper describes complex distributed and parallel applications that use run-time analyses to attain scalability improvements with respect to the amount and complexity of the data transmitted, transformed, and shared among different application components. Such improvements are derived from using database techniques when manipulating data streams. Namely, by imposing a relational model on a data stream, filters and constraints on the stream may be expressed in the form of database queries evaluated against the data events comprising the stream. Streams may then be filtered using runtime optimization techniques derived from query optimization methods. This paper also presents a tool, termed Cnet, which offers (1) means for the dynamic creation of queries and their application to distributed data streams, (2) permits the...", "neighbors": [ 18 ], "mask": "Train" }, { "node_id": 310, "label": 3, "text": "Specialising The Other Way Around In this paper, we present a program transformation based on bottom-up evaluation of logic programs. We explain that using this technique, programs can be \"specialised\" w.r.t. a set of unit clauses instead of a query.
Moreover, examples indicate that excellent specialisation can be obtained when this bottom up transformation is combined with a more traditional top down approach, resulting in conceptually cleaner techniques requiring less complicated control than a single overall approach.", + "neighbors": [ + 1250 + ], + "mask": "Train" + }, + { + "node_id": 311, + "label": 5, + "text": "A Description Logic for Vague Knowledge This work introduces the concept language ALCFM which is an extension of ALC to many-valued logics. ALCFM allows one to express vague concepts, e.g. more or less enlarged or very small. To realize this extension to many-valued logics, the classical notions of satisfiability and subsumption had to be modified appropriately. For example, ALCFM -concepts are no longer either satisfiable or unsatisfiable, but they are satisfiable to a certain degree. The main contribution of this paper is a sound and complete method for computing the degree of subsumption between two ALCFM -concepts. 1 Introduction This work takes its motivation from the occurrence of vague concept descriptions in different application areas. Often, application-inherent information is characterized by a very high degree of vagueness. Appropriate information systems must be able to process this kind of data. So far, there are no systems that really solve the corresponding problems due to the lack of powerful basic methods. A...", + "neighbors": [ + 173 + ], + "mask": "Validation" + }, + { + "node_id": 312, + "label": 0, + "text": "Towards Robust Teams with Many Agents Agents in deployed multi-agent systems monitor other agents to coordinate and collaborate robustly. However, as the number of agents monitored is scaled up, two key challenges arise: (i) the number of monitoring hypotheses to be considered can grow exponentially in the number of agents; and (ii) agents become physically and logically unconnected (unobservable) to their peers. This paper examines these challenges in teams of cooperating agents, focusing on a monitoring task that is of particular importance to robust teamwork: detecting disagreements among team-members. We present YOYO, a highly scalable disagreement-detection algorithm which guarantees sound detection in time linear in the number of agents despite the exponential number of hypotheses. In addition, we present new upper bounds about the number of agents that must be monitored in a team to guarantee disagreement detection. Both YOYO and the new bounds are explored analytically and empirically in thousands of monitoring problems, scaled to thousands of agents.", + "neighbors": [ + 49, + 196, + 724, + 963 + ], + "mask": "Train" + }, + { + "node_id": 313, + "label": 4, + "text": "Classification Space for Augmented Surgery, an Augmented Reality Case Study One of the recent design goals in Human Computer Interaction has been to extend the sensory-motor capabilities of computer systems to combine the real and the virtual in order to assist the user in his environment. Such systems are called Augmented Reality (AR). Although AR systems are becoming more prevalent, we still do not have a clear understanding of this interaction paradigm. In this paper we propose OPAS as a generic framework for classifying existing AR systems. Computer Assisted Medical Interventions (CAMI), for which the added value of AR has been demonstrated by experience, are discussed in light of OPAS.
We illustrate OPAS using our system, CASPER (Computer ASsisted PERicardial puncture), a CAMI system which assists in surgical procedures (pericardial punctures). KEYWORDS: Augmented Surgery, CAMI, Augmented Reality, Classification Space 1. INTRODUCTION The term \"Augmented Reality\" (AR) appears in the literature usually in conjunction with the term \"Virtual Reality\" (VR). Th...", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 314, + "label": 2, + "text": "Amalthaea: Information Discovery and Filtering using a Multiagent Evolving Ecosystem Agents are semi-intelligent programs that assist the user in performing repetitive and time-consuming tasks. Information discovery and information filtering are a suitable domain for applying agent technology. Ideas drawn from the field of autonomous agents and artificial life are combined in the creation of an evolving ecosystem composed of competing and cooperating agents. A co-evolution model of information filtering agents that adapt to the various user's interests and information discovery agents that monitor and adapt to the various on-line information sources is analyzed. Results from a number of experiments are presented and discussed. Keywords: Agents, Information Filtering, Evolution, World-Wide-Web 1 Introduction The exponential increase of computer systems that are interconnected in on-line networks has resulted in a corresponding exponential increase in the amount of information available on-line. This information is distributed among heterogeneous sources and is...", + "neighbors": [ + 492, + 561, + 606, + 952, + 1264 + ], + "mask": "Train" + }, + { + "node_id": 315, + "label": 3, + "text": "Leveraging Mediator Cost Models with Heterogeneous Data Sources Distributed systems require declarative access to diverse data sources of information. One approach to solving this heterogeneous distributed database problem is based on mediator architectures. In these architectures, mediators accept queries from users, process them with respect to wrappers, and return answers. Wrappers provide access to underlying data sources. To efficiently process queries, the mediator must optimize the plan used for processing the query. In classical databases, cost-estimate based query optimization is an effective method for optimization. In heterogeneous distributed databases, cost-estimate based query optimization is difficult to achieve because the underlying data sources do not export cost information. This paper describes a new method that permits the wrapper programmer to export cost estimates (cost estimate formulas and statistics). For the wrapper programmer to describe all cost estimates may be impossible due to lack of information or burdensome due...", + "neighbors": [ + 530 + ], + "mask": "Train" + }, + { + "node_id": 316, + "label": 4, + "text": "Software Infrastructure for Ubiquitous Computing Environments: Supporting Synchronous Collaboration with Heterogeneous Devices In ubiquitous computing environments, multiple users work with a wide range of different devices. In many cases, users interact and collaborate using multiple heterogeneous devices at the same time. The configuration of the devices should be able to change frequently due to the highly dynamic, flexible and mobile nature of new work practices. This produces new requirements for the architecture of an appropriate software infrastructure. In this paper, an architecture designed to meet these requirements is proposed.
To test its applicability, this architecture was used as the basis for the implementation of BEACH, the software infrastructure of i-LAND (the ubiquitous computing environment at GMD-IPSI). It provides the functionality for synchronous cooperation and interaction with roomware components, i.e. room elements with integrated information technology. In conclusion, our experiences with the current implementation are presented.", + "neighbors": [ + 576, + 628, + 698 + ], + "mask": "Train" + }, + { + "node_id": 317, + "label": 0, + "text": "MAP: Design and Implementation of a Mobile Agents Platform The recent development of telecommunication networks has contributed to the success of applications such as information retrieval and electronic commerce, as well as all the services that take advantage of communication in distributed systems. In this area, the emerging technology of mobile agents has aroused considerable interest. Mobile agents are applications that can move through the network for carrying out a given task on behalf of the user. In this work we present a platform (called MAP (Mobile Agents Platform)) for the development and the management of mobile agents. The language used both for developing the platform and for carrying out the agents is Java. The platform gives the user all the basic tools needed for creating applications based on the use of agents. It enables us to create, run, suspend, resume, deactivate, reactivate local agents, to stop their execution, to make them communicate with each other and migrate. Keywords: mobile agents, distributed computing, Java, net...", + "neighbors": [ + 259 + ], + "mask": "Test" + }, + { + "node_id": 318, + "label": 0, + "text": "Klava: a Java Framework for Distributed and Mobile Applications Highly distributed networks have now become a common infrastructure for a new kind of wide-area distributed applications whose key design principle is network awareness, namely the ability of dealing with dynamic changes of the network environment. Network-aware computing has called for new programming languages that exploit the mobility paradigm as a basic interaction mechanism. In this paper we present the architecture of Klava, an experimental Java framework for distributed applications and code mobility. We explain how Klava implements code mobility by relying on Java and show a few distributed applications that exploit mobile code and are programmed in Klava. Keywords: Code Mobility, Distributed Applications, Network Awareness, Language and Middleware Implementation, Tuple Spaces, Java.", + "neighbors": [ + 269 + ], + "mask": "Train" + }, + { + "node_id": 319, + "label": 1, + "text": "A Bayesian Computer Vision System for Modeling Human Interactions Abstract\u2014We describe a real-time computer vision and machine learning system for modeling and recognizing human behaviors in a visual surveillance task [1]. The system is particularly concerned with detecting when interactions between people occur and classifying the type of interaction. Examples of interesting interaction behaviors include following another person, altering one's path to meet another, and so forth. Our system combines top-down with bottom-up information in a closed feedback loop, with both components employing a statistical Bayesian approach [2]. We propose and compare two different state-based learning architectures, namely, HMMs and CHMMs for modeling behaviors and interactions. The CHMM model is shown to work much more efficiently and accurately.
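[Editor's note] Node 319's behavior models are HMMs compared by likelihood. A minimal sketch of that selection step, using the standard scaled forward algorithm on toy discrete observations; the two models and all their parameters are invented for illustration, and the paper's CHMMs would couple two such chains:

```python
import numpy as np

def forward_log_likelihood(obs, pi, A, B):
    """log P(obs | HMM) via the scaled forward algorithm."""
    alpha = pi * B[:, obs[0]]
    loglik = 0.0
    for t in range(1, len(obs)):
        c = alpha.sum()            # scaling factor, accumulated in log space
        loglik += np.log(c)
        alpha = (alpha / c) @ A * B[:, obs[t]]
    return loglik + np.log(alpha.sum())

def classify(obs, models):
    """Pick the behavior model assigning the sequence highest likelihood."""
    return max(models, key=lambda name: forward_log_likelihood(obs, *models[name]))

# Two toy 2-state HMMs over 3 discrete observation symbols (made-up numbers).
follow = (np.array([0.9, 0.1]),                          # initial distribution
          np.array([[0.8, 0.2], [0.3, 0.7]]),            # transitions
          np.array([[0.7, 0.2, 0.1], [0.1, 0.3, 0.6]]))  # emissions
approach = (np.array([0.5, 0.5]),
            np.array([[0.5, 0.5], [0.5, 0.5]]),
            np.array([[0.1, 0.3, 0.6], [0.6, 0.3, 0.1]]))
print(classify([0, 0, 1, 2, 2], {"follow": follow, "approach": approach}))
```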
Finally, to deal with the problem of limited training data, a synthetic \u201cAlife-style\u201d training system is used to develop flexible prior models for recognizing human interactions. We demonstrate the ability to use these a priori models to accurately classify real human behaviors and interactions with no additional tuning or training. Index Terms\u2014Visual surveillance, people detection, tracking, human behavior recognition, Hidden Markov Models.", + "neighbors": [ + 26, + 1160 + ], + "mask": "Test" + }, + { + "node_id": 320, + "label": 5, + "text": "Probabilistic Roadmap Methods are Embarrassingly Parallel In this paper we report on our experience parallelizing probabilistic roadmap motion planning methods (prms). We show that significant, scalable speedups can be obtained with relatively little effort on the part of the developer. Our experience is not limited to prms, however. In particular, we outline general techniques for parallelizing types of computations commonly performed in motion planning algorithms, and identify potential difficulties that might be faced in other efforts to parallelize sequential motion planning methods. 1 Introduction Automatic motion planning has application in many areas such as robotics, virtual reality systems, and computer-aided design. Although many different motion planning methods have been proposed, most are not used in practice since they are computationally infeasible except for some restricted cases, e.g., when the robot has very few degrees of freedom (dof) [12, 16]. Indeed, there is strong evidence that any complete planner (one that is guaran...", + "neighbors": [ + 387, + 446 + ], + "mask": "Train" + }, + { + "node_id": 321, + "label": 2, + "text": "Accessing Information and Services on the DAML-Enabled Web The DARPA Agent Markup Language (DAML) program aims to allow one to mark up web pages to indicate the meaning of their content; it is intended that the results delivered by a DAML-enabled browser will more closely match the intentions of the user than is possible with today's syntactically oriented search engines. In this paper we present our vision of a DAML-enabled search architecture. We present a set of queries of increasing complexity that should be answered efficiently in a Semantic Web. We describe several scenarios illustrating how queries are processed, identifying the main software components necessary to facilitate the search. We examine the issue of inference in search, and we address how to characterize procedures and services in DAML, enabling a DAML query language to find web sites with specified capabilities. Key Words: Semantic Web, DAML, inference, Web services, process modeling.", + "neighbors": [ + 1086 + ], + "mask": "Train" + }, + { + "node_id": 322, + "label": 2, + "text": "Discovering Unexpected Information from Your Competitors' Web Sites Ever since the beginning of the Web, finding useful information from the Web has been an important problem. Existing approaches include keyword-based search, wrapper-based information extraction, Web query and user preferences. These approaches essentially find information that matches the user's explicit specifications. This paper argues that this is insufficient. There is another type of information that is also of great interest, i.e., unexpected information, which is unanticipated by the user. Finding unexpected information is useful in many applications.
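[Editor's note] Node 320's probabilistic roadmaps are easy to sketch, which also shows why they parallelize so well: every sample-and-connect step below is independent of the others. This is a toy 2-D sketch with an invented obstacle predicate, not the paper's planner; a real PRM would collision-check the whole connecting segment rather than just its midpoint:

```python
import numpy as np

def build_prm(n_samples, k, collision_free, dim=2, seed=0):
    """Probabilistic roadmap: sample free configurations, connect k nearest."""
    rng = np.random.default_rng(seed)
    nodes = []
    # Rejection-sample collision-free configurations in the unit cube.
    while len(nodes) < n_samples:
        q = rng.random(dim)
        if collision_free(q):
            nodes.append(q)
    nodes = np.array(nodes)
    edges = set()
    for i, q in enumerate(nodes):
        dists = np.linalg.norm(nodes - q, axis=1)
        for j in np.argsort(dists)[1:k + 1]:       # k nearest neighbors
            a, b = sorted((i, int(j)))
            # Crude local planner: test only the midpoint of the segment.
            if collision_free((nodes[a] + nodes[b]) / 2):
                edges.add((a, b))
    return nodes, edges

# Toy free-space predicate: avoid a disc obstacle at the center of the square.
free = lambda q: np.linalg.norm(q - 0.5) > 0.2
nodes, edges = build_prm(50, 5, free)
print(len(nodes), "nodes,", len(edges), "edges")
```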
For example, it is useful for a company to find unexpected information about its competitors, e.g., unexpected services and products that its competitors offer. With this information, the company can learn from its competitors and/or design countermeasures to improve its competitiveness. Since the number of pages of a typical commercial site is very large and there are also many relevant sites (competitors), it is very difficult for a human user to view each page to discover the unexpected information. Automated assistance is needed. In this paper, we propose a number of methods to help the user find various types of unexpected information from his/her competitors' Web sites. Experimental results show that these techniques are very useful in practice and also efficient. Keywords Information interestingness, Web comparison, Web mining.", + "neighbors": [ + 216, + 379, + 536, + 1017, + 1144 + ], + "mask": "Train" + }, + { + "node_id": 323, + "label": 2, + "text": "Named Entity Recognition from Diverse Text Types Current research in Information Extraction tends to be focused on application-specific systems tailored to a particular domain. The Muse system is a multi-purpose Named Entity recognition system which aims to reduce the need for costly and time-consuming adaptation of systems to new applications, with its capability for processing texts from widely differing domains and genres. Although the system is still under development, preliminary results are encouraging, showing little degradation when processing texts of lower quality or of unusual types. The system currently averages 93% precision and 95% recall across a variety of text types.", + "neighbors": [ + 571 + ], + "mask": "Validation" + }, + { + "node_id": 324, + "label": 4, + "text": "Evaluation of Recommender Algorithms for an Internet Information Broker Based Association rules are a widely used technique to generate recommendations in commercial and research recommender systems. Since more and more Web sites, especially of retailers, offer automatic recommender services using Web usage mining, evaluation of recommender algorithms becomes increasingly important. In this paper we first present a framework for the evaluation of different aspects of recommender systems based on the process of discovering knowledge in databases of Fayyad et al. and then we focus on the comparison of the performance of two recommender algorithms based on frequent itemsets. The first recommender algorithm uses association rules, and the other recommender algorithm is based on the repeat-buying theory known from marketing research. For the evaluation we concentrated on how well the patterns extracted from usage data match the concept of useful recommendations of users. We use 6 months of usage data from an educational Internet information broker and compare useful recommendations identified by users from the target group of the broker with the results of the recommender algorithms. The results of the evaluation presented in this paper suggest that frequent itemsets from purchase histories match the concept of useful recommendations expressed by users with satisfactory accuracy (higher than 70%) and precision (between 60% and 90%). Also the evaluation suggests that both algorithms studied in the paper perform similarly on real-world data if they are tuned properly.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 325, + "label": 4, + "text": "End of First Year Report displays of information (graphs, plots, etc.)
are a recent invention, dating from around 1750-1800 [15]. Andrews defines Information Visualisation as the visual presentation of information spaces and structures to facilitate their rapid assimilation and understanding [149]. In the same document, the authors give a collection of Information Visualisation pointers (references also available at http://www.iicm.edu/hci/ivis). A more complete on-line document for Information Visualisation, from Andrews, is available at http://www.iicm.edu.hci/ivis/node2.htm. A report on three-dimensional Information Visualisation is given by Young, also available on-line, at http://www.dur.ac.uk/~dcs3py/pages/work/documents/litsurvey/IV-Survey/ [150]. This report gives an enumeration of visualisation techniques and a survey of research visualisation systems. Two other Web resources for Information Visualisation are Olive (Online of Information Visualisation Environments - http://otal.umd.edu/Olive/), and the CS348 course...", + "neighbors": [ + 454 + ], + "mask": "Validation" + }, + { + "node_id": 326, + "label": 4, + "text": "Gathering User Interface Design Requirements for Social Computing Design for cooperation is a challenge. As designers we note that as we are moving towards the final years of this century, several areas have achieved significant breakthroughs. Among them, it is easy to perceive that areas of Computing and Telecommunications have had an impact of paramount importance to society as a whole. These technologies have allowed an increasing integration of research fields, people of various backgrounds and abilities as well as made the interaction of different cultures possible. As a result, we have been living in the Internet era with a very large number of Web sites which can be visited, queried and played with. That constitutes what we call social computing. Application examples are: digital libraries, health care information systems, Physics collaboratories, and Web-based entertainments like interactive Web games. Within this context, we are concerned with the user interface design requirements gathering for such systems. In that sense, we present a prot...", + "neighbors": [ + 733 + ], + "mask": "Train" + }, + { + "node_id": 327, + "label": 5, + "text": "A Tableau-Based Proof Method for Temporal Logics of Knowledge and Belief. In this paper we define two logics, KLn and BLn, and present tableau-based decision procedures for both. KLn is a temporal logic of knowledge. Thus, in addition to the usual connectives of linear discrete temporal logic, it contains a set of unary modal connectives for representing the knowledge possessed by agents. The logic BLn is somewhat similar; it is a temporal logic that contains connectives for representing the beliefs of agents. In addition to a complete formal definition of the two logics and their decision procedures, the paper includes a brief review of their applications in AI and mainstream computer science, correctness proofs for the decision procedures, a number of worked examples illustrating the decision procedures, and some pointers to further work. KEYWORDS: Temporal logics of knowledge and belief, theorem proving, tableau. 1 Introduction This paper presents two logics, called KLn and BLn respectively, and gives tableau-based decision procedures for both.
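[Editor's note] Returning to node 324 above: an association-rule recommender of the kind it evaluates reduces to counting itemsets and ranking rules by confidence. A toy sketch with invented baskets; the support and confidence cutoffs are arbitrary assumptions:

```python
from itertools import combinations
from collections import Counter

def frequent_itemsets(baskets, min_support):
    """Count 1- and 2-itemsets occurring in at least min_support baskets."""
    counts = Counter()
    for basket in baskets:
        items = sorted(set(basket))
        counts.update((i,) for i in items)
        counts.update(combinations(items, 2))
    return {s: c for s, c in counts.items() if c >= min_support}

def recommend(item, itemsets, min_confidence=0.3):
    """Rules {item} -> {other}, ranked by confidence = supp(pair)/supp(item)."""
    recs = []
    for s, c in itemsets.items():
        if len(s) == 2 and item in s:
            other = s[0] if s[1] == item else s[1]
            conf = c / itemsets[(item,)]
            if conf >= min_confidence:
                recs.append((other, conf))
    return sorted(recs, key=lambda r: -r[1])

baskets = [["a", "b", "c"], ["a", "b"], ["b", "c"], ["a", "c"], ["a", "b", "c"]]
print(recommend("a", frequent_itemsets(baskets, min_support=2)))
```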
The l...", + "neighbors": [ + 82, + 159, + 953 + ], + "mask": "Validation" + }, + { + "node_id": 328, + "label": 1, + "text": "GIB: Steps Toward an Expert-Level Bridge-Playing Program This paper describes Goren In a Box (gib), the first bridge-playing program to approach the level of a human expert. We give a basic overview of the algorithms used, describe their strengths and weaknesses, and present the results of experiments comparing gib to both human opponents and earlier programs. Introduction Of all the classic games of skill, only card games and Go have yet to see the appearance of serious computer challengers. In Go, this appears to be because the game is fundamentally one of pattern recognition as opposed to search; the brute-force techniques that have been so successful in the development of chess-playing programs have failed almost utterly to deal with Go's huge branching factor. Indeed, the arguably strongest Go program in the world was beaten by Janice Kim in the AAAI-97 Hall of Champions after Kim had given the program a monumental 25 stone handicap. Card games appear to be different. Perhaps because they are games of imperfect information, or perhaps...", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 329, + "label": 0, + "text": "Autonomous Agents For Business Process Management : Traditional approaches to managing business processes are often inadequate for large-scale, organisation -wide, dynamic settings. However since Internet and Intranet technologies have become widespread, an increasing number of business processes exhibit these properties. Therefore a new approach is needed. To this end, we describe the motivation, conceptualisation, design and implementation of a novel agent-based business process management system. The key advance of our system is that responsibility for enacting various components of the business process is delegated to a number of autonomous problem solving agents. To enact their role, these agents typically interact and negotiate with other agents in order to coordinate their actions and to buy in the services they require. This approach leads to a system that is significantly more agile and robust than its traditional counterparts. To help demonstrate these benefits, a companion paper describes the application of our system to a ...", + "neighbors": [ + 683, + 724 + ], + "mask": "Validation" + }, + { + "node_id": 330, + "label": 1, + "text": "Learning the Face Space - Representation and Recognition This paper advances an integrated learning and evolutionary computation methodology for approaching the task of learning the face space. The methodology is geared to provide a framework whereby enhanced and robust face coding and classification schemes can be derived and evaluated using both machine and human benchmark studies. In particular we take an interdisciplinary approach, drawing from the accumulated and vast knowledge of both the computer vision and psychology communities, and describe how evolutionary computation and statistical learning can engage in mutually beneficial relationships in order to define an exemplar (Absolute)-Based Coding (ABC) multidimensional face space representation for successfully coping with changing population (face) types, and to leverage past experience for incremental face space definition. 1. 
Introduction Among the most challenging tasks for visual form (`shape') analysis and object recognition are understanding how people process and recognize ...", + "neighbors": [ + 949, + 1118 + ], + "mask": "Train" + }, + { + "node_id": 331, + "label": 1, + "text": "Regularizing AdaBoost Boosting methods maximize a hard classification margin and are known as powerful techniques that do not exhibit overfitting for low noise cases. Also for noisy data boosting will try to enforce a hard margin and thereby give too much weight to outliers, which then leads to the dilemma of non-smooth fits and overfitting. Therefore we propose three algorithms to allow for soft margin classification by introducing regularization with slack variables into the boosting concept: (1) AdaBoost_reg and regularized versions of (2) linear and (3) quadratic programming AdaBoost. Experiments show the usefulness of the proposed algorithms in comparison to another soft margin classifier: the support vector machine. 1 Introduction Boosting and other ensemble methods have been used with success in several applications, e.g. OCR [12, 7]. For low noise cases several lines of explanation have been proposed as candidates for explaining the well-functioning of boosting methods. (a) Breiman proposed that ...", + "neighbors": [], + "mask": "Test" + }, + { + "node_id": 332, + "label": 1, + "text": "The Case Against Accuracy Estimation for Comparing Induction Algorithms We analyze critically the use of classification accuracy to compare classifiers on natural data sets, providing a thorough investigation using ROC analysis, standard machine learning algorithms, and standard benchmark data sets. The results raise serious concerns about the use of accuracy for comparing classifiers and draw into question the conclusions that can be drawn from such studies. In the course of the presentation, we describe and demonstrate what we believe to be the proper use of ROC analysis for comparative studies in machine learning research. We argue that this methodology is preferable both for making practical choices and for drawing scientific conclusions. 1 INTRODUCTION Substantial research has been devoted to the development and analysis of algorithms for building classifiers, and a necessary part of this research involves comparing induction algorithms. A common methodology for such evaluations is to perform statistical comparisons of the accuracies of learned class...", + "neighbors": [], + "mask": "Validation" + }, + { + "node_id": 333, + "label": 0, + "text": "Integrating Peer-to-Peer Networking and Computing in the Agentscape Framework The combination of peer-to-peer networking and agent-based computing seems to be a perfect match. Agents are cooperative and communication oriented, while peer-to-peer networks typically support distributed systems in which all nodes have equal roles and responsibilities. AgentScape is a framework designed to support large-scale multi-agent systems. Pole extends this framework with peer-to-peer computing. This combination facilitates the development and deployment of new agent-based peer-to-peer applications and services.", + "neighbors": [ + 1054, + 1227 + ], + "mask": "Train" + }, + { + "node_id": 334, + "label": 4, + "text": "Real-Time American Sign Language Recognition Using Desk and Wearable Computer Based Video Hidden Markov models (HMMs) have been used prominently and successfully in speech recognition and, more recently, in handwriting recognition.
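[Editor's note] Node 331 modifies AdaBoost's weight update with slack variables; the unregularized update it starts from looks like the sketch below, here with one-feature threshold stumps. Data and round count are invented; AdaBoost_reg would additionally damp the exponential reweighting of persistent outliers:

```python
import numpy as np

def adaboost_stumps(X, y, rounds=20):
    """Plain AdaBoost with threshold stumps; y must be in {-1, +1}."""
    n = len(y)
    w = np.full(n, 1.0 / n)            # example weights
    ensemble = []                      # (alpha, feature, threshold, sign)
    for _ in range(rounds):
        best = None
        for f in range(X.shape[1]):    # exhaustive weak-learner search
            for thr in np.unique(X[:, f]):
                for sign in (1, -1):
                    pred = sign * np.where(X[:, f] <= thr, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, f, thr, sign, pred)
        err, f, thr, sign, pred = best
        alpha = 0.5 * np.log((1 - max(err, 1e-10)) / max(err, 1e-10))
        # Reweight: mistakes gain weight, so later stumps focus on them.
        w *= np.exp(-alpha * y * pred)
        w /= w.sum()
        ensemble.append((alpha, f, thr, sign))
    return ensemble

def predict(ensemble, X):
    score = sum(a * s * np.where(X[:, f] <= t, 1, -1) for a, f, t, s in ensemble)
    return np.sign(score)

X = np.array([[1.0], [2.0], [3.0], [4.0]])
y = np.array([1, 1, -1, -1])
print(predict(adaboost_stumps(X, y), X))   # should recover the labels
```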
Consequently, they seem ideal for visual recognition of complex, structured hand gestures such as are found in sign language. We describe two experiments that demonstrate a realtime HMM-based system for recognizing sentence level American Sign Language (ASL) without explicitly modeling the fingers. The first experiment tracks hands wearing colored gloves and attains a word accuracy of 99%. The second experiment tracks hands without gloves and attains a word accuracy of 92%. Both experiments have a 40 word lexicon. 1 Introduction While there are many different types of gestures, the most structured sets belong to the sign languages. In sign language, each gesture already has assigned meaning, and strong rules of context and grammar may be applied to make recognition tractable. To date, most work on sign language recognition has employed expensi...", + "neighbors": [ + 54, + 307, + 351, + 497, + 665, + 779, + 1129 + ], + "mask": "Test" + }, + { + "node_id": 335, + "label": 3, + "text": "Structure and Performance of Decision Support Algorithms on Active Disks Growth and usage trends for large decision support databases indicate that there is a need for architectures that scale the processing power as the dataset grows. These trends indicate that the processing demand for large decision support databases is growing faster than the improvement in performance of commodity processors. To meet this need, several researchers have recently proposed Active Disk/IDISK architectures which integrate substantial processing power and memory into disk units. In this paper, we examine the utility of Active Disks for decision support databases. We try to answer the following questions. First, is it possible to restructure algorithms for common decision support tasks to utilize Active Disks? Second, how does the performance of Active Disks compare with that of traditional servers for these tasks? Finally, how would Active Disks be integrated into the software architecture of decision support databases? 1 Introduction Growth and usage trends for large decis...", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 336, + "label": 3, + "text": "Algorithms for Temporal Query Operators in XML Databases The contents of an XML database or XML/Web data warehouse is seldom static. New documents are created, documents are deleted, and more important: documents are updated. In many cases, we want to be able to search in historical versions, retrieve documents valid at a certain time, query changes to documents, etc. This can be supported by extending the system with temporal database features. In this paper we describe the new query operators needed in order to support an XML query language which supports temporal operations. We also describe the algorithms which can make efficient implementation of these query operators possible. 
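[Editor's note] The temporal query operators node 336 calls for (snapshot and interval queries over document versions) can be pictured with two interval predicates. This sketch is only our reading of the abstract, with an invented Version record; the paper's actual operators and index structures are not shown:

```python
from dataclasses import dataclass

END_OF_TIME = float("inf")

@dataclass
class Version:
    doc_id: str
    content: str
    start: float            # valid-time interval [start, end)
    end: float = END_OF_TIME

def time_slice(versions, t):
    """Snapshot operator: the versions valid at time t."""
    return [v for v in versions if v.start <= t < v.end]

def overlaps(versions, lo, hi):
    """Range operator: versions whose valid interval intersects [lo, hi)."""
    return [v for v in versions if v.start < hi and lo < v.end]

history = [
    Version("doc1", "<a>old</a>", 0, 10),
    Version("doc1", "<a>new</a>", 10),
    Version("doc2", "<b/>", 5, 7),
]
print([v.content for v in time_slice(history, 6)])    # current at t=6
print([v.content for v in overlaps(history, 8, 12)])  # touching [8, 12)
```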
Keywords: XML, temporal databases, query processing", + "neighbors": [ + 705, + 912, + 1025, + 1069 + ], + "mask": "Validation" + }, + { + "node_id": 337, + "label": 3, + "text": "Algebraic Models for Contextual Nets We extend the algebraic approach of Meseguer and Montanari from ordinary place/transition Petri nets to contextual nets, covering both the collective and the individual token philosophy uniformly along the two interpretations of net behaviors.", + "neighbors": [], + "mask": "Validation" + }, + { + "node_id": 338, + "label": 1, + "text": "An Ejection Chain Approach for the Generalized Assignment Problem In this paper, we propose an ejection chain approach under the framework of tabu search (TS) for the generalized assignment problem (GAP), which is known to be NP-hard (Sahni and Gonzalez 1976). GAP seeks a minimum cost assignment of n jobs to m agents subject to a resource constraint for each agent. Among various heuristic algorithms developed for GAP are: a combination of the greedy method and local search by Martello and Toth (1981, 1990); a tabu search and simulated annealing approach by Osman (1995); a genetic algorithm by Chu and Beasley (1997); VDS methods by Amini and Racer (1995) and Racer and Amini (1994); a tabu search approach by Laguna et al. (1995) (which is proposed for a generalization of GAP); a set partitioning heuristic by Cattrysse et al. (1994); a relaxation heuristic by Lorena and Narciso (1996); a GRASP and MAX-MIN ant system combined with local search and tabu search by Lourenco and Serra (1998); a linear relaxation heuristic by Trick (1992); and so on. Many exact algorithms have also been proposed (e.g., Nauss 2003, Savelsbergh 1997). A simpler version of an ejection chain approach has also been proposed for the GAP in Laguna et al. (1995). Our ejection chain is based on the idea described in Glover (1997)", + "neighbors": [ + 1111 + ], + "mask": "Train" + }, + { + "node_id": 339, + "label": 3, + "text": "Atomi - Automated Reconstruction Of Topographic Objects From Aerial Images Using Vectorized Map Information The project ATOMI is a co-operation between the Federal Office of Topography (L+T) and ETH Zurich. The aim of ATOMI is to update vector data of road centerlines and building roof outlines from 1:25,000 maps, fitting it to the real landscape, improving the planimetric accuracy to 1 m, and deriving height information (one representative height for each building) with 1-2 m accuracy. This update should be achieved by using image analysis techniques developed at ETH Zurich and digital aerial imagery. The whole procedure should be implemented as a stand-alone software package, able to import and export data as used at L+T. It should be quasi-operational, fast, and, most importantly, reliable. We do not aim at full automation (ca. 80% completeness is a plausible target). The paper will present in detail the aims, input data, strategy and general methods used in ATOMI. We will also present an overview of the results achieved up to now, and problems faced in building and road reconstruction. More de...", + "neighbors": [ + 93 + ], + "mask": "Validation" + }, + { + "node_id": 340, + "label": 5, + "text": "Causal Models of Mobile Service Robot Behavior Temporal projection, the process of predicting what will happen when a robot executes its plan, is essential for autonomous service robots to successfully plan their missions.
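[Editor's note] Node 338's method compounds job-reassignment moves into ejection chains under tabu search. The sketch below implements only the length-one special case (a single shift move with a tabu list) on invented data; it is a toy, not the authors' algorithm:

```python
import random

def gap_tabu(cost, resource, capacity, iters=500, tabu_len=8, seed=0):
    """Single-shift tabu search for GAP (an ejection chain compounds
    several such shifts). Assumes the greedy start below is feasible."""
    rng = random.Random(seed)
    n, m = len(cost), len(cost[0])
    assign, load = [], [0] * m
    for j in range(n):                       # greedy feasible start
        a = min((a for a in range(m)
                 if load[a] + resource[j][a] <= capacity[a]),
                key=lambda a: cost[j][a])
        assign.append(a)
        load[a] += resource[j][a]
    total = lambda: sum(cost[j][assign[j]] for j in range(n))
    best_val, best = total(), assign[:]
    tabu = []
    for _ in range(iters):
        j = rng.randrange(n)
        moves = [a for a in range(m)
                 if a != assign[j] and (j, a) not in tabu
                 and load[a] + resource[j][a] <= capacity[a]]
        if not moves:
            continue
        a = min(moves, key=lambda a: cost[j][a])
        tabu = (tabu + [(j, assign[j])])[-tabu_len:]   # forbid the reverse shift
        load[assign[j]] -= resource[j][assign[j]]
        load[a] += resource[j][a]
        assign[j] = a
        if total() < best_val:
            best_val, best = total(), assign[:]
    return best_val, best

cost = [[4, 1], [3, 2], [1, 3]]              # cost[job][agent], invented
resource = [[2, 2], [2, 2], [2, 2]]
print(gap_tabu(cost, resource, capacity=[4, 4]))
```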
This paper describes a causal model of the behavior exhibited by the mobile robot Rhino when running concurrent reactive plans for performing office delivery jobs. The model represents aspects of robot behavior that cannot be represented by most action models used in AI planning: it represents the temporal structure of continuous control processes, several modes of their interferences, and various kinds of uncertainty. This enhanced expressiveness enables xfrm (McD92; BM94), a robot planning system, to predict, and therefore forestall, various kinds of behavior flaws including missed deadlines whilst exploiting incidental opportunities. The proposed causal model is experimentally validated using the robot and its simulator. Introduction Temporal projection, the process of predicting what will happen when a ro...", + "neighbors": [ + 459 + ], + "mask": "Train" + }, + { + "node_id": 341, + "label": 2, + "text": "Boosting and Rocchio Applied to Text Filtering We discuss two learning algorithms for text filtering: modified Rocchio and a boosting algorithm called AdaBoost. We show how both algorithms can be adapted to maximize any general utility matrix that associates cost (or gain) for each pair of machine prediction and correct label. We first show that AdaBoost significantly outperforms another highly effective text filtering algorithm. We then compare AdaBoost and Rocchio over three large text filtering tasks. Overall both algorithms are comparable and are quite effective. AdaBoost produces better classifiers than Rocchio when the training collection contains a very large number of relevant documents. However, on these tasks, Rocchio runs much faster than AdaBoost. 1 Introduction With the explosion in the amount of information available electronically, information filtering systems that automatically send articles of potential interest to a user are becoming increasingly important. If users indicate their interests to a filtering system...", + "neighbors": [ + 142, + 674, + 1001, + 1090, + 1094 + ], + "mask": "Train" + }, + { + "node_id": 342, + "label": 0, + "text": "Impact: A Platform for Collaborating Agents The Impact server provides the infrastructure upon which different Impact agents can interact. To avoid a performance bottleneck, multiple copies of the server can be replicated and scattered across the network. Impact agents A set of data objects can be represented in a wide variety of ways. When building an application, we'd like to select a data structure that supports the application operations that are the most frequently executed, the most critical, or both. So, any definition of an agent must support such flexible choice of data structures, and agentization must let us extend arbitrary data representations. In Impact, an agent consists of any body of software code whatsoever, with the associated wrapper. Figure 2 shows such an agent's architecture. The software code. The agent's code consists of two parts: (1) a set of data structures (or data types) manipulated by the agent. For example, if we are building a database agen", + "neighbors": [ + 29, + 1236 + ], + "mask": "Test" + }, + { + "node_id": 343, + "label": 1, + "text": "Finding Counterexamples to Inductive Conjectures We present an implementation of a method for finding counterexamples to universally quantified inductive conjectures in first-order logic.
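[Editor's note] For node 341 above, the modified Rocchio side is the easier one to sketch: a profile vector built from relevant minus non-relevant centroids, used to score incoming documents. Toy TF-IDF, invented documents, and the classical beta/gamma constants stand in for the paper's tuned setup:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Very small TF-IDF vectorizer over tokenized documents."""
    df = Counter(t for d in docs for t in set(d))
    n = len(docs)
    idf = {t: math.log(n / c) for t, c in df.items()}
    return [{t: tf * idf[t] for t, tf in Counter(d).items()} for d in docs]

def rocchio(vectors, labels, beta=1.0, gamma=0.25):
    """Rocchio profile: centroid of relevant docs minus a damped centroid
    of non-relevant ones (beta/gamma follow the classical formulation)."""
    prof = Counter()
    rel = [v for v, l in zip(vectors, labels) if l]
    non = [v for v, l in zip(vectors, labels) if not l]
    for v in rel:
        for t, x in v.items():
            prof[t] += beta * x / max(len(rel), 1)
    for v in non:
        for t, x in v.items():
            prof[t] -= gamma * x / max(len(non), 1)
    return prof

def score(profile, vector):
    return sum(profile[t] * x for t, x in vector.items())

docs = [["stock", "market", "rises"], ["stock", "falls"], ["rain", "tomorrow"]]
vecs = tfidf_vectors(docs)
profile = rocchio(vecs, [True, True, False])
print(score(profile, vecs[0]) > score(profile, vecs[2]))  # finance > weather
```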
Our method uses the proof by consistency strategy to guide a search for a counterexample and a standard first-order theorem prover to perform a concurrent check for inconsistency. We explain briefly the theory behind the method, describe our implementation, and evaluate results achieved on a variety of incorrect conjectures from various sources. Some work in progress is also presented: we are applying the method to the verification of cryptographic security protocols. In this context, a counterexample to a security property can indicate an attack on the protocol, and our method extracts the trace of messages exchanged in order to effect the attack. This application demonstrates the advantages of the method, in that quite complex side conditions decide whether a particular sequence of messages is possible. Using a theorem prover provides a natural way of dealing with this. Some early results are presented and we discuss future work.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 344, + "label": 1, + "text": "Induction of decision trees using RELIEFF In the context of machine learning from examples this paper deals with the problem of estimating the quality of attributes with and without dependencies between them. Greedy search prevents current inductive machine learning algorithms from detecting significant dependencies between the attributes. Recently, Kira and Rendell developed the RELIEF algorithm for estimating the quality of attributes that is able to detect dependencies between attributes. We show a strong relation between RELIEF's estimates and impurity functions, that are usually used for heuristic guidance of inductive learning algorithms. We propose to use RELIEFF, an extended version of RELIEF, instead of myopic impurity functions. We have reimplemented Assistant, a system for top-down induction of decision trees, using RELIEFF as an estimator of attributes at each selection step. The algorithm is tested on several artificial and several real world problems. Results show the advantage of the presented approach to inductive lea...", + "neighbors": [ + 1016 + ], + "mask": "Train" + }, + { + "node_id": 345, + "label": 4, + "text": "Situated Computing: A Paradigm to Enhance the Mobile User's Interaction When people interact with computers, they have to pay attention to things that are not related to the situation of the problems because the interfaces are not contextualized to their working environment. Hence it is sometimes hard to integrate computers as embedded tools, which facilitate users in accomplishing their objectives easily in working life. Situated computing is a new paradigm for mobile computer users based on their physical context and activities carried out in the workspace. It defines the infrastructure through which situated interaction can be provided by applications. In this chapter we first describe a model called the situation metaphor to design interaction between the user and mobile computers as the basis for situated computing. Thereafter, a framework called Situated Information Filing and Filtering (SIFF) is presented as the foundation for situated application development. In general, a three-stage schema is given, with the top stage devoted to situated applications.
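[Editor's note] Node 344 builds on RELIEF, whose attribute estimates, unlike impurity measures, reward features that only matter jointly. A minimal two-class sketch; RELIEFF itself adds k nearest neighbors, multi-class handling, and noise robustness, and the XOR data here is our invented test:

```python
import numpy as np

def relief(X, y, passes=50, seed=0):
    """Basic two-class RELIEF attribute estimation."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    span = X.max(axis=0) - X.min(axis=0) + 1e-12   # for normalized diffs
    w = np.zeros(d)
    for _ in range(passes):
        i = rng.integers(n)
        dists = np.abs(X - X[i]).sum(axis=1)
        dists[i] = np.inf                           # exclude the sample itself
        hit = np.argmin(np.where(y == y[i], dists, np.inf))
        miss = np.argmin(np.where(y != y[i], dists, np.inf))
        # Good attributes separate the nearest miss but not the nearest hit.
        w += (np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])) / span
    return w / passes

# XOR-style data: the two informative features only matter jointly,
# plus one irrelevant random feature.
rng = np.random.default_rng(1)
a, b = rng.integers(0, 2, (2, 200))
X = np.column_stack([a, b, rng.random(200)]).astype(float)
y = a ^ b
print(relief(X, y))   # features 0 and 1 should outscore feature 2
```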
Four applications based on the SIFF are also presented to demonstrate the enhancement of the mobile user's interaction that can be achieved.", + "neighbors": [ + 798 + ], + "mask": "Train" + }, + { + "node_id": 346, + "label": 2, + "text": "Jotmail: A Voicemail Interface That Enables You to See What Was Said Voicemail is a pervasive, but under-researched tool for workplace communication. Despite potential advantages of voicemail over email, current phone-based voicemail UIs are highly problematic for users. We present a novel, Web-based, voicemail interface, Jotmail. The design was based on data from several studies of voicemail tasks and user strategies. The GUI has two main elements: (a) personal annotations that serve as a visual analogue to underlying speech; (b) automatically derived message header information. We evaluated Jotmail in an 8-week field trial, where people used it as their only means for accessing voicemail. Jotmail was successful in supporting most key voicemail tasks, although users' electronic annotation and archiving behaviors were different from our initial predictions. Our results argue for the utility of a combination of annotation-based indexing and automatically derived information, as a general technique for accessing speech archives.", + "neighbors": [], + "mask": "Validation" + }, + { + "node_id": 347, + "label": 2, + "text": "Learning to Extract Symbolic Knowledge from the World Wide Web The World Wide Web is a vast source of information accessible to computers, but understandable only to humans. The goal of the research described here is to automatically create a computer understandable knowledge base whose content mirrors that of the World Wide Web. Such a knowledge base would enable much more effective retrieval of Web information, and promote new uses of the Web to support knowledge-based inference and problem solving. Our approach is to develop a trainable information extraction system that takes two inputs. The first is an ontology that defines the classes (e.g., Company, Person, Employee, Product) and relations (e.g., Employed.By, Produced.By) of interest when creating the knowledge base. The second is a set of training data consisting of labeled regions of hypertext that represent instances of these classes and relations. Given these inputs, the system learns to extract information from other pages and hyperlinks on the Web. This paper describes our general approach, several machine learning algorithms for this task, and promising initial results with a prototype system that has created a knowledge base describing university people, courses, and research projects.", + "neighbors": [ + 280, + 439, + 496, + 543, + 609, + 655, + 660, + 682, + 726, + 851, + 865, + 1021, + 1090, + 1153, + 1206 + ], + "mask": "Train" + }, + { + "node_id": 348, + "label": 2, + "text": "Using linear classifiers in the integration of user modeling and text content analysis in the personalization of a Web-based Spanish News Service Nowadays many newspapers and news agencies offer personalized information access services and, moreover, there is a growing interest in the improvement of these services. In this paper we present a methodology useful to improve the intelligent personalization of news services and the way it has been applied to a relevant Spanish newspaper: ABC.
Our methodology integrates textual content analysis tasks and machine learning techniques to achieve an elaborated user model, which represents separately short-term needs and long-term multi-topic interests. The characterization of a user's interests includes his preferences about structure (newspaper sections), content and information delivery. A wide coverage and non-specific-domain classification of topics and a personal set of keywords allow the user to define his preferences about content. Machine learning techniques are used to obtain an initial representation of each category of the topic classification. Finally, we introduce some details about the Mercurio system, which is being used to implement this methodology for ABC. We describe our experience and an evaluation of the system in comparison with other commercial systems.", + "neighbors": [ + 780, + 1083 + ], + "mask": "Train" + }, + { + "node_id": 349, + "label": 3, + "text": "Logical Semantics and Language for Databases with Partial and Complete Tuples and Sets (Extended Abstract) We discuss the semantics of complex object databases with both partial and complete tuples and sets. We redefine the notion of database to reflect the existence of partial and complete tuples and sets and study how to integrate partial information about tuples and sets spread in the database and check consistency in the meantime. We also present a deductive language RLOG II for complex objects with null/unknown and inconsistent values based on Relationlog. The main novel feature of the language is that it is the only one that supports the null extended nested relational algebra operations directly and more importantly recursively. This work provides a firm logical foundation for nested relational and complex object databases that have both partial and complete tuples and sets and solves an open problem of supporting recursion with generic null/...", + "neighbors": [ + 861 + ], + "mask": "Train" + }, + { + "node_id": 350, + "label": 3, + "text": "ROL2: Towards a Real Deductive Object-Oriented Database Language ROL is a strongly typed deductive object-oriented database language. It integrates many important features of deductive databases and object-oriented databases. However, it is only a structurally object-oriented language. In this paper, we present our extension of ROL called ROL2. Most importantly, ROL2 supports behaviorally object-oriented features such as rule-based methods and encapsulation so that it is now a real deductive object-oriented database language. It supports in a rule-based framework nearly all important object-oriented features such as object identity, complex objects, typing, information hiding, rule-based methods, encapsulation of such methods, overloading, late binding, polymorphism, class hierarchies, multiple structural and behavioral inheritance with overriding, blocking, and conflict handling. It is so far the only deductive system that supports all these features in a pure rule-based framework.
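[Editor's note] Node 348 leaves the choice of linear classifier open; the mechanics are the same for any of them. A perceptron over sparse bag-of-words features, with invented two-topic toy data, stands in here for whatever linear model the authors actually trained:

```python
from collections import defaultdict

def train_perceptron(docs, labels, epochs=5):
    """One weight vector per topic; sparse bag-of-words features.

    Any linear classifier (Rocchio, Winnow, linear SVM, ...) fits the
    same mold; the perceptron is just the simplest to show.
    """
    w = {l: defaultdict(float) for l in set(labels)}
    for _ in range(epochs):
        for doc, gold in zip(docs, labels):
            pred = max(w, key=lambda l: sum(w[l][t] for t in doc))
            if pred != gold:                  # mistake-driven update
                for t in doc:
                    w[gold][t] += 1.0
                    w[pred][t] -= 1.0
    return w

def classify(w, doc):
    return max(w, key=lambda l: sum(w[l][t] for t in doc))

docs = [["match", "goal", "league"], ["election", "senate"],
        ["coach", "goal"], ["vote", "election"]]
labels = ["sports", "politics", "sports", "politics"]
w = train_perceptron(docs, labels)
print(classify(w, ["league", "coach"]))       # -> sports
```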
Keywords: object-oriented databases, deductive databases, ...", + "neighbors": [ + 1063 + ], + "mask": "Train" + }, + { + "node_id": 351, + "label": 4, + "text": "Virtual Keyboards This paper describes a novel scheme for vision-based human computer interaction in which traditional input and output devices, monitors, keyboards and mice, are replaced with augmented reality displays, projection systems and cameras. User input is accomplished by projecting an image of the interface onto a flat surface in the scene which is monitored with a video camera. The scheme hinges on the observation that the relationship between the three surfaces of interest, the work surface, the virtual keyboard and the image obtained by the camera, can be characterized by projective transformations of RP^2. This observation leads to a fast and accurate online calibration algorithm. The basic advantage of the vision based interaction techniques proposed in this paper is that they do not involve mechanical input devices such as keyboards, mice and touch screens. There are no moving parts and no wires to connect to the interface surface. By avoiding a physical instantiation of t...", + "neighbors": [ + 334 + ], + "mask": "Validation" + }, + { + "node_id": 352, + "label": 4, + "text": "Adaptable and Adaptive Information Provision for All Users, Including Disabled and Elderly People Due to the tremendously increasing popularity of the World-Wide Web, hypermedia is going to be the leading online information medium for some years to come and will most likely become the standard gateway for citizens to the \"information highway\". Already today, visitors of web sites are generally heterogeneous and have different needs, and this is likely to increase in the future. The aim of the AVANTI project is to cater hypermedia information to these individual needs by adapting the content and the presentation of web pages to each individual user. The special needs of elderly and disabled users are also partly considered. A model of the characteristics of user groups, individual users and usage environments, and a domain model are exploited in the adaptation process. One aim of this research is to verify that adaptation and user modeling techniques that were hitherto mostly used for catering interactive software systems to able-bodied users also prove useful for adaptation to users with special needs. Another original aspect is the development of a network-wide user modeling server that can concurrently accommodate the user modeling needs of several applications and several instances of an application within a distributed computing environment.", + "neighbors": [ + 793 + ], + "mask": "Train" + }, + { + "node_id": 353, + "label": 0, + "text": "Autonomous Robot that Uses Symbol Recognition and Artificial Emotion to Attend the AAAI Conference This paper describes our approach to designing an autonomous robot for the AAAI Mobile Robot Challenge, making the robot attend the National Conference on AI. The goal was to do a simplified version of the whole task, by integrating methodologies developed in various research projects conducted in our laboratory. Original contributions are the use of a symbol recognition technique to make the robot read signs, artificial emotion for expressing the state of the robot in the accomplishment of its goals, a touch screen for human-robot interaction, and a charging station for allowing the robot to recharge when necessary.
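[Editor's note] The projective transformations of RP^2 that node 351's calibration relies on can be recovered from four point correspondences by the standard direct linear transform. The correspondences below are made-up numbers, and this is the textbook DLT, not necessarily the paper's online algorithm:

```python
import numpy as np

def homography(src, dst):
    """Direct linear transform: the 3x3 projective map taking each
    src point to the matching dst point (4+ correspondences)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The null vector of A (smallest singular value) gives H up to scale.
    _, _, vt = np.linalg.svd(np.array(rows, float))
    return vt[-1].reshape(3, 3)

def apply_h(H, pt):
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]

# Corners of a projected keyboard as seen by the camera (invented numbers).
surface = [(0, 0), (1, 0), (1, 1), (0, 1)]
image = [(10, 12), (110, 15), (105, 95), (8, 90)]
H = homography(surface, image)
print(apply_h(H, (0.5, 0.5)))   # where the keyboard center lands in the image
```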
All of these aspects are influenced by the different steps to be followed by the robot attendee to complete the task from start to end. Introduction LABORIUS is a young research laboratory interested in designing autonomous systems that can assist humans in real-life tasks. To do so, robots need some sort of \"social intelligence\", giving them the ability to ...", + "neighbors": [ + 1137 + ], + "mask": "Train" + }, + { + "node_id": 354, + "label": 3, + "text": "Efficient Concurrency Control for Broadcast Environments A crucial consideration in environments where data is broadcast to clients is the low bandwidth available for clients to communicate with servers. Advanced applications in such environments do need to read data that is mutually consistent as well as current. However, given the asymmetric communication capabilities and the needs of clients in mobile environments, traditional serializability-based approaches are too restrictive, unnecessary, and impractical. We thus propose the use of a weaker correctness criterion called update consistency and outline mechanisms based on this criterion that ensure (1) the mutual consistency of data maintained by the server and read by clients, and (2) the currency of data read by clients. Using these mechanisms, clients can obtain data that is current and mutually consistent \"off the air\", i.e., without contacting the server to, say, obtain locks. Experimental results show a substantial reduction in response times as compared to existing (serializability-based) approaches. A further attractive feature of the approach is that if caching is possible at a client, weaker forms of currency can be obtained while still satisfying the mutual consistency of data.", + "neighbors": [ + 174, + 373, + 470 + ], + "mask": "Train" + }, + { + "node_id": 355, + "label": 5, + "text": "Ensemble Learning for Intrusion Detection in Computer Networks The security of computer networks plays a strategic role in modern computer systems. In order to enforce high protection levels against threats, a number of software tools are currently being developed. Intrusion Detection Systems aim at detecting intruders who have eluded the \"first line\" of protection. In this paper, a pattern recognition approach to network intrusion detection based on ensemble learning paradigms is proposed. The potentialities of such an approach for data fusion and some open issues are outlined.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 356, + "label": 0, + "text": "Planning Agents in James Abstract \u2014 Testing is an obligatory step in developing multi-agent systems. For testing multi-agent systems in virtual, dynamic environments, simulation systems are required that support a modular, declarative construction of experimental frames, that facilitate the embedding of a variety of agent architectures, and that allow efficient parallel, distributed execution. We introduce the system James (A Java-Based Agent Modeling Environment for Simulation). In James agents and their dynamic environment are modeled as reflective, time-triggered state automata. Its possibilities to compose experimental frames based on predefined components, to express temporal interdependencies, to capture the phenomenon of pro-activeness and reflectivity of agents are illuminated by experiments with planning agents. The underlying planning system is a general purpose system, about which no empirical results exist besides traditional static benchmark tests.
We analyze the interplay between heuristics for selecting goals, viewing range, commitment strategies, explorativeness, and trust in the persistence of the world and uncover properties of the agent, the planning engine and the chosen test scenario: Tileworld.", + "neighbors": [ + 957, + 1266 + ], + "mask": "Train" + }, + { + "node_id": 357, + "label": 0, + "text": "An Architecture to Guide Crowds Using a Rule-Based Behavior System This paper describes a Client/Server architecture to combine the control of human agents performing \"intelligent actions\" (guided by a Rule-Based Behavior System -- RBBS) with the management of autonomous crowds which perform pre-programmed actions. Our main goal being the ability to model crowds formed by a large number of agents (e.g. 1000), we have used pre-programmed actions and basic behaviors. In addition, RBBS provides the user with an interface for real-time behavior control of some groups of the crowd. This paper presents how the Server application deals with virtual human agents' behaviors using a rule-based system. Keywords Multi-agent co-ordination and collaboration, agent architectures, network agents, real-time performance, synthetic agents, rule-based system, human crowds' model. 1. INTRODUCTION Virtual humans grouped together to form crowds populating virtual worlds allow a more intuitive feeling of presence. However, the crowd is not only needed to create an at...", + "neighbors": [], + "mask": "Test" + }, + { + "node_id": 358, + "label": 1, + "text": "Parameter-less Genetic Algorithm: A Worst-case Time and Space Complexity Analysis In this paper, the worst-case analysis of the time and space complexity of the parameter-less genetic algorithm versus the genetic algorithm with an optimal population size is provided and the results of the analysis are discussed. Since the assumptions in order for the analysis to be correct are very weak, the result is applicable to a wide range of problems. Various configurations of the parameter-less genetic algorithm are considered and the results of their time and space complexity are compared. 1 Introduction A parameter-less genetic algorithm (Harik & Lobo, 1999) is an alternative to a common trial-and-error method of tweaking the values of the parameters of the genetic algorithm in order to find a set-up to accurately and reliably solve a given problem. The algorithm manages a number of independent runs of the genetic algorithm with different population sizes with the remaining parameters set to fixed values according to the theory of genetic algorithms' control maps introduce...", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 359, + "label": 0, + "text": "The Repair of Speech Act Misunderstandings by Abductive Inference In this paper, we have concentrated on the repair of misunderstanding. Our colleagues Heeman and Edmonds have looked at the repair of non-understanding. The difference between the two situations is that in the former, the agent derives exactly one interpretation of an utterance and hence is initially unaware of any problem; in the latter, the agent derives either more than one interpretation, with no way to choose between them, or no interpretation at all, and so the problem is immediately apparent. Heeman and Edmonds looked in particular at cases in which a referring expression uttered by one conversant was not understood by the other (Heeman and Hirst 1995; Edmonds 1994; Hirst et al. 1994).
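[Editor's note] Node 358 analyzes Harik and Lobo's parameter-less GA, which replaces population-size tuning by racing ever-larger populations. The sketch below runs the sizes sequentially with a halving evaluation budget, a simplification of the original interleaved schedule; the fitness function, operators, and constants are invented:

```python
import random

def parameterless_ga(fitness, genome_len, budget=20000, seed=0):
    """Race GAs with doubling population sizes under a shared budget."""
    rng = random.Random(seed)
    best = None
    size = 4
    while budget > 0:
        gens = max(1, budget // (2 * size))       # this size's share
        pop = [[rng.random() < 0.5 for _ in range(genome_len)]
               for _ in range(size)]
        for _ in range(gens):
            scored = sorted(pop, key=fitness, reverse=True)
            budget -= size
            pop = scored[:size // 2]              # truncation selection
            while len(pop) < size:                # uniform crossover
                a, b = rng.sample(scored[:size // 2], 2)
                child = [x if rng.random() < 0.5 else y for x, y in zip(a, b)]
                if rng.random() < 0.1:            # light mutation
                    child[rng.randrange(genome_len)] ^= True
                pop.append(child)
            if budget <= 0:
                break
        cand = max(pop, key=fitness)
        if best is None or fitness(cand) > fitness(best):
            best = cand
        size *= 2                                 # next competitor: 2x larger
    return best

onemax = sum   # fitness: count of True genes
print(onemax(parameterless_ga(onemax, genome_len=30)))
```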
Clark and his colleagues (Clark and Wilkes-Gibbs 1986; Clark 1993) have shown that in such situations, conversants will collaborate on repairing the problem by, in effect, negotiating a reconstruction or elaboration of the referring expression. Heeman and Edmonds model this with a plan recognition and generation system that can recognize faulty plans and try to repair them. Thus (as in our own model) two copies of the system can converse with each other, negotiating referents of referring expressions that are not understood by trying to recognize the referring plans of the other, repairing them where necessary, and presenting the new referring plan to the other for approval.",
    "neighbors": [
      178
    ],
    "mask": "Test"
  },
  {
    "node_id": 360,
    "label": 3,
    "text": "SENTINEL: A Multiple Engine Information Retrieval and Visualization System We describe a prototype Information Retrieval system, SENTINEL, under development at Harris Corporation's Information Systems Division. SENTINEL is a fusion of multiple information retrieval technologies, integrating n-grams, a vector space model, and a neural network training rule. One of the primary advantages of SENTINEL is its 3-dimensional visualization capability that is based fully upon the mathematical representation of information within SENTINEL. This 3-dimensional visualization capability provides users with an intuitive understanding and allows relevance feedback/query refinement techniques to be better utilized, resulting in higher retrieval accuracy (precision).",
    "neighbors": [],
    "mask": "Validation"
  },
  {
    "node_id": 361,
    "label": 0,
    "text": "Foreign Event Handlers to Maintain Information Consistency and System Adequacy The purpose of this paper is to describe novel applications of Mobile Code technology which have not appeared yet but should be feasible with our current knowledge of the domain. These new applications contradict the often-made observation that Mobile Code is just another technique that does not really bring many more possibilities than existing technologies for distributed applications. There is a whole class of problems that have not received much attention yet and that are not well managed by current environments. These are the problems of maintaining consistency of dynamic information and maintaining systems in adequacy with the ever changing requirements of customers. Our motivation is that, besides the quantitative improvements that most people expect from using Mobile Code, there is also a qualitative benefit which is even more important but not universally recognized now: Mobile Code allows communication with less conventions than message passing [5, 3]. Processes interconnected by Mobile Code still have to agree on high level encoding and synchronization primitives but these agreements are only a fraction of what is necessary to communicate. Many context dependent aspects can be encapsulated inside Mobile Code and changed when the context changes. Encapsulation has the same benefits here as in other software engineering domains: it reduces the dependency between components, thus reducing the number of modifications that we must make to software in order to adapt it to new requirements. 
For this reason we think that it is the best way to cope with systems that are distributed, hence not manageable by a single person or organization; that are dynamic, because the information they contain must change when the world itself changes; and that are evolving since the users discover n...",
    "neighbors": [],
    "mask": "Validation"
  },
  {
    "node_id": 362,
    "label": 5,
    "text": "A Competitive Layer Model for Feature Binding and Sensory Segmentation We present a recurrent neural network for feature binding and sensory segmentation, the competitive layer model (CLM). The CLM uses topographically structured competitive and cooperative interactions in a layered network to partition a set of input features into salient groups. The dynamics is formulated within a standard additive recurrent network with linear threshold neurons. Contextual relations among features are coded by pairwise compatibilities which define an energy function to be minimized by the neural dynamics. Due to the usage of dynamical winner-take-all circuits the model gains more flexible response properties than spin models of segmentation by exploiting amplitude information in the grouping process. We prove analytic results on the convergence and stable attractors of the CLM, which generalize earlier results on winner-take-all networks, and incorporate deterministic annealing for robustness against local minima. The piecewise linear dynamics of the CLM allows a linear eigensubspace analysis which we use to analyze the dynamics of binding in conjunction with annealing. For the example of contour detection we show how the CLM can integrate figure-ground segmentation and grouping into a unified model.",
    "neighbors": [
      1095
    ],
    "mask": "Train"
  },
  {
    "node_id": 363,
    "label": 0,
    "text": "Multiagent Systems Specification by UML Statecharts Aiming at Intelligent Manufacturing Multiagent systems are a promising new paradigm in computing, which is contributing to various fields. Many theories and technologies have been developed in order to design and specify multiagent systems; however, no standard procedure is used at present. Industrial applications often have a complex structure and need plenty of working resources. They require a standard specification method as well. For a standard method to design and specify software systems, we believe that one of the keys to wide acceptance is simplicity. In this paper, we propose a method to specify multiagent systems, namely with UML statecharts. We use them for specifying almost all aspects of multiagent systems, because we think that it is an advantage to keep everything in one type of diagram. We apply",
    "neighbors": [
      184,
      508,
      602
    ],
    "mask": "Train"
  },
  {
    "node_id": 364,
    "label": 3,
    "text": "A Performance Evaluation of Alternative Mapping Schemes for Storing XML Data in a Relational Database XML is emerging as one of the dominant data formats for data processing on the Internet. To query XML data, query languages like XQL, Lorel, XML-QL, or XML-GL have been proposed. In this paper, we study how XML data can be stored and queried using a standard relational database system. For this purpose, we present alternative mapping schemes to store XML data in a relational database and discuss how XML-QL queries can be translated into SQL queries for every mapping scheme. 
We present the results of comprehensive performance experiments that analyze the tradeoffs of the alternative mapping schemes in terms of database size, query performance and update performance. While our discussion is focussed on XML and XML-QL, the results of this paper are relevant for most semi-structured data models and most query languages for semi-structured data. 1 Introduction It has become clear that not all applications are met by the relational, object-relational, or object-oriented data models. ...",
    "neighbors": [
      17,
      78,
      306,
      634
    ],
    "mask": "Train"
  },
  {
    "node_id": 365,
    "label": 1,
    "text": "Saccadic Search with Gabor features applied to Eye Detection and Real-Time Head Tracking The Gabor decomposition is a ubiquitous tool in computer vision. Nevertheless, it is generally considered computationally demanding for active vision applications. We suggest an attention-driven approach to feature detection inspired by the human saccadic system. A dramatic speedup is achieved by computing the Gabor decomposition only on the points of a sparse retinotopic grid. An off-line eye detection application and a real-time head localisation and tracking system are presented. The real-time system features a novel eyeball-mounted camera designed to simulate the dynamic performance of the human eye and is, to the best of our knowledge, the first example of an active vision system based on the Gabor decomposition.",
    "neighbors": [
      1152
    ],
    "mask": "Train"
  },
  {
    "node_id": 366,
    "label": 4,
    "text": "How Many Separately Evolved Emotional Beasties Live Within Us? A problem which bedevils the study of emotions, and the study of consciousness, is that we assume a shared understanding of many everyday concepts, such as `emotion', `feeling', `pleasure', `pain', `desire', `awareness', etc. Unfortunately, these concepts are inherently very complex, ill-defined, and used with different meanings by different people. Moreover this goes unnoticed, so that people think they understand what they are referring to even when their understanding is very unclear. Consequently there is much discussion that is inherently vague, often at cross-purposes, and with apparent disagreements that arise out of people unwittingly talking about different things. We need a framework which explains how there can be all the diverse phenomena that different people refer to when they talk about emotions and other affective states and processes. The conjecture on which this paper is based is that adult humans have a type of information-processing architecture, with components whi...",
    "neighbors": [],
    "mask": "Train"
  },
  {
    "node_id": 367,
    "label": 0,
    "text": "Standardizing Agent Communication An Agent Communication Language (ACL) is a collection of speech-act-like message types, with agreed-upon semantics, which facilitate the knowledge and information exchange between software agents.",
    "neighbors": [
      441,
      663,
      1067
    ],
    "mask": "Train"
  },
  {
    "node_id": 368,
    "label": 3,
    "text": "Maintaining Temporal Views Over Non-Temporal Information Sources For Data Warehousing An important use of data warehousing is to provide temporal views over the history of source data that may itself be non-temporal. While recent work in view maintenance is applicable to data warehousing, only non-temporal views have been considered. In this paper, we introduce a framework for maintaining temporal views over non-temporal information sources in a data warehousing environment. 
We describe an architecture for the temporal data warehouse that automatically maintains temporal views over non-temporal source relations, and allows users to ask temporal queries using these views. Because of the dimension of time, a materialized temporal view may need to be updated not only when source relations change, but also as time advances. We present incremental techniques to maintain temporal views for both cases, and outline the implementation of our approach in the WHIPS warehousing prototype at Stanford. 1 Introduction A data warehouse is a repository for efficient querying ...",
    "neighbors": [
      795
    ],
    "mask": "Train"
  },
  {
    "node_id": 369,
    "label": 1,
    "text": "Active Markov Localization for Mobile Robots Localization is the problem of determining the position of a mobile robot from sensor data. Most existing localization approaches are passive, i.e., they do not exploit the opportunity to control the robot's effectors during localization. This paper proposes an active localization approach. The approach is based on Markov localization and provides rational criteria for (1) setting the robot's motion direction (exploration), and (2) determining the pointing direction of the sensors so as to most efficiently localize the robot. Furthermore, it is able to deal with noisy sensors and approximative world models. The appropriateness of our approach is demonstrated empirically using a mobile robot in a structured office environment. Key words: Robot Position Estimation, Autonomous Service Robots 1 Introduction To navigate reliably in indoor environments, a mobile robot must know where it is. Over the last few years, there has been a tremendous scientific interest in algorithms for estimating ...",
    "neighbors": [
      295,
      899,
      1194
    ],
    "mask": "Test"
  },
  {
    "node_id": 370,
    "label": 5,
    "text": "Three New Algorithms for Projective Bundle Adjustment with Minimum Parameters Bundle adjustment is a technique used to compute the maximum likelihood estimate of structure and motion from image feature correspondences. In practice, large non-linear systems have to be solved, most of the time using an iterative optimization process starting from a sub-optimal solution obtained by using linear methods. The behaviour, in terms of convergence, and the computational cost of this process depend on the parameterization used to represent the problem, i.e. of structure and motion.",
    "neighbors": [
      623
    ],
    "mask": "Test"
  },
  {
    "node_id": 371,
    "label": 0,
    "text": "Non-Supervised Sensory-Motor Agents Learning This text discusses a proposal for creation and destruction of neurons based on the sensory-motor activity. This model, called sensory-motor schema, is used to define a sensory-motor agent as a collection of activity schemata. The activity schema permits a useful distribution of neurons in a conceptual space, creating concepts based on action and sensation. Such an approach is inspired by the theory of the Swiss psychologist and epistemologist Jean Piaget, and intends to make explicit the account of the processes of continuous interaction between sensory-motor agents and their environments when agents are producing cognitive structures. 1. Introduction The notion of an autonomous agent plays a central role in contemporary research on Artificial Intelligence [3]. Cognitive agents are based on symbolic processing mechanisms. Reactive agents are based on alternative computational mechanisms like neural networks, analogic processing, etc. 
The alternative approach using autonomous agents b...",
    "neighbors": [],
    "mask": "Test"
  },
  {
    "node_id": 372,
    "label": 3,
    "text": "Simultaneous Proxy Evaluation The Simultaneous Proxy Evaluation (SPE) architecture is designed to evaluate multiple web proxies in parallel using object requests which are duplicated and passed to each proxy. The SPE architecture reduces problems of unrealistic test environments, dated and/or inappropriate workloads, and is additionally applicable to content-based prefetching proxies. It is intended to measure byte and object hit rates, client-perceived latencies, and cache consistency. We characterize a space of proxy evaluation methodologies and place this architecture within it. 1 Introduction This paper presents a new architecture for the evaluation of proxy caches. Initially, it grew out of research in techniques for prefetching in web caches. In particular, we found that existing mechanisms for the evaluation of proxy caches were not well suited to prefetching systems. Objective evaluation is paramount to all research, whether applied or academic. Since this is certainly relevant when exploring various approac...",
    "neighbors": [],
    "mask": "Train"
  },
  {
    "node_id": 373,
    "label": 3,
    "text": "Exploiting Versions for Handling Updates in Broadcast Disks Recently, broadcasting has attracted considerable attention as a means of disseminating information to large client populations in both wired and wireless settings. In this paper, we exploit versions to increase the concurrency of client transactions in the presence of updates. We consider three alternative mediums for storing versions: (a) the air: older versions are broadcast along with current data, (b) the client's local cache: older versions are maintained in cache, and (c) a local database or warehouse at the client: part of the server's database is maintained at the client in the form of a multiversion materialized view. The proposed techniques are scalable in that they provide consistency without any direct communication from clients to the server. Performance results show that the overhead of maintaining versions can be kept low, while providing a considerable increase in concurrency. 1 Introduction While traditionally data are delivered from servers to clients on demand, a...",
    "neighbors": [
      174,
      354,
      470
    ],
    "mask": "Train"
  },
  {
    "node_id": 374,
    "label": 3,
    "text": "The Persistent Cache: Improving OID Indexing in Temporal Object-Oriented Database Systems In a temporal OODB, an OID index (OIDX) is needed to map from OID to the physical location of the object. In a transaction time temporal OODB, the OIDX should also index the object versions. In this case, the index entries, which we call object descriptors (OD), also include the commit timestamp of the transaction that created the object version. The OIDX in a non-temporal OODB only needs to be updated when an object is created, but in a temporal OODB, the OIDX has to be updated every time an object is updated. We have in a previous study shown that this can be a potential bottleneck, and in this report, we present the Persistent Cache (PCache), a novel approach which reduces the index update and lookup costs in temporal OODBs. In this report, we develop a cost model for the PCache, and use this to show that the use of a PCache can reduce the average access cost to only a fraction of the cost when not using the PCache. 
Even though the primary context of this report is OID indexing in ...",
    "neighbors": [
      630
    ],
    "mask": "Train"
  },
  {
    "node_id": 375,
    "label": 0,
    "text": "A Knowledge-Based Approach for Designing Intelligent Team Training Systems This paper presents a knowledge-based approach to designing team training systems using intelligent agents. We envision a computer-based training system in which teams are trained by putting them through scenarios, which allow them to practice their team skills. There are two important roles that intelligent agents can play; these are virtual team members and tutors. To carry out these functions, these agents must be equipped with an understanding of the task domain, the team structure, the selected decision-making process and their beliefs about other team members' mental states. Even though existing agent teamwork models incorporate many of the elements listed above, they have not focused on analyzing information needs of team members to support proactive agent interactions. To encode the team knowledge, we have developed a representation language, based on the BDI model, called MALLET. A Petri Net model of an individual agent's plans and information needs can be derived from the role des...",
    "neighbors": [
      686,
      775,
      964
    ],
    "mask": "Train"
  },
  {
    "node_id": 376,
    "label": 4,
    "text": "Improving Interaction with Virtual Environments Introduction Virtual environments (VEs) provide a computer-based interface to a real-life or abstract space, using 3D graphics and 3D interaction techniques. VEs represent a novel interface style which offers new possibilities and challenges to human-computer interface design. However, studies of the design of VEs (Kaur et al., 1996) show that designers lack a coherent approach to design, especially interaction design. Designers appear to be pre-occupied with difficult technical issues and think little about supporting user interaction. However, major interaction problems have been found with current VEs, such as disorientation, perceptual misjudgements and difficulty finding and understanding available interactions (McGovern, 1993; COVEN, 1997). These common problems have been known to result in user frustration and a low usability and acceptability for the VE (Kaur et al., 1996; Miller 1994). Guidance is needed on interaction design for VEs to avoid such usability problems.",
    "neighbors": [
      678,
      1077,
      1105
    ],
    "mask": "Train"
  },
  {
    "node_id": 377,
    "label": 5,
    "text": "From Markov Random Fields to Associative Memories and Back: Spin-Glass Markov Random Fields In this paper we propose a fully connected energy function for Markov Random Field (MRF) modeling which is inspired by Spin-Glass Theory (SGT). Two major tasks in MRF modeling are how to define the neighborhood system for irregular sites and how to choose the energy function for a proper encoding of constraints. The proposed energy function offers two major advantages that make it possible to avoid MRF modeling problems in the case of irregular sites. First, full connectivity makes the neighborhood definition irrelevant, and second, the energy function is defined independently of the considered application. 
A basic assumption in SGT is the infinite dimension of the configuration space in which the energy is defined; the choice of a particular energy function, which depends on the scalar product between configurations, allows us to use a kernel function in the energy formulation; this solves the problem of high dimensionality and makes it possible to use SGT results in an MRF framework. We call this new model Spin Glass-Markov Random Field (SG-MRF). Experiments on textures and objects databases show the correctness and effectiveness of the proposed model",
    "neighbors": [],
    "mask": "Train"
  },
  {
    "node_id": 378,
    "label": 5,
    "text": "HICAP: An Interactive Case-Based Planning Architecture and its Application to Noncombatant Evacuation Operations This paper describes HICAP (Hierarchical Interactive Case-based Architecture for Planning), a general purpose planning architecture that we have developed and applied to assist military commanders and their staff with planning NEOs (Noncombatant Evacuation Operations). HICAP integrates a hierarchical task editor, HTE, with a conversational case-based planning tool, NaCoDAE/HTN. In this application, HTE maintains an agenda of tactical planning tasks that, according to the guidelines indicated by military doctrine, must be addressed in a NEO plan. It also supports several bookkeeping tasks, which are crucial for large-scale planning tasks that differ greatly among different NEO operations. Military planning personnel select a task to decompose from HTE and then use NaCoDAE/HTN to interactively refine it into an operational plan by selecting and applying cases, which represent task decompositions from previous NEO operations. Thus, HICAP helps commanders by using previous experience to fo...",
    "neighbors": [
      924
    ],
    "mask": "Train"
  },
  {
    "node_id": 379,
    "label": 2,
    "text": "Learning to Construct Knowledge Bases from the World Wide Web The World Wide Web is a vast source of information accessible to computers, but understandable only to humans. The goal of the research described here is to automatically create a computer understandable knowledge base whose content mirrors that of the World Wide Web. Such a knowledge base would enable much more effective retrieval of Web information, and promote new uses of the Web to support knowledge-based inference and problem solving. Our approach is to develop a trainable information extraction system that takes two inputs. The first is an ontology that defines the classes (e.g., company, person, employee, product) and relations (e.g., employed by, produced by) of interest when creating the knowledge base. The second is a set of training data consisting of labeled regions of hypertext that represent instances of these classes and relations. Given these inputs, the system learns to extract information from other pages and hyperlinks on the Web. This article describes our general approach, several machine learning algorithms for this task, and promising initial results with a prototype system that has created a knowledge base describing university people, courses, and research projects.",
    "neighbors": [
      133,
      322,
      609,
      643,
      855,
      875,
      956,
      1122,
      1178
    ],
    "mask": "Train"
  },
  {
    "node_id": 380,
    "label": 0,
    "text": "Algorithms for Optimizing Leveled Commitment Contracts In automated negotiation systems consisting of self-interested agents, contracts have traditionally been binding. Leveled commitment contracts\u2014i.e. 
contracts where each party can decommit by paying a predetermined penalty, were recently shown to improve Pareto efficiency even if agents rationally decommit in Nash equilibrium using inflated thresholds on how good their outside offers must be before they decommit. This paper operationalizes the four leveled commitment contracting protocols by presenting algorithms for using them. Algorithms are presented for computing the Nash equilibrium decommitting thresholds and decommitting probabilities given the contract price and the penalties. Existence and uniqueness of the equilibrium are analyzed. Algorithms are also presented for optimizing the contract itself (price and penalties). Existence and uniqueness of the optimum are analyzed. Using the algorithms we offer a contract optimization service on the web as part of eMediator, our next generation electronic commerce server. Finally, the algorithms are generalized to contracts involving more than two agents. 1",
    "neighbors": [
      589
    ],
    "mask": "Validation"
  },
  {
    "node_id": 381,
    "label": 3,
    "text": "ADOME: An Advanced Object Modelling Environment ADOME, ADvanced Object Modeling Environment, an approach to integrating data and knowledge management based on object-oriented technology, is presented. Next generation information systems will require more flexible data modelling capabilities than those provided by current object-oriented DBMSs. In particular, integration of data and knowledge management capabilities will become increasingly important. In this context, ADOME provides versatile role facilities that serve as \"dynamic binders\" between data objects and production rules, thereby facilitating flexible data and knowledge management integration. A prototype that implements this mechanism and the associated operators has been constructed on top of a commercial object-oriented DBMS and a rule base system. Index Terms: Object modeling, knowledge semantics, dynamic roles, object-oriented databases, next-generation information systems 1 Introduction Increasingly, organizations require more intelligent information management. In o...",
    "neighbors": [],
    "mask": "Train"
  },
  {
    "node_id": 382,
    "label": 2,
    "text": "Information Retrieval on the World Wide Web and Active Logic: A Survey and Problem Definition As more information becomes available on the World Wide Web (there are currently over 4 billion pages covering most areas of human endeavor), it becomes more difficult to provide effective search tools for information access. Today, people access web information through two main kinds of search interfaces: Browsers (clicking and following hyperlinks) and Query Engines (queries in the form of a set of keywords showing the topic of interest). The first process is tentative and time-consuming and the second may not satisfy the user because of many inaccurate and irrelevant results. Better support is needed for expressing one's information need and returning high quality search results by web search tools. 
There appears to be a need for systems that do reasoning under uncertainty and are flexible enough to recover from the contradictions, inconsistencies, and irregularities that such reasoning involves.",
    "neighbors": [
      224,
      281,
      453,
      457,
      774,
      933,
      1031,
      1059,
      1183
    ],
    "mask": "Validation"
  },
  {
    "node_id": 383,
    "label": 0,
    "text": "Antisocial Agents and Vickrey Auctions In recent years auctions have become more and more important in the field of multiagent systems as useful mechanisms for resource allocation and task assignment. In many cases the Vickrey (second-price sealed-bid) auction is used as a protocol that prescribes how the individual agents have to interact in order to come to an agreement. We show that the Vickrey auction, despite its theoretical benefits, is inappropriate if \"antisocial\" agents participate in the auction process. More specifically, an antisocial attitude for economic agents that makes reducing the profit of competitors their main goal besides maximizing their own profit is introduced. Under this novel condition, agents need to deviate from the dominant truth-telling strategy. This paper presents a strategy for bidders in repeated Vickrey auctions who are intending to inflict losses on fellow agents in order to be more successful, not in absolute measures, but relative to the group of bidders. The strategy is evaluated in a simple task allocation scenario.",
    "neighbors": [
      624,
      667
    ],
    "mask": "Train"
  },
  {
    "node_id": 384,
    "label": 4,
    "text": "Practical Guidelines for the Readability of IT-architecture Diagrams This paper presents the work done to establish guidelines for the creation of readable IT-architecture diagrams and gives some examples of guidelines and some examples of improved diagrams. These guidelines are meant to assist practicing IT-architects in preparing the diagrams to communicate their architectures to the various stakeholders. Diagramming has always been important in information technology (IT), but the recent interest in IT-architecture, the widespread use of software and developments in electronic communication, make it necessary to again look at the 'art of making diagrams' for this particular class and its users. The guidelines indicate how various visual attributes, like hierarchy, layout, color, form, graphics, etc. can contribute to the readability of IT-architecture diagrams. The emphasis is on the outward appearance of diagrams. Some additional support is given for the thinking/reasoning processes while designing or using a set of diagrams and an attempt is made to arrive at a rationale of these guidelines. An evaluation process has been performed with three groups of practicing IT-architects. The outcome of this evaluation is presented. This work is part of a more comprehensive research project on \"Visualisation of IT-architecture\".",
    "neighbors": [
      423
    ],
    "mask": "Validation"
  },
  {
    "node_id": 385,
    "label": 4,
    "text": "Gaia: Enabling Active Spaces Ubiquitous computing promotes physical spaces with hundreds of specialized embedded devices that increase our productivity, alleviate some specific everyday tasks and provide new ways of interacting with the computational environment. Personal computers lose the focus of attention due to the fact that the computational environment is spread across the physical space. Therefore, the users' view of the computational environment is finally extended beyond the physical limits of the computer. 
Physical spaces become computer systems, or in other terms, Active Spaces. However, these Active Spaces require novel system software capable of seamlessly coordinating their hidden complexity. Our goal is to extend the model provided by current computer systems to allow interaction with physical spaces and their contained entities (physical and virtual) by means of a single abstraction called Active Space. 1. Introduction Ubiquitous computing promotes the proliferation of embedded devices specializ...",
    "neighbors": [
      194
    ],
    "mask": "Train"
  },
  {
    "node_id": 386,
    "label": 1,
    "text": "Fuzzy Concepts and Formal Methods: A Fuzzy Logic Toolkit for Z It has been recognised that formal methods are useful as a modelling tool in requirements engineering. Specification languages such as Z permit the precise and unambiguous modelling of system properties and behaviour. However some system problems, particularly those drawn from the IS problem domain, may be difficult to model in crisp or precise terms. It may also be desirable that formal modelling should commence as early as possible, even when our understanding of parts of the problem domain is only approximate. This paper suggests fuzzy set theory as a possible representation scheme for this imprecision or approximation. We provide a summary of a toolkit that defines the operators, measures and modifiers necessary for the manipulation of fuzzy sets and relations. We also provide some examples of the laws which establish an isomorphism between the extended notation presented here and conventional Z when applied to boolean sets and relations.",
    "neighbors": [],
    "mask": "Validation"
  },
  {
    "node_id": 387,
    "label": 5,
    "text": "Choosing Good Distance Metrics and Local Planners for Probabilistic Roadmap Methods This paper presents a comparative evaluation of different distance metrics and local planners within the context of probabilistic roadmap methods for motion planning. Both C-space and Workspace distance metrics and local planners are considered. The study concentrates on cluttered three-dimensional Workspaces typical, e.g., of mechanical designs. Our results include recommendations for selecting appropriate combinations of distance metrics and local planners for use in motion planning methods, particularly probabilistic roadmap methods. Our study of distance metrics showed that the importance of the translational distance increased relative to the rotational distance as the environment became more crowded. We find that each local planner makes some connections that none of the others do \u2014 indicating that better connected roadmaps will be constructed using multiple local planners. We propose a new local planning method we call rotate-at-s that outperforms the common straight-line in C-space method in crowded environments.",
    "neighbors": [
      320,
      446
    ],
    "mask": "Train"
  },
  {
    "node_id": 388,
    "label": 4,
    "text": "Mining Usability Information from Log Files: A Multi-Pronged Approach rooms is configurable by its occupants in how they organize various tools housing their data, documents, and graphics. The TW system provides for synchronous and asynchronous user interactions, but importantly these interactions are in the context of relevant data. 
The work in this experiment was characterized by several full group meetings (for planning and coordination) interspersed with periods of individual activity (asynchronous work) and smaller coordination meetings of two or three team members around the \"hand-off\" of output from a task used as input for another task. Collected Data The native version of TW produces a server-based log-file that contains information about the identity of users entering the distributed application, the identity of the rooms through which users navigate, file uploads, and message passing between users. This set of interactions was deemed too rudimentary for capturing the type of data needed for usability analysis. Since the source co",
    "neighbors": [
      45
    ],
    "mask": "Train"
  },
  {
    "node_id": 389,
    "label": 3,
    "text": "Scalable Algorithms for Large-Scale Temporal Aggregation The ability to model time-varying natures is essential to many database applications such as data warehousing and mining. However, the temporal aspects provide many unique characteristics and challenges for query processing and optimization. Among the challenges is computing temporal aggregates, which is complicated by having to compute temporal grouping. In this paper, we introduce a variety of temporal aggregation algorithms that overcome major drawbacks of previous work. First, for small-scale aggregations, both the worst-case and average-case processing time have been improved significantly. Second, for large-scale aggregations, the proposed algorithms can deal with a database that is substantially larger than the size of available memory. Third, the parallel algorithm designed on a shared-nothing architecture achieves scalable performance by delivering nearly linear scale-up and speed-up. The contributions made in this paper are particularly important because the rate of increase ...",
    "neighbors": [
      7,
      706
    ],
    "mask": "Train"
  },
  {
    "node_id": 390,
    "label": 4,
    "text": "Feasibility Discussion of a Collaborative Virtual Environment - FINDING ALTERNATIVE WAYS FOR UNIVERSITY MEMBERS INTERACTION This paper discusses the potential impact and roadmap for the creation of a Collaborative Virtual Environment where all university members can interact in novel ways. Some actual NetLab figures are presented to justify this evolution as feasible. A related project that uses the potential created by the \"laptops for all\" action, a virtual incubator to simulate entrepreneurship bias, is presented.",
    "neighbors": [],
    "mask": "Train"
  },
  {
    "node_id": 391,
    "label": 2,
    "text": "Automatic Multi-Lingual Information Extraction Information Extraction (IE) is a burgeoning technique because of the explosion of the internet. So far, most IE systems are focusing on English text; most of them are in the supervised learning framework, which requires a large amount of human labor; and most of them can only work in a narrow domain, which is domain dependent. These systems are difficult to port to other languages and other domains because of these inherent shortcomings. Currently, besides western languages like English, there are many other Asian languages which are much different from English. In English, words are delimited by white-spaces so a computer can easily tokenize the input text string. Many languages like Chinese, Japanese, Thai, and Korean do not have word boundaries between words. This poses a difficult problem for information extraction in those languages. 
In this thesis, we intend to implement a self-contained, language-independent automatic IE system. The system is automatic because we are using an unsupervised learning framework in which no labeled data is required for training, or a semi-supervised learning framework in which small amounts of labeled data and large amounts of unlabeled data are used. Specifically, we deal with named entity recognition and entity relation extraction for Chinese and English, but the system can be easily extended to any other languages and other tasks. We implement an unsupervised Chinese word segmenter, a Chinese POS tagger, and we extend maximum entropy models to incorporate unlabeled data for general information extraction.",
    "neighbors": [
      401,
      437,
      523,
      601,
      609,
      891
    ],
    "mask": "Test"
  },
  {
    "node_id": 392,
    "label": 3,
    "text": "Simplifying Data Access: The Energy Data Collection (EDC) Project The massive amount of statistical and text data available from government agencies has created a set of daunting challenges to both research and analysis communities. These problems include heterogeneity, size, distribution, and control of terminology. At the Digital Government Research Center we are investigating solutions to these key problems. In this paper we focus on (1) ontological mappings for terminology standardization, (2) data integration across databases with high-speed query processing, and (3) interfaces for query input and presentation of results. This collaboration between researchers from Columbia University and the Information Sciences Institute of the University of Southern California employs technology developed at both locations, in particular the SENSUS ontology, the SIMS multi-database access planner, the LKB automated dictionary and terminology analysis system, and others. The pilot application targets gasoline data from the Bureau of Labor Statistics, the Energy Information Administration of the Department of Energy, the Census Bureau, and other government agencies. 1",
    "neighbors": [
      506
    ],
    "mask": "Test"
  },
  {
    "node_id": 393,
    "label": 1,
    "text": "Beyond Euclidean Eigenspaces: Bayesian Matching for Visual Recognition We propose a novel technique for direct visual matching of images for the purposes of face recognition and database search. Specifically, we argue in favor of a probabilistic measure of similarity, in contrast to simpler methods which are based on standard Euclidean L2 norms (e.g., template matching) or subspace-restricted norms (e.g., eigenspace matching). The proposed similarity measure is based on a Bayesian analysis of image differences: we model two mutually exclusive classes of variation between two facial images: intra-personal (variations in appearance of the same individual, due to different expressions or lighting) and extra-personal (variations in appearance due to a difference in identity). The high-dimensional probability density functions for each respective class are then obtained from training data using an eigenspace density estimation technique and subsequently used to compute a similarity measure based on the a posteriori probability of membership in the intra-personal class, which is used to rank matches in the database. 
The performance advantage of this probabilistic matching technique over standard Euclidean nearest-neighbor eigenspace matching is demonstrated using results from ARPA's 1996 \"FERET\" face recognition competition, in which this algorithm was found to be the top performer.",
    "neighbors": [
      1218
    ],
    "mask": "Validation"
  },
  {
    "node_id": 394,
    "label": 5,
    "text": "The CMUnited-97 Robotic Soccer Team: Perception and Multiagent Control Robotic soccer is a challenging research domain which involves multiple agents that need to collaborate in an adversarial environment to achieve specific objectives. In this paper, we describe CMUnited, the team of small robotic agents that we developed to enter the RoboCup-97 competition. We designed and built the robotic agents, devised the appropriate vision algorithm, and developed and implemented algorithms for strategic collaboration between the robots in an uncertain and dynamic environment. The robots can organize themselves in formations, hold specific roles, and pursue their goals. In game situations, they have demonstrated their collaborative behaviors on multiple occasions. The robots can also switch roles to maximize the overall performance of the team. We present an overview of the vision processing algorithm which successfully tracks multiple moving objects and predicts trajectories. The paper then focusses on the agent behaviors ranging from low-level individual behaviors to coordinated, strategic team behaviors.",
    "neighbors": [
      155,
      430,
      927,
      1266
    ],
    "mask": "Validation"
  },
  {
    "node_id": 395,
    "label": 2,
    "text": "Improving Short-Text Classification Using Unlabeled Background Knowledge to Assess Document Similarity We describe a method for improving the classification of short text strings using a combination of labeled training data plus a secondary corpus of unlabeled but related longer documents. We show that such unlabeled background knowledge can greatly decrease error rates, particularly if the number of examples or the size of the strings in the training set is small. This is particularly useful when labeling text is a labor-intensive job and when there is a large amount of information available about a particular problem on the World Wide Web. Our approach views the task as one of information integration using WHIRL, a tool that combines database functionalities with techniques from the information-retrieval literature. 1. Introduction The task of classifying textual data that has been culled from sites on the World Wide Web is both difficult and intensively studied (Cohen & Hirsh, 1998; Joachims, 1998; Nigam et al., 1999). Applications of various machine learning techniqu...",
    "neighbors": [
      609,
      643,
      1144
    ],
    "mask": "Train"
  },
  {
    "node_id": 396,
    "label": 0,
    "text": "Composable Agents for Patient Flow Control - Preliminary Concepts In this article we describe our research efforts in coping with a trade-off that can often be found in the control and optimization of today's business processes. Though centralized control may achieve better results in controlling the system behavior, there are usually social, technical and security constraints on applying centralized control. Distributed control on the other hand may cope with these constraints but also entails suboptimal results and communicational overhead. 
Our concept of composable agents tries to allow a dynamic and fluent transition between globalization and localization in business process control by adapting to the current real-world system structure. We are currently evaluating this concept in the framework of a patient flow control project at Charité Berlin. Today's applications of information technology face at least two major aspects of business settings. The first aspect is the partially or fully automated execution of complex business processes. This enfo...",
    "neighbors": [
      25,
      964
    ],
    "mask": "Train"
  },
  {
    "node_id": 397,
    "label": 3,
    "text": "A Query Calculus for Spatio-Temporal Object Databases The development of any comprehensive proposal for spatio-temporal databases involves significant extensions to many aspects of a non-spatio-temporal architecture. One aspect that has received less attention than most is the development of a query calculus that can be used to provide a semantics for spatio-temporal queries and underpin an effective query optimization and evaluation framework. In this paper, we show how a query calculus for spatio-temporal object databases that builds upon the monoid calculus proposed by Fegaras and Maier for ODMG-compliant database systems can be developed. The paper shows how an extension of the ODMG type system with spatial and temporal types can be accommodated into the monoid approach. It uses several queries over historical (possibly spatial) data to illustrate how, by mapping them into monoid comprehensions, the way is open for the application of a logical optimizer based on the normalization algorithm proposed by Fegaras and Maier.",
    "neighbors": [
      59,
      209,
      302,
      481,
      493,
      692
    ],
    "mask": "Train"
  },
  {
    "node_id": 398,
    "label": 0,
    "text": "Making Complex Articulated Agents Dance - An analysis of control methods drawn from robotics, animation, and biology. We discuss the tradeoffs involved in control of complex articulated agents, and present three implemented controllers for a complex task: a physically-based humanoid torso dancing the Macarena. The three controllers are drawn from animation, biological models, and robotics, and illustrate the issues of joint-space vs. Cartesian space task specification and implementation. We evaluate the controllers along several qualitative and quantitative dimensions, considering naturalness of movement and controller flexibility. Finally, we propose a general combination approach to control, aimed at utilizing the strengths of each alternative within a general framework for addressing complex motor control of articulated agents. Key words: articulated agent control, motor control, robotics, animation 1. Introduction Control of humanoid agents, dynamically simulated or physical, is an extremely difficult problem due to the high dimensionality of the control space, i.e., the many degrees of freed...",
    "neighbors": [
      183
    ],
    "mask": "Validation"
  },
  {
    "node_id": 399,
    "label": 3,
    "text": "Segment-Based Approach for Subsequence Searches in Sequence Databases This paper investigates the subsequence searching problem under time warping in sequence databases. Time warping enables finding sequences with similar changing patterns even when they are of different lengths. Our work is motivated by the observation that subsequence searches slow down quadratically as the total length of data sequences increases. 
To resolve this problem, we propose the Segment-Based Approach for Subsequence Searches (SBASS), which modifies the similarity measure from time warping to piecewise time warping and limits the number of possible subsequences to be compared with a query sequence. For efficient retrieval of similar subsequences without false dismissal, we extract feature vectors from all data segments exploiting their monotonically changing properties, and build a multi-dimensional index such as the R-tree or R*-tree. Using this index, queries are processed with four steps: 1) index filtering, 2) feature filtering, 3) successor filtering, and 4) post-proce...",
    "neighbors": [],
    "mask": "Validation"
  },
  {
    "node_id": 400,
    "label": 3,
    "text": "Abduction in Logic Programming This paper is a survey and critical overview of recent work on the extension of Logic Programming to perform Abductive Reasoning (Abductive Logic Programming). We outline the general framework of Abduction and its applications to Knowledge Assimilation and Default Reasoning; and we introduce an argumentation-theoretic approach to the use of abduction as an interpretation for Negation as Failure. We also analyse the links between Abduction and the extension of Logic Programming obtained by adding a form of explicit negation. Finally we discuss the relation between Abduction and Truth Maintenance. 1 Introduction This paper is a survey and analysis of work on the extension of logic programming to perform abductive reasoning. The purpose of the paper is to provide a critical overview of some of the main research results, in order to develop a common framework for evaluating these results, to identify the main unresolved problems, and to indicate directions for future work. The emphasis i...",
    "neighbors": [
      588,
      771,
      1091
    ],
    "mask": "Train"
  },
  {
    "node_id": 401,
    "label": 2,
    "text": "Scenario Customization for Information Extraction Information Extraction (IE) is an emerging NLP technology, whose function is to process unstructured, natural language text, to locate specific pieces of information, or facts, in the text, and to use these facts to fill a database. IE systems today are commonly based on pattern matching. The core IE engine uses a cascade of sets of patterns of increasing linguistic complexity. Each pattern consists of a regular expression and an associated mapping from syntactic to logical form. The pattern sets are customized for each new topic, as defined by the set of facts to be extracted. Construction of a pattern base for a new topic is recognized as a time-consuming and expensive process---a principal roadblock to wider use of IE technology in the large. An effective pattern base must be precise and have wide coverage. This thesis addresses the portability probl...",
    "neighbors": [
      391,
      855
    ],
    "mask": "Train"
  },
  {
    "node_id": 402,
    "label": 3,
    "text": "Algorithms for Temporal Query Operators in XML Databases The contents of an XML database or XML/Web data warehouse is seldom static. New documents are created, documents are deleted, and, more importantly, documents are updated. In many cases, we want to be able to search in historical versions, retrieve documents valid at a certain time, query changes to documents, etc. This can be supported by extending the system with temporal database features. In this paper we describe the new query operators needed in order to support an XML query language which supports temporal operations. 
We also describe the algorithms which can make efficient implementation of these query operators possible. Keywords: XML, temporal databases, query processing 1",
    "neighbors": [
      705,
      912,
      1025,
      1069
    ],
    "mask": "Train"
  },
  {
    "node_id": 403,
    "label": 2,
    "text": "Probabilistic Hierarchical Clustering with Labeled and Unlabeled Data. This paper presents hierarchical probabilistic clustering methods for unsupervised and supervised learning in data mining applications, where supervised learning is performed using both labeled and unlabeled examples. The probabilistic clustering is based on the previously suggested Generalizable Gaussian Mixture model and is extended using a modified Expectation Maximization procedure for learning with both unlabeled and labeled examples. The proposed hierarchical scheme is agglomerative and based on probabilistic similarity measures. Here, we compare an L2 dissimilarity measure, error confusion similarity, and accumulated posterior cluster probability measure. The unsupervised and supervised schemes are successfully tested on artificial data and for e-mail segmentation. 1",
    "neighbors": [
      609,
      1153
    ],
    "mask": "Train"
  },
  {
    "node_id": 404,
    "label": 2,
    "text": "Theme-Based Retrieval of Web News Efficient information retrieval of highly dynamic information, such as Web news, is a complex task. As a result, search and retrieval environments for continuously updated news from other sources than the largest media conglomerates are almost absent on the Internet. Global search engines do not index or classify news information from smaller network communities. To address this problem, I developed NewsSearch, a news information management environment designed to improve retrieval efficiency of online news for the smaller networked communities. NewsSearch achieves its goal through a combination of techniques: . Multiple indexing queues, defining multiple gathering schedules, to deal with different publication periodicities. . Information Retrieval techniques applied to news, in order to classify them into a pre-defined set of themes. . Support Vector Machines, which proved to be a fast and reliable classification technique. NewsSearch proved to be a scalable solution with acceptable storage needs even while managing a fairly large collection of daily publications. A combination of fine tuning of training strategies, noise filtering of Web news documents and multiple classifications enables NewsSearch to achieve a classification accuracy of 95%. ACKNOWLEDGEMENTS This work was supported in part by the PRAXIS project ARIADNE (Público Digital -- Praxis XXI, Medida 3.1b) and project SAGRES (Praxis/P/TIT/1676/95). TABLE OF CONTENTS CHAPTER I",
    "neighbors": [
      502,
      931,
      1003
    ],
    "mask": "Train"
  },
  {
    "node_id": 405,
    "label": 0,
    "text": "Towards Flexible Multi-Agent Decision-Making Under Time Pressure Abstract \u2014 Autonomous agents need considerable computational resources to perform rational decision-making. These demands are even more severe when other agents are present in the environment. In these settings, the quality of an agent\u2019s alternative behaviors depends not only on the state of the environment, but also on the actions of other agents, which in turn depend on the others\u2019 beliefs about the world, their preferences, and further on the other agents\u2019 beliefs about others, and so on. 
The complexity becomes prohibitive when large numbers of agents are present and when decisions have to be made under time pressure. In this paper we investigate strategies intended to tame the computational burden by using off-line computation in conjunction with on-line reasoning. We investigate two approaches. First, we use rules compiled off-line to constrain alternative actions considered during on-line reasoning. This method minimizes overhead, but is not sensitive to changes in real-time demands of the situation at hand. Second, we use performance profiles computed off-line and the notion of urgency (i.e., the value of time) computed on-line to choose the amount of information to be included during on-line deliberation. This method can adjust to various levels of real-time demands, but incurs some overhead associated with iterative deepening. We test our framework with experiments in a simulated anti-air defense domain. The experiments show that both procedures are effective in reducing computation time while offering good performance under time pressure.",
    "neighbors": [
      277,
      964
    ],
    "mask": "Train"
  },
  {
    "node_id": 406,
    "label": 0,
    "text": "Generating and Using State Spaces of Object-Oriented Petri Nets: The article discusses the notion of state spaces of object-oriented Petri nets associated with the tool called PNtalk and the role of identifiers of dynamically appearing and disappearing instances within these state spaces. Methods of working with identifiers based on sophisticated naming rules and mechanisms for abstracting names are described and compared. Some optimizations of state space generating algorithms for the context of object-oriented Petri nets are briefly mentioned, as well. Key Words: Petri nets, object-orientation, state spaces, formal analysis and verification 1 Introduction Methods of formal analysis and verification have been developed as an alternative to simulation approaches of examining properties of complex systems. Although we are not always able to fully verify the behaviour of a system, even partial analysis or verification can reveal some errors which tend to be different from the ones found by simulation due to the different nature of formal analysis and...",
    "neighbors": [],
    "mask": "Validation"
  },
  {
    "node_id": 407,
    "label": 2,
    "text": "Using Labeled and Unlabeled Data to Learn Drifting Concepts For many learning tasks, where data is collected over an extended period of time, one has to cope with two problems. The distribution underlying the data is likely to change, and only a little labeled training data is available at each point in time. A typical example is information filtering, i.e. the adaptive classification of documents with respect to a particular user interest. Both the interest of the user and the document content change over time. A filtering system should be able to adapt to such concept changes. Since users often give little feedback, a filtering system should also be able to achieve a good performance, even if only a few labeled training examples are provided. This paper proposes a method to recognize and handle concept changes with support vector machines and to use unlabeled data to reduce the need for labeled data. The method maintains windows on the training data, whose size is automatically adjusted so that the estimated generalization error is minimized. The approach is both theoretically well-founded as well as effective and efficient in practice. 
Since it does not require complicated parameterization, it is simpler to use and more robust than comparable heuristics. Experiments with simulated concept drift scenarios based on real-world text data compare the new method with other window management approaches and show that it can effectively select an appropriate window size in a robust way. In order to achieve an acceptable performance with fewer labeled training examples, the proposed method exploits unlabeled examples in a transductive way. 1", "neighbors": [ 609, 865, 1133, 1153 ], "mask": "Train" }, { "node_id": 408, "label": 1, "text": "Neuro-Mimetic Navigation Systems: A Computational Model of the Rat Hippocampus : We propose a bio-inspired approach to autonomous navigation based on some of the components that rats use for navigation. A spatial model of the environment is constructed by unsupervised Hebbian learning. The representation consists of a population of localized overlapping place fields, modeling place cell activity in the rat Hippocampus. Place fields are established by extracting spatio-temporal properties of the environment from visual sensory inputs. Visual ambiguities are resolved by means of path integration. Reinforcement learning is applied to use place cell activity for goal-oriented navigation. Experimental results obtained with a mobile Khepera robot are presented. Keywords: Autonomous robots, hippocampus, place fields, unsupervised learning, reinforcement learning, population vector coding, path integration. 1. Introduction The complexity of the autonomous navigation task is inherent in the concept of autonomy: Ideally, an autonomous agent should have a completely ...", "neighbors": [ 997 ], "mask": "Train" }, { "node_id": 409, "label": 1, "text": "Markov Techniques for Object Localization With Force-Controlled Robots This paper deals with object localization with force-controlled robots in the Bayesian framework [1]. It describes a method based on Markov Localization techniques with a Monte Carlo implementation applied for solving 3D (6 degrees of freedom) global localization problems with force-controlled robots. The approach was successfully applied to problems such as the recursive localization of a box by a robot manipulator.", "neighbors": [], "mask": "Train" }, { "node_id": 410, "label": 2, "text": "Web Document Clustering: A Feasibility Demonstration Abstract Users of Web search engines are often forced to sift through the long ordered list of document \u201csnippets\u201d returned by the engines. The IR community has explored document clustering as an alternative method of organizing retrieval results, but clustering has yet to be deployed on the major search engines. The paper articulates the unique requirements of Web document clustering and reports on the first evaluation of clustering methods in this domain. A key requirement is that the methods create their clusters based on the short snippets returned by Web search engines. Surprisingly, we find that clusters based on snippets are almost as good as clusters created using the full text of Web documents. To satisfy the stringent requirements of the Web domain, we introduce an incremental, linear time (in the document collection size) algorithm called Suffix Tree Clustering (STC), which creates clusters based on phrases shared between documents.
We show that STC is faster than standard clustering methods in this domain, and argue that Web document clustering via STC is both feasible and potentially beneficial. 1", "neighbors": [ 84, 235, 447, 851, 893, 947, 1005, 1247 ], "mask": "Train" }, { "node_id": 411, "label": 2, "text": "Discovering Informative Content Blocks from Web Documents In this paper, we propose a new approach to discover informative content from a set of tabular documents (or Web pages) of a Web site. Our system, InfoDiscoverer, first partitions a page into several content blocks according to the HTML tags in the page. Based on the occurrence of the features (terms) in the set of pages, it calculates the entropy value of each feature. According to the entropy value of each feature in a content block, the entropy value of the block is defined. By analyzing the information measure, we propose a method to dynamically select the entropy threshold that partitions blocks into either informative or redundant. Informative content blocks are distinguished parts of the page, whereas redundant content blocks are common parts. Based on the answer set generated from 13 manually tagged news Web sites with a total of 26,518 Web pages, experiments show that both recall and precision rates are greater than 0.956. That is, using the approach, informative blocks (news articles) of these sites can be automatically separated from semantically redundant content such as advertisements, banners, navigation panels, news categories, etc. By adopting InfoDiscoverer as the preprocessor of information retrieval and extraction applications, the retrieval and extraction precision will be increased, and the indexing size and extraction complexity will also be reduced.", "neighbors": [ 112, 1017 ], "mask": "Train" }, { "node_id": 412, "label": 2, "text": "RoadRunner: Towards Automatic Data Extraction from Large Web Sites The paper investigates techniques for extracting data from HTML sites through the use of automatically generated wrappers. To automate the wrapper generation and the data extraction process, the paper develops a novel technique to compare HTML pages and generate a wrapper based on their similarities and differences. Experimental results on real-life data-intensive Web sites confirm the feasibility of the approach. 1", "neighbors": [ 570, 612, 855, 1215, 1232 ], "mask": "Train" }, { "node_id": 413, "label": 3, "text": "The Diagnosis Frontend of the dlv System This paper presents the Diagnosis Frontend of dlv, a knowledge representation system under development at the Technische Universit\u00e4t Wien. The kernel language of the system is an extension of disjunctive logic programming (DLP) by integrity constraints; it offers frontends to several advanced knowledge representation formalisms. The formal model of diagnosis employed in the frontend includes both abductive diagnosis (over DLP theories) and consistency-based diagnosis. For each of the two diagnosis modalities, generic diagnoses, single error diagnoses, and subset minimal diagnoses are considered. We illustrate the use of the frontend by showing the dlv encodings of several diagnosis problems. Thereafter, we discuss implementation issues. Diagnostic reasoning is implemented on the dlv engine through suitable translations of diagnostic problems into disjunctive logic programs, such that their stable models correspond to diagnoses.
For the six kinds of diagnostic reasoning problems identified above, such reductions are provided", "neighbors": [ 632, 812 ], "mask": "Train" }, { "node_id": 414, "label": 1, "text": "Integrating Case Based Reasoning and Tabu Search for Solving Optimisation Problems Tabu search is an established heuristic optimisation technique for problems where exact algorithms are not available. It belongs to the same family as simulated annealing or genetic algorithms. It extends the basic iterative improvement scheme by adding control learning. A technique of this kind, intensification, captures experience established on a frequency-based analysis of past search. Experience is reused while the same optimisation process is going on in order to guide the search to better solutions. In this paper, we introduce a case-based reasoning approach for control learning in tabu search. Search experience concerns operator selection and is represented by cases. The aim of case reuse is to improve conflict resolution. While the proposed method is domain independent, we present its application to the NP-hard uncapacitated facility location problem. Experimental results show that adding our approach to a basic tabu search optimisation significantly improves solution quality on t...", "neighbors": [ 611 ], "mask": "Test" }, { "node_id": 415, "label": 0, "text": "Information agents on the move: A survey on load-balancing with mobile agents Information agents process and integrate heterogeneous, distributed information. To achieve this task efficiently, some researchers promote the idea of mobile information agents [13, 53, 44, 20, 10], which migrate between a user's host and other hosts in the network. We outline the concepts behind mobile information agents and give a survey on load balancing, which aims to optimise distributed information processing.", "neighbors": [ 465, 1038 ], "mask": "Train" }, { "node_id": 416, "label": 0, "text": "Formal ReSpecT Logic tuple centres have shown that logic-based languages can be effectively exploited not only for building individual agents and enabling inter-agent communication in multi-agent systems, but also for ruling inter-agent communications so as to build social behaviours. In this paper, we formally define the notion of logic tuple centre as well as the operational semantics of the logic-based language ReSpecT for the behaviour specification of logic tuple centres. For this purpose, we exploit a general semantic framework for asynchronous distributed systems allowing a coordination medium to be formally denoted in a separate and independent way with respect to the whole coordinated system. This shows that a logic-based coordination medium does not limit agents and coordination languages to be logic-based, but may instead enable agents of different sorts and technologies to be combined and coordinated in an effective way by exploiting a logic-based approach. 1 Coordination media for multi...", "neighbors": [ 47, 119 ], "mask": "Train" }, { "node_id": 417, "label": 5, "text": "Second Order Sufficient Conditions for Optimal Control Problems with Free Final Time: The Riccati Approach . Second order sufficient conditions (SSC) for control problems with control--state constraints and free final time are presented. Instead of deriving such SSC ab initio, the control problem with free final time is transformed into an augmented control problem with fixed final time for which well-known SSC exist.
SSC are then expressed as a condition on the positive definiteness of the second variation. A convenient numerical tool for verifying this condition is based on the Riccati approach where one has to find a bounded solution of an associated Riccati equation satisfying specific boundary conditions. The augmented Riccati equations for the augmented control problem are derived and their modifications on the boundary of the control--state constraint are discussed. Two numerical examples, (1) the classical Earth-Mars orbit transfer in minimal time, (2) the Rayleigh problem in electrical engineering, demonstrate that the Riccati equation approach provides a viable numerical test of SS...", "neighbors": [], "mask": "Validation" }, { "node_id": 418, "label": 1, "text": "Rule Discovery with a Parallel Genetic Algorithm An important issue in data mining is scalability with respect to the size of the dataset being mined. In the paper we address this issue by presenting a parallel GA for rule discovery. This algorithm exploits both data parallelism, by distributing the data being mined across all available processors, and control parallelism, by distributing the population of individuals across all available processors. 1", "neighbors": [], "mask": "Train" }, { "node_id": 419, "label": 3, "text": "WSQ/DSQ: A Practical Approach for Combined Querying of Databases and the Web We present WSQ/DSQ (pronounced \u201cwisk-disk\u201d), a new approach for combining the query facilities of traditional databases with existing search engines on the Web. WSQ, for Web-Supported (Database) Queries, leverages results from Web searches to enhance SQL queries over a relational database. DSQ, for Database-Supported (Web) Queries, uses information stored in the database to enhance and explain Web searches. This paper focuses primarily on WSQ, describing a simple, low-overhead way to support WSQ in a relational DBMS, and demonstrating the utility of WSQ with a number of interesting queries and results. The queries supported by WSQ are enabled by two virtual tables, whose tuples represent Web search results generated dynamically during query execution. WSQ query execution may involve many high-latency calls to one or more search engines, during which the query processor is idle. We present a lightweight technique called asynchronous iteration that can be integrated easily into a standard sequential query processor to enable concurrency between query processing and multiple Web search requests. Asynchronous iteration has broader applications than WSQ alone, and it opens up many interesting query optimization issues. We have developed a prototype implementation of WSQ by extending a DBMS with virtual tables and asynchronous iteration; performance results are reported. 1", "neighbors": [ 218, 876, 879, 998 ], "mask": "Train" }, { "node_id": 420, "label": 3, "text": "Integrating Keyword Search into XML Query Processing Due to the popularity of the XML data format, several query languages for XML have been proposed, specially devised to handle data whose structure is unknown, loose, or absent. While these languages are rich enough to allow for querying the content and structure of an XML document, a varying or unknown structure can make formulating queries a very difficult task.
We propose an extension to XML query languages that enables keyword search at the granularity of XML elements, helps novice users formulate queries, and also yields new optimization opportunities for the query processor. We present an implementation of this extension on top of a commercial RDBMS; we then discuss implementation choices and performance results. Keywords XML query processing, full-text index 1 Introduction There is no doubt that XML is rapidly becoming one of the most important data formats. It is already used for scientific data (e.g., DNA sequences), in linguistics (e.g., the Treebank database at the U...", "neighbors": [ 218, 488, 634, 876 ], "mask": "Train" }, { "node_id": 421, "label": 0, "text": "On the Emergence of Macro Spatial Structures in Dissipative Cellular Automata, and its Implications for Agent-based Distributed Computing This paper describes the peculiar behavior observed in a class of cellular automata that we have defined as \"dissipative\", i.e., cellular automata that are \"open\" and make it possible for the environment to influence the evolution of the automata. Peculiar in the dynamic evolution of this class of cellular automata is that stable macro-level spatial structures emerge from local interactions among cells, a behavior that does not emerge when the cellular automaton is \"closed\", i.e., when the state of a cell is not influenced by the external world. On this basis, the paper discusses the relations of the performed experiments with the area of open distributed computing, and in particular of agent-based distributed computing. The basic intuition is that dissipative cellular automata express characteristics that strongly resemble those of wide-area open distributed systems based on autonomous and situated active components -- as agents are. Accordingly, similar sorts of macro-level behaviors are likely to emerge and need to be studied, controlled, and possibly fruitfully exploited.", "neighbors": [ 246 ], "mask": "Train" }, { "node_id": 422, "label": 3, "text": "Modeling Temporal Consistency in Data Warehouses Real-world changes are generally discovered with a delay by computer systems. The typical update patterns for traditional data warehouses, on an overnight or even weekly basis, enlarge this propagation delay until the information is available to knowledge workers. The main contribution of the paper is the identification of two different temporal characterizations of the information appearing in a data warehouse: one is the classical description of the time instant when a given fact occurred, the other represents the instant when the information has been entered into the system. We present an approach for modeling conceptual time consistency problems and introduce a data model that deals with these delays and supports knowledge workers in determining what the situation was in the past, knowing only the information available at a given instant of time. 1", "neighbors": [ 1243 ], "mask": "Train" }, { "node_id": 423, "label": 4, "text": "Layout Rules for Graphical Web Documents The number of companies, institutions, and individuals competing for attention in the World-Wide Web is growing exponentially. This makes designing informative, easy-to-grasp, and visually appealing documents not only important for user-friendly information presentation, but also the key to success for any information provider.
In this paper, we present layout guidelines for textual and graphical, static and dynamic, 2-D and 3-D Web documents which are drawn from fields as diverse as typography, Gestalt psychology, architecture, hypertext authoring, and human-computer interaction. Web documents are classified into five basic types, and our layout rules are applied to each of these. Finally, we show how currently evolving standards (HTML 3.0 for text and still graphics, Java for 2-D animation, and VRML for 3-D worlds) support applying those rules. 1 Introduction Whenever a new information-conveying technology is invented, it usually takes many years until authors develop new media that ...", "neighbors": [ 384 ], "mask": "Validation" }, { "node_id": 424, "label": 0, "text": "Implementing Incremental Code Migration with XML We demonstrate how XML and related technologies can be used for code mobility at any granularity, thus overcoming the restrictions of existing approaches. By not fixing a particular granularity for mobile code, we enable complete programs as well as individual lines of code to be sent across the network. We define the concept of incremental code mobility as the ability to migrate and add, remove, or replace code fragments (i.e., increments) in a remote program. The combination of fine-grained and incremental migration achieves a previously unavailable degree of flexibility. We examine the application of incremental and fine-grained code migration to a variety of domains, including user interface management, application management on mobile thin clients, for example PDAs, and management of distributed documents. Keywords Incremental Code Migration, XML Technologies 1 INTRODUCTION The increasing popularity of Java and the spread of Web-based technologies are contributing to a growing ...", "neighbors": [ 757 ], "mask": "Train" }, { "node_id": 425, "label": 0, "text": "How to Avoid Knowing It All Beliefs have been formally modelled in the last decades using doxastic logics. The possible worlds model and its associated Kripke semantics provide an intuitive semantics for these logics, but they seem to commit us to model agents that are logically omniscient (they believe every classical tautology) and perfect reasoners (their beliefs are closed under classical deductive closure). Thus, this model would not be appropriate to model non-ideal agents that have resource limitations that prevent them from attaining such levels of doxastic competence. This report contains a statement of these problems and a brief survey of some of the most interesting approaches that have been suggested to overcome them. Contents: 1 Formal models of belief; 1.1 Possible worlds and Kripke semantics; 1.2 Logical omniscience and perfect reasoning; 2 Avoiding logical omniscience; 2.1 Syntactic approaches ...", "neighbors": [ 827, 964 ], "mask": "Train" }, { "node_id": 426, "label": 2, "text": "Data Mining Models as Services on the Internet The goal of this article is to raise a debate on the usefulness of providing data mining models as services on the internet. These services can be provided by anyone with adequate data and expertise and made available on the internet for anyone to use. For instance, Yahoo or Altavista, given their huge categorized document collection, can train a document classifier and provide the model as a service on the internet.
This way data mining can be made accessible to a wider audience instead of being limited to people with the data and the expertise. A host of practical problems need to be solved before this idea can be made to work. We identify them and close with an invitation for further debate and investigation. 1.", "neighbors": [ 440 ], "mask": "Validation" }, { "node_id": 427, "label": 2, "text": "The Missing Link - A Probabilistic Model of Document Content and Hypertext Connectivity We describe a joint probabilistic model for modeling the contents and inter-connectivity of document collections such as sets of web pages or research paper archives. The model is based on a probabilistic factor decomposition and allows identifying principal topics of the collection as well as authoritative documents within those topics. Furthermore, the relationships between topics are mapped out in order to build a predictive model of link content. Among the many applications of this approach are information retrieval and search, topic identification, query disambiguation, focused web crawling, web authoring, and bibliometric analysis.", "neighbors": [ 216, 247, 722, 774, 1017, 1068 ], "mask": "Train" }, { "node_id": 428, "label": 1, "text": "Mixtures of Linear Subspaces for Face Detection We present two methods using mixtures of linear subspaces for face detection in gray level images. One method uses a mixture of factor analyzers to concurrently perform clustering and, within each cluster, perform local dimensionality reduction. The parameters of the mixture model are estimated using an EM algorithm. A face is detected if the probability of an input sample is above a predefined threshold. The other mixture of subspaces method uses Kohonen's self-organizing map for clustering and Fisher Linear Discriminant to find an optimal projection, and a Gaussian distribution to model the class-conditional density function of the projected samples for each class. The parameters of the class-conditional density functions are maximum likelihood estimates and the decision rule is also based on maximum likelihood. A wide range of face images, including ones in different poses, with different expressions and under different lighting conditions, are used as the training set to capture the varia...", "neighbors": [ 664 ], "mask": "Validation" }, { "node_id": 429, "label": 4, "text": "Supporting Creativity with Advanced Information-Abundant User Interfaces A challenge for human-computer interaction researchers and user interface designers is to construct information technologies that support creativity. This ambitious goal can be attained if designers build on an adequate understanding of creative processes. This paper describes a model of creativity, the four-phase genex framework for generating excellence: - Collect: learn from previous works stored in digital libraries, the web, etc. - Relate: consult with peers and mentors at early, middle and late stages - Create: explore, compose, discover, and evaluate possible solutions - Donate: disseminate the results and contribute to the digital libraries, the web, etc. Within this integrated framework, there are eight activities that require human-computer interaction research and advanced user interface design.
This paper concentrates on techniques of information visualization that support creative work by enabling users to find relevant information resources, identify desired items in a se...", "neighbors": [ 743, 1167 ], "mask": "Train" }, { "node_id": 430, "label": 5, "text": "The CMUnited-97 Simulator Team . The Soccer Server system provides a rich and challenging multiagent, real-time domain. Agents must accurately perceive and act despite a quickly changing, largely hidden, noisy world. They must also act at several levels, ranging from individual skills to full-team collaborative and adversarial behaviors. This article presents the CMUnited-97 approaches to the above challenges which helped the team to the semifinals of the 29-team RoboCup-97 tournament. 1 Introduction The Soccer Server system [5] used at RoboCup-97 [2] provides a rich and challenging multiagent, real-time domain. Sensing and acting are noisy, while interagent communication is unreliable and low-bandwidth. In order to be successful, each agent in a team must be able to sense and act in real time: sensations arrive at unpredictable intervals while actions are possible every 100ms. Furthermore, since the agents get local, noisy sensory information, they must have a method of converting their sensory inputs into a good w...", "neighbors": [ 127, 155, 394 ], "mask": "Test" }, { "node_id": 431, "label": 3, "text": "Query Rewriting for Semistructured Data We address the problem of query rewriting for TSL, a language for querying semistructured data. We develop and present an algorithm that, given a semistructured query q and a set of semistructured views V, finds rewriting queries, i.e., queries that access the views and produce the same result as q. Our algorithm is based on appropriately generalizing containment mappings, the chase, and unification -- techniques that were developed for structured, relational data. We also develop an algorithm for equivalence checking of TSL queries. We show that the algorithm is sound and complete for TSL, i.e., it always finds every TSL rewriting query of q, and we discuss its complexity. We extend the rewriting algorithm to use available structural constraints (such as DTDs) to find more opportunities for query rewriting. We currently incorporate the algorithm in the TSIMMIS system. 1 Introduction Recently, many semistructured data models, query and view definition languages have been proposed [2...", "neighbors": [ 708 ], "mask": "Train" }, { "node_id": 432, "label": 4, "text": "An Anthropomorphic Agent for the Use of Spatial Language . In this paper we describe the communication with a responsive virtual environment with the main emphasis on the processing of spatial expressions in natural language instructions. This work is part of the VIENA project, in which we chose interior design as an example domain. A multiagent system acts as an intelligent mediator between the user and a graphics system. To make the communication about spatial relations more intuitive, we developed an anthropomorphic agent which is graphically visualized in the scene. Considering the human-like figure, we explain the use of qualitative spatial expressions, like \"right of\" and \"there\". 1 Introduction Interactive 3-dimensional graphics systems are more useful (e.g. in design) when users can concentrate on their imaginations and be free from technical considerations.
Therefore it is important to improve interaction with the virtual environment by way of natural, intuitive communication forms. In our work we consider a \"virtual interface...", "neighbors": [ 90 ], "mask": "Test" }, { "node_id": 433, "label": 2, "text": "Towards a Highly-Scalable and Effective Metasearch Engine A metasearch engine is a system that supports unified access to multiple local search engines. Database selection is one of the main challenges in building a large-scale metasearch engine. The problem is to efficiently and accurately determine a small number of potentially useful local search engines to invoke for each user query. In order to enable accurate selection, metadata that reflect the contents of each search engine need to be collected and used. In this paper, we propose a highly scalable and accurate database selection method. This method has several novel features. First, the metadata for representing the contents of all search engines are organized into a single integrated representative. Such a representative yields both computation efficiency and storage efficiency. Second, our selection method is based on a theory for ranking search engines optimally. Experimental results indicate that this new method is very effective. An operational prototype system has been built based on the proposed approach.", "neighbors": [ 224, 241, 271, 477, 510, 696, 792, 931, 1003, 1134, 1165 ], "mask": "Train" }, { "node_id": 434, "label": 0, "text": "Representing Coordination Relationships with Influence Diagrams It is well known that relationships among agents in a multi-agent system must be managed to achieve coordinated behavior. One approach to managing such relationships consists of using an explicit representation of them, allowing each agent to choose its actions based on them. Previous work in the area has considered ideal situations, such as fully known environments, static relationships and shared mental states. In this paper we propose to represent relationships among agents and entities in a multi-agent system by using influence diagrams.", "neighbors": [ 495, 500 ], "mask": "Test" }, { "node_id": 435, "label": 2, "text": "Concept Hierarchy Based Text Database Categorization Document categorization as a technique to improve the retrieval of useful documents has been extensively investigated. One important issue in a large-scale metasearch engine is to select text databases that are likely to contain useful documents for a given query. We believe that database categorization can be a potentially effective technique for good database selection, especially in the Internet environment where short queries are usually submitted. In this paper, we propose and evaluate several database categorization algorithms. This study indicates that while some document categorization algorithms could be adopted for database categorization, algorithms that take into consideration the special characteristics of databases may be more effective. Preliminary experimental results are provided to compare the proposed database categorization algorithms. A prototype database categorization system based on one of the proposed algorithms has been developed.", "neighbors": [ 271, 696, 792, 931, 1003, 1124 ], "mask": "Train" }, { "node_id": 436, "label": 0, "text": "The Organisation of Sociality: A Manifesto for a New Science of MultiAgent Systems .
In this paper, we pose and motivate a challenge, namely the need for a new science of multiagent systems. We propose that this new science should be grounded theoretically on a richer conception of sociality, and methodologically on the extensive use of computational modelling for real-world applications and social simulations. Here, the steps we set forth towards meeting that challenge are mainly theoretical. In this respect, we provide a new model of multi-agent systems that reflects a fully explicated conception of cognition, both at the individual and the collective level. Finally, the mechanisms and principles underpinning the model will be examined with particular emphasis on the contributions provided by contemporary organisation theory. 1.", "neighbors": [], "mask": "Train" }, { "node_id": 437, "label": 2, "text": "Kernel Expansions With Unlabeled Examples Modern classification applications necessitate supplementing the few available labeled examples with unlabeled examples to improve classification performance. We present a new tractable algorithm for exploiting unlabeled examples in discriminative classification. This is achieved essentially by expanding the input vectors into longer feature vectors via both labeled and unlabeled examples. The resulting classification method can be interpreted as a discriminative kernel density estimate and is readily trained via the EM algorithm, which in this case is both discriminative and achieves the optimal solution. We provide, in addition, a purely discriminative formulation of the estimation problem by appealing to the maximum entropy framework. We demonstrate that the proposed approach requires very few labeled examples for high classification accuracy. 1 Introduction In many modern classification problems such as text categorization, very few labeled examples are available but a...", "neighbors": [ 391, 505, 609, 1133, 1153 ], "mask": "Validation" }, { "node_id": 438, "label": 4, "text": "NEXUS - Distributed Data Management Concepts for Location Aware Applications Nowadays, mobile computers like subnotebooks or personal digital assistants, as well as cellular phones, can not only communicate wirelessly, but they can also determine their position via appropriate sensors like DGPS. So-called location aware applications take advantage of this fact and structure information according to the position of their users. In order to be able to assign data to a certain location, these information systems have to refer to spatial computer models. The NEXUS project, which is supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation), aims at the development of a generic infrastructure that serves as a basis for location aware applications. The central task of this platform deals with the data management.", "neighbors": [ 256 ], "mask": "Train" }, { "node_id": 439, "label": 2, "text": "Error-Correcting Output Coding for Text Classification This paper applies error-correcting output coding (ECOC) to the task of document categorization. ECOC, of recent vintage in the AI literature, is a method for decomposing a multiway classification problem into many binary classification tasks, and then combining the results of the subtasks into a hypothesized solution to the original problem.
There has been much recent interest in the machine learning community in algorithms which integrate \"advice\" from many subordinate predictors into a single classifier, and error-correcting output coding is one such technique. We provide experimental results on several real-world datasets, extracted from the Internet, which demonstrate that ECOC can offer significant improvements in accuracy over conventional classification algorithms. 1 Introduction Error-correcting output coding is a recipe for solving multi-way classification problems. It works in two stages: first, independently construct many subordinate classifiers, each responsible for r...", "neighbors": [ 242, 347 ], "mask": "Train" }, { "node_id": 440, "label": 1, "text": "Meta-Learning in Distributed Data Mining Systems: Issues and Approaches Data mining systems aim to discover patterns and extract useful information from facts recorded in databases. A widely adopted approach to this objective is to apply various machine learning algorithms to compute descriptive models of the available data. Here, we explore one of the main challenges in this research area, the development of techniques that scale up to large and possibly physically distributed databases. Meta-learning is a technique that seeks to compute higher-level classifiers (or classification models), called meta-classifiers, that integrate in some principled fashion multiple classifiers computed separately over different databases. This study describes meta-learning and presents the JAM system (Java Agents for Meta-learning), an agent-based meta-learning system for large-scale data mining applications. Specifically, it identifies and addresses several important desiderata for distributed data mining systems that stem from their additional complexity co...", "neighbors": [ 426, 916 ], "mask": "Train" }, { "node_id": 441, "label": 0, "text": "Investigating Interactions Between Agent Conversations and Agent Control Components Exploring agent conversation in the context of fine-grained agent coordination research has raised several intellectual questions. The major issues pertain to interactions between different agent conversations, the representations chosen for different classes of conversations, the explicit modeling of interactions between the conversations, and how to address these interactions. This paper is not so ambitious as to attempt to address these questions, only to frame them in the context of quantified, scheduling-centric multi-agent coordination research. 1 Introduction Based on a long history of work in agents and agent control components for building distributed AI and multi-agent systems, we are attempting to frame and address a set of intellectual questions pertaining to agent conversation. Interaction lies at the heart of the matter; the issue is interaction between different agent conversations, that possibly occur at different levels of abstraction, but also interaction between the m...", "neighbors": [ 367, 495, 500, 724, 847, 945 ], "mask": "Train" }, { "node_id": 442, "label": 1, "text": "Applications of Machine Learning and Rule Induction An important area of application for machine learning is in automating the acquisition of knowledge bases required for expert systems. In this paper, we review the major paradigms for machine learning, including neural networks, instance-based methods, genetic learning, rule induction, and analytic approaches.
We consider rule induction in greater detail and review some of its recent applications, in each case stating the problem, how rule induction was used, and the status of the resulting expert system. In closing, we identify the main stages in fielding an applied learning system and draw some lessons from successful applications. Introduction Machine learning is the study of computational methods for improving performance by mechanizing the acquisition of knowledge from experience. Expert performance requires much domain-specific knowledge, and knowledge engineering has produced hundreds of AI expert systems that are now used regularly in industry. Machine learning aims to provide ...", "neighbors": [ 0, 443, 1246 ], "mask": "Train" }, { "node_id": 443, "label": 1, "text": "A Survey of Methods for Scaling Up Inductive Algorithms . One of the defining challenges for the KDD research community is to enable inductive learning algorithms to mine very large databases. This paper summarizes, categorizes, and compares existing work on scaling up inductive algorithms. We concentrate on algorithms that build decision trees and rule sets, in order to provide focus and specific details; the issues and techniques generalize to other types of data mining. We begin with a discussion of important issues related to scaling up. We highlight similarities among scaling techniques by categorizing them into three main approaches. For each approach, we then describe, compare, and contrast the different constituent techniques, drawing on specific examples from published papers. Finally, we use the preceding analysis to suggest how to proceed when dealing with a large problem, and where to focus future research. Keywords: scaling up, inductive learning, decision trees, rule learning 1. Introduction The knowledge discovery and data...", "neighbors": [ 251, 442, 916 ], "mask": "Validation" }, { "node_id": 444, "label": 4, "text": "The Structure of Object Transportation and Orientation in Human-Computer Interaction An experiment was conducted to investigate the relationship between object transportation and object orientation by the human hand in the context of human-computer interaction (HCI). This work merges two streams of research: the structure of interactive manipulation in HCI and natural hand prehension in human motor control. It was found that object transportation and object orientation have a parallel, interdependent structure which is generally persistent over different visual feedback conditions. The notion of concurrency and interdependence of the multidimensional visuomotor control structure can provide a new framework for human-computer interface evaluation and design. Keywords Direct manipulation, input device, multi-dimensional control, visuomotor control, visual conditions, information processing, interface design, virtual reality. INTRODUCTION Object manipulation is a basic operation in human-computer interaction (HCI). Modern computer technology advances towards affording m...", "neighbors": [ 260, 1042 ], "mask": "Train" }, { "node_id": 445, "label": 0, "text": "From Active Objects to Autonomous Agents This paper studies how to extend the concept of active objects into a structure of agents. It first discusses the requirements for autonomous agents that are not covered by simple active objects.
We then propose extending the single behavior of an active object into a set of behaviors with a meta-behavior scheduling their activities. To make a concrete proposal based on these ideas we describe how we extended a framework of active objects, named Actalk, into a generic multi-agent platform, named DIMA. We discuss how this extension has been implemented. We finally report on one application of DIMA to simulate economic models. Keywords: active object, agent, implementation, meta-behavior, modularity, re-usability, simulation. 1 Introduction Object-oriented concurrent programming (OOCP) is the most appropriate and promising technology to implement agents. The concept of active object may be considered as the basic structure for building agents. Furthermore, the combinat...", "neighbors": [ 955 ], "mask": "Validation" }, { "node_id": 446, "label": 5, "text": "OBPRM: An Obstacle-Based PRM for 3D Workspaces In this paper we consider an obstacle-based PRM", "neighbors": [ 320, 387, 1052 ], "mask": "Train" }, { "node_id": 447, "label": 2, "text": "From Resource Discovery to Knowledge Discovery on the Internet More than 50 years ago, at a time when modern computers didn't exist yet, Vannevar Bush wrote about a multimedia digital library containing human collective knowledge and filled with \"trails\" linking materials of the same topic. At the end of World War II, Vannevar urged scientists to build such a knowledge store and make it useful, continuously extendable and, more importantly, accessible for consultation. Today, the closest to the materialization of Vannevar's dream is the World-Wide Web hypertext and multimedia document collection. However, the ease of use and accessibility of the knowledge described by Vannevar is yet to be realized. Since the 60s, extensive research has been accomplished in the information retrieval field, and free-text search was finally adopted by many text repository systems in the late 80s. The advent of the World-Wide Web in the 90s helped text search become routine as millions of users use search engines daily to pinpoint resources on the Internet. However, r...", "neighbors": [ 9, 410, 897, 1017 ], "mask": "Train" }, { "node_id": 448, "label": 1, "text": "Three Ways to Grow Designs: A Comparison of Evolved Embryogenies for a Design Problem This paper explores the use of growth processes, or embryogenies, to map genotypes to phenotypes within evolutionary systems. Following a summary of the significant features of embryogenies, the three main types of embryogenies in Evolutionary Computation are then identified and explained: external, explicit and implicit. An experimental comparison between these three different embryogenies and an evolutionary algorithm with no embryogeny is performed. The problem set to the four evolutionary systems is to evolve tessellating tiles. In order to assess the scalability of the embryogenies, the problem is increased in difficulty by enlarging the size of tiles to be evolved. The results are surprising, with the implicit embryogeny outperforming all other techniques by showing no significant increase in the size of the genotypes or decrease in accuracy of evolution as the scale of the problem is increased. 1.
Introduction The use of computers to evolve solutions to problems has seen a dra...", "neighbors": [], "mask": "Train" }, { "node_id": 449, "label": 5, "text": "Implementing a Knowledge Date-a-Base Knowledge-based systems are very useful, but can be difficult to design because of the complexity of the real-world knowledge they represent. This paper compares the experiences of building the same knowledge base by hand in two different systems, Otter and CLIPS. The knowledge base considered is that of people's preferences towards others, in the interests of finding a \"dating match.\" Finally, this paper considers Horn theorems and their impact on the usefulness of knowledge systems. Introduction Because technology and automation are increasingly becoming a part of everyday life, it is beneficial to enable technology to \"understand\" its application area. An obvious way of doing this is to implement and embed a knowledge base in an application. However, designing a good knowledge base is not trivial. A good knowledge base needs to be general so it can be reused, complete to avoid bad models, and efficient in description and time. This paper presents the authors' experiences implement...", "neighbors": [ 980 ], "mask": "Train" }, { "node_id": 450, "label": 4, "text": "Jazz: An Extensible Zoomable User Interface Graphics Toolkit in Java In this paper we investigate the use of scene graphs as a general approach for implementing two-dimensional (2D) graphical applications, and in particular Zoomable User Interfaces (ZUIs). Scene graphs are typically found in three-dimensional (3D) graphics packages such as Sun's Java3D and SGI's OpenInventor. They have not been widely adopted by 2D graphical user interface toolkits. To explore the effectiveness of scene graph techniques, we have developed Jazz, a general-purpose 2D scene graph toolkit. Jazz is implemented in Java using Java2D, and runs on all platforms that support Java 2. This paper describes Jazz and the lessons we learned using Jazz for ZUIs. It also discusses how 2D scene graphs can be applied to other application areas. Keywords Zoomable User Interfaces (ZUIs), Animation, Graphics, User Interface Management Systems (UIMS), Pad++, Jazz. INTRODUCTION Today's Graphical User Interface (GUI) toolkits contain a wide range of built-in user interface objects (also kno...", "neighbors": [ 213, 1021 ], "mask": "Train" }, { "node_id": 451, "label": 0, "text": "Designing Agent-Oriented Systems by Analysing Agent Interactions . We propose a preliminary methodology for agent-oriented software engineering based on the idea of agent interaction analysis. This approach uses interactions between undetermined agents as the primary component of analysis and design. Agents as a basis for software engineering are useful because they provide a powerful and intuitive abstraction which can increase the comprehensibility of a complex design. The paper describes a process by which the designer can derive the interactions that can occur in a system satisfying the given requirements and use them to design the structure of an agent-based system, including the identification of the agents themselves. We suggest that this approach has the flexibility necessary to provide agent-oriented designs for open and complex applications, and has value for future maintenance and extension of these systems.
1", + "neighbors": [ + 140, + 573, + 941, + 957 + ], + "mask": "Train" + }, + { + "node_id": 452, + "label": 3, + "text": "Searching Documents on the Intranet Searching for documents on the internet with today\u2019s search engines, which are mainly based on words in a document, is not satisfactory. Results can be improved by also taking the content of a document into account. The Extensible Markup Language (XML) enables us to do semantic tagging and to make the structure of a document explicit. But this describes a document only at the syntactical level. A more ideal situation would be when the XML tagging is also used to define the document at the semantical level. To realize this we allow an author of a document to describe the relevant concepts by means of tags like he would design an object-oriented database schema. In our approach a user searching for a particular document is presented a graphical description of such a schema, that describes the concepts defined for the webspace of an intranet. Via this interface the user can formulate OO-like queries or navigate to relevant web pages. To realize our ideas we are building an architecture based on the concept of an index-database. A prototype is up and running.", + "neighbors": [ + 78, + 161 + ], + "mask": "Validation" + }, + { + "node_id": 453, + "label": 0, + "text": "CiteSeer: An Autonomous Web Agent for Automatic Retrieval and Identification of Interesting Publications Published research papers available on the World Wide Web (WWW or Web) are often poorly organized, often exist in non-text form (e.g. Postscript) documents, and increase in quantity daily. Significant amounts of time and effort are commonly needed to find interesting and relevant publications on the Web. We have developed a Web based information agent that assists the user in the process of performing a scientific literature search. Given a set of keywords, the agent uses Web search engines and heuristics to locate and download papers. The papers are parsed in order to extract information features such as the abstract and individually identified citations which are placed into an SQL database. The agent's Web interface can be used to find relevant papers in the database using keyword searches, or by navigating the links between papers formed by the citations. Links to both \"citing\" and \"cited\" publications can be followed. In addition to simple browsing and keyword searches, the agent ...", + "neighbors": [ + 43, + 70, + 382, + 561, + 596, + 1181 + ], + "mask": "Train" + }, + { + "node_id": 454, + "label": 4, + "text": "Future Multimedia User Interfaces this article, we examine some of the work that has been done in these two fields and explore where they are heading. First, we review their often-confusing terminology and provide a brief historical overview. Since both fields rely largely on relatively unusual, and largely immature, hardware technologies, we next provide a high-level introduction to important hardware issues. This is followed by a description of the key approaches to system architecture used by current researchers. We then build on the background provided by these sections to lay out a set of current research issues and directions for future work. 
Throughout, we attempt to emphasize the many ways in which virtual environments and ubiquitous computing can complement each other, creating an exciting new form of multimedia computing that is far more powerful than either approach would make possible alone.", "neighbors": [ 325 ], "mask": "Validation" }, { "node_id": 455, "label": 3, "text": "The SDCC Framework For Integrating Existing Algorithms for Diverse Data Warehouse Maintenance Tasks Recently proposed view maintenance algorithms tackle the problem of concurrent data updates happening at different autonomous ISs, whereas the EVE system addresses the maintenance of a data warehouse after schema changes of ISs. The concurrency of schema changes and data updates still remains an unexplored problem, however. This paper now provides a first solution that guarantees concurrent view definition evolution and view extent maintenance of a DW defined over distributed ISs. For this problem, we introduce a framework called SDCC (Schema change and Data update Concurrency Control) system. SDCC integrates existing algorithms designed to address view maintenance subproblems, such as view extent maintenance after IS data updates, view definition evolution after IS schema changes, and view extent adaptation after view definition changes, into one system by providing protocols that enable them to correctly co-exist and collaborate. SDCC tracks any potential faulty updates of the DW ca...", "neighbors": [ 637, 993, 1209 ], "mask": "Train" }, { "node_id": 456, "label": 0, "text": "Corporate Memory Management through Agents . The CoMMA project (Corporate Memory Management through Agents) aims at developing an open, agent-based platform for the management of a corporate memory by using the most advanced results at the technical, content, and user interaction levels. We focus here on methodologies for the set-up of multi-agent systems, requirement engineering and knowledge acquisition approaches. 1. Introduction How to improve access, share and reuse of both internal and external knowledge in a company? How to improve newcomers' learning and integration in a company? How to enhance technology monitoring in a company? Knowledge Management (KM) aims at solving such problems. Different research communities offer - partial - solutions for supporting KM. The integration of results from these different research fields seems to be a promising approach. This is the motivation of the CoMMA IST project - funded by the European Commission - which started in February 2000. The main objective is to implement and ...", "neighbors": [ 934, 1085 ], "mask": "Validation" }, { "node_id": 457, "label": 2, "text": "Intelligent Crawling on the World Wide Web with Arbitrary Predicates The enormous growth of the world wide web in recent years has made it important to perform resource discovery efficiently. Consequently, several new ideas have been proposed in recent years; among them a key technique is focused crawling which is able to crawl particular topical portions of the world wide web quickly without having to explore all web pages. In this paper, we propose the novel concept of intelligent crawling which actually learns characteristics of the linkage structure of the world wide web while performing the crawling.
Specifically, the intelligent crawler uses the inlinking web page content, candidate URL structure, or other behaviors of the inlinking web pages or siblings in order to estimate the probability that a candidate is useful for a given crawl. This is a much more general framework than the focused crawling technique, which is based on a pre-defined understanding of the topical structure of the web. The techniques discussed in this paper are applicable for crawling web pages which satisfy arbitrary user-defined predicates such as topical queries, keyword queries or any combinations of the above. Unlike focused crawling, it is not necessary to provide representative topical examples, since the crawler can learn its way into the appropriate topic. We refer to this technique as intelligent crawling because of its adaptive nature in adjusting to the web page linkage structure. The learning crawler is capable of reusing the knowledge gained in a given crawl in order to provide more efficient crawling for closely related predicates.", "neighbors": [ 1, 53, 382, 457, 649, 774, 1000, 1059, 1104 ], "mask": "Train" }, { "node_id": 458, "label": 3, "text": "Tractable Query Answering in Indefinite Constraint Databases: Basic Results and Applications to Querying Spatiotemporal Information . We consider the scheme of indefinite constraint databases proposed by Koubarakis. This scheme can be used to represent indefinite information arising in temporal, spatial and truly spatiotemporal applications. The main technical problem that we address in this paper is the discovery of tractable classes of databases and queries in this scheme. We start with the assumption that we have a class of constraints C with satisfiability and variable elimination problems that can be solved in PTIME. Under this assumption, we show that there are several general classes of databases and queries for which query evaluation can be done with PTIME data complexity. We then search for tractable instances of C in the area of temporal and spatial constraints. Classes of constraints with tractable satisfiability problems can be easily found in the literature. The largest class that we consider is the class of Horn disjunctive linear constraints over the rationals. Because variable eliminati...", "neighbors": [ 481, 881, 1012 ], "mask": "Train" }, { "node_id": 459, "label": 5, "text": "Generating, Executing and Revising Schedules for Autonomous Robot Office Couriers Scheduling the tasks of an autonomous robot office courier and carrying out the scheduled tasks reliably and efficiently pose challenging problems for autonomous robot control. To carry out their jobs reliably and efficiently, many autonomous mobile service robots acting in human working environments have to view their jobs as everyday activity: they should accomplish long-term efficiency rather than optimize problem-solving episodes. They should also exploit opportunities and avoid problems flexibly because often robots are forced to generate schedules based on partial information. We propose to implement the controller for scheduled activity by employing concurrent reactive plans that reschedule the course of action whenever necessary and while performing their actions. The plans are represented modularly and transparently to allow for easy transformation. Scheduling and schedule repair methods are implemented as plan transformation rules.
Introduction To carry out their jobs reliably...", + "neighbors": [ + 340 + ], + "mask": "Train" + }, + { + "node_id": 460, + "label": 4, + "text": "Sensing Techniques for Mobile Interaction We describe sensing techniques motivated by unique aspects of human-computer interaction with handheld devices in mobile settings. Special features of mobile interaction include changing orientation and position, changing venues, the use of computing as auxiliary to ongoing, real-world activities like talking to a colleague, and the general intimacy of use for such devices. We introduce and integrate a set of sensors into a handheld device, and demonstrate several new functionalities engendered by the sensors, such as recording memos when the device is held like a cell phone, switching between portrait and landscape display modes by holding the device in the desired orientation, automatically powering up the device when the user picks it up to start using it, and scrolling the display using tilt. We present an informal experiment, initial usability testing results, and user reactions to these techniques. Keywords Input devices, interaction techniques, sensing, context-aware...", + "neighbors": [ + 189, + 1006 + ], + "mask": "Train" + }, + { + "node_id": 461, + "label": 3, + "text": "Summary this paper. The main questions addressed in this setting deal with conditions under which it is possible to evaluate queries incrementally.", + "neighbors": [], + "mask": "Test" + }, + { + "node_id": 462, + "label": 0, + "text": "Self-Adaptive Operator Scheduling using the Religion-Based EA The optimal choice of the variation operators mutation and crossover and their parameters can be decisive for the performance of evolutionary algorithms (EAs). Usually the type of the operators (such as Gaussian mutation) remains the same during the entire run and the probabilistic frequency of their application is determined by a constant parameter, such as a fixed mutation rate. However, recent studies have shown that the optimal usage of a variation operator changes during the EA run. In this study, we combined the idea of self-adaptive mutation operator scheduling with the Religion-Based EA (RBEA), which is an agent model with spatially structured and variable sized subpopulations (religions). In our new model (OSRBEA), we used a selection of different operators, such that each operator type was applied within one specific subpopulation only. Our results indicate that the optimal choice of operators is problem dependent, varies during the run, and can be handled by our self-adaptive OSRBEA approach. Operator scheduling could clearly improve the performance of the already very powerful RBEA and was superior compared to classic and other advanced EA approaches.", + "neighbors": [ + 783 + ], + "mask": "Validation" + }, + { + "node_id": 463, + "label": 1, + "text": "Dynamic on-line clustering and state extraction: An approach to symbolic learning Researchers often try to understand the representations that develop in the hidden layers of a neural network during training. Interpretation is difficult because the representations are typically highly distributed and continuous. By \"continuous,\" we mean that if one constructed a scatter plot over the hidden unit activity space of patterns obtained in response to various inputs, examination at any scale would reveal the patterns to be broadly distributed over the space.
Such continuous representations are naturally obtained if the input space and activation dynamics are continuous. Continuous representations are not always appropriate. Many task domains might benefit from discrete representations -- representations selected from a finite set of alternatives. Example domains include finite-state machine emulation, data compression, language and higher cognition (involving discrete symbol processing), and categorization. In such domains, standard neural...", + "neighbors": [ + 166 + ], + "mask": "Train" + }, + { + "node_id": 464, + "label": 1, + "text": "A Behavior-Based Intelligent Control Architecture with Application to Coordination of Multiple Underwater Vehicles The paper presents a behavior-based intelligent control architecture for designing controllers which, based on their observation of sensor signals, compute the discrete control actions. These control actions then serve as the \"set-points\" for the lower level controllers. The behavior-based approach yields an intelligent controller which is a cascade of a perceptor and a response controller. The perceptor extracts the relevant symbolic information from the incoming continuous sensor signals, which enables the execution of one of the behaviors. The response controller is a discrete event system that computes the discrete control actions by executing one of the enabled behaviors. The behavioral approach additionally yields a hierarchical two layered response controller, which provides better complexity management. The inputs from the perceptor are used to first compute the higher level activities, called behaviors, and next to compute the corresponding lower level activities, called actio...", + "neighbors": [], + "mask": "Validation" + }, + { + "node_id": 465, + "label": 0, + "text": "Hive: Distributed Agents for Networking Things Hive is a distributed agents platform, a decentralized system for building applications by networking local system resources. This paper presents the architecture of Hive, concentrating on the idea of an \"ecology of distributed agents\" and its implementation in a practical Java based system. Hive provides ad-hoc agent interaction, ontologies of agent capabilities, mobile agents, and a graphical interface to the distributed system. We are applying Hive to the problems of networking \"Things That Think,\" putting computation and communication in everyday places such as your shoes, your kitchen, or your own body. TTT shares the challenges and potentials of ubiquitous computing and embedded network applications. We have found that the flexibility of a distributed agents architecture is well suited for this application domain, enabling us to easily build applications and to reconfigure our systems on the fly. Hive enables us to make our environment and network more alive. This paper is dedic...", + "neighbors": [ + 258, + 415, + 701, + 978 + ], + "mask": "Train" + }, + { + "node_id": 466, + "label": 2, + "text": "Crawling the Hidden Web Current-day crawlers retrieve content only from the publicly indexable Web, i.e., the set of Web pages reachable purely by following hypertext links, ignoring search forms and pages that require authorization or prior registration. In particular, they ignore the tremendous amount of high quality content \"hidden\" behind search forms, in large searchable electronic databases. In this paper, we address the problem of designing a crawler capable of extracting content from this hidden Web. 
We introduce a generic operational model of a hidden Web crawler and describe how this model is realized in HiWE (Hidden Web Exposer), a prototype crawler built at Stanford. We introduce a new Layout-based Information Extraction Technique (LITE) and demonstrate its use in automatically extracting semantic information from search forms and response pages. We also present results from experiments conducted to test and validate our techniques. 1", + "neighbors": [ + 224, + 1134 + ], + "mask": "Train" + }, + { + "node_id": 467, + "label": 4, + "text": "BUILD-IT: A Planning Tool for Construction and Design It is time to go beyond the established approaches in human-computer interaction. With the Augmented Reality (AR) design strategy humans are able to behave as much as possible in a natural way: behavior of humans in the real world with other humans and/or real world objects. Following the fundamental constraints of a natural way of interacting, we derive a set of recommendations for the next generation of user interfaces: the Natural User Interface (NUI). The concept of NUI is presented in the form of a runnable demonstrator: a computer vision-based interaction technique for a planning tool for construction and design tasks. Keywords augmented reality, digital desk, natural user interface, computer vision-based interaction", + "neighbors": [ + 238 + ], + "mask": "Train" + }, + { + "node_id": 468, + "label": 3, + "text": "Normal Forms for Defeasible Logic Defeasible logic is an important logic-programming based nonmonotonic reasoning formalism which has an efficient implementation. It makes use of facts, strict rules, defeasible rules, defeaters, and a superiority relation. Representation results are important because they can help the assimilation of a concept by confining attention to its critical aspects. In this paper we derive some representation results for defeasible logic. In particular we show that the superiority relation does not add to the expressive power of the logic, and can be simulated by other ingredients in a modular way. Also, facts can be simulated by strict rules. Finally we show that we cannot simplify the logic any further in a modular way: Strict rules, defeasible rules, and defeaters form a minimal set of independent ingredients in the logic. 1 Introduction Normal forms play an important role in computer science. Examples of areas where normal forms have proved fruitful include logic [10], where normal forms o...", + "neighbors": [], + "mask": "Validation" + }, + { + "node_id": 469, + "label": 0, + "text": "ITR: A Framework for Environment-Aware, Massively Distributed Computing physical environment in real-time, and the need to reason about emerging aggregate properties as opposed to individual component behavior. In this research we propose to develop theory, methods and tools for massively distributed, environment-aware computing (more succinctly referred to as swarm computing). The state of swarm computing today is similar to that of sequential computing in the early 1950s. Developers painstakingly produce swarm programs by designing and programming the actions of individual devices, and converge on an acceptable program through extensive simulation and experimentation. In the pre-compiler era, skeptical programmers believed that a mechanical process could not possibly produce code of comparable quality to that produced by highly skilled machine coders and that the cost of machine time is high enough to outweigh any possible savings in programmer effort.
The state of swarm programming today is similar: devices are still expensive enough an", + "neighbors": [ + 689 + ], + "mask": "Test" + }, + { + "node_id": 470, + "label": 3, + "text": "The View Holder Approach: Utilizing Customized Materialized Views To Create Database Services Suitable For Mobile Database Applications among mobile devices (i.e., a laptop vs. a pager) and the amount of information available from today's database environments and the Internet. To this end, this dissertation presents the development of customizable view maintenance services, called the View Holder approach, whose middleware mechanism within the fixed network dynamically maintains versions of the views so that to meet the data consistency and currency requirements of a particular mobile client. In a general form, a View Holder can support a community of mobile clients with common interests. The motivation for maintaining versions is to compensate for the data changes that occurred to the materialized views that were used during disconnection as well as to reduce the cost of wireless communication. In order to maintain these views, customized view maintenance is performed at the data sources by translating the mobile machine's request into a materialization program containing a triggering", + "neighbors": [ + 174, + 278, + 354, + 373 + ], + "mask": "Train" + }, + { + "node_id": 471, + "label": 2, + "text": "Using Relevance Feedback In Contentbased Image Metasearch this article with a review of the issues in content-based visual query, then describe the current MetaSeek implementation. We present the results of experiments that evaluated the implementation in comparison to a previous version of the system and a baseline engine that randomly selects the individual search engines to query. We conclude by summarizing open issues for future research.", + "neighbors": [ + 1066 + ], + "mask": "Train" + }, + { + "node_id": 472, + "label": 0, + "text": "JACK Intelligent Agents - Components for Intelligent Agents in Java This paper is organised as follows. Section 2 introduces JACK Intelligent Agents, presenting the approach taken by AOS to its design and outlining its major engineering characteristics. The BDI model is discussed briefly in Section 3. Section 4 gives an outline of how to build an application with JACK Intelligent Agents. Finally, in Section 5 we discuss how the use of this framework can be beneficial to both engineers and researchers. For brevity, we will refer to JACK Intelligent Agents simply as \"JACK\".", + "neighbors": [ + 182, + 885 + ], + "mask": "Train" + }, + { + "node_id": 473, + "label": 1, + "text": "Applying Formal Concepts to Learning Systems Validation In the problem area of evaluating complex software systems, there are two distinguished areas of research, development, and application identified by the two buzzwords validation and verification, respectively. From the perspective adopted by the authors, verification is usually more formally based and, thus, can be supported by formal reasoning tools like theorem provers, for instance. The scope of verification approaches is limited by the difficulty of finding a sufficiently complete formalization to built upon. In paramount realistic problem domains, validation seems to be more appropriate, although it is less stringent in character and, therefore, validation results are often less definite. The aim of this paper is to exemplify a validation approach based on a clear and thoroughly formal theory. 
In this way, validation and verification should be brought closer to each other. To allow for precise and sufficiently clear results, the authors have selected the applicatio...", + "neighbors": [ + 935, + 1268 + ], + "mask": "Test" + }, + { + "node_id": 474, + "label": 4, + "text": "Design and Implementation of Expressive Footwear As an outgrowth of our interest in dense wireless sensing and expressive applications of wearable computing, we have developed the world's most versatile human-computer interface for the foot. By dense wireless sensing, we mean the remote acquisition of many different parameters with a compact, autonomous sensor cluster. We have developed such a low-power sensor card to measure over 16 continuous quantities and transmit them wirelessly to a remote base station, updating all variables at 50 Hz. We have integrated a pair of these devices onto the feet of dancers and athletes, measuring continuous pressure at 3 points near the toe, dynamic pressure at the heel, bidirectional bend of the sole, height of each foot off conducting strips in the stage, angular rate of each foot about the vertical, angular position of each foot about the Earth's local magnetic field, as well as their tilt and low-G acceleration, 3-axis shock acceleration (from kicks and jumps), and position (via an integrated s...", + "neighbors": [ + 979 + ], + "mask": "Train" + }, + { + "node_id": 475, + "label": 3, + "text": "NiagaraCQ: A Scalable Continuous Query System for Internet Databases Continuous queries are persistent queries that allow users to receive new results when they become available. While continuous query systems can transform a passive web into an active environment, they need to be able to support millions of queries due to the scale of the Internet. No existing systems have achieved this level of scalability. NiagaraCQ addresses this problem by grouping continuous queries based on the observation that many web queries share similar structures. Grouped queries can share the common computation, tend to fit in memory and can reduce the I/O cost significantly. Furthermore, grouping on selection predicates can eliminate a large number of unnecessary query invocations. Our grouping technique is distinguished from previous group optimization approaches in the following ways. First, we use an incremental group optimization strategy with dynamic re-grouping. New queries are added to existing query groups, without having to regroup already installed queries. Second, we use a query-split scheme that requires minimal changes to a general-purpose query engine. Third, NiagaraCQ groups both change-based and timer-based queries in a uniform way. To insure that NiagaraCQ is scalable, we have also employed other techniques including incremental evaluation of continuous queries, use of both pull and push models for detecting heterogeneous data source changes, and memory caching. This paper presents the design of NiagaraCQ system and gives some experimental results on the system\u2019s performance and scalability. 1.", + "neighbors": [ + 20, + 101, + 651 + ], + "mask": "Train" + }, + { + "node_id": 476, + "label": 1, + "text": "CABINS: A Framework of Knowledge Acquisition and Iterative Revision for Schedule Improvement and Reactive Repair Practical scheduling problems generally require allocation of resources in the presence of a large, diverse and typically conflicting set of constraints and optimization criteria. 
The ill-structuredness of both the solution space and the desired objectives makes scheduling problems difficult to formalize. This paper describes a case-based learning method for acquiring context-dependent user optimization preferences and tradeoffs and using them to incrementally improve schedule quality in predictive scheduling and reactive schedule management in response to unexpected execution events. The approach, implemented in the CABINS system, uses acquired user preferences to dynamically modify search control to guide schedule improvement. During iterative repair, cases are exploited for: (1) repair action selection, (2) evaluation of intermediate repair results and (3) recovery from revision failures. The method allows the system to dynamically switch between repair heuristic actions, each of whi...", + "neighbors": [ + 195 + ], + "mask": "Train" + }, + { + "node_id": 477, + "label": 2, + "text": "Efficient and Effective Metasearch for a Large Number of Text Databases Metasearch engines can be used to facilitate ordinary users in retrieving information from multiple local sources (text databases). In a metasearch engine, the contents of each local database are represented by a representative. Each user query is evaluated against the set of representatives of all databases in order to determine the appropriate databases to search. When the number of databases is very large, say in the order of tens of thousands or more, then a traditional metasearch engine may become inefficient as each query needs to be evaluated against too many database representatives. Furthermore, the storage requirement on the site containing the metasearch engine can be very large. In this paper, we propose to use a hierarchy of database representatives to improve the efficiency. We provide an algorithm to search the hierarchy. We show that the retrieval effectiveness of our algorithm is the same as that of evaluating the user query against all database representatives. We als...", + "neighbors": [ + 271, + 433, + 526, + 696, + 897, + 1124, + 1165 + ], + "mask": "Validation" + }, + { + "node_id": 478, + "label": 2, + "text": "Contextual Rules for Text Analysis In this paper we describe a rule-based formalism for the analysis and labelling of text segments. The rules are contextual rewriting rules with a restricted form of negation. They allow one to underspecify text segments not considered relevant to a given task and to base decisions upon context. A parser for these rules is presented and consistency and completeness issues are discussed. Some results of an implementation of this parser with a set of rules oriented to the segmentation of texts into propositions are shown.", + "neighbors": [], + "mask": "Validation" + }, + { + "node_id": 479, + "label": 1, + "text": "Protein Structure Prediction With Evolutionary Algorithms Evolutionary algorithms have been successfully applied to a variety of molecular structure prediction problems. In this paper we reconsider the design of genetic algorithms that have been applied to a simple protein structure prediction problem. Our analysis considers the impact of several algorithmic factors for this problem: the conformational representation, the energy formulation and the way in which infeasible conformations are penalized. Further, we empirically evaluate the impact of these factors on a small set of polymer sequences. Our analysis leads to specific recommendations for both GAs as well as other heuristic methods for solving PSP on the HP model.
1 INTRODUCTION A protein is a chain of amino acid residues that folds into a specific native tertiary structure under certain physiological conditions. A protein's structure determines its biological function. Consequently, methods for solving protein structure prediction (PSP) problems are valuable tools for modern molecula...", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 480, + "label": 5, + "text": "Characterizing Operating System Activity In SPECjvm98 Benchmarks: Complete system simulation to understand the influence of architecture and operating systems on application execution has been identified as crucial for systems design. This problem is particularly interesting in the context of Java since it is not only the application that can invoke kernel services, but so does the underlying Java Virtual Machine (JVM) implementation which runs these programs. Further, the JVM style (JIT compiler or interpreter) and the manner in which the different JVM components (such as the garbage collector and class loader) are exercised, can have a significant impact on the kernel activities. To investigate these issues, this chapter uses complete system simulation of the SPECjvm98 benchmarks on the SimOS simulation platform. The execution of these benchmarks on both JIT compilers and interpreters is profiled in detail. The kernel activity of SPECjvm98 applications constitutes up to 17% of the execution time in the large dataset and up to 31% i...", + "neighbors": [], + "mask": "Validation" + }, + { + "node_id": 481, + "label": 3, + "text": "The DEDALE System for Complex Spatial Queries This paper presents dedale, a spatial database system intended to overcome some limitations of current systems by providing an abstract and non-specialized data model and query language for the representation and manipulation of spatial objects. dedale relies on a logical model based on linear constraints, which generalizes the constraint database model of [KKR90]. While in the classical constraint model, spatial data is always decomposed into its convex components, in dedale holes are allowed to fit the need of practical applications. The logical representation of spatial data, although slightly more costly in memory, has the advantage of simplifying the algorithms. dedale relies on nested relations, in which all sorts of data (thematic, spatial, etc.) are stored in a uniform fashion. This new data model supports declarative query languages, which allow an intuitive and efficient manipulation of spatial objects. Their formal foundation constitutes a basis for practical query optimizati...", + "neighbors": [ + 147, + 397, + 458 + ], + "mask": "Train" + }, + { + "node_id": 482, + "label": 2, + "text": "SimRank: A Measure of Structural-Context Similarity The problem of measuring \"similarity\" of objects arises in many applications, and many domain-specific measures have been developed, e.g., matching text across documents or computing overlap among item-sets. We propose a complementary approach, applicable in any domain with object-to-object relationships, that measures similarity of the structural context in which objects occur, based on their relationships with other objects. Effectively, we compute a measure that says \"two objects are similar if they are related to similar objects.\" This general similarity measure, called SimRank, is based on a simple and intuitive graph-theoretic model. For a given domain, SimRank can be combined with other domain-specific similarity measures.
We suggest techniques for efficient computation of SimRank scores, and provide experimental results on two application domains showing the computational feasibility and effectiveness of our approach.", + "neighbors": [ + 216, + 1189 + ], + "mask": "Train" + }, + { + "node_id": 483, + "label": 2, + "text": "The THISL Broadcast News Retrieval System This paper describes the THISL spoken document retrieval system for British and North American Broadcast News. The system is based on the ABBOT large vocabulary speech recognizer, using a recurrent network acoustic model, and a probabilistic text retrieval system. We discuss the development of a real-time British English Broadcast News system, and its integration into a spoken document retrieval system. Detailed evaluation is performed using a similar North American Broadcast News system, to take advantage of the TREC SDR evaluation methodology. We report results on this evaluation, with particular reference to the effect of query expansion and of automatic segmentation algorithms. 1. INTRODUCTION THISL is an ESPRIT Long Term Research project in the area of speech retrieval. It is concerned with the construction of a system which performs good recognition of broadcast speech from television and radio news programmes, from which it can produce multimedia indexing data. The principal obj...", + "neighbors": [ + 604 + ], + "mask": "Validation" + }, + { + "node_id": 484, + "label": 3, + "text": "A String-based Model for Infinite Granularities (Extended Abstract) Jef Wijsen, Université de Mons-Hainaut, Jef.Wijsen@umh.ac.be Abstract In the last few years, the concept of time granularity has been defined by several researchers, and a glossary of time granularity concepts has been published. These definitions often view a time granularity as a (mostly infinite) sequence of time granules. Although this view is conceptually clean, it is extremely inefficient or even practically impossible to represent a time granularity in this manner. In this paper, we present a practical formalism for the finite representation of infinite granularities. The formalism is string-based, allows symbolic reasoning, and can be extended to multiple dimensions to accommodate, for example, space. Introduction In the last few years, formalisms to represent and to reason about temporal and spatial granularity have been developed in several areas of computer science. Although several researchers have used different definitions of time granularity, they comm...", + "neighbors": [ + 1237 + ], + "mask": "Test" + }, + { + "node_id": 485, + "label": 0, + "text": "A Multi-Agent Approach to Vehicle Monitoring in Motorway . This paper describes CaseLP, a prototyping environment for MultiAgent Systems (MAS), and its adoption for the development of a distributed industrial application. CaseLP employs architecture definition, communication, logic and procedural languages to model a MAS from the top-level architecture down to procedural behavior of each agent's instance. The executable specification which is obtained can be employed as a rapid prototype which helps in taking quick decisions on the best possible implementation solutions. Such capabilities have been applied to a distributed application of the Elsag company, in order to assess the best policies for data communication and database allocation before the concrete implementation.
The application consists of remote traffic control and surveillance over service areas on an Italian motorway, employing automatic detection and car plate reading at monitored gates. CaseLP allowed us to predict data communication performance statistics under differe...", + "neighbors": [ + 106, + 521, + 1222 + ], + "mask": "Test" + }, + { + "node_id": 486, + "label": 0, + "text": "Process- and Agent-Based Modelling Techniques for Dialogue Systems and Virtual Environments This text presents results of ongoing research, which is aimed at developing a framework for developing multimodal natural language dialogue systems operating within virtual environments. The aspects of multimodality and presence in a virtual environment are chosen as the main focus of this research. It may be argued that specification techniques would form the basis of such a framework. Therefore, a general overview and evaluation is given of existing specification techniques for interactive systems, based on both literature and previous research results. This includes the object-oriented model, process algebras, interactor models, and agent systems. Agent systems are further subdivided into intentional logics, production rule systems, agent communication languages, agent platforms, and agent architectures. A new agent system is proposed, which is based on update notification mechanisms as found in interactor models, and the `facilitator' function as found in some agent platfo...", + "neighbors": [ + 941 + ], + "mask": "Train" + }, + { + "node_id": 487, + "label": 3, + "text": "Temporal Objects for Spatio-Temporal Data Models and a Comparison of Their Representations Abstract: Currently, there are strong efforts to integrate spatial and temporal database technology into spatio-temporal database systems. This paper views the topic from a rather fundamental perspective and makes several contributions. First, it reviews existing temporal and spatial data models and presents a completely new approach to temporal data modeling based on the very general notion of temporal object. The definition of temporal objects is centered around the observation that anything that changes over time can be expressed as a function over time. For the modeling of spatial objects the well known concept of spatial data types is employed. As specific subclasses, linear temporal and spatial objects are identified. Second, the paper proposes the database embedding of temporal objects by means of the abstract data type (ADT) approach to the integration of complex objects into databases. Furthermore, we make statements about the expressiveness of different temporal and spatial database embeddings. Third, we consider the combination of temporal and spatial objects into spatio-temporal objects in (relational) databases. We explain various alternatives for spatio-temporal data models and databases and compare their expressiveness. Spatio-temporal objects turn out to be specific instances of temporal objects. 1", + "neighbors": [ + 151 + ], + "mask": "Train" + }, + { + "node_id": 488, + "label": 3, + "text": "On Supporting Containment Queries in Relational Database Management Systems Virtually all proposals for querying XML include a class of query we term \u201ccontainment queries\u201d. It is also clear that in the foreseeable future, a substantial amount of XML data will be stored in relational database systems. This raises the question of how to support these containment queries.
The inverted list technology that underlies much of Information Retrieval is well-suited to these queries, but should we implement this technology (a) in a separate loosely-coupled IR engine, or (b) using the native tables and query execution machinery of the RDBMS? With option (b), more than twenty years of work on RDBMS query optimization, query execution, scalability, and concurrency control and recovery immediately extend to the queries and structures that implement these new operations. But all this will be irrelevant if the performance of option (b) lags that of (a) by too much. In this paper, we explore some performance implications of both options using native implementations in two commercial relational database systems and in a special purpose inverted list engine. Our performance study shows that while RDBMSs are generally poorly suited for such queries, under certain conditions they can outperform an inverted list engine. Our analysis further identifies two significant causes that differentiate the performance of the IR and RDBMS implementations: the join algorithms employed and the hardware cache utilization. Our results suggest that contrary to most expectations, with some modifications, a native implementation in an RDBMS can support this class of query much more efficiently.", + "neighbors": [ + 306, + 420, + 634, + 1162 + ], + "mask": "Test" + }, + { + "node_id": 489, + "label": 4, + "text": "ARQuake: An Outdoor/Indoor Augmented Reality First Person Application This paper presents an outdoor/indoor augmented reality first person application, ARQuake, which we have developed. ARQuake is an extension of the desktop game Quake, and as such we are investigating how to convert a desktop first person application into an outdoor/indoor mobile augmented reality application. We present an architecture for a low cost, moderately accurate six degrees of freedom tracking system based on GPS, digital compass, and fiducial vision-based tracking. Usability issues such as monster selection, colour, and input devices are discussed. A second application for AR architectural design visualisation is presented.", + "neighbors": [], + "mask": "Test" + }, + { + "node_id": 490, + "label": 2, + "text": "Exploiting Geographical Location Information of Web Pages Many information resources on the web are relevant primarily to limited geographical communities. For instance, web sites containing information on restaurants, theaters, and apartment rentals are relevant primarily to web users in geographical proximity to these locations. In contrast, other information resources are relevant to a broader geographical community. For instance, an on-line newspaper may be relevant to users across the United States. Unfortunately, the geographical scope of web resources is largely ignored by web search engines. We make the case for identifying and exploiting the geographical location information of web sites so that web search engines can rank resources in a geographically sensitive fashion, in addition to using more traditional information-retrieval strategies. In this paper, we first consider how to compute the geographical location of web pages.
Subsequently, we consider how to exploit such information in one specific \"proof-of-concept\" appl...", + "neighbors": [ + 1000, + 1017, + 1104 + ], + "mask": "Train" + }, + { + "node_id": 491, + "label": 4, + "text": "MRML: Towards an extensible standard for multimedia querying and benchmarking In recent years, the need for databases which query multimedia data by content has become apparent. Many commercial and non-commercial research groups are trying to fulfill these needs. The development of research can be described as moving in two directions: the search for new, useful query and interaction paradigms, and deeper research to improve the performance of systems that have adopted a given query paradigm. The search for better performance given a query paradigm has led to \"clusters\" of systems which are similar in their interaction with the user, and which give a certain set of interaction capabilities to the user. It is already visible that research will move towards systems which enable the user to formulate multi-paradigm queries in order to further improve results. As a consequence of the above, there is the need for a common mechanism for shipping multi-paradigm queries and their results, which assures that the right query processor processes th...", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 492, + "label": 2, + "text": "Alerting Services in a Digital Library Environment The classical paradigm of finding information in the WWW by initiating retrieval and browsing becomes more and more ineffective. Other techniques have to be considered. Automatic delivery of contents to the user according to their needs and filtered by her profile of interests is required. Current implementations of such Alerting Services at the content providers' side have several drawbacks. In my research project I evaluate methods and techniques for Alerting Services with special respect to the area of digital libraries. I intend to provide a framework that supports design decisions in building alerting services depending on the infrastructure and desired system parameters. 1 Introduction Imagine one morning you just arrive at your office and switch on your computer to have a look at the recent news in your special field of research. Little pictures for each topic tell you that some interesting documents arrived. Behind one icon you find for instance the new announcements for c...", + "neighbors": [ + 314 + ], + "mask": "Validation" + }, + { + "node_id": 493, + "label": 3, + "text": "Optimizing Queries with Object Updates Object-oriented databases (OODBs) provide powerful data abstractions and modeling facilities but they usually lack a suitable framework for query processing and optimization. Even though there is an increasing number of recent proposals on OODB query optimization, only few of them are actually focused on query optimization in the presence of object identity and destructive updates, features often supported by most realistic OODB languages. This paper presents a formal framework for optimizing object-oriented queries in the presence of side effects. These queries may contain object updates at any place and in any form. We present a language extension to the monoid comprehension calculus to express these object-oriented features and we give a formal meaning to these extensions. Our method is based on denotational semantics, which is often used to give a formal meaning to imperative programming languages.
The semantics of our language extensions is expressed in terms of our monoid calculu...", + "neighbors": [ + 302, + 397, + 1047 + ], + "mask": "Train" + }, + { + "node_id": 494, + "label": 2, + "text": "Using Text Elements by Context to Display Search Results in Information Retrieval Systems - Model and Research Results Information retrieval systems display search results by various methods. This paper focuses on a model for displaying a list of search results by means of textual elements that utilize a new information unit that replaces the currently used information unit. The paper includes a short description of several studies that support the model. 1. Introduction Because of the growth in the number and scope of global databases, a special approach to locating information is required, from the perspective of the user interface. The Internet, as it exists today, is an outstanding example of a broad-base, unfocused database. Most Internet search engines display their information as a serially ordered list of results (with a partial attempt at ranking the results). In most cases, this list includes the document title, URL and, at times, the first few lines of the document. The information, as currently displayed to the user, is incomplete and insufficiently focused on the search query. This requi...", + "neighbors": [ + 1032 + ], + "mask": "Train" + }, + { + "node_id": 495, + "label": 0, + "text": "Environment Centered Analysis and Design of Coordination Mechanisms May 1995 KEITH S. DECKER B.S., Carnegie Mellon University M.S., Rensselaer Polytechnic Institute Ph.D., University of Massachusetts Amherst Directed by: Professor Victor R. Lesser Committee: Professor Paul R. Cohen Professor John A. Stankovic Professor Douglas L. Anderton Coordination, as the act of managing interdependencies between activities, is one of the central research issues in Distributed Artificial Intelligence. Many researchers have shown that there is no single best organization or coordination mechanism for all environments. Problems in coordinating the activities of distributed intelligent agents appear in many domains: the control of distributed sensor networks; multi-agent scheduling of people and/or machines; distributed diagnosis of errors in local-area or telephone networks; concurrent engineering; `software agents' for information gathering. The design of coordination mechanisms for groups of compu...", + "neighbors": [ + 200, + 434, + 441, + 513, + 847, + 1119 + ], + "mask": "Validation" + }, + { + "node_id": 496, + "label": 2, + "text": "Context in Web Search Web search engines generally treat search requests in isolation. The results for a given query are identical, independent of the user, or the context in which the user made the request. Next-generation search engines will make increasing use of context information, either by using explicit or implicit context information from users, or by implementing additional functionality within restricted contexts.
Greater use of context in web search may help increase competition and diversity on the web.", + "neighbors": [ + 224, + 279, + 347, + 502, + 587, + 595, + 627, + 696, + 774, + 845, + 855, + 931, + 1000, + 1003, + 1017, + 1104, + 1207, + 1233 + ], + "mask": "Test" + }, + { + "node_id": 497, + "label": 4, + "text": "Symbiotic Interfaces For Wearable Face Recognition We introduce a wearable face detection method that exploits constraints in face scale and orientation imposed by the proximity of participants in near social interactions. Using this method we describe a wearable system that perceives \u201csocial engagement,\u201d i.e., when the wearer begins to interact with other individuals. One possible application is improving the interfaces of portable consumer electronics, such as cellular phones, to avoid interrupting the user during face-to-face interactions. Our experimental system proved > 90% accurate when tested on wearable video data captured at a professional conference. Over three hundred individuals were captured, and the data was separated into independent training and test sets. A goal is to incorporate user interface in mobile machine recognition systems to improve performance. The user may provide real-time feedback to the system or may subtly cue the system through typical daily activities, such as turning to face a speaker, as to when conditions for recognition are favorable. 1", + "neighbors": [ + 307, + 334, + 665, + 738, + 987 + ], + "mask": "Train" + }, + { + "node_id": 498, + "label": 3, + "text": "First-Order Queries On Finite Structures Over The Reals We investigate properties of finite relational structures over the reals expressed by first-order sentences whose predicates are the relations of the structure plus arbitrary polynomial inequalities, and whose quantifiers can range over the whole set of reals. In constraint programming terminology, this corresponds to Boolean real polynomial constraint queries on finite structures. The fact that quantifiers range over all reals seems crucial; however, we observe that each sentence in the first-order theory of the reals can be evaluated by letting each quantifier range over only a finite set of real numbers without changing its truth value. Inspired by this observation, we then show that when all polynomials used are linear, each query can be expressed uniformly on all finite structures by a sentence of which the quantifiers range only over the finite domain of the structure. In other words, linear constraint programming on finite structures can be reduced to ordinary query evaluation as usual in finite model theory and databases. Moreover, if only \"generic\" queries are taken into consideration, we show that this can be reduced even further by proving that such queries can be expressed by sentences using as polynomial inequalities only those of the simple form x < y.", + "neighbors": [], + "mask": "Validation" + }, + { + "node_id": 499, + "label": 0, + "text": "An Architecture for Mobile BDI Agents BDI (Belief, Desire, Intention) is a mature and commonly adopted architecture for Intelligent Agents.
BDI Agents are autonomous entities able to work in teams and react to changing environmental conditions. However, the current computational model adopted by BDI has problems which, amongst other limitations, prevent the development of mobile agents. In this paper, we discuss an architecture, TOMAS (Transaction Oriented Multi Agent System), that addresses these issues by combining BDI and the distributed nested transaction paradigms. An algorithm is presented which enables agents in TOMAS to become mobile. 1 Introduction Intelligent Agents are a very active area of AI research [WJ95] [Sho93]. Of the various agent architectures which have been proposed, BDI (Belief, Desire, Intention) [RG92] is probably the most mature and has been adopted by a few industrial applications. BDI Agents are autonomous entities able to work in teams and react to changing environmental conditions. Mobile m...", + "neighbors": [ + 808, + 816 + ], + "mask": "Train" + }, + { + "node_id": 500, + "label": 5, + "text": "Learning Quantitative Knowledge for Multiagent Coordination A central challenge of multiagent coordination is reasoning about how the actions of one agent affect the actions of another. Knowledge of these interrelationships can help coordinate agents --- preventing conflicts and exploiting beneficial relationships among actions. We explore three interlocking methods that learn quantitative knowledge of such non-local effects in TAEMS, a well-developed framework for multiagent coordination. The surprising simplicity and effectiveness of these methods demonstrates how agents can learn domain-specific knowledge quickly, extending the utility of coordination frameworks that explicitly represent coordination knowledge. Introduction A major challenge of designing effective multiagent systems is managing non-local effects --- situations where the actions of one agent impact the performance of other agents' actions. For example, one agent's action can enable, disable, facilitate, or hinder the actions of other agents. Poor accounting for ...", + "neighbors": [ + 434, + 441, + 1119 + ], + "mask": "Train" + }, + { + "node_id": 501, + "label": 3, + "text": "Design and Implementation of the ROL System ROL is a deductive object-oriented database system developed at the University of Regina. It effectively integrates important features of deductive databases and object-oriented databases in a uniform framework and provides a uniform rule-based declarative language for defining, manipulating and querying a database. This paper describes the latest implementation of ROL. 1 Introduction In the past decade a lot of interest arose in integrating deductive and object-oriented databases to gain the best of the two approaches such as recursion, declarative querying, and firm logical foundations from deductive approaches, and object identity, complex objects, classes, class hierarchy, property inheritance with overriding and schema from the object-oriented approach. A number of deductive object-oriented database languages have been proposed, such as O-logic [17], revised O-logic [11], C-logic [8], IQL [2], IQL2[1], F-logic [10], LOGRES [7], LLO [16], LOL [6], CORAL++[19], Datalog method [3], DLT ...", + "neighbors": [ + 192, + 861 + ], + "mask": "Train" + }, + { + "node_id": 502, + "label": 2, + "text": "STARTS: Stanford Proposal for Internet Meta-Searching Document sources are available everywhere, both within the internal networks of organizations and on the Internet.
Even individual organizations use search engines from different vendors to index their internal document collections. These search engines are typically incompatible in that they support different query models and interfaces, they do not return enough information with the query results for adequate merging of the results, and finally, in that they do not export metadata about the collections that they index (e.g., to assist in resource discovery). This paper describes STARTS, an emerging protocol for Internet retrieval and search that facilitates the task of querying multiple document sources. STARTS has been developed in a unique way. It is not a standard, but a group effort coordinated by Stanford's Digital Library project, and involving over 11 companies and organizations. The objective of this paper is not only to give an overview of the STARTS protocol proposal, but...", + "neighbors": [ + 241, + 271, + 404, + 496, + 526, + 792, + 1124 + ], + "mask": "Train" + }, + { + "node_id": 503, + "label": 3, + "text": "An Efficient Index Structure for OID Indexing in Parallel Temporal Object-Oriented Database Systems . In an object-oriented database system based on logical OIDs, an OID index (OIDX) is necessary to convert from logical OID to physical location. In a temporal object-oriented database system (TOODB), this OIDX also contains the timestamps of the object versions. We have in a previous paper studied OIDX performance with a relatively simple index. The studies have shown that OIDX maintenance can be quite costly, especially object updates, because in a temporal OODB, the OIDX needs to be updated every time an object is updated. This has convinced us that a new index structure, particularly suitable to TOODB requirements, is necessary. In this report, we describe an efficient OID index structure for TOODBs, which we call the Vagabond Temporal OID Index (VTOIDX). The main goals of the VTOIDX are 1) support for temporal data, while still having index performance close to a non-temporal/one version database system, 2) efficient object-relational operation, and 3) easy tertiary storage migrati...", + "neighbors": [ + 630, + 760 + ], + "mask": "Train" + }, + { + "node_id": 504, + "label": 1, + "text": "Simultaneous Learning of Negatively Correlated Neural Networks A new approach to designing neural network ensembles has been proposed recently [1]. Experimental studies on some regression tasks have shown that the new approach performs significantly better than previous ones [1]. This paper presents a new algorithm for designing neural network ensembles for classification problems with noise. This new algorithm is different from that used for regression tasks although the idea is similar. The idea behind this new algorithm is to encourage different individual networks in an ensemble to learn different parts or aspects of the training data so that the whole ensemble can learn the whole training data better. Negatively correlated networks are trained with a novel correlation penalty term in the error function to encourage such specialisation. In our algorithm, individual networks are trained simultaneously rather than sequentially. This provides an opportunity for different networks to interact with each other and to specialise.
Experiments on two real-w...", + "neighbors": [ + 108 + ], + "mask": "Train" + }, + { + "node_id": 505, + "label": 1, + "text": "Unsupervised Learning from Dyadic Data Dyadic data refers to a domain with two finite sets of objects in which observations are made for dyads, i.e., pairs with one element from either set. This includes event co-occurrences, histogram data, and single stimulus preference data as special cases. Dyadic data arises naturally in many applications ranging from computational linguistics and information retrieval to preference analysis and computer vision. In this paper, we present a systematic, domain-independent framework for unsupervised learning from dyadic data by statistical mixture models. Our approach covers different models with flat and hierarchical latent class structures and unifies probabilistic modeling and structure discovery. Mixture models provide both a parsimonious yet flexible parameterization of probability distributions with good generalization performance on sparse data, as well as structural information about data-inherent grouping structure. We propose an annealed version of the standard Expectation Maximization algorithm for model fitting which is empirically evaluated on a variety of data sets from different domains.", + "neighbors": [ + 50, + 437, + 592, + 722, + 1141 + ], + "mask": "Train" + }, + { + "node_id": 506, + "label": 3, + "text": "Fast Approximate Evaluation of OLAP Queries for Integrated Statistical Data We have developed a mediator architecture that integrates statistical information about energy products from several government agencies, such as the Bureau of Labor Statistics, the Energy Information Administration, and the California Energy Commission. Our architecture has a dual mode of operation. First, our system can retrieve live data from databases and web sources from these agencies. This allows the users to obtain completely up-to-date data. However, for complex analytical queries that typically require large amounts of data and processing, live access does not offer the level of interactivity that some users require. Second, our system can warehouse the information from the data sources to allow for complex analytical queries to be executed much more efficiently. However, the data would be only as recent as the last update to the data warehouse. In this paper we describe the architecture and focus on how to perform analytical queries against the data warehouse very efficiently. We present results using a fast wavelet-based technique for progressive evaluation of range-sum queries. This technique allows for returning an approximate result to the query very efficiently and for fast convergence to the exact result. We envision users exploring many complex queries using the very fast approximate results as guidance and only obtaining the exact results for those queries that are deemed of interest. We present experimental results showing the efficiency of both approximate and exact queries. 1", + "neighbors": [ + 392 + ], + "mask": "Test" + }, + { + "node_id": 507, + "label": 2, + "text": "Towards Efficient Multi-Feature Queries in Heterogeneous Environments Applications like multimedia databases or enterprise-wide information management systems have to meet the challenge of efficiently retrieving best matching objects from vast collections of data. We present a new algorithm, Stream-Combine, for processing multi-feature queries on heterogeneous data sources.
Stream-Combine is self-adapting to different data distributions and to the specific kind of the combining function. Furthermore, we present a new retrieval strategy that will essentially speed up the output of relevant objects.", + "neighbors": [ + 555, + 1144, + 1145 + ], + "mask": "Train" + }, + { + "node_id": 508, + "label": 0, + "text": "Spatial Agents Implemented in a Logical Expressible Language In this paper, we present a multi-layered architecture for spatial and temporal agents. The focus is laid on the declarativity of the approach, which makes agent scripts expressive and well understandable. They can be realized as (constraint) logic programs. The logical description language is able to express actions or plans for one or more autonomous and cooperating agents for the RoboCup (Simulator League). The system architecture hosts constraint technology for qualitative spatial reasoning, but quantitative data is taken into account, too. The basic (hardware) layer processes the agent's sensor information. An interface transfers this low-level data into a logical representation. It provides facilities to access the preprocessed data and supplies several basic skills. The second layer performs (qualitative) spatial reasoning. On top of this, the third layer enables more complex skills such as passing, offside-detection etc. At last, the fourth layer establishes acting as a team both by emergent and explicit cooperation. Logic and deduction provide a clean means to specify and also to implement teamwork behavior. 1", + "neighbors": [ + 184, + 363 + ], + "mask": "Train" + }, + { + "node_id": 509, + "label": 1, + "text": "Learning human arm movements by imitation: Evaluation of a biologically-inspired connectionist architecture . This paper is concerned with the evaluation of a model of human imitation of arm movements. The model consists of a hierarchy of artificial neural networks, which are abstractions of brain regions involved in visuo-motor control. These are the spinal cord, the primary and pre-motor cortexes (M1 & PM), the cerebellum, and the temporal cortex. A biomechanical simulation is developed which models the muscles and the complete dynamics of a 37 degree of freedom humanoid. Input to the model are data from human arm movements recorded using video and marker-based tracking systems. The model's performance is evaluated for reproducing reaching movements and oscillatory movements of the two arms. Results show a high qualitative and quantitative agreement with human data. In particular, the model reproduces the well known features of reaching movements in humans, namely the bell-shaped curves for the velocity and quasi-linear hand trajectories. Finally, the model's performance is compar...", + "neighbors": [ + 1072, + 1143 + ], + "mask": "Test" + }, + { + "node_id": 510, + "label": 2, + "text": "Detection of Heterogeneities in a Multiple Text Database Environment As the number of text retrieval systems (search engines) grows rapidly on the World Wide Web, there is an increasing need to build search brokers (metasearch engines) on top of them. Often, the task of building an effective and efficient metasearch engine is hindered by the heterogeneities among the underlying local search engines. In this paper, we first analyze the impact of various heterogeneities on building a metasearch engine. We then present some techniques that can be used to detect the most prominent heterogeneities among multiple search engines.
Applications of the detected heterogeneities in building better metasearch engines will also be provided.",
    "neighbors": [
      115,
      241,
      271,
      433,
      526,
      579,
      696,
      792,
      931,
      1003,
      1017,
      1253
    ],
    "mask": "Train"
  },
  {
    "node_id": 511,
    "label": 2,
    "text": "Probabilistic Question Answering on the Web Web-based search engines such as Google and NorthernLight return documents that are relevant to a user query, not answers to user questions. We have developed an architecture that augments existing search engines so that they support natural language question answering. The process entails five steps: query modulation, document retrieval, passage extraction, phrase extraction, and answer ranking. In this paper we describe some probabilistic approaches to the last three of these stages. We show how our techniques apply to a number of existing search engines and we also present results contrasting three different methods for question answering. Our algorithm, probabilistic phrase reranking (PPR), uses proximity and question type features and achieves a total reciprocal document rank of 0.20 on the TREC8 corpus. Our techniques have been implemented as a Web-accessible system, called NSIR.",
    "neighbors": [
      595,
      653
    ],
    "mask": "Train"
  },
  {
    "node_id": 512,
    "label": 1,
    "text": "Using Case-Based Reasoning for Supporting Continuous Improvement Processes The goal of the IPQM project -- a collaboration of the Fraunhofer Institute for Manufacturing Engineering and Automation (IPA) in Stuttgart and the Fraunhofer Institute for Experimental Software Engineering (IESE) in Kaiserslautern -- is to develop a technical infrastructure to support continuous improvement processes. We describe the approach we took in some detail and focus on the implementation of the IPQM system and its currently ongoing evaluation in the healthcare sector. We also give an outlook on intended extensions of the system and its application in other domains.",
    "neighbors": [],
    "mask": "Train"
  },
  {
    "node_id": 513,
    "label": 0,
    "text": "Achieving Coordination through Combining Joint Planning and Joint Learning . There are two major approaches to activity coordination in multiagent systems. First, by endowing the agents with the capability to jointly plan, that is, to jointly generate hypothetical activity sequences. Second, by endowing the agents with the capability to jointly learn, that is, to jointly choose the actions to be executed on the basis of what they know from experience about the interdependencies of their actions. This paper describes a new algorithm called JPJL (\"Joint Planning and Joint Learning\") that combines both approaches. The primary motivation behind this algorithm is to bring together the advantages of joint planning and joint learning while avoiding their disadvantages. Experimental results are provided that illustrate the potential benefits and shortcomings of the JPJL algorithm. 1 Motivation Multiagent Systems (MAS)---systems in which several interacting, intelligent and autonomous entities called agents pursue some set of goals or perform some set of tasks---have...",
    "neighbors": [
      495,
      945
    ],
    "mask": "Train"
  },
  {
    "node_id": 514,
    "label": 1,
    "text": "Guided Crossover: A New Operator for Genetic Algorithm Based Optimization Genetic algorithms (GAs) have been extensively used in different domains as a means of doing global optimization in a simple yet reliable manner.
They have a much better chance of getting to global optima than gradient-based methods, which usually converge to local sub-optima. However, GAs have a tendency of getting only moderately close to the optima in a small number of iterations. To get very close to the optima, the GA needs a very large number of iterations, whereas gradient-based optimizers usually get very close to local optima in a relatively small number of iterations. In this paper we describe a new crossover operator which is designed to endow the GA with gradient-like abilities without actually computing any gradients and without sacrificing global optimality. The operator works by using guidance from all members of the GA population to select a direction for exploration. Empirical results in two engineering design domains and across both binary and floating point representa...",
    "neighbors": [],
    "mask": "Train"
  },
  {
    "node_id": 515,
    "label": 0,
    "text": "Jinni: Intelligent Mobile Agent Programming at the Intersection of Java and Prolog . Jinni (Java INference engine and Networked Interactor) is a lightweight, multi-threaded, logic programming language, intended to be used as a flexible scripting tool for gluing together knowledge processing components and Java objects in distributed applications. Jinni threads are coordinated through blackboards, local to each process. Associative search based on term unification (a variant of Linda) is used as the basic synchronization mechanism. Threads are controlled with tiny interpreters following a scripting language based on a subset of Prolog. Mobile threads, implemented by capturing first order continuations in a compact data structure sent over the network, allow Jinni to interoperate with remote high performance BinProlog servers for CPU-intensive knowledge processing and with other Jinni components over the Internet. The synergy of these features makes Jinni a convenient development platform for distributed AI, and in particular, for building intelligent autonomous agent...",
    "neighbors": [
      532
    ],
    "mask": "Train"
  },
  {
    "node_id": 516,
    "label": 0,
    "text": "Modelling and Design of Multi-Agent Systems Abstract. Agent technologies are now being applied to the development of large-scale commercial and industrial software systems. Such systems are complex, involving hundreds, perhaps thousands of agents, and there is a pressing need for system modelling techniques that permit their complexity to be effectively managed, and principled methodologies to guide the process of system design. Without adequate techniques to support the design process, such systems will not be sufficiently reliable, maintainable or extensible, will be difficult to comprehend, and their elements will not be re-usable. In this paper, we present techniques for modelling agents and multi-agent systems which adapt and extend existing Object-Oriented representation techniques, and a methodology which provides a clear conceptual framework to guide system design and specification. We have developed these techniques for systems of agents based upon a particular Belief-Desire-Intention architecture, but have sought to provide a framework for the description of agent systems that is sufficiently general to be applicable to other agent architectures, and which may be extended in various ways.
1", + "neighbors": [ + 522, + 580, + 941, + 957 + ], + "mask": "Train" + }, + { + "node_id": 517, + "label": 2, + "text": "Automatic Text Summarization of Multiple Documents Scientists have retrieved what appear to be normal human eggs from human ovarian tissue that was grafted onto research mice. This is the first research group to obtain mature, potentially fertilizable eggs. Results of the research are being presented today at the conference of the European Society of Human Reproduction and Embryology. A report published last year demonstrated that ovarian tissue which was frozen and then replaced into a woman's body resulted in ovulation and menstruation. Such methods are being considered for women being treated for cancer with methods that would severely diminish or destroy their reproductive chances. However, there is concern that the retransplanted tissue might contain cancer cells. The current study proposes to reduce that risk. This is yet another step toward enabling women to freeze ovarian tissue in their early 20's, when it is generally most productive, to delay reproduction until their later years.", + "neighbors": [ + 548, + 695 + ], + "mask": "Train" + }, + { + "node_id": 518, + "label": 3, + "text": "Accurate Estimation of the Cost of Spatial Selections Optimizing queries that involve operations on spatial data requires estimating the selectivity and cost of these operations. In this paper, we focus on estimating the cost of spatial selections, or window queries, where the query windows and data objects are general polygons. Cost estimation techniques previously proposed in the literature only handle rectangular query windows over rectangular data objects, thus ignoring the very significant cost of exact geometry comparison (the refinement step in a \u201cfilter and refine\u201d query processing strategy). The cost of the exact geometry comparison depends on the selectivity of the filtering step and the average number of vertices in the candidate objects identified by this step. In this paper, we introduce a new type of histogram for spatial data that captures the complexity and size of the spatial objects as well as their location. Capturing these attributes makes this type of histogram useful for accurate estimation, as we experimentally demonstrate. We also investigate sampling-based estimation approaches. Sampling can yield better selectivity estimates than histograms for polygon data, but at the high cost of performing exact geometry comparisons for all the sampled objects. 1.", + "neighbors": [ + 992 + ], + "mask": "Train" + }, + { + "node_id": 519, + "label": 2, + "text": "Programming by Demonstration for Information Agents this article we will refer to the user in the female form, while the agent will be referred to using male forms.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 520, + "label": 4, + "text": "Smart-Its Friends: A Technique for Users to Easily Establish Connections between Smart Artefacts . Ubiquitous computing is associated with a vision of everything being connected to everything. However, for successful applications to emerge, it will not be the quantity but the quality and usefulness of connections that will matter. Our concern is how qualitative relations and more selective connections can be established between smart artefacts, and how users can retain control over artefact interconnection. We propose context proximity for selective artefact communication, using the context of artefacts for matchmaking. 
We further suggest empowering users with simple but effective means to impose the same context on a number of artefacts. To prove our point, we have implemented Smart-Its Friends, small embedded devices that become connected when a user holds them together and shakes them.",
    "neighbors": [
      701,
      869,
      1226
    ],
    "mask": "Train"
  },
  {
    "node_id": 521,
    "label": 0,
    "text": "CaseLP, A Rapid Prototyping Environment For Agent Based Software Intelligent agents and multi-agent systems are increasingly recognized as an innovative approach for analyzing, designing and implementing complex, heterogeneous and distributed software applications. The agent-based view offers a powerful and high level conceptualization that software engineers can exploit to considerably improve the way in which software is realized. Agent-based software engineering is a recent and very interesting research area. Due to its novelty, there is still no evidence of well-established practices for the development of agent-based applications and thus experimentation in this direction is very important. This dissertation",
    "neighbors": [
      485,
      573,
      884,
      957,
      1067
    ],
    "mask": "Test"
  },
  {
    "node_id": 522,
    "label": 0,
    "text": "Agent-Oriented Software Engineering Agent-oriented techniques represent an exciting new means of analysing, designing and building complex software systems. They have the potential to significantly improve current practice in software engineering and to extend the range of applications that can feasibly be tackled. Yet, to date, there have been few serious attempts to cast agent systems as a software engineering paradigm. This paper seeks to rectify this omission. Specifically, it will be argued that: (i) the conceptual apparatus of agent-oriented systems is well-suited to building software solutions for complex systems and (ii) agent-oriented approaches represent a genuine advance over the current state of the art for engineering complex systems. Following on from this view, the major issues raised by adopting an agent-oriented approach to software engineering are highlighted and discussed.",
    "neighbors": [
      264,
      516,
      964
    ],
    "mask": "Train"
  },
  {
    "node_id": 523,
    "label": 2,
    "text": "Discriminant-EM Algorithm with Application to Image Retrieval In many vision applications, the practice of supervised learning faces several difficulties, one of which is that insufficient labeled training data result in poor generalization. In image retrieval, we have very few labeled images from query and relevance feedback so that it is hard to automatically weight image features and select similarity metrics for image classification. This paper investigates the possibility of including an unlabeled data set to make up for the insufficiency of labeled data. Different from most current research in image retrieval, the proposed approach tries to cast image retrieval as a transductive learning problem, in which the generalization of an image classifier is only defined on a set of images such as the given image database.
Formulating this transductive problem in a probabilistic framework, the proposed algorithm, Discriminant-EM (D-EM), not only estimates the parameters of a generative model, but also finds a linear transformation to relax the assumption of pro...",
    "neighbors": [
      100,
      391,
      609,
      824,
      1133,
      1153,
      1203
    ],
    "mask": "Test"
  },
  {
    "node_id": 524,
    "label": 2,
    "text": "Combining Collaborative Filtering with Personal Agents for Better Recommendations Information filtering agents and collaborative filtering both attempt to alleviate information overload by identifying which items a user will find worthwhile. Information filtering (IF) focuses on the analysis of item content and the development of a personal user interest profile. Collaborative filtering (CF) focuses on identification of other users with similar tastes and the use of their opinions to recommend items. Each technique has advantages and limitations that suggest that the two could be beneficially combined. This paper shows that a CF framework can be used to combine personal IF agents and the opinions of a community of users to produce better recommendations than either agents or users can produce alone. It also shows that using CF to create a personal combination of a set of agents produces better results than either individual agents or other combination mechanisms. One key implication of these results is that users can avoid having to select among ag...",
    "neighbors": [
      937,
      1068
    ],
    "mask": "Test"
  },
  {
    "node_id": 525,
    "label": 2,
    "text": "Using Software Agents to Support Evolution of Distributed Workflow Models This paper outlines a high-level design of how software agents can be used in combination with an existing CAGIS Process Centred Environment to deal with evolution of distributed, fragmented workflow models. Our process centred environment allows process fragments of the same workflow model to be located in workspaces that are geographically distributed. These process fragments can be changed independently in local workspaces causing consistency problems. We propose to use mobile software agents, offering awareness services that resolve conflicting updates of process fragments. Our solution is illustrated using some scenarios. Keywords: Process centred environments, software agents, workflow model consistency, workflow model evolution, distribution, fragmentation. 1 Introduction Dealing with evolution of workflow processes is not a trivial matter. One simple solution to this problem is to have one centralised workflow model, that cannot be changed after it is instantiated. In practice, it is ho...",
    "neighbors": [],
    "mask": "Validation"
  },
  {
    "node_id": 526,
    "label": 2,
    "text": "Building Efficient and Effective Metasearch Engines Frequently a user's information needs are stored in the databases of multiple search engines. It is inconvenient and inefficient for an ordinary user to invoke multiple search engines and identify useful documents from the returned results. To support unified access to multiple search engines, a metasearch engine can be constructed. When a metasearch engine receives a query from a user, it invokes the underlying search engines to retrieve useful information for the user. Metasearch engines have other benefits as a search tool, such as increasing the search coverage of the Web and improving the scalability of the search.
In this article, we survey techniques that have been proposed to tackle several underlying challenges for building a good metasearch engine. Among the main challenges, the database selection problem is to identify search engines that are likely to return useful documents to a given query. The document selection problem is to determine what documents to retrieve from each identified search engine. The result merging problem is to combine the documents returned from multiple search engines. We will also point out some problems that need to be further researched.", + "neighbors": [ + 216, + 224, + 241, + 271, + 477, + 502, + 510, + 579, + 696, + 792, + 931, + 1003, + 1017 + ], + "mask": "Validation" + }, + { + "node_id": 527, + "label": 3, + "text": "Knowledge Discovery from Client-Server Databases . The subject of this paper is the implementation of knowledge discovery in databases. Specifically, we assess the requirements for interfacing tools to client-server database systems in view of the architecture of those systems and of \"knowledge discovery processes\". We introduce the concept of a query frontier of an exploratory process, and propose a strategy based on optimizing the current query frontier rather than individual knowledge discovery algorithms. This approach has the advantage of enhanced genericity and interoperability. We demonstrate a small set of query primitives, and show how one example tool, the well-known decision tree induction algorithm C4.5, can be rewritten to function in this environment. 1 Introduction Relational databases are the current dominant database technology in industry, and many organizations have collected large amounts of data in so-called data warehouses expressly for the purpose of decision support and data mining. In general the data must ...", + "neighbors": [], + "mask": "Validation" + }, + { + "node_id": 528, + "label": 5, + "text": "Sensor Fault Detection and Identification in a Mobile Robot Multiple model adaptive estimation (MMAE) is used to detect and identify sensor failures in a mobile robot. Each estimator is a Kalman filter with a specific embedded failure model. The filter bank also contains one filter which has the nominal model embedded within it. The filter residuals are postprocessed to produce a probabilistic interpretation of the operation of the system. The output of the system at any given time is the confidence in the correctness of the various embedded models. As an additional feature the standard assumption that the measurements are available at a constant, common frequency, is relaxed. Measurements are assumed to be asynchronous and of varying frequency. The particularly difficult case of 'soft' sensor failure is also handled successfully. A system architecture is presented for the general problem of failure detection and identification in mobile robots. As an example, the MMAE algorithm is demonstrated on a Pioneer I robot in the case of three different sensor failures.", + "neighbors": [ + 554 + ], + "mask": "Test" + }, + { + "node_id": 529, + "label": 3, + "text": "Efficient Support for P-HTTP in Cluster-Based Web Servers This paper studies mechanisms and policies for supporting HTTP/1.1 persistent connections in cluster-based Web servers that employ content-based request distribution. We present two mechanisms for the efficient, content-based distribution of HTTP/1.1 requests among the back-end nodes of a cluster server. 
A trace-driven simulation shows that these mechanisms, combined with an extension of the locality-aware request distribution (LARD) policy, are effective in yielding scalable performance for HTTP/1.1 requests. We implemented the simpler of these two mechanisms, back-end forwarding. Measurements of this mechanism in connection with extended LARD on a prototype cluster, driven with traces from actual Web servers, confirm the simulation results. The throughput of the prototype is up to four times better than that achieved by conventional weighted round-robin request distribution. In addition, throughput with persistent connections is up to 26% better than without.",
    "neighbors": [],
    "mask": "Validation"
  },
  {
    "node_id": 530,
    "label": 3,
    "text": "Integrating Light-Weight Workflow Management Systems within Existing Business Environments Workflow management systems support the efficient, largely automated execution of business processes. However, using a workflow management system typically requires implementing the application's control flow exclusively by the workflow management system. This approach is powerful if the control flow is specified and implemented from scratch, but it has severe drawbacks if a workflow management system is to be integrated within environments with existing solutions for implementing control flow. Usually, the existing solutions are too complex to be substituted by the workflow management system at once. Hence, the workflow management system must support an incremental integration, i.e. the reuse of existing implementations of control flow as well as their incremental substitution. Extending the workflow management system's functionality according to future application needs, e.g. by worklist and history management, must also be possible. In particular, at the beginning of...",
    "neighbors": [
      315
    ],
    "mask": "Train"
  },
  {
    "node_id": 531,
    "label": 4,
    "text": "Realtime Personal Positioning System for Wearable Computers Context awareness is an important functionality for wearable computers. In particular, the computer should know where the person is in the environment. This paper proposes an image sequence matching technique for the recognition of locations and previously visited places. As in single word recognition in speech recognition, a dynamic programming algorithm is proposed for the calculation of the similarity of different locations. The system runs on a stand-alone wearable computer such as a Libretto PC. Using a training sequence, a dictionary of locations is created automatically. These locations are then recognized by the system in realtime using a hat-mounted camera. 1. Introduction Obtaining user location is one of the important functions for wearable computers in two applications. One is automatic self-summary, and the other is context-aware user interface. In self-summary, the user is wearing a small camera and a small computer, capturing and recording every event of his/her daily ...",
    "neighbors": [
      307,
      680,
      738,
      1129
    ],
    "mask": "Validation"
  },
  {
    "node_id": 532,
    "label": 0,
    "text": "Computational Logic and Multi-Agent Systems: a Roadmap Agent-based computing is an emerging computing paradigm that has proved extremely successful in dealing with a number of problems arising from new technological developments and applications.
In this paper we report on the role of computational logic in modeling intelligent agents, by analysing existing agent theories, agent-oriented programming languages and applications, as well as identifying challenges and promising directions for future research. 1 Introduction In the past ten years the field of agent-based computing has emerged and greatly expanded, due to new technological developments such as ever faster and cheaper computers, fast and reliable interconnections between them as well as the emergence of the world wide web. These developments have at the same time opened new application areas, such as electronic commerce, and posed new problems, such as that of integrating great quantities of information and building complex software, embedding legacy code. The establishment o...",
    "neighbors": [
      288,
      515,
      588,
      1236
    ],
    "mask": "Validation"
  },
  {
    "node_id": 533,
    "label": 1,
    "text": "Simulating the Evolution of 2D Pattern Recognition on the CAM-Brain Machine, an Evolvable Hardware Tool for Building a 75 Million Neuron Artificial Brain This paper presents some simulation results of the evolution of 2D visual pattern recognizers to be implemented very shortly on real hardware, namely the \"CAM-Brain Machine\" (CBM), an FPGA based piece of evolvable hardware which implements a genetic algorithm (GA) to evolve a 3D cellular automata (CA) based neural network circuit module, of approximately 1,000 neurons, in about a second, i.e. a complete run of a GA, with 10,000s of circuit growths and performance evaluations. Up to 65,000 of these modules, each of which is evolved with a humanly specified function, can be downloaded into a large RAM space, and interconnected according to humanly specified artificial brain architectures. This RAM, containing an artificial brain with up to 75 million neurons, is then updated by the CBM at a rate of 130 billion CA cells per second. Such speeds will enable real time control of robots and hopefully the birth of a new research field that we call \"brain building\". The first such artif...",
    "neighbors": [],
    "mask": "Train"
  },
  {
    "node_id": 534,
    "label": 1,
    "text": "Genetic Algorithms Based Systems For Conceptual Engineering Design In this paper we try to integrate methods of preferences and scenarios with Genetic Algorithms used to perform multi-objective optimisation. The goal is to make a system that will be able to work together with the designer during the conceptual design phase, where interaction and designer knowledge are sometimes more important than accuracy.",
    "neighbors": [
      1136
    ],
    "mask": "Train"
  },
  {
    "node_id": 535,
    "label": 3,
    "text": "TEMPOS: A Temporal Database Model Seamlessly Extending ODMG This paper presents Tempos, a set of models and languages intended to seamlessly extend the ODMG object database standard with temporal functionalities. The proposed models exploit object-oriented technology to meet some important, yet traditionally neglected design criteria, related to legacy code migration and representation independence. Tempos has been fully formalized both at the syntactical and the semantical level and implemented on top of the O2 DBMS. Its suitability in regard to applications' requirements has been validated through concrete case studies from various contexts.
Keywords: temporal databases, temporal data models, temporal query languages, time representation, upward compatibility, object-oriented databases, ODMG",
    "neighbors": [
      59,
      1058
    ],
    "mask": "Train"
  },
  {
    "node_id": 536,
    "label": 2,
    "text": "Finding Related Pages in the World Wide Web When using traditional search engines, users have to formulate queries to describe their information need. This paper discusses a different approach to web searching where the input to the search process is not a set of query terms, but instead is the URL of a page, and the output is a set of related web pages. A related web page is one that addresses the same topic as the original page. For example, www.washingtonpost.com is a page related to www.nytimes.com, since both are online newspapers. We describe two algorithms to identify related web pages. These algorithms use only the connectivity information in the web (i.e., the links between pages) and not the content of pages or usage information. We have implemented both algorithms and measured their runtime performance. To evaluate the effectiveness of our algorithms, we performed a user study comparing our algorithms with Netscape's \"What's Related\" service [12]. Our study showed that the precision at 10 for our two algorithms is 73% better and 51% better than that of Netscape, despite the fact that Netscape uses both content and usage pattern information in addition to connectivity information.",
    "neighbors": [
      235,
      322,
      578,
      763,
      774,
      867,
      990,
      1000,
      1005,
      1017,
      1099,
      1228
    ],
    "mask": "Train"
  },
  {
    "node_id": 537,
    "label": 1,
    "text": "Evolving Rule-Based Trading Systems In this study, a market trading rulebase is optimised using genetic programming (GP). The rulebase is comprised of simple relationships between technical indicators, and generates signals to buy, sell short, and remain inactive. The methodology is applied to prediction of the Standard & Poor's composite index (02-Jan-1990 to 18-Oct-2001). Two potential market systems are inferred: a simple system using few rules and nodes, and a more complex system. Results are compared with a benchmark buy-and-hold strategy. Neither trading system was found capable of consistently outperforming this benchmark. More complicated rulebases, in addition to being difficult to understand, are susceptible to overfitting.",
    "neighbors": [],
    "mask": "Validation"
  },
  {
    "node_id": 538,
    "label": 0,
    "text": "Knowledge Base Support For Design And Synthesis Of Multiagent Systems",
    "neighbors": [],
    "mask": "Train"
  },
  {
    "node_id": 539,
    "label": 3,
    "text": "External Memory Algorithms and Data Structures Data sets in large applications are often too massive to fit completely inside the computer's internal memory. The resulting input/output communication (or I/O) between fast internal memory and slower external memory (such as disks) can be a major performance bottleneck. In this paper, we survey the state of the art in the design and analysis of external memory algorithms and data structures (which are sometimes referred to as \"EM\" or \"I/O\" or \"out-of-core\" algorithms and data structures).
EM algorithms and data structures are often designed and analyzed using the parallel disk model (PDM). The three machine-independent measures of performance in PDM are the number of I/O operations, the CPU time, and the amount of disk space. PDM allows for multiple disks (or disk arrays) and parallel CPUs, and it can be generalized to handle tertiary storage and hierarchical memory. We discuss several important paradigms for how to solve batched and online problems efficiently in external memory. Programming tools and environments are available for simplifying the programming task. The TPIE system (Transparent Parallel I/O programming Environment) is both easy to use and efficient in terms of execution speed. We report on some experiments using TPIE in the domain of spatial databases. The newly developed EM algorithms and data structures that incorporate the paradigms we discuss are significantly faster than methods currently used in practice.", + "neighbors": [ + 929 + ], + "mask": "Train" + }, + { + "node_id": 540, + "label": 0, + "text": "Hierarchical Agent Interface for Animation Asynchronous, Hierarchical Agents (AHAs) provide a vertically structured multilevel abstraction hierarchy. In this paper, we argue that this multilevel hierarchy is a convenient way to create a human-agent interface at multiple levels of abstraction. In this way, the agent has several layers of specification (input) and visualization (output) which facilitates users with problem solving, because such an interface parallels the hierarchical and iterative nature of human creative thought processes. The AHA interface presents an intuitive, intimate interface which supports interactions on a scale from direct manipulation to delegation, depending on the user's choice. Another feature of this interface is its two modes of interaction: direct device interaction (mouse clicking) and interpretive, command line or scripting mode. This way, agents can be \"forced\" to perform certain activities via mouse clicks (direct control), or they can be programmed via scripts on the fly. We present example...", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 541, + "label": 4, + "text": "Partial Replication in the Vesta Software Repository The Vesta repository is a special-purpose replicated file system, developed as part of the Vesta software configuration management system. One of the major goals of Vesta is to make all software builds reproducible. To this end, the repository provides an append-only name space; new names can be inserted, but once a name exists, its meaning cannot change. More concretely, all files and some designated directories are immutable, while the remaining directories are appendable, allowing new names to be defined but not allowing existing names to be redefined. The data stored", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 542, + "label": 0, + "text": "Hierarchical Optimization of Policy-Coupled Semi-Markov Decision Processes One general strategy for approximately solving large Markov decision processes is \"divide-and-conquer\": the original problem is decomposed into sub-problems which interact with each other, but yet can be solved independently by taking into account the nature of the interaction. In this paper we focus on a class of \"policy-coupled\" semi-Markov decision processes (SMDPs), which arise in many nonstationary real-world multi-agent tasks, such as manufacturing and robotics. 
The nature of the interaction among sub-problems (agents) is more subtle than that studied previously: the components of a sub-SMDP, namely the available states and actions, transition probabilities and rewards, depend on the policies used in solving the \"neighboring\" sub-SMDPs. This \"strongly-coupled\" interaction among subproblems causes the approach of solving each sub-SMDP in parallel to fail. We present a novel approach whereby many variants of each sub-SMDP are solved, explicitly taking into account the different mod...", + "neighbors": [], + "mask": "Test" + }, + { + "node_id": 543, + "label": 3, + "text": "Knowledge Management through Ontologies Most enterprises agree that knowledge is an essential asset for success and survival on a increasingly competitive and global market. This awareness is one of the main reasons for the exponential growth of knowledge management in the past decade. Our approach to knowledge management is based on ontologies, and makes knowledge assets intelligently accessible to people in organizations. Most company-vital knowledge resides in the heads of people, and thus successful knowledge management does not only consider technical aspects, but also social ones. In this paper, we describe an approach to intelligent knowledge management that explicitly takes into account the social issues involved. The proof of concept is given by a large-scale initiative involving knowledge management of a virtual organization. 1 Introduction According to Information Week (Angus et al., 1998) \"the business problem that knowledge management is designed to solve is that knowledge acquired through experience doesn't ge...", + "neighbors": [ + 347, + 767 + ], + "mask": "Test" + }, + { + "node_id": 544, + "label": 3, + "text": "Indexing Techniques for Continuously Evolving Phenomena The management of spatial, temporal, and spatiotemporal data is becoming increasingly important in a wide range of applications. This ongoing Ph.D. project focuses on applications where spatial or temporal aspects of objects are continuously changing and there is a need for indexing techniques that \"track\" the changing data, even in-between explicit updates. In spatiotemporal applications, there is a need to record and efficiently query the history, the current state, and the predicted future behavior of continuously moving objects, such as vehicles, mobile telephones, and people. Likewise, in temporal applications and spatiotemporal applications with discrete change, time intervals may be naturally related to the current time, which continuously progresses. The paper outlines the research agenda of the Ph.D. project and describes briefly two access methods developed so far in this project. 1 Introduction Recent years have shown both an increase in the amounts of ...", + "neighbors": [ + 1012 + ], + "mask": "Train" + }, + { + "node_id": 545, + "label": 2, + "text": "Document Categorization and Query Generation on the World Wide Web Using WebACE We present WebACE, an agent for exploring and categorizing documents on the World Wide Web based on a user profile. The heart of the agent is an unsupervised categorization of a set of documents, combined with a process for generating new queries that is used to search for new related documents and for filtering the resulting documents to extract the ones most closely related to the starting set. The document categories are not given a priori. 
We present the overall architecture and describe two novel algorithms which provide significant improvement over Hierarchical Agglomeration Clustering and AutoClass algorithms and form the basis for the query generation and search component of the agent. We report on the results of our experiments comparing these new algorithms with more traditional clustering algorithms and we show that our algorithms are fast and scalable. \u2020 Authors are listed alphabetically. 1 Introduction The World Wide Web is a vast resource of information and services t...",
    "neighbors": [
      291,
      599,
      616,
      893,
      947,
      1126
    ],
    "mask": "Train"
  },
  {
    "node_id": 546,
    "label": 3,
    "text": "Practical Lineage Tracing in Data Warehouses We consider the view data lineage problem in a warehousing environment: For a given data item in a materialized warehouse view, we want to identify the set of source data items that produced the view item. We formalize the problem and present a lineage tracing algorithm for relational views with aggregation. Based on our tracing algorithm, we propose a number of schemes for storing auxiliary views that enable consistent and efficient lineage tracing in a multisource data warehouse. We report on a performance study of the various schemes, identifying which schemes perform best in which settings. Based on our results, we have implemented a lineage tracing package in the WHIPS data warehousing system prototype at Stanford. With this package, users can select view tuples of interest, then efficiently \"drill down\" to examine the source data that produced them. 1 Introduction Data warehousing systems collect data from multiple distributed sources, integrate the information as materialized v...",
    "neighbors": [
      37
    ],
    "mask": "Train"
  },
  {
    "node_id": 547,
    "label": 5,
    "text": "Experience with EMERALD to Date After summarizing the EMERALD architecture and the evolutionary process from which EMERALD has evolved, this paper focuses on our experience to date in designing, implementing, and applying EMERALD to various types of anomalies and misuse. The discussion addresses the fundamental importance of good software engineering practice and the importance of the system architecture -- in attaining detectability, interoperability, general applicability, and future evolvability. It also considers the importance of correlation among distributed and hierarchical instances of EMERALD, and needs for additional detection and analysis components. 1. Introduction EMERALD (Event Monitoring Enabling Responses to Anomalous Live Disturbances) [6, 8, 9] is an environment for anomaly and misuse detection and subsequent analysis of the behavior of systems and networks. EMERALD is being developed under DARPA/ITO Contract number F30602-96-C-0294 and applied under DARPA/ISO Contract number F30602-98-C-0059. EMER...",
    "neighbors": [
      300,
      567
    ],
    "mask": "Train"
  },
  {
    "node_id": 548,
    "label": 2,
    "text": "Selecting Text Spans for Document Summaries: Heuristics and Metrics Human-quality text summarization systems are difficult to design, and even more difficult to evaluate, in part because documents can differ along several dimensions, such as length, writing style and lexical usage. Nevertheless, certain cues can often help suggest the selection of sentences for inclusion in a summary. This paper presents an analysis of news-article summaries generated by sentence extraction.
Sentences are ranked for potential inclusion in the summary using a weighted combination of linguistic features -- derived from an analysis of news-wire summaries. This paper evaluates the relative effectiveness of these features. In order to do so, we discuss the construction of a large corpus of extraction-based summaries, and characterize the underlying degree of difficulty of summarization at different compression levels on articles in this corpus. Results on our feature set are presented after normalization by this degree of difficulty.",
    "neighbors": [
      517,
      695
    ],
    "mask": "Train"
  },
  {
    "node_id": 549,
    "label": 2,
    "text": "Assessment Methods for Information Quality Criteria Information quality (IQ) is one of the most important aspects of information integration on the Internet. Many projects realize and address this fact by gathering and classifying IQ criteria. Hardly ever do the projects address the immense difficulty of assessing scores for the criteria. This task must precede any usage of criteria for qualifying and integrating information. After reviewing previous attempts to classify IQ criteria, in this paper we also classify criteria, but in a new, assessment-oriented way. We identify three sources for IQ scores and thus, three IQ criterion classes, each with different general assessment possibilities. Additionally, for each criterion we give detailed assessment methods. Finally, we consider confidence measures for these methods. Confidence expresses the accuracy, lastingness, and credibility of the individual assessment methods. 1 Introduction Low information quality is one of the most pressing problems for consumers of information that is di...",
    "neighbors": [
      553,
      970
    ],
    "mask": "Train"
  },
  {
    "node_id": 550,
    "label": 2,
    "text": "Automatic Text Detection and Tracking in Digital Video Text which appears in a scene or is graphically added to video can provide an important supplemental source of index information as well as clues for decoding the video's structure and for classification. In this paper we present algorithms for detecting and tracking text in digital video. Our system implements a scale-space feature extractor that feeds an artificial neural processor to detect text blocks. Our text tracking scheme consists of two modules: an SSD (Sum of Squared Difference)-based module to find the initial position and a contour-based module to refine the position. Experiments conducted with a variety of video sources show that our scheme can detect and track text robustly. Keywords Text Detection, Text Tracking, Video Indexing, Digital Libraries, Neural Network I. Introduction The continued proliferation of large amounts of digital video has increased demand for true content based indexing and retrieval systems. Traditionally, content has been indexed primaril...",
    "neighbors": [
      8,
      840,
      1173
    ],
    "mask": "Test"
  },
  {
    "node_id": 551,
    "label": 4,
    "text": "Collections - Adapting The Display of Personal Objects for Different Audiences Although current networked systems and online applications provide new opportunities for displaying and sharing personal information, they do not account for the underlying social contexts that frame such interactions. Existing categorization and management mechanisms for digital content have been designed to focus on the data they handle without much regard for the social circumstances within which their content is shared.
As we share large collections of personal information over mediated environments, our tools need to account for the social scenarios that surround our interactions. This thesis presents Collections: an application for the management of digital pictures according to their intended audiences. The goal is to create a graphical interface that supports the creation of fairly complex privacy decisions concerning the display of digital photographs. Simple graphics are used to enable the collector to create a wide range of audience arrangements for her digital pho...",
    "neighbors": [],
    "mask": "Train"
  },
  {
    "node_id": 552,
    "label": 4,
    "text": "Graspable interfaces: Establishing design principles PhD Research Plan for Morten Fjeld. Topic: Design of Tangible User Interfaces",
    "neighbors": [],
    "mask": "Train"
  },
  {
    "node_id": 553,
    "label": 3,
    "text": "An Extensible Framework for Data Cleaning Data integration solutions dealing with large amounts of data have been strongly required in the last few years. Besides the traditional data integration problems (e.g. schema integration, local to global schema mappings), three additional data problems have to be dealt with: (1) the absence of universal keys across different databases that is known as the object identity problem, (2) the existence of keyboard errors in the data, and (3) the presence of inconsistencies in data coming from multiple sources. Dealing with these problems is globally called the data cleaning process. In this work, we propose a framework which offers the fundamental services required by this process: data transformation, duplicate elimination and multi-table matching. These services are implemented using a set of purposely designed macro-operators. Moreover, we propose an SQL extension for specifying each of the macro-operators. One important feature of the framework is the ability of explicitly includ...",
    "neighbors": [
      549,
      697,
      970
    ],
    "mask": "Train"
  },
  {
    "node_id": 554,
    "label": 5,
    "text": "Circumventing Dynamic Modeling: Evaluation of the Error-State Kalman Filter applied to Mobile Robot Localization The mobile robot localization problem is treated as a two-stage iterative estimation process. The attitude is estimated first and is then available for position estimation. The indirect (error state) form of the Kalman filter is developed for attitude estimation when applying gyro modeling. The main benefit of this choice is that complex dynamic modeling of the mobile robot and its interaction with the environment is avoided. The filter optimally combines the attitude rate information from the gyro and the absolute orientation measurements. The proposed implementation is independent of the structure of the vehicle or the morphology of the ground. The method can easily be transferred to another mobile platform provided it carries an equivalent set of sensors. The 2D case is studied in detail first. Results of extending the approach to the 3D case are presented. In both cases the results demonstrate the efficacy of the proposed method. 1 Introduction On July 4th 1997, the Mars Pathfinde...",
    "neighbors": [
      528
    ],
    "mask": "Test"
  },
  {
    "node_id": 555,
    "label": 2,
    "text": "An XML-based Multimedia Middleware for Mobile Online Auctions Pervasive Internet services today promise to provide users with quick and convenient access to a variety of commercial applications. However, due to unsuitable architectures and poor performance, user acceptance is still low.
To be a major success mobile services have to provide device-adapted content and advanced value-added Web services. Innovative enabling technologies like XML and wireless communication may for the first time provide a facility to interact with online applications anytime anywhere. We present a prototype implementing an efficient multimedia middleware approach towards ubiquitous value-added services using an auction house as a sample application. Advanced multi-feature retrieval technologies are combined with enhanced content delivery to show the impact of modern enterprise information systems on today's e-commerce applications. Keywords: mobile commerce, online auctions, middleware architectures, pervasive Internet technology, multimedia database appli...", + "neighbors": [ + 507, + 1145 + ], + "mask": "Train" + }, + { + "node_id": 556, + "label": 0, + "text": "Categorization of Software Errors that led to Security Breaches A set of errors known to have led to security breaches in computer systems was analyzed. The analysis led to a categorization of these errors. After examining several proposed schemes for the categorization of software errors a new scheme was developed and used. This scheme classifies errors by their cause, the nature of their impact, and the type of change, or fix, made to remove the error. The errors considered in this work are found in a database maintained by the COAST laboratory. The categorization is the first step in the investigation of the effectiveness of various measures of code coverage in revealing software errors that might lead to security breaches. 1 Introduction We report the outcome of an effort to categorize errors in software that are known to have led to security breaches. The set of errors used in this study came from a database of errors developed in the COAST laboratory [10]. Several existing schemes for the categorization of software errors were evaluated for ...", + "neighbors": [], + "mask": "Test" + }, + { + "node_id": 557, + "label": 0, + "text": "Agents That Reason and Negotiate By Arguing The need for negotiation in multi-agent systems stems from the requirement for agents to solve the problems posed by their interdependence upon one another. Negotiation provides a solution to these problems by giving the agents the means to resolve their conflicting objectives, correct inconsistencies in their knowledge of other agents' world views, and coordinate a joint approach to domain tasks which benefits all the agents concerned. We propose a framework, based upon a system of argumentation, which permits agents to negotiate in order to establish acceptable ways of solving problems. The framework provides a formal model of argumentation-based reasoning and negotiation, details a design philosophy which ensures a clear link between the formal model and its practical instantiation, and describes a case study of this relationship for a particular class of architectures (namely those for belief-desire-intention agents). 1 Introduction An increasing number of software app...", + "neighbors": [ + 222, + 263, + 597, + 724, + 852, + 953, + 964, + 1051, + 1166, + 1204, + 1208 + ], + "mask": "Train" + }, + { + "node_id": 558, + "label": 1, + "text": "How Developmental Psychology and Robotics Complement Each Other This paper presents two complementary ideas relating the study of human development and the construction of intelligent artifacts. 
First, the use of developmental models will be a critical requirement in the construction of robotic systems that can acquire a large repertoire of motor, perceptual, and cognitive capabilities. Second, robotic systems can be used as a test-bed for evaluating models of human development much in the same way that simulation studies are currently used to evaluate cognitive models. To further explore these ideas, two examples from the author's own work will be presented: the use of developmental models of hand-eye coordination to simplify the task of learning to reach for a visual target and the use of a humanoid robot to evaluate models of normal and abnormal social skill development. Introduction Research on human development and research on the construction of intelligent artifacts can and should be complementary. Studies of human developm...",
    "neighbors": [],
    "mask": "Train"
  },
  {
    "node_id": 559,
    "label": 1,
    "text": "Parameter Learning of Logic Programs for Symbolic-statistical Modeling We propose a logical/mathematical framework for statistical parameter learning of parameterized logic programs, i.e. definite clause programs containing probabilistic facts with a parameterized distribution. It extends the traditional least Herbrand model semantics in logic programming to distribution semantics, possible world semantics with a probability distribution which is unconditionally applicable to arbitrary logic programs including ones for HMMs, PCFGs and Bayesian networks. We also propose a new EM algorithm, the graphical EM algorithm, that runs for a class of parameterized logic programs representing sequential decision processes where each decision is exclusive and independent. It runs on a new data structure called support graphs describing the logical relationship between observations and their explanations, and learns parameters by computing inside and outside probability generalized for logic programs. The complexity analysis shows that when combined with OLDT search for all explanations for observations, the graphical EM algorithm, despite its generality, has the same time complexity as existing EM algorithms, i.e. the Baum-Welch algorithm for HMMs, the Inside-Outside algorithm for PCFGs, and the one for singly connected Bayesian networks that have been developed independently in each research field. Learning experiments with PCFGs using two corpora of moderate size indicate that the graphical EM algorithm can significantly outperform the Inside-Outside algorithm.",
    "neighbors": [
      39
    ],
    "mask": "Validation"
  },
  {
    "node_id": 560,
    "label": 1,
    "text": "Structure Identification of Fuzzy Classifiers For complex and high-dimensional problems, data-driven identification of classifiers has to deal with structural issues like the selection of the relevant features and effective initial partition of the input domain. Therefore, the identification of fuzzy classifiers is a challenging topic. Decision-tree (DT) generation algorithms are effective in feature selection and extraction of crisp classification rules, hence they can be used for the initialization of fuzzy systems. Because fuzzy classifiers have much more flexible decision boundaries than DTs, fuzzy models can be more parsimonious than DTs.
Hence, to obtain a compact, easily interpretable and transparent classification system, a new structure identification algorithm is proposed, where genetic algorithm (GA) based parameter optimization of the DT initialized fuzzy sets is combined with similarity based rule base simplification algorithms. The performance of the approach is studied on specially designed artificial data. An application to the Cancer classification problem is also shown.",
    "neighbors": [
      828,
      1202
    ],
    "mask": "Train"
  },
  {
    "node_id": 561,
    "label": 2,
    "text": "A System For Automatic Personalized Tracking of Scientific Literature on the Web We introduce a system as part of the CiteSeer digital library project for automatic tracking of scientific literature that is relevant to a user\u2019s research interests. Unlike previous systems that use simple keyword matching, CiteSeer is able to track and recommend topically relevant papers even when keyword based query profiles fail. This is made possible through the use of a heterogeneous profile to represent user interests. These profiles include several representations, including content based relatedness measures. The CiteSeer tracking system is well integrated into the search and browsing facilities of CiteSeer, and provides the user with great flexibility in tuning a profile to better match his or her interests. The software for this system is available, and a sample database is online as a public service.",
    "neighbors": [
      43,
      70,
      314,
      453,
      596
    ],
    "mask": "Train"
  },
  {
    "node_id": 562,
    "label": 3,
    "text": "Reclustering of HEP Data in Object-Oriented Databases The Large Hadron Collider (LHC), built at CERN, will enter operation in 2005. The experiments at the LHC will generate some 5 PB of data per year, which are stored in an ODBMS. A good object clustering on the disk drives will be critical to achieve a high data throughput required by future analysis scenarios. This paper presents a new reclustering algorithm for HEP data that maximizes the read transfer rate for objects contained in multiple overlapping collections. It works by decomposing the stored objects into a number of chunks and rearranging them by means of heuristics solving the traveling salesman problem with Hamming distance. Furthermore, experimental results of a prototype are presented. Keywords: object-oriented databases, scientific databases, object clustering, query optimisation 1 Introduction The ATLAS experiment [1] at CERN, due to take data in the year 2005, will store approximately 1 PB (10^15 bytes) of data per year. Data taking is expected to last 15 or more yea...",
    "neighbors": [
      1019
    ],
    "mask": "Train"
  },
  {
    "node_id": 563,
    "label": 0,
    "text": "Towards UML-based Analysis and Design of Multi-Agent Systems The visual modeling facilities of the UML do not provide sufficient means to support the design of multi-agent systems. In this paper, we are investigating the development phases of requirements analysis, design, and code generation for multi-agent systems. In the requirements analysis phase, we are using extended use case diagrams to identify agents and their relationship to the environment. In the design phase, we are using stereotyped class and object diagrams to model different agent types and their related goals and strategies. While these diagrams define the static agent system architecture, dynamic agent behavior is modeled in statecharts with respect to the BDI agent approach.
Concerning code generation, we show how these diagrams can be used to generate code for CASA, our executable agent specification language, which is integrated into an existing multi-agent framework.", "neighbors": [ 941, 1249 ], "mask": "Train" }, { "node_id": 564, "label": 1, "text": "Improvement in a Lazy Context: An Operational Theory for Call-By-Need The standard implementation technique for lazy functional languages is call-by-need, which ensures that an argument to a function in any given call is evaluated at most once. A significant problem with call-by-need is that it is difficult --- even for compiler writers --- to predict the effects of program transformations. The traditional theories for lazy functional languages are based on call-by-name models, and offer no help in determining which transformations do indeed optimize a program. In this article we present an operational theory for call-by-need, based upon an improvement ordering on programs: M is improved by N if in all program-contexts C, when C[M] terminates then C[N] terminates at least as cheaply. We show that this improvement relation satisfies a \"context lemma\", and supports a rich inequational theory, subsuming the call-by-need lambda calculi of Ariola et al. [AFM+95]. The reduction-based call-by-need calculi are inadequate as a theory of lazy-program tran...", "neighbors": [], "mask": "Train" }, { "node_id": 565, "label": 4, "text": "Maintaining the Illusion of Interacting Within a 3D Virtual Space It is widely thought that, to a greater or lesser degree, a sense of presence may be induced in users of new and emerging media technologies, such as the Internet, digital television and cinema (supporting interaction), teleconferencing and 3D virtual reality systems. In this paper, it is argued that presence presupposes that participants are absorbed in the illusion of interacting within the visual spaces created by these media. That is, prior to the possibility of any inducement of presence, participants need to be absorbed in the illusion conveyed by the media. Without this, participants' attention is broken and the illusion is lost. Hence, the potential to induce presence in participants ceases. To encourage participants to lose sight of the means of representation and be drawn into the illusion conveyed by these media, this paper proposes the development of design principles to enhance participants' experience. In an attempt to inform design principles, this paper focuses on another artificial, although highly successful, visual medium: film. By way of example, this paper concentrates on one medium, virtual reality, and proposes design principles that attempt to maintain the illusion of interacting within 3D virtual space. This attempts to provide a platform, through a resourceful blend of hardware and software Virtual Reality (VR) enabling technologies, on which to support a well-designed virtual environment and hence from which the inducement of presence in participants may develop.", "neighbors": [ 678, 786 ], "mask": "Train" }, { "node_id": 566, "label": 1, "text": "On Concept Space and Hypothesis Space in Case-Based Learning Algorithms. In order to learn more about the behaviour of case-based reasoners as learning systems, we formalise a simple case-based learner as a PAC learning algorithm. We show that the case-based representation ⟨CB, σ⟩ is rich enough to express any Boolean function.
We define a family of simple case-based learning algorithms which use a single, fixed similarity measure and we give necessary and sufficient conditions for the consistency of these learning algorithms in terms of the chosen similarity measure. Finally, we consider the way in which these simple algorithms, when trained on target concepts from a restricted concept space, often output hypotheses which are outside the chosen concept space. A case study investigates this relationship between concept space and hypothesis space and concludes that the case-based algorithm studied is a less than optimal learning algorithm for the chosen, small concept space. 1 Introduction The performance of a case-based reasoning system [13] will chan...", "neighbors": [ 1149, 1259 ], "mask": "Test" }, { "node_id": 567, "label": 0, "text": "Intrusion Detection: A Bibliography This document contains more than 600 references, dated from 1980 to 2001. We undoubtedly have forgotten some important citations, either through oversight or ignorance. Moreover, errors may remain in the citations. Thus, we ask for your indulgence and, more importantly, for your help. Send us a note if you find any errors and let us know of any omissions.", "neighbors": [ 547, 620 ], "mask": "Validation" }, { "node_id": 568, "label": 0, "text": "Analysis and Design using MaSE and agentTool This paper provides an overview of the work being done at the Air Force Institute of Technology on the Multiagent Systems Engineering methodology and the associated agentTool environment. Our research is focused on discovering methods and techniques for engineering practical multiagent systems. It uses the abstraction provided by multiagent systems for developing intelligent, distributed software systems.", "neighbors": [ 573 ], "mask": "Train" }, { "node_id": 569, "label": 0, "text": "ATNoSFERES: a Model for Evolutive Agent Behaviors This paper introduces ATNoSFERES, a model aimed at designing evolutive and adaptive behaviors for agents or multi-agent systems. We first discuss briefly the main problems raised by classical evolutionary models, which are not intended to produce agents or behaviors but to solve problems. Then we provide detailed explanations about the model we propose and its components. We also show through a simple example how the system works, and give some experimental results. Finally, we discuss the features of our model and propose extensions.", "neighbors": [], "mask": "Validation" }, { "node_id": 570, "label": 2, "text": "Finite-state approaches to Web information extraction Introduction An information agent is a distributed system that receives a goal through its user interface, gathers information relevant to this goal from a variety of sources, processes this content as appropriate, and delivers the results to the users. We focus on the second stage in this generic architecture. We survey a variety of information extraction techniques that enable information agents to automatically gather information from heterogeneous sources. For example, consider an agent that mediates package-delivery requests. To satisfy such requests, the agent might need to retrieve address information from geographic services, ask an advertising service for freight forwarders that serve the destination, request quotes from the relevant freight forwarders, retrieve duties and legal constraints from government sites, get weather information to estimate transportation delays, etc.
Information extraction (IE) is a form of shallow document processing that involves populating", "neighbors": [ 279, 412, 855, 1215, 1233 ], "mask": "Test" }, { "node_id": 571, "label": 2, "text": "Developing Language Processing Components with GATE (a User Guide) Contents: 1 Introduction (1.1 How to Use This Text; 1.2 Context; 1.3 Overview; 1.4 Structure of the Book; 1.5 Further Reading); 2 How To... (2.1 Download GATE; 2.2 Install and Run GATE; 2.3 [D,F] Configure GATE; 2.4 Build GATE; 2.5 [D,F] Create a New CREOLE Resource; 2.6