dataset (string, 6 distinct values) | query (dict) | candidates (list) |
---|---|---|
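Each row pairs a `query` paper (doc_id, title, abstract, corpus_id) with a list of `candidates` carrying the same fields plus a binary `score` (1 = relevant to the query, 0 = not). Below is a minimal sketch of how such rows might be consumed; the JSON Lines export and the file name `arnetminer.jsonl` are assumptions for illustration, not part of the dataset itself.

```python
import json

# Minimal sketch: iterate over query/candidate records and report how many
# candidates are labeled relevant (score == 1) for each query.
# Assumes a hypothetical JSONL export with one record per line, shaped like
# the rows below: {"dataset": ..., "query": {...}, "candidates": [...]}.
with open("arnetminer.jsonl", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        query = record["query"]
        candidates = record["candidates"]
        relevant = [c for c in candidates if c["score"] == 1]
        print(
            f'{query["doc_id"]} "{query["title"]}": '
            f"{len(relevant)}/{len(candidates)} candidates relevant"
        )
```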
arnetminer | {
"doc_id": "2877433",
"title": "Mother, May I? OWL-based Policy Management at NASA",
"abstract": "Among the challenges of managing NASA’s information systems is the management (that is, creation, coordination, verification, validation, and enforcement) of many different role-based access control policies and mechanisms. This paper describes an actual data federation use case that demonstrates the inefficiencies created by this challenge and presents an approach to reducing these inefficiencies using OWL. The focus is on the representation of XACML policies in DL, but the approach generalizes to other policy languages.",
"corpus_id": 2877433
} | [
{
"doc_id": "17574900",
"title": "Opening, Closing Worlds - On Integrity Constraints",
"abstract": "In many data-centric applications it is desirable to use OWL as an expressive schema language where one expresses constraints that need to be satisfied by the (instance) data. However, some features of OWL’s semantics, specifically the Open World Assumption (OWL) and not having a Unique Name Assumption (UNA), make it hard to use OWL for this task. What would trigger a constraint violation in a closed world system like a relational database leads to new inferences in OWL. In this paper, we explore how OWL can be extended to accommodate integrity constraints and discuss several alternatives for the syntax and semantics of such an extension. We primarily focus on applications in the Supply Chain Management (SCM) domain but we are also gathering use cases and requirements from many other application areas to assess which of these alternatives provides the best solution.",
"corpus_id": 17574900,
"score": 1
},
{
"doc_id": "9634553",
"title": "Managing Change: An Ontology Version Control System",
"abstract": "In this paper we present the basic requirements and initial design of a system which manages and facilitates changes to an OWL ontology in a multi-editor environment. This system uses a centralized client-server architecture in which the server maintains the current state and full history of all managed ontologies. Clients can access the current ontology version, all historical revisions, and differences between arbitrary revisions, as well as metadata associated with revisions. This system will be used by many other ontology based services, such as incremental reasoning, collaborative ontology development, advanced ontology search, and ontology module extraction. Taken holistically, this network of services will provide a rich environment for the development and management of ontology based information systems. 1 Motivation and Requirements We need for a system that manages access to a changing ontology. This requirement is experienced by a variety applications with different stakeholders. An illustrative use case is presented below. A large distributed organization requires integration and alignment of many heterogeneous data sources and information artifacts. They facilitate such integration by employing one or more expressive OWL ontologies that exist in defined relations to data sources, information artifacts, and an enterprise conceptual model. These ontologies, as a critical infrastructure components, have stakeholders throughout the organization and outside its boundaries. Further, they are developed and maintained concurrently by many parties. Individual stakeholders participate in the ontology engineering process in different ways. Some are primarily consumers, but may make detailed edits to areas of the ontologies critical to them. Others are charged with maintaining high-level ontology coherence and use an integrated ontology development environment, such as Protege-OWL, to collaborate with similar editors in realtime, leveraging tools to maintain a dynamic view of the ontology. All stakeholders rely on the ontologies being available and consistent across the organization. This use case illustrates a set of requirements: Client Performance The network is a potential bottleneck of any distributed or client-server system, but the critical work of ontology development is",
"corpus_id": 9634553,
"score": 1
},
{
"doc_id": "18665149",
"title": "Owlgres: A Scalable OWL Reasoner",
"abstract": "We present Owlgres, a DL-Lite reasoner implementation written for PostgreSQL, a mature open source database. Owlgres is an OWL reasoner that provides consistency checking and conjunctive query services, supports DL-LiteR as well as the OWL sameAs construct, and is not limited to PostgreSQL. We discuss the implementation with special focus on sameAs and the supported subset of the SPARQL language. Emphasis is given to the implemented optimization techniques which resulted in significant performance improvement. Based on a confidential NASA dataset and part of the DBpedia dataset, we show a typical use case for Owlgres, i.e. given a terminology and a dataset, Owlgres provides querying on a persistent knowledge base with reasoning at query time in the expressivity of DL-LiteR.",
"corpus_id": 18665149,
"score": 1
},
{
"doc_id": "38418561",
"title": "Cryptographic Information Recovery Using Key Recover",
"abstract": "A note to readers: the authors consider that all techniques for key recovery may be viewed as having a position on a broad continuum. The only way to avoid misunderstanding is to identify particular techniques by listing their specific characteristics, rather than using multiply-defined terms. Different characteristics have advantages in different environments, so there is no 'best' key recovery technique. The paper notes some typical advantages and disadvantages of several techniques but should not be construed as an endorsement of any particular technique relative to another. Similarly, the authors recognize that terminology may vary from country to country. Cryptographic information recovery techniques provide for the recovery of plaintext from encrypted data. This (exceptional) need arises when the cryptographic keys involved are not available. For example, data files may have been encrypted using a key derived from a now forgotten or misplaced password. Overlapping and confusing terminology has been applied to the techniques of information recovery, including key escrow, key backup, key recovery, and trusted third party (ttp), all of which refer to methods for retrieving, recovering, or re-constructing keys. Even the underlying concept of 'trust' has broad meaning. Instead of attempting to 'define' these terms precisely, a continuum of functionality is defined. Several generic technologies, together with desirable characteristics of cryptographic information/key recovery techniques, are described.",
"corpus_id": 38418561,
"score": 0
},
{
"doc_id": "17466351",
"title": "Aristotle: a system for development of program analysis based tools",
"abstract": "Aristotle provides program analysis information, and supports the development of software engineering tools. Aristotle's front end consists of parsers that gather control flow, local dataflow and symbol table information for procedural language programs. We implemented a parser for C by incorporating analysis routines into the GNU C parser; a C++ parser is being implemented using similar techniques. Aristotle tools use the data provided by the parsers to perform a variety of tasks, such as dataflow and control dependence analysis, dataflow testing, graph construction and graph viewing. Most of Aristotle's components function on single procedures and entire programs. Parsers and tools use database handler routines to store information in, and retrieve it from, a central database. A user interface provides interactive menu-driven access to tools, and users can view results textually or graphically. Many tools can also be invoked directly from applications programs, which facilitates the development of new tools. To assist with system development and maintenance, we are also creating support tools for managing bug and test suite databases.",
"corpus_id": 17466351,
"score": 0
},
{
"doc_id": "29590207",
"title": "Combining spatial and scale-space techniques for edge detection to provide a spatially adaptive wavelet-based noise filtering algorithm",
"abstract": "New methods for detecting edges in an image using spatial and scale-space domains are proposed. A priori knowledge about geometrical characteristics of edges is used to assign a probability factor to the chance of any pixel being on an edge. An improved double thresholding technique is introduced for spatial domain filtering. Probabilities that pixels belong to a given edge are assigned based on pixel similarity across gradient amplitudes, gradient phases and edge connectivity. The scale-space approach uses dynamic range compression to allow wavelet correlation over a wider range of scales. A probabilistic formulation is used to combine the results obtained from filtering in each domain to provide a final edge probability image which has the advantages of both spatial and scale-space domain methods. Decomposing this edge probability image with the same wavelet as the original image permits the generation of adaptive filters that can recognize the characteristics of the edges in all wavelet detail and approximation images regardless of scale. These matched filters permit significant reduction in image noise without contributing to edge distortion. The spatially adaptive wavelet noise-filtering algorithm is qualitatively and quantitatively compared to a frequency domain and two wavelet based noise suppression algorithms using both natural and computer generated noisy images.",
"corpus_id": 29590207,
"score": 0
},
{
"doc_id": "11798209",
"title": "A Hardware-Assisted Tool for Fast, Full Code Coverage Analysis",
"abstract": "Software reliability can be improved by using code coverage analysis to ensure that all statements are executed at least once during the testing process. When full code coverage information is obtained through software code instrumentation, high runtime performance overheads are incurred. Techniques that perform deferred or selective code instrumentation have shown success in reducing run-time overheads; however, the execution profile remains distorted. Techniques have been proposed that use internal processor hardware during the data gathering process, e.g. program counter logging. These approaches have been shown to reduce overheads; but currently trade swift execution for sparse code coverage. By combining the branch-vector hardware designed for debugging modern embedded processors with on-demand code coverage analysis, we have developed a new tool which provides full code coverage, while minimizing performance distortions. Experimental results show a performance impact of only 8 - 12%, while still providing 100% code coverage information.",
"corpus_id": 11798209,
"score": 0
},
{
"doc_id": "167449",
"title": "Rigorous specification and conformance testing techniques for network protocols, as applied to TCP, UDP, and sockets",
"abstract": "Network protocols are hard to implement correctly. Despite the existence of RFCs and other standards, implementations often have subtle differences and bugs. One reason for this is that the specifications are typically informal, and hence inevitably contain ambiguities. Conformance testing against such specifications is challenging.In this paper we present a practical technique for rigorous protocol specification that supports specification-based testing. We have applied it to TCP, UDP, and the Sockets API, developing a detailed 'post-hoc' specification that accurately reflects the behaviour of several existing implementations (FreeBSD 4.6, Linux 2.4.20-8, and Windows XP SP1). The development process uncovered a number of differences between and infelicities in these implementations.Our experience shows for the first time that rigorous specification is feasible for protocols as complex as TCP@. We argue that the technique is also applicable 'pre-hoc', in the design phase of new protocols. We discuss how such a design-for-test approach should influence protocol development, leading to protocol specifications that are both unambiguous and clear, and to high-quality implementations that can be tested directly against those specifications.",
"corpus_id": 167449,
"score": 0
}
] |
arnetminer | {
"doc_id": "46649437",
"title": "Muscular Effects of VDT Work",
"abstract": null,
"corpus_id": 46649437
} | [
{
"doc_id": "16545052",
"title": "The Physical, Mental, and Emotional Stress Effects of VDT Work",
"abstract": "Are their backs killing them? Do they growl when a supervisor walks by? Something can be done for folks who spend their days in front of VDTs.",
"corpus_id": 16545052,
"score": 1
},
{
"doc_id": "20004685",
"title": "Mental and Emotional Issues in VDT Work",
"abstract": null,
"corpus_id": 20004685,
"score": 1
},
{
"doc_id": "16085618",
"title": "A TDD approach to introducing students to embedded programming",
"abstract": "Learning embedded programming is a highly demanding exercise. The beginner is bombarded with complexity from the start: embedded production based around a myriad of C++ constructs with low-level elements integrated onto ever more complicated processor architectures. The picture is further compounded by tool support having unfamiliar roles and appearances from previous student experiences. This demanding situation often has the student bewildered; seeking for \"a crutch\" or the simplest way forward regardless of the overall consequences. To control this potentially chaotic picture, the instructor needs to introduce devices to combat this complexity. We argue that test driven development (TDD) should become the instructor's principal weapon in this fight. Reasons for this belief combined with our, and the students', experiences with this novel approach are discussed.",
"corpus_id": 16085618,
"score": 0
},
{
"doc_id": "38418561",
"title": "Cryptographic Information Recovery Using Key Recover",
"abstract": "A note to readers: the authors consider that all techniques for key recovery may be viewed as having a position on a broad continuum. The only way to avoid misunderstanding is to identify particular techniques by listing their specific characteristics, rather than using multiply-defined terms. Different characteristics have advantages in different environments, so there is no 'best' key recovery technique. The paper notes some typical advantages and disadvantages of several techniques but should not be construed as an endorsement of any particular technique relative to another. Similarly, the authors recognize that terminology may vary from country to country. Cryptographic information recovery techniques provide for the recovery of plaintext from encrypted data. This (exceptional) need arises when the cryptographic keys involved are not available. For example, data files may have been encrypted using a key derived from a now forgotten or misplaced password. Overlapping and confusing terminology has been applied to the techniques of information recovery, including key escrow, key backup, key recovery, and trusted third party (ttp), all of which refer to methods for retrieving, recovering, or re-constructing keys. Even the underlying concept of 'trust' has broad meaning. Instead of attempting to 'define' these terms precisely, a continuum of functionality is defined. Several generic technologies, together with desirable characteristics of cryptographic information/key recovery techniques, are described.",
"corpus_id": 38418561,
"score": 0
},
{
"doc_id": "20760479",
"title": "The CORC experience: survey of founding libraries. Part II",
"abstract": null,
"corpus_id": 20760479,
"score": 0
},
{
"doc_id": "8437472",
"title": "Performance and Capacity Analysis of UWB Networks over 60GHz WPAN Channel",
"abstract": "In this paper we evaluate the system performance and capacity of single carrier ultra-wideband (UWB) networks over 60GHz wireless personal area network (WPAN) channel. Symbol error rate is derived for both single user and multiple access scenario with a general system capacity and performance evaluation approach based on moment generation function. System outage probability and network throughput performance are also studied. Based on the current IEEE 802.15.3 WPAN standard work, different transmission scenarios have been explored and the performances with RAKE reception has been obtained. The channel model is also based on the recent work of IEEE 802.15 WPAN group. Numerical results are given to illustrate the system performance.",
"corpus_id": 8437472,
"score": 0
},
{
"doc_id": "28518943",
"title": "The CORC experience: survey of founding libraries. Part I",
"abstract": "This survey, conducted in late 1999, found that CORC founding libraries shared a strong interest in controlling Internet resources and finding ways to catalog such resources quickly. Many cataloged in MARC. Although only a small number of them experimented with Dublin Core, many of them wanted to explore its potential for organizing Internet resources. Other metadata schemes were also used by some libraries. Overall, the founding libraries considered their CORC experience positive, but had several concerns. Their experience suggests that more work is needed to make fast, automated cataloging a reality. Since the findings of this study reflect experience with CORC at the developmental stage, the researchers proposed that CORC usage be monitored to identify trends in organizing Internet resources. A survey of CORC subscribers could be conducted to understand usage patterns and guide CORC’s development and improvement.",
"corpus_id": 28518943,
"score": 0
}
] |
arnetminer | {
"doc_id": "8437472",
"title": "Performance and Capacity Analysis of UWB Networks over 60GHz WPAN Channel",
"abstract": "In this paper we evaluate the system performance and capacity of single carrier ultra-wideband (UWB) networks over 60GHz wireless personal area network (WPAN) channel. Symbol error rate is derived for both single user and multiple access scenario with a general system capacity and performance evaluation approach based on moment generation function. System outage probability and network throughput performance are also studied. Based on the current IEEE 802.15.3 WPAN standard work, different transmission scenarios have been explored and the performances with RAKE reception has been obtained. The channel model is also based on the recent work of IEEE 802.15 WPAN group. Numerical results are given to illustrate the system performance.",
"corpus_id": 8437472
} | [
{
"doc_id": "2587153",
"title": "Towards context-aware face recognition",
"abstract": "In this paper, we focus on the use of context-aware, collaborative filtering, machine-learning techniques that leverage automatically sensed and inferred contextual metadata together with computer vision analysis of image content to make accurate predictions about the human subjects depicted in cameraphone photos. We apply Sparse-Factor Analysis (SFA) to both the contextual metadata gathered in the MMM2 system and the results of PCA (Principal Components Analysis) of the photo content to achieve a 60% face recognition accuracy of people depicted in our cameraphone photos, which is 40% better than media analysis alone. In short, we use context-aware media analysis to solve the face recognition problem for cameraphone photos.",
"corpus_id": 2587153,
"score": 1
},
{
"doc_id": "17466351",
"title": "Aristotle: a system for development of program analysis based tools",
"abstract": "Aristotle provides program analysis information, and supports the development of software engineering tools. Aristotle's front end consists of parsers that gather control flow, local dataflow and symbol table information for procedural language programs. We implemented a parser for C by incorporating analysis routines into the GNU C parser; a C++ parser is being implemented using similar techniques. Aristotle tools use the data provided by the parsers to perform a variety of tasks, such as dataflow and control dependence analysis, dataflow testing, graph construction and graph viewing. Most of Aristotle's components function on single procedures and entire programs. Parsers and tools use database handler routines to store information in, and retrieve it from, a central database. A user interface provides interactive menu-driven access to tools, and users can view results textually or graphically. Many tools can also be invoked directly from applications programs, which facilitates the development of new tools. To assist with system development and maintenance, we are also creating support tools for managing bug and test suite databases.",
"corpus_id": 17466351,
"score": 0
},
{
"doc_id": "9634553",
"title": "Managing Change: An Ontology Version Control System",
"abstract": "In this paper we present the basic requirements and initial design of a system which manages and facilitates changes to an OWL ontology in a multi-editor environment. This system uses a centralized client-server architecture in which the server maintains the current state and full history of all managed ontologies. Clients can access the current ontology version, all historical revisions, and differences between arbitrary revisions, as well as metadata associated with revisions. This system will be used by many other ontology based services, such as incremental reasoning, collaborative ontology development, advanced ontology search, and ontology module extraction. Taken holistically, this network of services will provide a rich environment for the development and management of ontology based information systems. 1 Motivation and Requirements We need for a system that manages access to a changing ontology. This requirement is experienced by a variety applications with different stakeholders. An illustrative use case is presented below. A large distributed organization requires integration and alignment of many heterogeneous data sources and information artifacts. They facilitate such integration by employing one or more expressive OWL ontologies that exist in defined relations to data sources, information artifacts, and an enterprise conceptual model. These ontologies, as a critical infrastructure components, have stakeholders throughout the organization and outside its boundaries. Further, they are developed and maintained concurrently by many parties. Individual stakeholders participate in the ontology engineering process in different ways. Some are primarily consumers, but may make detailed edits to areas of the ontologies critical to them. Others are charged with maintaining high-level ontology coherence and use an integrated ontology development environment, such as Protege-OWL, to collaborate with similar editors in realtime, leveraging tools to maintain a dynamic view of the ontology. All stakeholders rely on the ontologies being available and consistent across the organization. This use case illustrates a set of requirements: Client Performance The network is a potential bottleneck of any distributed or client-server system, but the critical work of ontology development is",
"corpus_id": 9634553,
"score": 0
},
{
"doc_id": "8493649",
"title": "Empirical analysis of the correlation between amount-of-reuse metrics in the C programming language",
"abstract": "Disclosed are the magnesium salts of N-carboxyamino acids, a process for their preparation, and their use of lubricating oil additives.",
"corpus_id": 8493649,
"score": 0
},
{
"doc_id": "17574900",
"title": "Opening, Closing Worlds - On Integrity Constraints",
"abstract": "In many data-centric applications it is desirable to use OWL as an expressive schema language where one expresses constraints that need to be satisfied by the (instance) data. However, some features of OWL’s semantics, specifically the Open World Assumption (OWL) and not having a Unique Name Assumption (UNA), make it hard to use OWL for this task. What would trigger a constraint violation in a closed world system like a relational database leads to new inferences in OWL. In this paper, we explore how OWL can be extended to accommodate integrity constraints and discuss several alternatives for the syntax and semantics of such an extension. We primarily focus on applications in the Supply Chain Management (SCM) domain but we are also gathering use cases and requirements from many other application areas to assess which of these alternatives provides the best solution.",
"corpus_id": 17574900,
"score": 0
},
{
"doc_id": "18665149",
"title": "Owlgres: A Scalable OWL Reasoner",
"abstract": "We present Owlgres, a DL-Lite reasoner implementation written for PostgreSQL, a mature open source database. Owlgres is an OWL reasoner that provides consistency checking and conjunctive query services, supports DL-LiteR as well as the OWL sameAs construct, and is not limited to PostgreSQL. We discuss the implementation with special focus on sameAs and the supported subset of the SPARQL language. Emphasis is given to the implemented optimization techniques which resulted in significant performance improvement. Based on a confidential NASA dataset and part of the DBpedia dataset, we show a typical use case for Owlgres, i.e. given a terminology and a dataset, Owlgres provides querying on a persistent knowledge base with reasoning at query time in the expressivity of DL-LiteR.",
"corpus_id": 18665149,
"score": 0
}
] |
arnetminer | {
"doc_id": "18665149",
"title": "Owlgres: A Scalable OWL Reasoner",
"abstract": "We present Owlgres, a DL-Lite reasoner implementation written for PostgreSQL, a mature open source database. Owlgres is an OWL reasoner that provides consistency checking and conjunctive query services, supports DL-LiteR as well as the OWL sameAs construct, and is not limited to PostgreSQL. We discuss the implementation with special focus on sameAs and the supported subset of the SPARQL language. Emphasis is given to the implemented optimization techniques which resulted in significant performance improvement. Based on a confidential NASA dataset and part of the DBpedia dataset, we show a typical use case for Owlgres, i.e. given a terminology and a dataset, Owlgres provides querying on a persistent knowledge base with reasoning at query time in the expressivity of DL-LiteR.",
"corpus_id": 18665149
} | [
{
"doc_id": "9634553",
"title": "Managing Change: An Ontology Version Control System",
"abstract": "In this paper we present the basic requirements and initial design of a system which manages and facilitates changes to an OWL ontology in a multi-editor environment. This system uses a centralized client-server architecture in which the server maintains the current state and full history of all managed ontologies. Clients can access the current ontology version, all historical revisions, and differences between arbitrary revisions, as well as metadata associated with revisions. This system will be used by many other ontology based services, such as incremental reasoning, collaborative ontology development, advanced ontology search, and ontology module extraction. Taken holistically, this network of services will provide a rich environment for the development and management of ontology based information systems. 1 Motivation and Requirements We need for a system that manages access to a changing ontology. This requirement is experienced by a variety applications with different stakeholders. An illustrative use case is presented below. A large distributed organization requires integration and alignment of many heterogeneous data sources and information artifacts. They facilitate such integration by employing one or more expressive OWL ontologies that exist in defined relations to data sources, information artifacts, and an enterprise conceptual model. These ontologies, as a critical infrastructure components, have stakeholders throughout the organization and outside its boundaries. Further, they are developed and maintained concurrently by many parties. Individual stakeholders participate in the ontology engineering process in different ways. Some are primarily consumers, but may make detailed edits to areas of the ontologies critical to them. Others are charged with maintaining high-level ontology coherence and use an integrated ontology development environment, such as Protege-OWL, to collaborate with similar editors in realtime, leveraging tools to maintain a dynamic view of the ontology. All stakeholders rely on the ontologies being available and consistent across the organization. This use case illustrates a set of requirements: Client Performance The network is a potential bottleneck of any distributed or client-server system, but the critical work of ontology development is",
"corpus_id": 9634553,
"score": 1
},
{
"doc_id": "17574900",
"title": "Opening, Closing Worlds - On Integrity Constraints",
"abstract": "In many data-centric applications it is desirable to use OWL as an expressive schema language where one expresses constraints that need to be satisfied by the (instance) data. However, some features of OWL’s semantics, specifically the Open World Assumption (OWL) and not having a Unique Name Assumption (UNA), make it hard to use OWL for this task. What would trigger a constraint violation in a closed world system like a relational database leads to new inferences in OWL. In this paper, we explore how OWL can be extended to accommodate integrity constraints and discuss several alternatives for the syntax and semantics of such an extension. We primarily focus on applications in the Supply Chain Management (SCM) domain but we are also gathering use cases and requirements from many other application areas to assess which of these alternatives provides the best solution.",
"corpus_id": 17574900,
"score": 1
},
{
"doc_id": "18147449",
"title": "A flexible hardware encoder for low-density parity-check codes",
"abstract": "We describe a flexible hardware encoder for regular and irregular low-density parity-check (LDPC) codes. Although LDPC codes achieve better performance and lower decoding complexity than turbo codes, a major drawback of LDPC codes is their apparently high encoding complexity. Using an efficient encoding method proposed by Richardson and Urbanke, we present a hardware LDPC encoder with linear encoding complexity. The encoder is flexible, supporting arbitrary H matrices, rates and block lengths. An implementation for a rate 1/2 irregular length 2000 LDPC code encoder on a Xilinx Virtex-II XC2V4000-6 FPGA takes up 4% of the device. It runs at 143 MHz and has a throughput of 45 million codeword bits per second (or 22 million information bits per second) with a latency of 0.18 ms. The performance can be improved by exploiting parallelism: several instances of the encoder can be mapped onto the same chip to encode multiple message blocks concurrently. An implementation of 16 instances of the encoder on the same device at 82 MHz is capable of 410 million codeword bits per second, 80 times faster than an Intel Pentium-lV 2.4 GHz PC.",
"corpus_id": 18147449,
"score": 0
},
{
"doc_id": "28518943",
"title": "The CORC experience: survey of founding libraries. Part I",
"abstract": "This survey, conducted in late 1999, found that CORC founding libraries shared a strong interest in controlling Internet resources and finding ways to catalog such resources quickly. Many cataloged in MARC. Although only a small number of them experimented with Dublin Core, many of them wanted to explore its potential for organizing Internet resources. Other metadata schemes were also used by some libraries. Overall, the founding libraries considered their CORC experience positive, but had several concerns. Their experience suggests that more work is needed to make fast, automated cataloging a reality. Since the findings of this study reflect experience with CORC at the developmental stage, the researchers proposed that CORC usage be monitored to identify trends in organizing Internet resources. A survey of CORC subscribers could be conducted to understand usage patterns and guide CORC’s development and improvement.",
"corpus_id": 28518943,
"score": 0
},
{
"doc_id": "14773701",
"title": "Engineering with logic: HOL specification and symbolic-evaluation testing for TCP implementations",
"abstract": "The TCP/IP protocols and Sockets API underlie much of modern computation, but their semantics have historically been very complex and ill-defined. The real standard is the de facto one of the common implementations, including, for example, the 15,000--20,000 lines of C in the BSD implementation. Dealing rigorously with the behaviour of such bodies of code is challenging.We have recently developed a post-hoc specification of TCP, UDP, and Sockets that is rigorous, detailed, readable, has broad coverage, and is remarkably accurate. In this paper we describe the novel techniques that were required.Working within a general-purpose proof assistant (HOL), we developed language idioms (within higher-order logic) in which to write the specification: operational semantics with nondeterminism, time, system calls, monadic relational programming, etc. We followed an experimental semantics approach, validating the specification against several thousand traces captured from three implementations (FreeBSD, Linux, and WinXP). Many differences between these were identified, and a number of bugs. Validation was done using a special-purpose symbolic model checker programmed above HOL.We suggest that similar logic engineering techniques could be applied to future critical software infrastructure at design time, leading to cleaner designs and (via specification-based testing using a similar checker) more predictable implementations.",
"corpus_id": 14773701,
"score": 0
},
{
"doc_id": "22319012",
"title": "A Scalable Testing Framework for Location-Based Services",
"abstract": "A novel testing framework for location based services is introduced. In particular, the paper showcases a novel architecture for such a framework. The implementation of the framework illustrates both the functionality and the feasibility of the framework proposed and the utility of the architecture. The new framework is evaluated through comparison to several other methodologies currently available for the testing of location-based applications. A case study is presented in which the testing framework was applied to a typical mobile service tracking system. It is concluded that the proposed testing framework achieves the best coverage of the entire location based service testing problem of the currently available methodologies; being equipped to test the widest array of application attributes and allowing for the automation of testing activities.",
"corpus_id": 22319012,
"score": 0
},
{
"doc_id": "17466351",
"title": "Aristotle: a system for development of program analysis based tools",
"abstract": "Aristotle provides program analysis information, and supports the development of software engineering tools. Aristotle's front end consists of parsers that gather control flow, local dataflow and symbol table information for procedural language programs. We implemented a parser for C by incorporating analysis routines into the GNU C parser; a C++ parser is being implemented using similar techniques. Aristotle tools use the data provided by the parsers to perform a variety of tasks, such as dataflow and control dependence analysis, dataflow testing, graph construction and graph viewing. Most of Aristotle's components function on single procedures and entire programs. Parsers and tools use database handler routines to store information in, and retrieve it from, a central database. A user interface provides interactive menu-driven access to tools, and users can view results textually or graphically. Many tools can also be invoked directly from applications programs, which facilitates the development of new tools. To assist with system development and maintenance, we are also creating support tools for managing bug and test suite databases.",
"corpus_id": 17466351,
"score": 0
}
] |
arnetminer | {
"doc_id": "167449",
"title": "Rigorous specification and conformance testing techniques for network protocols, as applied to TCP, UDP, and sockets",
"abstract": "Network protocols are hard to implement correctly. Despite the existence of RFCs and other standards, implementations often have subtle differences and bugs. One reason for this is that the specifications are typically informal, and hence inevitably contain ambiguities. Conformance testing against such specifications is challenging.In this paper we present a practical technique for rigorous protocol specification that supports specification-based testing. We have applied it to TCP, UDP, and the Sockets API, developing a detailed 'post-hoc' specification that accurately reflects the behaviour of several existing implementations (FreeBSD 4.6, Linux 2.4.20-8, and Windows XP SP1). The development process uncovered a number of differences between and infelicities in these implementations.Our experience shows for the first time that rigorous specification is feasible for protocols as complex as TCP@. We argue that the technique is also applicable 'pre-hoc', in the design phase of new protocols. We discuss how such a design-for-test approach should influence protocol development, leading to protocol specifications that are both unambiguous and clear, and to high-quality implementations that can be tested directly against those specifications.",
"corpus_id": 167449
} | [
{
"doc_id": "14773701",
"title": "Engineering with logic: HOL specification and symbolic-evaluation testing for TCP implementations",
"abstract": "The TCP/IP protocols and Sockets API underlie much of modern computation, but their semantics have historically been very complex and ill-defined. The real standard is the de facto one of the common implementations, including, for example, the 15,000--20,000 lines of C in the BSD implementation. Dealing rigorously with the behaviour of such bodies of code is challenging.We have recently developed a post-hoc specification of TCP, UDP, and Sockets that is rigorous, detailed, readable, has broad coverage, and is remarkably accurate. In this paper we describe the novel techniques that were required.Working within a general-purpose proof assistant (HOL), we developed language idioms (within higher-order logic) in which to write the specification: operational semantics with nondeterminism, time, system calls, monadic relational programming, etc. We followed an experimental semantics approach, validating the specification against several thousand traces captured from three implementations (FreeBSD, Linux, and WinXP). Many differences between these were identified, and a number of bugs. Validation was done using a special-purpose symbolic model checker programmed above HOL.We suggest that similar logic engineering techniques could be applied to future critical software infrastructure at design time, leading to cleaner designs and (via specification-based testing using a similar checker) more predictable implementations.",
"corpus_id": 14773701,
"score": 1
},
{
"doc_id": "8240301",
"title": "E-RACE, A Hardware-Assisted Approach to Lockset-Based Data Race Detection for Embedded Products",
"abstract": "Limited research exists for identifying data races under the specific characteristics found in embedded systems. E-RACE is a new style of data-race identification tool which directly utilizes specialized hardware capabilities to monitor the flow of data and instructions. Compared to existing data race analysis approaches, the hardware-assisted E-RACE tool has advantages of recognizing data-race issues without requiring extensive software code instrumentation. The tool is integrated into an Embedded Unit Testing Driven Development Framework to encourage the construction of testable code and early identification of data-races.",
"corpus_id": 8240301,
"score": 0
},
{
"doc_id": "2587153",
"title": "Towards context-aware face recognition",
"abstract": "In this paper, we focus on the use of context-aware, collaborative filtering, machine-learning techniques that leverage automatically sensed and inferred contextual metadata together with computer vision analysis of image content to make accurate predictions about the human subjects depicted in cameraphone photos. We apply Sparse-Factor Analysis (SFA) to both the contextual metadata gathered in the MMM2 system and the results of PCA (Principal Components Analysis) of the photo content to achieve a 60% face recognition accuracy of people depicted in our cameraphone photos, which is 40% better than media analysis alone. In short, we use context-aware media analysis to solve the face recognition problem for cameraphone photos.",
"corpus_id": 2587153,
"score": 0
},
{
"doc_id": "7180834",
"title": "PickPocket: A computer billiards shark",
"abstract": "Billiards is a game of both strategy and physical skill. To succeed, a player must be able to select strong shots, and then execute them accurately and consistently on the table. Several robotic billiards players have recently been developed. These systems address the task of executing shots on a physical table, but so far have incorporated little strategic reasoning. They require artificial intelligence to select the 'best' shot taking into account the accuracy of the robot, the noise inherent in the domain, the continuous nature of the search space, the difficulty of the shot, and the goal of maximizing the chances of winning. This article describes the program PickPocket, the winner of the simulated 8-ball tournaments at the 10th and 11th Computer Olympiad competitions. PickPocket is based on the traditional search framework, familiar from games such as chess, adapted to the continuous stochastic domain of billiards. Experimental results are presented exploring the properties of two search algorithms, Monte-Carlo search and Probabilistic search.",
"corpus_id": 7180834,
"score": 0
},
{
"doc_id": "9634553",
"title": "Managing Change: An Ontology Version Control System",
"abstract": "In this paper we present the basic requirements and initial design of a system which manages and facilitates changes to an OWL ontology in a multi-editor environment. This system uses a centralized client-server architecture in which the server maintains the current state and full history of all managed ontologies. Clients can access the current ontology version, all historical revisions, and differences between arbitrary revisions, as well as metadata associated with revisions. This system will be used by many other ontology based services, such as incremental reasoning, collaborative ontology development, advanced ontology search, and ontology module extraction. Taken holistically, this network of services will provide a rich environment for the development and management of ontology based information systems. 1 Motivation and Requirements We need for a system that manages access to a changing ontology. This requirement is experienced by a variety applications with different stakeholders. An illustrative use case is presented below. A large distributed organization requires integration and alignment of many heterogeneous data sources and information artifacts. They facilitate such integration by employing one or more expressive OWL ontologies that exist in defined relations to data sources, information artifacts, and an enterprise conceptual model. These ontologies, as a critical infrastructure components, have stakeholders throughout the organization and outside its boundaries. Further, they are developed and maintained concurrently by many parties. Individual stakeholders participate in the ontology engineering process in different ways. Some are primarily consumers, but may make detailed edits to areas of the ontologies critical to them. Others are charged with maintaining high-level ontology coherence and use an integrated ontology development environment, such as Protege-OWL, to collaborate with similar editors in realtime, leveraging tools to maintain a dynamic view of the ontology. All stakeholders rely on the ontologies being available and consistent across the organization. This use case illustrates a set of requirements: Client Performance The network is a potential bottleneck of any distributed or client-server system, but the critical work of ontology development is",
"corpus_id": 9634553,
"score": 0
},
{
"doc_id": "17466351",
"title": "Aristotle: a system for development of program analysis based tools",
"abstract": "Aristotle provides program analysis information, and supports the development of software engineering tools. Aristotle's front end consists of parsers that gather control flow, local dataflow and symbol table information for procedural language programs. We implemented a parser for C by incorporating analysis routines into the GNU C parser; a C++ parser is being implemented using similar techniques. Aristotle tools use the data provided by the parsers to perform a variety of tasks, such as dataflow and control dependence analysis, dataflow testing, graph construction and graph viewing. Most of Aristotle's components function on single procedures and entire programs. Parsers and tools use database handler routines to store information in, and retrieve it from, a central database. A user interface provides interactive menu-driven access to tools, and users can view results textually or graphically. Many tools can also be invoked directly from applications programs, which facilitates the development of new tools. To assist with system development and maintenance, we are also creating support tools for managing bug and test suite databases.",
"corpus_id": 17466351,
"score": 0
}
] |
arnetminer | {
"doc_id": "26971713",
"title": "Architecture Design for Globally Distributed Projects",
"abstract": "This paper talks through the practices and infrastructure that was used on the experimental Global Studio Project (GSP). While the architecture activities are highlighted, related practices such as project management, requirements engineering and integration and test in a distributed environment will also be discussed as lessons learned.",
"corpus_id": 26971713
} | [
{
"doc_id": "6404055",
"title": "Risk Mitigation Tactics for Planning and Monitoring Global Software Development Projects",
"abstract": "This tutorial describes a structured approach for determining the GSD related risks specific to a given project, selecting appropriate tactics for addressing these risks, and suggesting ways in which the level of risk can be monitored during the execution of the project.",
"corpus_id": 6404055,
"score": 1
},
{
"doc_id": "22319012",
"title": "A Scalable Testing Framework for Location-Based Services",
"abstract": "A novel testing framework for location based services is introduced. In particular, the paper showcases a novel architecture for such a framework. The implementation of the framework illustrates both the functionality and the feasibility of the framework proposed and the utility of the architecture. The new framework is evaluated through comparison to several other methodologies currently available for the testing of location-based applications. A case study is presented in which the testing framework was applied to a typical mobile service tracking system. It is concluded that the proposed testing framework achieves the best coverage of the entire location based service testing problem of the currently available methodologies; being equipped to test the widest array of application attributes and allowing for the automation of testing activities.",
"corpus_id": 22319012,
"score": 0
},
{
"doc_id": "29590207",
"title": "Combining spatial and scale-space techniques for edge detection to provide a spatially adaptive wavelet-based noise filtering algorithm",
"abstract": "New methods for detecting edges in an image using spatial and scale-space domains are proposed. A priori knowledge about geometrical characteristics of edges is used to assign a probability factor to the chance of any pixel being on an edge. An improved double thresholding technique is introduced for spatial domain filtering. Probabilities that pixels belong to a given edge are assigned based on pixel similarity across gradient amplitudes, gradient phases and edge connectivity. The scale-space approach uses dynamic range compression to allow wavelet correlation over a wider range of scales. A probabilistic formulation is used to combine the results obtained from filtering in each domain to provide a final edge probability image which has the advantages of both spatial and scale-space domain methods. Decomposing this edge probability image with the same wavelet as the original image permits the generation of adaptive filters that can recognize the characteristics of the edges in all wavelet detail and approximation images regardless of scale. These matched filters permit significant reduction in image noise without contributing to edge distortion. The spatially adaptive wavelet noise-filtering algorithm is qualitatively and quantitatively compared to a frequency domain and two wavelet based noise suppression algorithms using both natural and computer generated noisy images.",
"corpus_id": 29590207,
"score": 0
},
{
"doc_id": "18665149",
"title": "Owlgres: A Scalable OWL Reasoner",
"abstract": "We present Owlgres, a DL-Lite reasoner implementation written for PostgreSQL, a mature open source database. Owlgres is an OWL reasoner that provides consistency checking and conjunctive query services, supports DL-LiteR as well as the OWL sameAs construct, and is not limited to PostgreSQL. We discuss the implementation with special focus on sameAs and the supported subset of the SPARQL language. Emphasis is given to the implemented optimization techniques which resulted in significant performance improvement. Based on a confidential NASA dataset and part of the DBpedia dataset, we show a typical use case for Owlgres, i.e. given a terminology and a dataset, Owlgres provides querying on a persistent knowledge base with reasoning at query time in the expressivity of DL-LiteR.",
"corpus_id": 18665149,
"score": 0
},
{
"doc_id": "7180834",
"title": "PickPocket: A computer billiards shark",
"abstract": "Billiards is a game of both strategy and physical skill. To succeed, a player must be able to select strong shots, and then execute them accurately and consistently on the table. Several robotic billiards players have recently been developed. These systems address the task of executing shots on a physical table, but so far have incorporated little strategic reasoning. They require artificial intelligence to select the 'best' shot taking into account the accuracy of the robot, the noise inherent in the domain, the continuous nature of the search space, the difficulty of the shot, and the goal of maximizing the chances of winning. This article describes the program PickPocket, the winner of the simulated 8-ball tournaments at the 10th and 11th Computer Olympiad competitions. PickPocket is based on the traditional search framework, familiar from games such as chess, adapted to the continuous stochastic domain of billiards. Experimental results are presented exploring the properties of two search algorithms, Monte-Carlo search and Probabilistic search.",
"corpus_id": 7180834,
"score": 0
},
{
"doc_id": "9634553",
"title": "Managing Change: An Ontology Version Control System",
"abstract": "In this paper we present the basic requirements and initial design of a system which manages and facilitates changes to an OWL ontology in a multi-editor environment. This system uses a centralized client-server architecture in which the server maintains the current state and full history of all managed ontologies. Clients can access the current ontology version, all historical revisions, and differences between arbitrary revisions, as well as metadata associated with revisions. This system will be used by many other ontology based services, such as incremental reasoning, collaborative ontology development, advanced ontology search, and ontology module extraction. Taken holistically, this network of services will provide a rich environment for the development and management of ontology based information systems. 1 Motivation and Requirements We need for a system that manages access to a changing ontology. This requirement is experienced by a variety applications with different stakeholders. An illustrative use case is presented below. A large distributed organization requires integration and alignment of many heterogeneous data sources and information artifacts. They facilitate such integration by employing one or more expressive OWL ontologies that exist in defined relations to data sources, information artifacts, and an enterprise conceptual model. These ontologies, as a critical infrastructure components, have stakeholders throughout the organization and outside its boundaries. Further, they are developed and maintained concurrently by many parties. Individual stakeholders participate in the ontology engineering process in different ways. Some are primarily consumers, but may make detailed edits to areas of the ontologies critical to them. Others are charged with maintaining high-level ontology coherence and use an integrated ontology development environment, such as Protege-OWL, to collaborate with similar editors in realtime, leveraging tools to maintain a dynamic view of the ontology. All stakeholders rely on the ontologies being available and consistent across the organization. This use case illustrates a set of requirements: Client Performance The network is a potential bottleneck of any distributed or client-server system, but the critical work of ontology development is",
"corpus_id": 9634553,
"score": 0
}
] |
arnetminer | {
"doc_id": "9634553",
"title": "Managing Change: An Ontology Version Control System",
"abstract": "In this paper we present the basic requirements and initial design of a system which manages and facilitates changes to an OWL ontology in a multi-editor environment. This system uses a centralized client-server architecture in which the server maintains the current state and full history of all managed ontologies. Clients can access the current ontology version, all historical revisions, and differences between arbitrary revisions, as well as metadata associated with revisions. This system will be used by many other ontology based services, such as incremental reasoning, collaborative ontology development, advanced ontology search, and ontology module extraction. Taken holistically, this network of services will provide a rich environment for the development and management of ontology based information systems. 1 Motivation and Requirements We need for a system that manages access to a changing ontology. This requirement is experienced by a variety applications with different stakeholders. An illustrative use case is presented below. A large distributed organization requires integration and alignment of many heterogeneous data sources and information artifacts. They facilitate such integration by employing one or more expressive OWL ontologies that exist in defined relations to data sources, information artifacts, and an enterprise conceptual model. These ontologies, as a critical infrastructure components, have stakeholders throughout the organization and outside its boundaries. Further, they are developed and maintained concurrently by many parties. Individual stakeholders participate in the ontology engineering process in different ways. Some are primarily consumers, but may make detailed edits to areas of the ontologies critical to them. Others are charged with maintaining high-level ontology coherence and use an integrated ontology development environment, such as Protege-OWL, to collaborate with similar editors in realtime, leveraging tools to maintain a dynamic view of the ontology. All stakeholders rely on the ontologies being available and consistent across the organization. This use case illustrates a set of requirements: Client Performance The network is a potential bottleneck of any distributed or client-server system, but the critical work of ontology development is",
"corpus_id": 9634553
} | [
{
"doc_id": "17574900",
"title": "Opening, Closing Worlds - On Integrity Constraints",
"abstract": "In many data-centric applications it is desirable to use OWL as an expressive schema language where one expresses constraints that need to be satisfied by the (instance) data. However, some features of OWL’s semantics, specifically the Open World Assumption (OWL) and not having a Unique Name Assumption (UNA), make it hard to use OWL for this task. What would trigger a constraint violation in a closed world system like a relational database leads to new inferences in OWL. In this paper, we explore how OWL can be extended to accommodate integrity constraints and discuss several alternatives for the syntax and semantics of such an extension. We primarily focus on applications in the Supply Chain Management (SCM) domain but we are also gathering use cases and requirements from many other application areas to assess which of these alternatives provides the best solution.",
"corpus_id": 17574900,
"score": 1
},
{
"doc_id": "20004685",
"title": "Mental and Emotional Issues in VDT Work",
"abstract": null,
"corpus_id": 20004685,
"score": 0
},
{
"doc_id": "17466351",
"title": "Aristotle: a system for development of program analysis based tools",
"abstract": "Aristotle provides program analysis information, and supports the development of software engineering tools. Aristotle's front end consists of parsers that gather control flow, local dataflow and symbol table information for procedural language programs. We implemented a parser for C by incorporating analysis routines into the GNU C parser; a C++ parser is being implemented using similar techniques. Aristotle tools use the data provided by the parsers to perform a variety of tasks, such as dataflow and control dependence analysis, dataflow testing, graph construction and graph viewing. Most of Aristotle's components function on single procedures and entire programs. Parsers and tools use database handler routines to store information in, and retrieve it from, a central database. A user interface provides interactive menu-driven access to tools, and users can view results textually or graphically. Many tools can also be invoked directly from applications programs, which facilitates the development of new tools. To assist with system development and maintenance, we are also creating support tools for managing bug and test suite databases.",
"corpus_id": 17466351,
"score": 0
},
{
"doc_id": "28518943",
"title": "The CORC experience: survey of founding libraries. Part I",
"abstract": "This survey, conducted in late 1999, found that CORC founding libraries shared a strong interest in controlling Internet resources and finding ways to catalog such resources quickly. Many cataloged in MARC. Although only a small number of them experimented with Dublin Core, many of them wanted to explore its potential for organizing Internet resources. Other metadata schemes were also used by some libraries. Overall, the founding libraries considered their CORC experience positive, but had several concerns. Their experience suggests that more work is needed to make fast, automated cataloging a reality. Since the findings of this study reflect experience with CORC at the developmental stage, the researchers proposed that CORC usage be monitored to identify trends in organizing Internet resources. A survey of CORC subscribers could be conducted to understand usage patterns and guide CORC’s development and improvement.",
"corpus_id": 28518943,
"score": 0
},
{
"doc_id": "14773701",
"title": "Engineering with logic: HOL specification and symbolic-evaluation testing for TCP implementations",
"abstract": "The TCP/IP protocols and Sockets API underlie much of modern computation, but their semantics have historically been very complex and ill-defined. The real standard is the de facto one of the common implementations, including, for example, the 15,000--20,000 lines of C in the BSD implementation. Dealing rigorously with the behaviour of such bodies of code is challenging.We have recently developed a post-hoc specification of TCP, UDP, and Sockets that is rigorous, detailed, readable, has broad coverage, and is remarkably accurate. In this paper we describe the novel techniques that were required.Working within a general-purpose proof assistant (HOL), we developed language idioms (within higher-order logic) in which to write the specification: operational semantics with nondeterminism, time, system calls, monadic relational programming, etc. We followed an experimental semantics approach, validating the specification against several thousand traces captured from three implementations (FreeBSD, Linux, and WinXP). Many differences between these were identified, and a number of bugs. Validation was done using a special-purpose symbolic model checker programmed above HOL.We suggest that similar logic engineering techniques could be applied to future critical software infrastructure at design time, leading to cleaner designs and (via specification-based testing using a similar checker) more predictable implementations.",
"corpus_id": 14773701,
"score": 0
},
{
"doc_id": "30141926",
"title": "Nonparametric regression using linear combinations of basis functions",
"abstract": "This paper discusses a Bayesian approach to nonparametric regression initially proposed by Smith and Kohn (1996. Journal of Econometrics 75: 317–344). In this approach the regression function is represented as a linear combination of basis terms. The basis terms can be univariate or multivariate functions and can include polynomials, natural splines and radial basis functions. A Bayesian hierarchical model is used such that the coefficient of each basis term can be zero with positive prior probability. The presence of basis terms in the model is determined by latent indicator variables. The posterior mean is estimated by Markov chain Monte Carlo simulation because it is computationally intractable to compute the posterior mean analytically unless a small number of basis terms is used. The present article updates the work of Smith and Kohn (1996. Journal of Econometrics 75: 317–344) to take account of work by us and others over the last three years. A careful discussion is given to all aspects of the model specification, function estimation and the use of sampling schemes. In particular, new sampling schemes are introduced to carry out the variable selection methodology.",
"corpus_id": 30141926,
"score": 0
}
] |
arnetminer | {
"doc_id": "20004685",
"title": "Mental and Emotional Issues in VDT Work",
"abstract": null,
"corpus_id": 20004685
} | [
{
"doc_id": "16545052",
"title": "The Physical, Mental, and Emotional Stress Effects of VDT Work",
"abstract": "Are their backs killing them? Do they growl when a supervisor walks by? Something can be done for folks who spend their days in front of VDTs.",
"corpus_id": 16545052,
"score": 1
},
{
"doc_id": "26213696",
"title": "Formal methods fact vs. fiction",
"abstract": null,
"corpus_id": 26213696,
"score": 0
},
{
"doc_id": "30141926",
"title": "Nonparametric regression using linear combinations of basis functions",
"abstract": "This paper discusses a Bayesian approach to nonparametric regression initially proposed by Smith and Kohn (1996. Journal of Econometrics 75: 317–344). In this approach the regression function is represented as a linear combination of basis terms. The basis terms can be univariate or multivariate functions and can include polynomials, natural splines and radial basis functions. A Bayesian hierarchical model is used such that the coefficient of each basis term can be zero with positive prior probability. The presence of basis terms in the model is determined by latent indicator variables. The posterior mean is estimated by Markov chain Monte Carlo simulation because it is computationally intractable to compute the posterior mean analytically unless a small number of basis terms is used. The present article updates the work of Smith and Kohn (1996. Journal of Econometrics 75: 317–344) to take account of work by us and others over the last three years. A careful discussion is given to all aspects of the model specification, function estimation and the use of sampling schemes. In particular, new sampling schemes are introduced to carry out the variable selection methodology.",
"corpus_id": 30141926,
"score": 0
},
{
"doc_id": "8240301",
"title": "E-RACE, A Hardware-Assisted Approach to Lockset-Based Data Race Detection for Embedded Products",
"abstract": "Limited research exists for identifying data races under the specific characteristics found in embedded systems. E-RACE is a new style of data-race identification tool which directly utilizes specialized hardware capabilities to monitor the flow of data and instructions. Compared to existing data race analysis approaches, the hardware-assisted E-RACE tool has advantages of recognizing data-race issues without requiring extensive software code instrumentation. The tool is integrated into an Embedded Unit Testing Driven Development Framework to encourage the construction of testable code and early identification of data-races.",
"corpus_id": 8240301,
"score": 0
},
{
"doc_id": "8437472",
"title": "Performance and Capacity Analysis of UWB Networks over 60GHz WPAN Channel",
"abstract": "In this paper we evaluate the system performance and capacity of single carrier ultra-wideband (UWB) networks over 60GHz wireless personal area network (WPAN) channel. Symbol error rate is derived for both single user and multiple access scenario with a general system capacity and performance evaluation approach based on moment generation function. System outage probability and network throughput performance are also studied. Based on the current IEEE 802.15.3 WPAN standard work, different transmission scenarios have been explored and the performances with RAKE reception has been obtained. The channel model is also based on the recent work of IEEE 802.15 WPAN group. Numerical results are given to illustrate the system performance.",
"corpus_id": 8437472,
"score": 0
},
{
"doc_id": "11798209",
"title": "A Hardware-Assisted Tool for Fast, Full Code Coverage Analysis",
"abstract": "Software reliability can be improved by using code coverage analysis to ensure that all statements are executed at least once during the testing process. When full code coverage information is obtained through software code instrumentation, high runtime performance overheads are incurred. Techniques that perform deferred or selective code instrumentation have shown success in reducing run-time overheads; however, the execution profile remains distorted. Techniques have been proposed that use internal processor hardware during the data gathering process, e.g. program counter logging. These approaches have been shown to reduce overheads; but currently trade swift execution for sparse code coverage. By combining the branch-vector hardware designed for debugging modern embedded processors with on-demand code coverage analysis, we have developed a new tool which provides full code coverage, while minimizing performance distortions. Experimental results show a performance impact of only 8 - 12%, while still providing 100% code coverage information.",
"corpus_id": 11798209,
"score": 0
}
] |
arnetminer | {
"doc_id": "20760479",
"title": "The CORC experience: survey of founding libraries. Part II",
"abstract": null,
"corpus_id": 20760479
} | [
{
"doc_id": "28518943",
"title": "The CORC experience: survey of founding libraries. Part I",
"abstract": "This survey, conducted in late 1999, found that CORC founding libraries shared a strong interest in controlling Internet resources and finding ways to catalog such resources quickly. Many cataloged in MARC. Although only a small number of them experimented with Dublin Core, many of them wanted to explore its potential for organizing Internet resources. Other metadata schemes were also used by some libraries. Overall, the founding libraries considered their CORC experience positive, but had several concerns. Their experience suggests that more work is needed to make fast, automated cataloging a reality. Since the findings of this study reflect experience with CORC at the developmental stage, the researchers proposed that CORC usage be monitored to identify trends in organizing Internet resources. A survey of CORC subscribers could be conducted to understand usage patterns and guide CORC’s development and improvement.",
"corpus_id": 28518943,
"score": 1
},
{
"doc_id": "17466351",
"title": "Aristotle: a system for development of program analysis based tools",
"abstract": "Aristotle provides program analysis information, and supports the development of software engineering tools. Aristotle's front end consists of parsers that gather control flow, local dataflow and symbol table information for procedural language programs. We implemented a parser for C by incorporating analysis routines into the GNU C parser; a C++ parser is being implemented using similar techniques. Aristotle tools use the data provided by the parsers to perform a variety of tasks, such as dataflow and control dependence analysis, dataflow testing, graph construction and graph viewing. Most of Aristotle's components function on single procedures and entire programs. Parsers and tools use database handler routines to store information in, and retrieve it from, a central database. A user interface provides interactive menu-driven access to tools, and users can view results textually or graphically. Many tools can also be invoked directly from applications programs, which facilitates the development of new tools. To assist with system development and maintenance, we are also creating support tools for managing bug and test suite databases.",
"corpus_id": 17466351,
"score": 0
},
{
"doc_id": "8493649",
"title": "Empirical analysis of the correlation between amount-of-reuse metrics in the C programming language",
"abstract": "Disclosed are the magnesium salts of N-carboxyamino acids, a process for their preparation, and their use of lubricating oil additives.",
"corpus_id": 8493649,
"score": 0
},
{
"doc_id": "16545052",
"title": "The Physical, Mental, and Emotional Stress Effects of VDT Work",
"abstract": "Are their backs killing them? Do they growl when a supervisor walks by? Something can be done for folks who spend their days in front of VDTs.",
"corpus_id": 16545052,
"score": 0
},
{
"doc_id": "2877433",
"title": "Mother, May I? OWL-based Policy Management at NASA",
"abstract": "Among the challenges of managing NASA’s information systems is the management (that is, creation, coordination, verification, validation, and enforcement) of many different role-based access control policies and mechanisms. This paper describes an actual data federation use case that demonstrates the inefficiencies created by this challenge and presents an approach to reducing these inefficiencies using OWL. The focus is on the representation of XACML policies in DL, but the approach generalizes to other policy languages.",
"corpus_id": 2877433,
"score": 0
},
{
"doc_id": "18665149",
"title": "Owlgres: A Scalable OWL Reasoner",
"abstract": "We present Owlgres, a DL-Lite reasoner implementation written for PostgreSQL, a mature open source database. Owlgres is an OWL reasoner that provides consistency checking and conjunctive query services, supports DL-LiteR as well as the OWL sameAs construct, and is not limited to PostgreSQL. We discuss the implementation with special focus on sameAs and the supported subset of the SPARQL language. Emphasis is given to the implemented optimization techniques which resulted in significant performance improvement. Based on a confidential NASA dataset and part of the DBpedia dataset, we show a typical use case for Owlgres, i.e. given a terminology and a dataset, Owlgres provides querying on a persistent knowledge base with reasoning at query time in the expressivity of DL-LiteR.",
"corpus_id": 18665149,
"score": 0
}
] |
arnetminer | {
"doc_id": "22319012",
"title": "A Scalable Testing Framework for Location-Based Services",
"abstract": "A novel testing framework for location based services is introduced. In particular, the paper showcases a novel architecture for such a framework. The implementation of the framework illustrates both the functionality and the feasibility of the framework proposed and the utility of the architecture. The new framework is evaluated through comparison to several other methodologies currently available for the testing of location-based applications. A case study is presented in which the testing framework was applied to a typical mobile service tracking system. It is concluded that the proposed testing framework achieves the best coverage of the entire location based service testing problem of the currently available methodologies; being equipped to test the widest array of application attributes and allowing for the automation of testing activities.",
"corpus_id": 22319012
} | [
{
"doc_id": "16085618",
"title": "A TDD approach to introducing students to embedded programming",
"abstract": "Learning embedded programming is a highly demanding exercise. The beginner is bombarded with complexity from the start: embedded production based around a myriad of C++ constructs with low-level elements integrated onto ever more complicated processor architectures. The picture is further compounded by tool support having unfamiliar roles and appearances from previous student experiences. This demanding situation often has the student bewildered; seeking for \"a crutch\" or the simplest way forward regardless of the overall consequences. To control this potentially chaotic picture, the instructor needs to introduce devices to combat this complexity. We argue that test driven development (TDD) should become the instructor's principal weapon in this fight. Reasons for this belief combined with our, and the students', experiences with this novel approach are discussed.",
"corpus_id": 16085618,
"score": 1
},
{
"doc_id": "11798209",
"title": "A Hardware-Assisted Tool for Fast, Full Code Coverage Analysis",
"abstract": "Software reliability can be improved by using code coverage analysis to ensure that all statements are executed at least once during the testing process. When full code coverage information is obtained through software code instrumentation, high runtime performance overheads are incurred. Techniques that perform deferred or selective code instrumentation have shown success in reducing run-time overheads; however, the execution profile remains distorted. Techniques have been proposed that use internal processor hardware during the data gathering process, e.g. program counter logging. These approaches have been shown to reduce overheads; but currently trade swift execution for sparse code coverage. By combining the branch-vector hardware designed for debugging modern embedded processors with on-demand code coverage analysis, we have developed a new tool which provides full code coverage, while minimizing performance distortions. Experimental results show a performance impact of only 8 - 12%, while still providing 100% code coverage information.",
"corpus_id": 11798209,
"score": 1
},
{
"doc_id": "8240301",
"title": "E-RACE, A Hardware-Assisted Approach to Lockset-Based Data Race Detection for Embedded Products",
"abstract": "Limited research exists for identifying data races under the specific characteristics found in embedded systems. E-RACE is a new style of data-race identification tool which directly utilizes specialized hardware capabilities to monitor the flow of data and instructions. Compared to existing data race analysis approaches, the hardware-assisted E-RACE tool has advantages of recognizing data-race issues without requiring extensive software code instrumentation. The tool is integrated into an Embedded Unit Testing Driven Development Framework to encourage the construction of testable code and early identification of data-races.",
"corpus_id": 8240301,
"score": 1
},
{
"doc_id": "8493649",
"title": "Empirical analysis of the correlation between amount-of-reuse metrics in the C programming language",
"abstract": "Disclosed are the magnesium salts of N-carboxyamino acids, a process for their preparation, and their use of lubricating oil additives.",
"corpus_id": 8493649,
"score": 1
},
{
"doc_id": "167449",
"title": "Rigorous specification and conformance testing techniques for network protocols, as applied to TCP, UDP, and sockets",
"abstract": "Network protocols are hard to implement correctly. Despite the existence of RFCs and other standards, implementations often have subtle differences and bugs. One reason for this is that the specifications are typically informal, and hence inevitably contain ambiguities. Conformance testing against such specifications is challenging.In this paper we present a practical technique for rigorous protocol specification that supports specification-based testing. We have applied it to TCP, UDP, and the Sockets API, developing a detailed 'post-hoc' specification that accurately reflects the behaviour of several existing implementations (FreeBSD 4.6, Linux 2.4.20-8, and Windows XP SP1). The development process uncovered a number of differences between and infelicities in these implementations.Our experience shows for the first time that rigorous specification is feasible for protocols as complex as TCP@. We argue that the technique is also applicable 'pre-hoc', in the design phase of new protocols. We discuss how such a design-for-test approach should influence protocol development, leading to protocol specifications that are both unambiguous and clear, and to high-quality implementations that can be tested directly against those specifications.",
"corpus_id": 167449,
"score": 0
},
{
"doc_id": "18147449",
"title": "A flexible hardware encoder for low-density parity-check codes",
"abstract": "We describe a flexible hardware encoder for regular and irregular low-density parity-check (LDPC) codes. Although LDPC codes achieve better performance and lower decoding complexity than turbo codes, a major drawback of LDPC codes is their apparently high encoding complexity. Using an efficient encoding method proposed by Richardson and Urbanke, we present a hardware LDPC encoder with linear encoding complexity. The encoder is flexible, supporting arbitrary H matrices, rates and block lengths. An implementation for a rate 1/2 irregular length 2000 LDPC code encoder on a Xilinx Virtex-II XC2V4000-6 FPGA takes up 4% of the device. It runs at 143 MHz and has a throughput of 45 million codeword bits per second (or 22 million information bits per second) with a latency of 0.18 ms. The performance can be improved by exploiting parallelism: several instances of the encoder can be mapped onto the same chip to encode multiple message blocks concurrently. An implementation of 16 instances of the encoder on the same device at 82 MHz is capable of 410 million codeword bits per second, 80 times faster than an Intel Pentium-lV 2.4 GHz PC.",
"corpus_id": 18147449,
"score": 0
},
{
"doc_id": "166928377",
"title": "Promoting health and productivity in the computerized office",
"abstract": null,
"corpus_id": 166928377,
"score": 0
},
{
"doc_id": "9634553",
"title": "Managing Change: An Ontology Version Control System",
"abstract": "In this paper we present the basic requirements and initial design of a system which manages and facilitates changes to an OWL ontology in a multi-editor environment. This system uses a centralized client-server architecture in which the server maintains the current state and full history of all managed ontologies. Clients can access the current ontology version, all historical revisions, and differences between arbitrary revisions, as well as metadata associated with revisions. This system will be used by many other ontology based services, such as incremental reasoning, collaborative ontology development, advanced ontology search, and ontology module extraction. Taken holistically, this network of services will provide a rich environment for the development and management of ontology based information systems. 1 Motivation and Requirements We need for a system that manages access to a changing ontology. This requirement is experienced by a variety applications with different stakeholders. An illustrative use case is presented below. A large distributed organization requires integration and alignment of many heterogeneous data sources and information artifacts. They facilitate such integration by employing one or more expressive OWL ontologies that exist in defined relations to data sources, information artifacts, and an enterprise conceptual model. These ontologies, as a critical infrastructure components, have stakeholders throughout the organization and outside its boundaries. Further, they are developed and maintained concurrently by many parties. Individual stakeholders participate in the ontology engineering process in different ways. Some are primarily consumers, but may make detailed edits to areas of the ontologies critical to them. Others are charged with maintaining high-level ontology coherence and use an integrated ontology development environment, such as Protege-OWL, to collaborate with similar editors in realtime, leveraging tools to maintain a dynamic view of the ontology. All stakeholders rely on the ontologies being available and consistent across the organization. This use case illustrates a set of requirements: Client Performance The network is a potential bottleneck of any distributed or client-server system, but the critical work of ontology development is",
"corpus_id": 9634553,
"score": 0
},
{
"doc_id": "26213696",
"title": "Formal methods fact vs. fiction",
"abstract": null,
"corpus_id": 26213696,
"score": 0
}
] |
arnetminer | {
"doc_id": "16085618",
"title": "A TDD approach to introducing students to embedded programming",
"abstract": "Learning embedded programming is a highly demanding exercise. The beginner is bombarded with complexity from the start: embedded production based around a myriad of C++ constructs with low-level elements integrated onto ever more complicated processor architectures. The picture is further compounded by tool support having unfamiliar roles and appearances from previous student experiences. This demanding situation often has the student bewildered; seeking for \"a crutch\" or the simplest way forward regardless of the overall consequences. To control this potentially chaotic picture, the instructor needs to introduce devices to combat this complexity. We argue that test driven development (TDD) should become the instructor's principal weapon in this fight. Reasons for this belief combined with our, and the students', experiences with this novel approach are discussed.",
"corpus_id": 16085618
} | [
{
"doc_id": "8240301",
"title": "E-RACE, A Hardware-Assisted Approach to Lockset-Based Data Race Detection for Embedded Products",
"abstract": "Limited research exists for identifying data races under the specific characteristics found in embedded systems. E-RACE is a new style of data-race identification tool which directly utilizes specialized hardware capabilities to monitor the flow of data and instructions. Compared to existing data race analysis approaches, the hardware-assisted E-RACE tool has advantages of recognizing data-race issues without requiring extensive software code instrumentation. The tool is integrated into an Embedded Unit Testing Driven Development Framework to encourage the construction of testable code and early identification of data-races.",
"corpus_id": 8240301,
"score": 1
},
{
"doc_id": "11798209",
"title": "A Hardware-Assisted Tool for Fast, Full Code Coverage Analysis",
"abstract": "Software reliability can be improved by using code coverage analysis to ensure that all statements are executed at least once during the testing process. When full code coverage information is obtained through software code instrumentation, high runtime performance overheads are incurred. Techniques that perform deferred or selective code instrumentation have shown success in reducing run-time overheads; however, the execution profile remains distorted. Techniques have been proposed that use internal processor hardware during the data gathering process, e.g. program counter logging. These approaches have been shown to reduce overheads; but currently trade swift execution for sparse code coverage. By combining the branch-vector hardware designed for debugging modern embedded processors with on-demand code coverage analysis, we have developed a new tool which provides full code coverage, while minimizing performance distortions. Experimental results show a performance impact of only 8 - 12%, while still providing 100% code coverage information.",
"corpus_id": 11798209,
"score": 1
},
{
"doc_id": "8493649",
"title": "Empirical analysis of the correlation between amount-of-reuse metrics in the C programming language",
"abstract": "Disclosed are the magnesium salts of N-carboxyamino acids, a process for their preparation, and their use of lubricating oil additives.",
"corpus_id": 8493649,
"score": 1
},
{
"doc_id": "17466351",
"title": "Aristotle: a system for development of program analysis based tools",
"abstract": "Aristotle provides program analysis information, and supports the development of software engineering tools. Aristotle's front end consists of parsers that gather control flow, local dataflow and symbol table information for procedural language programs. We implemented a parser for C by incorporating analysis routines into the GNU C parser; a C++ parser is being implemented using similar techniques. Aristotle tools use the data provided by the parsers to perform a variety of tasks, such as dataflow and control dependence analysis, dataflow testing, graph construction and graph viewing. Most of Aristotle's components function on single procedures and entire programs. Parsers and tools use database handler routines to store information in, and retrieve it from, a central database. A user interface provides interactive menu-driven access to tools, and users can view results textually or graphically. Many tools can also be invoked directly from applications programs, which facilitates the development of new tools. To assist with system development and maintenance, we are also creating support tools for managing bug and test suite databases.",
"corpus_id": 17466351,
"score": 0
},
{
"doc_id": "302526",
"title": "Running the Table: An AI for Computer Billiards",
"abstract": "Billiards is a game of both strategy and physical skill. To succeed, a player must be able to select strong shots, and then execute them accurately and consistently. Several robotic billiards players have recently been developed. These systems address the task of executing shots on a physical table, but so far have incorporated little strategic reasoning. They require AI to select the 'best' shot taking into account the accuracy of the robotics, the noise inherent in the domain, the continuous nature of the search space, the difficulty of the shot, and the goal of maximizing the chances of winning. This paper develops and compares several approaches to establishing a strong AI for billiards. The resulting program, PickPocket, won the first international computer billiards competition.",
"corpus_id": 302526,
"score": 0
},
{
"doc_id": "26971713",
"title": "Architecture Design for Globally Distributed Projects",
"abstract": "This paper talks through the practices and infrastructure that was used on the experimental Global Studio Project (GSP). While the architecture activities are highlighted, related practices such as project management, requirements engineering and integration and test in a distributed environment will also be discussed as lessons learned.",
"corpus_id": 26971713,
"score": 0
},
{
"doc_id": "17574900",
"title": "Opening, Closing Worlds - On Integrity Constraints",
"abstract": "In many data-centric applications it is desirable to use OWL as an expressive schema language where one expresses constraints that need to be satisfied by the (instance) data. However, some features of OWL’s semantics, specifically the Open World Assumption (OWL) and not having a Unique Name Assumption (UNA), make it hard to use OWL for this task. What would trigger a constraint violation in a closed world system like a relational database leads to new inferences in OWL. In this paper, we explore how OWL can be extended to accommodate integrity constraints and discuss several alternatives for the syntax and semantics of such an extension. We primarily focus on applications in the Supply Chain Management (SCM) domain but we are also gathering use cases and requirements from many other application areas to assess which of these alternatives provides the best solution.",
"corpus_id": 17574900,
"score": 0
},
{
"doc_id": "2877433",
"title": "Mother, May I? OWL-based Policy Management at NASA",
"abstract": "Among the challenges of managing NASA’s information systems is the management (that is, creation, coordination, verification, validation, and enforcement) of many different role-based access control policies and mechanisms. This paper describes an actual data federation use case that demonstrates the inefficiencies created by this challenge and presents an approach to reducing these inefficiencies using OWL. The focus is on the representation of XACML policies in DL, but the approach generalizes to other policy languages.",
"corpus_id": 2877433,
"score": 0
}
] |
arnetminer | {
"doc_id": "8240301",
"title": "E-RACE, A Hardware-Assisted Approach to Lockset-Based Data Race Detection for Embedded Products",
"abstract": "Limited research exists for identifying data races under the specific characteristics found in embedded systems. E-RACE is a new style of data-race identification tool which directly utilizes specialized hardware capabilities to monitor the flow of data and instructions. Compared to existing data race analysis approaches, the hardware-assisted E-RACE tool has advantages of recognizing data-race issues without requiring extensive software code instrumentation. The tool is integrated into an Embedded Unit Testing Driven Development Framework to encourage the construction of testable code and early identification of data-races.",
"corpus_id": 8240301
} | [
{
"doc_id": "11798209",
"title": "A Hardware-Assisted Tool for Fast, Full Code Coverage Analysis",
"abstract": "Software reliability can be improved by using code coverage analysis to ensure that all statements are executed at least once during the testing process. When full code coverage information is obtained through software code instrumentation, high runtime performance overheads are incurred. Techniques that perform deferred or selective code instrumentation have shown success in reducing run-time overheads; however, the execution profile remains distorted. Techniques have been proposed that use internal processor hardware during the data gathering process, e.g. program counter logging. These approaches have been shown to reduce overheads; but currently trade swift execution for sparse code coverage. By combining the branch-vector hardware designed for debugging modern embedded processors with on-demand code coverage analysis, we have developed a new tool which provides full code coverage, while minimizing performance distortions. Experimental results show a performance impact of only 8 - 12%, while still providing 100% code coverage information.",
"corpus_id": 11798209,
"score": 1
},
{
"doc_id": "8493649",
"title": "Empirical analysis of the correlation between amount-of-reuse metrics in the C programming language",
"abstract": "Disclosed are the magnesium salts of N-carboxyamino acids, a process for their preparation, and their use of lubricating oil additives.",
"corpus_id": 8493649,
"score": 1
},
{
"doc_id": "17466351",
"title": "Aristotle: a system for development of program analysis based tools",
"abstract": "Aristotle provides program analysis information, and supports the development of software engineering tools. Aristotle's front end consists of parsers that gather control flow, local dataflow and symbol table information for procedural language programs. We implemented a parser for C by incorporating analysis routines into the GNU C parser; a C++ parser is being implemented using similar techniques. Aristotle tools use the data provided by the parsers to perform a variety of tasks, such as dataflow and control dependence analysis, dataflow testing, graph construction and graph viewing. Most of Aristotle's components function on single procedures and entire programs. Parsers and tools use database handler routines to store information in, and retrieve it from, a central database. A user interface provides interactive menu-driven access to tools, and users can view results textually or graphically. Many tools can also be invoked directly from applications programs, which facilitates the development of new tools. To assist with system development and maintenance, we are also creating support tools for managing bug and test suite databases.",
"corpus_id": 17466351,
"score": 0
},
{
"doc_id": "20760479",
"title": "The CORC experience: survey of founding libraries. Part II",
"abstract": null,
"corpus_id": 20760479,
"score": 0
},
{
"doc_id": "38418561",
"title": "Cryptographic Information Recovery Using Key Recover",
"abstract": "A note to readers: the authors consider that all techniques for key recovery may be viewed as having a position on a broad continuum. The only way to avoid misunderstanding is to identify particular techniques by listing their specific characteristics, rather than using multiply-defined terms. Different characteristics have advantages in different environments, so there is no 'best' key recovery technique. The paper notes some typical advantages and disadvantages of several techniques but should not be construed as an endorsement of any particular technique relative to another. Similarly, the authors recognize that terminology may vary from country to country. Cryptographic information recovery techniques provide for the recovery of plaintext from encrypted data. This (exceptional) need arises when the cryptographic keys involved are not available. For example, data files may have been encrypted using a key derived from a now forgotten or misplaced password. Overlapping and confusing terminology has been applied to the techniques of information recovery, including key escrow, key backup, key recovery, and trusted third party (ttp), all of which refer to methods for retrieving, recovering, or re-constructing keys. Even the underlying concept of 'trust' has broad meaning. Instead of attempting to 'define' these terms precisely, a continuum of functionality is defined. Several generic technologies, together with desirable characteristics of cryptographic information/key recovery techniques, are described.",
"corpus_id": 38418561,
"score": 0
},
{
"doc_id": "28518943",
"title": "The CORC experience: survey of founding libraries. Part I",
"abstract": "This survey, conducted in late 1999, found that CORC founding libraries shared a strong interest in controlling Internet resources and finding ways to catalog such resources quickly. Many cataloged in MARC. Although only a small number of them experimented with Dublin Core, many of them wanted to explore its potential for organizing Internet resources. Other metadata schemes were also used by some libraries. Overall, the founding libraries considered their CORC experience positive, but had several concerns. Their experience suggests that more work is needed to make fast, automated cataloging a reality. Since the findings of this study reflect experience with CORC at the developmental stage, the researchers proposed that CORC usage be monitored to identify trends in organizing Internet resources. A survey of CORC subscribers could be conducted to understand usage patterns and guide CORC’s development and improvement.",
"corpus_id": 28518943,
"score": 0
},
{
"doc_id": "302526",
"title": "Running the Table: An AI for Computer Billiards",
"abstract": "Billiards is a game of both strategy and physical skill. To succeed, a player must be able to select strong shots, and then execute them accurately and consistently. Several robotic billiards players have recently been developed. These systems address the task of executing shots on a physical table, but so far have incorporated little strategic reasoning. They require AI to select the 'best' shot taking into account the accuracy of the robotics, the noise inherent in the domain, the continuous nature of the search space, the difficulty of the shot, and the goal of maximizing the chances of winning. This paper develops and compares several approaches to establishing a strong AI for billiards. The resulting program, PickPocket, won the first international computer billiards competition.",
"corpus_id": 302526,
"score": 0
}
] |
arnetminer | {
"doc_id": "11798209",
"title": "A Hardware-Assisted Tool for Fast, Full Code Coverage Analysis",
"abstract": "Software reliability can be improved by using code coverage analysis to ensure that all statements are executed at least once during the testing process. When full code coverage information is obtained through software code instrumentation, high runtime performance overheads are incurred. Techniques that perform deferred or selective code instrumentation have shown success in reducing run-time overheads; however, the execution profile remains distorted. Techniques have been proposed that use internal processor hardware during the data gathering process, e.g. program counter logging. These approaches have been shown to reduce overheads; but currently trade swift execution for sparse code coverage. By combining the branch-vector hardware designed for debugging modern embedded processors with on-demand code coverage analysis, we have developed a new tool which provides full code coverage, while minimizing performance distortions. Experimental results show a performance impact of only 8 - 12%, while still providing 100% code coverage information.",
"corpus_id": 11798209
} | [
{
"doc_id": "8493649",
"title": "Empirical analysis of the correlation between amount-of-reuse metrics in the C programming language",
"abstract": "Disclosed are the magnesium salts of N-carboxyamino acids, a process for their preparation, and their use of lubricating oil additives.",
"corpus_id": 8493649,
"score": 1
},
{
"doc_id": "18665149",
"title": "Owlgres: A Scalable OWL Reasoner",
"abstract": "We present Owlgres, a DL-Lite reasoner implementation written for PostgreSQL, a mature open source database. Owlgres is an OWL reasoner that provides consistency checking and conjunctive query services, supports DL-LiteR as well as the OWL sameAs construct, and is not limited to PostgreSQL. We discuss the implementation with special focus on sameAs and the supported subset of the SPARQL language. Emphasis is given to the implemented optimization techniques which resulted in significant performance improvement. Based on a confidential NASA dataset and part of the DBpedia dataset, we show a typical use case for Owlgres, i.e. given a terminology and a dataset, Owlgres provides querying on a persistent knowledge base with reasoning at query time in the expressivity of DL-LiteR.",
"corpus_id": 18665149,
"score": 0
},
{
"doc_id": "20760479",
"title": "The CORC experience: survey of founding libraries. Part II",
"abstract": null,
"corpus_id": 20760479,
"score": 0
},
{
"doc_id": "30141926",
"title": "Nonparametric regression using linear combinations of basis functions",
"abstract": "This paper discusses a Bayesian approach to nonparametric regression initially proposed by Smith and Kohn (1996. Journal of Econometrics 75: 317–344). In this approach the regression function is represented as a linear combination of basis terms. The basis terms can be univariate or multivariate functions and can include polynomials, natural splines and radial basis functions. A Bayesian hierarchical model is used such that the coefficient of each basis term can be zero with positive prior probability. The presence of basis terms in the model is determined by latent indicator variables. The posterior mean is estimated by Markov chain Monte Carlo simulation because it is computationally intractable to compute the posterior mean analytically unless a small number of basis terms is used. The present article updates the work of Smith and Kohn (1996. Journal of Econometrics 75: 317–344) to take account of work by us and others over the last three years. A careful discussion is given to all aspects of the model specification, function estimation and the use of sampling schemes. In particular, new sampling schemes are introduced to carry out the variable selection methodology.",
"corpus_id": 30141926,
"score": 0
},
{
"doc_id": "166928377",
"title": "Promoting health and productivity in the computerized office",
"abstract": null,
"corpus_id": 166928377,
"score": 0
},
{
"doc_id": "29590207",
"title": "Combining spatial and scale-space techniques for edge detection to provide a spatially adaptive wavelet-based noise filtering algorithm",
"abstract": "New methods for detecting edges in an image using spatial and scale-space domains are proposed. A priori knowledge about geometrical characteristics of edges is used to assign a probability factor to the chance of any pixel being on an edge. An improved double thresholding technique is introduced for spatial domain filtering. Probabilities that pixels belong to a given edge are assigned based on pixel similarity across gradient amplitudes, gradient phases and edge connectivity. The scale-space approach uses dynamic range compression to allow wavelet correlation over a wider range of scales. A probabilistic formulation is used to combine the results obtained from filtering in each domain to provide a final edge probability image which has the advantages of both spatial and scale-space domain methods. Decomposing this edge probability image with the same wavelet as the original image permits the generation of adaptive filters that can recognize the characteristics of the edges in all wavelet detail and approximation images regardless of scale. These matched filters permit significant reduction in image noise without contributing to edge distortion. The spatially adaptive wavelet noise-filtering algorithm is qualitatively and quantitatively compared to a frequency domain and two wavelet based noise suppression algorithms using both natural and computer generated noisy images.",
"corpus_id": 29590207,
"score": 0
}
] |
arnetminer | {
"doc_id": "7180834",
"title": "PickPocket: A computer billiards shark",
"abstract": "Billiards is a game of both strategy and physical skill. To succeed, a player must be able to select strong shots, and then execute them accurately and consistently on the table. Several robotic billiards players have recently been developed. These systems address the task of executing shots on a physical table, but so far have incorporated little strategic reasoning. They require artificial intelligence to select the 'best' shot taking into account the accuracy of the robot, the noise inherent in the domain, the continuous nature of the search space, the difficulty of the shot, and the goal of maximizing the chances of winning. This article describes the program PickPocket, the winner of the simulated 8-ball tournaments at the 10th and 11th Computer Olympiad competitions. PickPocket is based on the traditional search framework, familiar from games such as chess, adapted to the continuous stochastic domain of billiards. Experimental results are presented exploring the properties of two search algorithms, Monte-Carlo search and Probabilistic search.",
"corpus_id": 7180834
} | [
{
"doc_id": "302526",
"title": "Running the Table: An AI for Computer Billiards",
"abstract": "Billiards is a game of both strategy and physical skill. To succeed, a player must be able to select strong shots, and then execute them accurately and consistently. Several robotic billiards players have recently been developed. These systems address the task of executing shots on a physical table, but so far have incorporated little strategic reasoning. They require AI to select the 'best' shot taking into account the accuracy of the robotics, the noise inherent in the domain, the continuous nature of the search space, the difficulty of the shot, and the goal of maximizing the chances of winning. This paper develops and compares several approaches to establishing a strong AI for billiards. The resulting program, PickPocket, won the first international computer billiards competition.",
"corpus_id": 302526,
"score": 1
},
{
"doc_id": "20760479",
"title": "The CORC experience: survey of founding libraries. Part II",
"abstract": null,
"corpus_id": 20760479,
"score": 0
},
{
"doc_id": "29590207",
"title": "Combining spatial and scale-space techniques for edge detection to provide a spatially adaptive wavelet-based noise filtering algorithm",
"abstract": "New methods for detecting edges in an image using spatial and scale-space domains are proposed. A priori knowledge about geometrical characteristics of edges is used to assign a probability factor to the chance of any pixel being on an edge. An improved double thresholding technique is introduced for spatial domain filtering. Probabilities that pixels belong to a given edge are assigned based on pixel similarity across gradient amplitudes, gradient phases and edge connectivity. The scale-space approach uses dynamic range compression to allow wavelet correlation over a wider range of scales. A probabilistic formulation is used to combine the results obtained from filtering in each domain to provide a final edge probability image which has the advantages of both spatial and scale-space domain methods. Decomposing this edge probability image with the same wavelet as the original image permits the generation of adaptive filters that can recognize the characteristics of the edges in all wavelet detail and approximation images regardless of scale. These matched filters permit significant reduction in image noise without contributing to edge distortion. The spatially adaptive wavelet noise-filtering algorithm is qualitatively and quantitatively compared to a frequency domain and two wavelet based noise suppression algorithms using both natural and computer generated noisy images.",
"corpus_id": 29590207,
"score": 0
},
{
"doc_id": "2587153",
"title": "Towards context-aware face recognition",
"abstract": "In this paper, we focus on the use of context-aware, collaborative filtering, machine-learning techniques that leverage automatically sensed and inferred contextual metadata together with computer vision analysis of image content to make accurate predictions about the human subjects depicted in cameraphone photos. We apply Sparse-Factor Analysis (SFA) to both the contextual metadata gathered in the MMM2 system and the results of PCA (Principal Components Analysis) of the photo content to achieve a 60% face recognition accuracy of people depicted in our cameraphone photos, which is 40% better than media analysis alone. In short, we use context-aware media analysis to solve the face recognition problem for cameraphone photos.",
"corpus_id": 2587153,
"score": 0
},
{
"doc_id": "9634553",
"title": "Managing Change: An Ontology Version Control System",
"abstract": "In this paper we present the basic requirements and initial design of a system which manages and facilitates changes to an OWL ontology in a multi-editor environment. This system uses a centralized client-server architecture in which the server maintains the current state and full history of all managed ontologies. Clients can access the current ontology version, all historical revisions, and differences between arbitrary revisions, as well as metadata associated with revisions. This system will be used by many other ontology based services, such as incremental reasoning, collaborative ontology development, advanced ontology search, and ontology module extraction. Taken holistically, this network of services will provide a rich environment for the development and management of ontology based information systems. 1 Motivation and Requirements We need for a system that manages access to a changing ontology. This requirement is experienced by a variety applications with different stakeholders. An illustrative use case is presented below. A large distributed organization requires integration and alignment of many heterogeneous data sources and information artifacts. They facilitate such integration by employing one or more expressive OWL ontologies that exist in defined relations to data sources, information artifacts, and an enterprise conceptual model. These ontologies, as a critical infrastructure components, have stakeholders throughout the organization and outside its boundaries. Further, they are developed and maintained concurrently by many parties. Individual stakeholders participate in the ontology engineering process in different ways. Some are primarily consumers, but may make detailed edits to areas of the ontologies critical to them. Others are charged with maintaining high-level ontology coherence and use an integrated ontology development environment, such as Protege-OWL, to collaborate with similar editors in realtime, leveraging tools to maintain a dynamic view of the ontology. All stakeholders rely on the ontologies being available and consistent across the organization. This use case illustrates a set of requirements: Client Performance The network is a potential bottleneck of any distributed or client-server system, but the critical work of ontology development is",
"corpus_id": 9634553,
"score": 0
},
{
"doc_id": "11798209",
"title": "A Hardware-Assisted Tool for Fast, Full Code Coverage Analysis",
"abstract": "Software reliability can be improved by using code coverage analysis to ensure that all statements are executed at least once during the testing process. When full code coverage information is obtained through software code instrumentation, high runtime performance overheads are incurred. Techniques that perform deferred or selective code instrumentation have shown success in reducing run-time overheads; however, the execution profile remains distorted. Techniques have been proposed that use internal processor hardware during the data gathering process, e.g. program counter logging. These approaches have been shown to reduce overheads; but currently trade swift execution for sparse code coverage. By combining the branch-vector hardware designed for debugging modern embedded processors with on-demand code coverage analysis, we have developed a new tool which provides full code coverage, while minimizing performance distortions. Experimental results show a performance impact of only 8 - 12%, while still providing 100% code coverage information.",
"corpus_id": 11798209,
"score": 0
}
] |
arnetminer | {
"doc_id": "18622837",
"title": "Graph Drawing Heuristics for Path Finding in Large Dimensionless Graphs",
"abstract": "This paper presents a heuristic for guiding A*search for approximating the shortest path between two vertices in arbitrarily-sized dimensionless graphs. First we discuss methods by which these dimensionless graphs are laid out into Euclidean drawings. Next, two heuristics are computed based on drawings of the graphs. We compare the performance of an A*-search using these heuristics with breadth-first search on graphs with various topological properties. The results show a large savings in the number of vertices expanded for large graphs.",
"corpus_id": 18622837
} | [
{
"doc_id": "8303407",
"title": "Probabilistic Prediction of Protein Secondary Structure Using Causal Networks (Extended Abstract)",
"abstract": "In this paper we present a probabilistic approach to analysis and prediction of protein structure. We argue that this approach provides a flexible and convenient mechanism to perform general scientific data analysis in molecular biology. We apply our approach to an important problem in molecular biology--predicting the secondary structure of proteins--and obtain experimental results comparable to several other methods. The causal networks that we use provide a very convenient medium for the scientist to experiment with different empirical models and obtain possibly important insights about the problem being studied.",
"corpus_id": 8303407,
"score": 1
},
{
"doc_id": "28678366",
"title": "Genetic Wrappers for Constructive Induction in High-Performance Data Mining",
"abstract": "We present an application of genetic algorithm-based design to configuration of high-level optimization systems, or wrappers, for relevance determination and constructive induction. Our system combines genetic wrappers with elicited knowledge on attribute relevance and synthesis. We discuss decision support issues in a large-scale commercial data mining project (cost prediction for multiple automobile insurance markets), and report experiments using D2K, a Java-based visual programming system for data mining and information visualization, and several commercial and research tools. Our GA system, Jenesis [HWRC00], is deployed on several network-of-workstation systems (Beowulf clusters). It achieves a linear speedup, due to a high degree of task parallelism, and improved test set accuracy, compared to decision tree learning with only constructive induction and state-space search-based wrappers [KJ97].",
"corpus_id": 28678366,
"score": 1
},
{
"doc_id": "5663752",
"title": "Text Extraction from the Web via Text-to-Tag Ratio",
"abstract": "We describe a method to extract content text from diverse Web pages by using the HTML document's text-to-tag ratio rather than specific HTML cues that may not be constant across various Web pages. We describe how to compute the text-to-tag ratio on a line-by-line basis and then cluster the results into content and non-content areas. With this approach we then show surprisingly high levels of recall for all levels of precision, and a large space savings.",
"corpus_id": 5663752,
"score": 1
},
{
"doc_id": "205451192",
"title": "Genetic wrappers for feature selection in decision tree induction and variable ordering in Bayesian network structure learning",
"abstract": "In this paper, we address the automated tuning of input specification for supervised inductive learning and develop combinatorial optimization solutions for two such tuning problems. First, we present a framework for selection and reordering of input variables to reduce generalization error in classification and probabilistic inference. One purpose of selection is to control overfitting using validation set accuracy as a criterion for relevance. Similarly, some inductive learning algorithms, such as greedy algorithms for learning probabilistic networks, are sensitive to the evaluation order of variables. We design a generic fitness function for validation of input specification, then use it to develop two genetic algorithm wrappers: one for the variable selection problem for decision tree inducers and one for the variable ordering problem for Bayesian network structure learning. We evaluate the wrappers, using real-world data for the selection wrapper and synthetic data for both, and discuss their limitations and generalizability to other inducers.",
"corpus_id": 205451192,
"score": 1
},
{
"doc_id": "11098618",
"title": "Relational Graphical Models of Computational Workflows for Data Mining",
"abstract": "Collaborative recommendation is the problem of analyzing the content of an information retrieval system and actions of its users, to predict additional topics or products a new user may find useful. Developing this capability poses several challenges to machine learning and reasoning under uncertainty. Recent systems such as CiteSeer [1] have succeeded in providing some specialized but comprehensive indices of full documents, but the kind of probabilistic models used in such indexing do not extend easily to information Grid databases and computational Grid workflows. The collection of user data from Grid portals [4] provides a test bed for the underlying IR technology, including learning and inference systems. To model workflows created using the TAVERNA editor [3] and SCUFL description language, the DESCRIBER system, shown in Figure 1, applies score-based structure learning algorithms, including Bayesian model selection and greedy search (cf. the K2 algorithm) adapted to relational graphical models. Figure 2 illustrates how the decision support front-end of DESCRIBER interacts with modules that learn and reason using probabilistic relational models,. The purpose is to discover interrelationships among, and thereby recommend, components used in workflows developed by other Grid users.",
"corpus_id": 11098618,
"score": 1
},
{
"doc_id": "39997128",
"title": "The Clusnet Algorithm And Time Series Prediction",
"abstract": "This paper describes a novel neural network architecture named ClusNet. This network is designed to study the trade-offs between the simplicity of instance-based methods and the accuracy of the more computational intensive learning methods. The features that make this network different from existing learning algorithms are outlined. A simple proof of convergence of the ClusNet algorithm is given. Experimental results showing the convergence of the algorithm on a specific problem is also presented. In this paper, ClusNet is applied to predict the temporal continuation of the Mackey-Glass chaotic time series. A comparison between the results obtained with ClusNet and other neural network algorithms is made. For example, ClusNet requires one-tenth the computing resources of the instance-based local linear method for this application while achieving comparable accuracy in this task. The sensitivity of ClusNet prediction accuracies on specific clustering algorithms is examined for an application. The simplicity and fast convergence of ClusNet makes it ideal as a rapid prototyping tool for applications where on-line learning is required.",
"corpus_id": 39997128,
"score": 0
},
{
"doc_id": "38525575",
"title": "Parameter significance estimation and financial prediction",
"abstract": "This paper deals with the problem of parameter significance estimation, and its application to currency exchange rate prediction. The basic problem is that over the years, practitioners in the field of financial engineering have developed dozens of technical and fundamental indicators on the basis of which they try to predict financial time series. The practitioners are now faced with the problem of finding out which combinations of those indicators are most significant or relevant, and how their significance changes over time. The authors propose a novel neural architecture calledSupNet for estimating the significance of various parameters. The methodology is based on the principle of penalizing those features that are the largest contributors to the error term. Two algorithms based on this principle are proposed. This approach is different from related methodologies, which are based on the principle of removing parameters with the least significance. The proposed methodology is demonstrated on the next day returns of the DM-US currency exchange rate, and promising results are obtained.",
"corpus_id": 38525575,
"score": 0
}
] |
arnetminer | {
"doc_id": "7219902",
"title": "Bayesian Network Models for Generation of Crisis Management Training Scenarios",
"abstract": "We present a noisy-OR Bayesian network model for simulation-based training, and an efficient search-based algorithm for automatic synthesis of plausible training scenarios from constraint specifications. This randomized algorithm for approximate causal inference is shown to outperform other randomized methods, such as those based on perturbation of the maximally plausible scenario. It has the added advantage of being able to generate acceptable scenarios (based on a maximum penalized likelihood criterion) faster than human subject matter experts, and with greater diversity than deterministic inference. We describe a field-tested interactive training system for crisis management and show how our model can be applied offline to produce scenario specifications. We then evaluate the performance of our automatic scenario generator and compare its results to those achieved by human instructors, stochastic simulation, and maximum likelihood inference. Finally, we discuss the applicability of our system and framework to a broader range of modeling problems for computer-assisted instruction.",
"corpus_id": 7219902
} | [
{
"doc_id": "205451192",
"title": "Genetic wrappers for feature selection in decision tree induction and variable ordering in Bayesian network structure learning",
"abstract": "In this paper, we address the automated tuning of input specification for supervised inductive learning and develop combinatorial optimization solutions for two such tuning problems. First, we present a framework for selection and reordering of input variables to reduce generalization error in classification and probabilistic inference. One purpose of selection is to control overfitting using validation set accuracy as a criterion for relevance. Similarly, some inductive learning algorithms, such as greedy algorithms for learning probabilistic networks, are sensitive to the evaluation order of variables. We design a generic fitness function for validation of input specification, then use it to develop two genetic algorithm wrappers: one for the variable selection problem for decision tree inducers and one for the variable ordering problem for Bayesian network structure learning. We evaluate the wrappers, using real-world data for the selection wrapper and synthetic data for both, and discuss their limitations and generalizability to other inducers.",
"corpus_id": 205451192,
"score": 1
},
{
"doc_id": "8303407",
"title": "Probabilistic Prediction of Protein Secondary Structure Using Causal Networks (Extended Abstract)",
"abstract": "In this paper we present a probabilistic approach to analysis and prediction of protein structure. We argue that this approach provides a flexible and convenient mechanism to perform general scientific data analysis in molecular biology. We apply our approach to an important problem in molecular biology--predicting the secondary structure of proteins--and obtain experimental results comparable to several other methods. The causal networks that we use provide a very convenient medium for the scientist to experiment with different empirical models and obtain possibly important insights about the problem being studied.",
"corpus_id": 8303407,
"score": 1
},
{
"doc_id": "5663752",
"title": "Text Extraction from the Web via Text-to-Tag Ratio",
"abstract": "We describe a method to extract content text from diverse Web pages by using the HTML document's text-to-tag ratio rather than specific HTML cues that may not be constant across various Web pages. We describe how to compute the text-to-tag ratio on a line-by-line basis and then cluster the results into content and non-content areas. With this approach we then show surprisingly high levels of recall for all levels of precision, and a large space savings.",
"corpus_id": 5663752,
"score": 1
},
{
"doc_id": "28678366",
"title": "Genetic Wrappers for Constructive Induction in High-Performance Data Mining",
"abstract": "We present an application of genetic algorithm-based design to configuration of high-level optimization systems, or wrappers, for relevance determination and constructive induction. Our system combines genetic wrappers with elicited knowledge on attribute relevance and synthesis. We discuss decision support issues in a large-scale commercial data mining project (cost prediction for multiple automobile insurance markets), and report experiments using D2K, a Java-based visual programming system for data mining and information visualization, and several commercial and research tools. Our GA system, Jenesis [HWRC00], is deployed on several network-of-workstation systems (Beowulf clusters). It achieves a linear speedup, due to a high degree of task parallelism, and improved test set accuracy, compared to decision tree learning with only constructive induction and state-space search-based wrappers [KJ97].",
"corpus_id": 28678366,
"score": 1
},
{
"doc_id": "11098618",
"title": "Relational Graphical Models of Computational Workflows for Data Mining",
"abstract": "Collaborative recommendation is the problem of analyzing the content of an information retrieval system and actions of its users, to predict additional topics or products a new user may find useful. Developing this capability poses several challenges to machine learning and reasoning under uncertainty. Recent systems such as CiteSeer [1] have succeeded in providing some specialized but comprehensive indices of full documents, but the kind of probabilistic models used in such indexing do not extend easily to information Grid databases and computational Grid workflows. The collection of user data from Grid portals [4] provides a test bed for the underlying IR technology, including learning and inference systems. To model workflows created using the TAVERNA editor [3] and SCUFL description language, the DESCRIBER system, shown in Figure 1, applies score-based structure learning algorithms, including Bayesian model selection and greedy search (cf. the K2 algorithm) adapted to relational graphical models. Figure 2 illustrates how the decision support front-end of DESCRIBER interacts with modules that learn and reason using probabilistic relational models,. The purpose is to discover interrelationships among, and thereby recommend, components used in workflows developed by other Grid users.",
"corpus_id": 11098618,
"score": 1
},
{
"doc_id": "38525575",
"title": "Parameter significance estimation and financial prediction",
"abstract": "This paper deals with the problem of parameter significance estimation, and its application to currency exchange rate prediction. The basic problem is that over the years, practitioners in the field of financial engineering have developed dozens of technical and fundamental indicators on the basis of which they try to predict financial time series. The practitioners are now faced with the problem of finding out which combinations of those indicators are most significant or relevant, and how their significance changes over time. The authors propose a novel neural architecture calledSupNet for estimating the significance of various parameters. The methodology is based on the principle of penalizing those features that are the largest contributors to the error term. Two algorithms based on this principle are proposed. This approach is different from related methodologies, which are based on the principle of removing parameters with the least significance. The proposed methodology is demonstrated on the next day returns of the DM-US currency exchange rate, and promising results are obtained.",
"corpus_id": 38525575,
"score": 0
},
{
"doc_id": "39997128",
"title": "The Clusnet Algorithm And Time Series Prediction",
"abstract": "This paper describes a novel neural network architecture named ClusNet. This network is designed to study the trade-offs between the simplicity of instance-based methods and the accuracy of the more computational intensive learning methods. The features that make this network different from existing learning algorithms are outlined. A simple proof of convergence of the ClusNet algorithm is given. Experimental results showing the convergence of the algorithm on a specific problem is also presented. In this paper, ClusNet is applied to predict the temporal continuation of the Mackey-Glass chaotic time series. A comparison between the results obtained with ClusNet and other neural network algorithms is made. For example, ClusNet requires one-tenth the computing resources of the instance-based local linear method for this application while achieving comparable accuracy in this task. The sensitivity of ClusNet prediction accuracies on specific clustering algorithms is examined for an application. The simplicity and fast convergence of ClusNet makes it ideal as a rapid prototyping tool for applications where on-line learning is required.",
"corpus_id": 39997128,
"score": 0
}
] |
arnetminer | {
"doc_id": "14472222",
"title": "An ACO Algorithm for the Most Probable Explanation Problem",
"abstract": "We describe an Ant Colony Optimization (ACO) algorithm, ANT-MPE, for the most probable explanation problem in Bayesian network inference After tuning its parameters settings, we compare ANT-MPE with four other sampling and local search-based approximate algorithms: Gibbs Sampling, Forward Sampling, Multistart Hillclimbing, and Tabu Search Experimental results on both artificial and real networks show that in general ANT-MPE outperforms all other algorithms, but on networks with unskewed distributions local search algorithms are slightly better The result reveals the nature of ACO as a combination of both sampling and local search It helps us to understand ACO better, and, more important, it also suggests a possible way to improve ACO.",
"corpus_id": 14472222
} | [
{
"doc_id": "11098618",
"title": "Relational Graphical Models of Computational Workflows for Data Mining",
"abstract": "Collaborative recommendation is the problem of analyzing the content of an information retrieval system and actions of its users, to predict additional topics or products a new user may find useful. Developing this capability poses several challenges to machine learning and reasoning under uncertainty. Recent systems such as CiteSeer [1] have succeeded in providing some specialized but comprehensive indices of full documents, but the kind of probabilistic models used in such indexing do not extend easily to information Grid databases and computational Grid workflows. The collection of user data from Grid portals [4] provides a test bed for the underlying IR technology, including learning and inference systems. To model workflows created using the TAVERNA editor [3] and SCUFL description language, the DESCRIBER system, shown in Figure 1, applies score-based structure learning algorithms, including Bayesian model selection and greedy search (cf. the K2 algorithm) adapted to relational graphical models. Figure 2 illustrates how the decision support front-end of DESCRIBER interacts with modules that learn and reason using probabilistic relational models,. The purpose is to discover interrelationships among, and thereby recommend, components used in workflows developed by other Grid users.",
"corpus_id": 11098618,
"score": 1
},
{
"doc_id": "28678366",
"title": "Genetic Wrappers for Constructive Induction in High-Performance Data Mining",
"abstract": "We present an application of genetic algorithm-based design to configuration of high-level optimization systems, or wrappers, for relevance determination and constructive induction. Our system combines genetic wrappers with elicited knowledge on attribute relevance and synthesis. We discuss decision support issues in a large-scale commercial data mining project (cost prediction for multiple automobile insurance markets), and report experiments using D2K, a Java-based visual programming system for data mining and information visualization, and several commercial and research tools. Our GA system, Jenesis [HWRC00], is deployed on several network-of-workstation systems (Beowulf clusters). It achieves a linear speedup, due to a high degree of task parallelism, and improved test set accuracy, compared to decision tree learning with only constructive induction and state-space search-based wrappers [KJ97].",
"corpus_id": 28678366,
"score": 1
},
{
"doc_id": "205451192",
"title": "Genetic wrappers for feature selection in decision tree induction and variable ordering in Bayesian network structure learning",
"abstract": "In this paper, we address the automated tuning of input specification for supervised inductive learning and develop combinatorial optimization solutions for two such tuning problems. First, we present a framework for selection and reordering of input variables to reduce generalization error in classification and probabilistic inference. One purpose of selection is to control overfitting using validation set accuracy as a criterion for relevance. Similarly, some inductive learning algorithms, such as greedy algorithms for learning probabilistic networks, are sensitive to the evaluation order of variables. We design a generic fitness function for validation of input specification, then use it to develop two genetic algorithm wrappers: one for the variable selection problem for decision tree inducers and one for the variable ordering problem for Bayesian network structure learning. We evaluate the wrappers, using real-world data for the selection wrapper and synthetic data for both, and discuss their limitations and generalizability to other inducers.",
"corpus_id": 205451192,
"score": 1
},
{
"doc_id": "5663752",
"title": "Text Extraction from the Web via Text-to-Tag Ratio",
"abstract": "We describe a method to extract content text from diverse Web pages by using the HTML document's text-to-tag ratio rather than specific HTML cues that may not be constant across various Web pages. We describe how to compute the text-to-tag ratio on a line-by-line basis and then cluster the results into content and non-content areas. With this approach we then show surprisingly high levels of recall for all levels of precision, and a large space savings.",
"corpus_id": 5663752,
"score": 1
},
{
"doc_id": "8303407",
"title": "Probabilistic Prediction of Protein Secondary Structure Using Causal Networks (Extended Abstract)",
"abstract": "In this paper we present a probabilistic approach to analysis and prediction of protein structure. We argue that this approach provides a flexible and convenient mechanism to perform general scientific data analysis in molecular biology. We apply our approach to an important problem in molecular biology--predicting the secondary structure of proteins--and obtain experimental results comparable to several other methods. The causal networks that we use provide a very convenient medium for the scientist to experiment with different empirical models and obtain possibly important insights about the problem being studied.",
"corpus_id": 8303407,
"score": 1
},
{
"doc_id": "38525575",
"title": "Parameter significance estimation and financial prediction",
"abstract": "This paper deals with the problem of parameter significance estimation, and its application to currency exchange rate prediction. The basic problem is that over the years, practitioners in the field of financial engineering have developed dozens of technical and fundamental indicators on the basis of which they try to predict financial time series. The practitioners are now faced with the problem of finding out which combinations of those indicators are most significant or relevant, and how their significance changes over time. The authors propose a novel neural architecture calledSupNet for estimating the significance of various parameters. The methodology is based on the principle of penalizing those features that are the largest contributors to the error term. Two algorithms based on this principle are proposed. This approach is different from related methodologies, which are based on the principle of removing parameters with the least significance. The proposed methodology is demonstrated on the next day returns of the DM-US currency exchange rate, and promising results are obtained.",
"corpus_id": 38525575,
"score": 0
},
{
"doc_id": "39997128",
"title": "The Clusnet Algorithm And Time Series Prediction",
"abstract": "This paper describes a novel neural network architecture named ClusNet. This network is designed to study the trade-offs between the simplicity of instance-based methods and the accuracy of the more computational intensive learning methods. The features that make this network different from existing learning algorithms are outlined. A simple proof of convergence of the ClusNet algorithm is given. Experimental results showing the convergence of the algorithm on a specific problem is also presented. In this paper, ClusNet is applied to predict the temporal continuation of the Mackey-Glass chaotic time series. A comparison between the results obtained with ClusNet and other neural network algorithms is made. For example, ClusNet requires one-tenth the computing resources of the instance-based local linear method for this application while achieving comparable accuracy in this task. The sensitivity of ClusNet prediction accuracies on specific clustering algorithms is examined for an application. The simplicity and fast convergence of ClusNet makes it ideal as a rapid prototyping tool for applications where on-line learning is required.",
"corpus_id": 39997128,
"score": 0
}
] |
arnetminer | {
"doc_id": "5371240",
"title": "A machine learning approach to algorithm selection for \n$\\mathcal{NP}$\n-hard optimization problems: a case study on the MPE problem",
"abstract": "Abstract\nGiven one instance of an \n$\\mathcal{NP}$\n-hard optimization problem, can we tell in advance whether it is exactly solvable or not? If it is not, can we predict which approximate algorithm is the best to solve it? Since the behavior of most approximate, randomized, and heuristic search algorithms for \n$\\mathcal{NP}$\n-hard problems is usually very difficult to characterize analytically, researchers have turned to experimental methods in order to answer these questions. In this paper we present a machine learning-based approach to address the above questions. Models induced from algorithmic performance data can represent the knowledge of how algorithmic performance depends on some easy-to-compute problem instance characteristics. Using these models, we can estimate approximately whether an input instance is exactly solvable or not. Furthermore, when it is classified as exactly unsolvable, we can select the best approximate algorithm for it among a list of candidates. In this paper we use the MPE (most probable explanation) problem in probabilistic inference as a case study to validate the proposed methodology. Our experimental results show that the machine learning-based algorithm selection system can integrate both exact and inexact algorithms and provide the best overall performance comparing to any single candidate algorithm.\n",
"corpus_id": 5371240
} | [
{
"doc_id": "11098618",
"title": "Relational Graphical Models of Computational Workflows for Data Mining",
"abstract": "Collaborative recommendation is the problem of analyzing the content of an information retrieval system and actions of its users, to predict additional topics or products a new user may find useful. Developing this capability poses several challenges to machine learning and reasoning under uncertainty. Recent systems such as CiteSeer [1] have succeeded in providing some specialized but comprehensive indices of full documents, but the kind of probabilistic models used in such indexing do not extend easily to information Grid databases and computational Grid workflows. The collection of user data from Grid portals [4] provides a test bed for the underlying IR technology, including learning and inference systems. To model workflows created using the TAVERNA editor [3] and SCUFL description language, the DESCRIBER system, shown in Figure 1, applies score-based structure learning algorithms, including Bayesian model selection and greedy search (cf. the K2 algorithm) adapted to relational graphical models. Figure 2 illustrates how the decision support front-end of DESCRIBER interacts with modules that learn and reason using probabilistic relational models,. The purpose is to discover interrelationships among, and thereby recommend, components used in workflows developed by other Grid users.",
"corpus_id": 11098618,
"score": 1
},
{
"doc_id": "28678366",
"title": "Genetic Wrappers for Constructive Induction in High-Performance Data Mining",
"abstract": "We present an application of genetic algorithm-based design to configuration of high-level optimization systems, or wrappers, for relevance determination and constructive induction. Our system combines genetic wrappers with elicited knowledge on attribute relevance and synthesis. We discuss decision support issues in a large-scale commercial data mining project (cost prediction for multiple automobile insurance markets), and report experiments using D2K, a Java-based visual programming system for data mining and information visualization, and several commercial and research tools. Our GA system, Jenesis [HWRC00], is deployed on several network-of-workstation systems (Beowulf clusters). It achieves a linear speedup, due to a high degree of task parallelism, and improved test set accuracy, compared to decision tree learning with only constructive induction and state-space search-based wrappers [KJ97].",
"corpus_id": 28678366,
"score": 1
},
{
"doc_id": "5663752",
"title": "Text Extraction from the Web via Text-to-Tag Ratio",
"abstract": "We describe a method to extract content text from diverse Web pages by using the HTML document's text-to-tag ratio rather than specific HTML cues that may not be constant across various Web pages. We describe how to compute the text-to-tag ratio on a line-by-line basis and then cluster the results into content and non-content areas. With this approach we then show surprisingly high levels of recall for all levels of precision, and a large space savings.",
"corpus_id": 5663752,
"score": 1
},
{
"doc_id": "8303407",
"title": "Probabilistic Prediction of Protein Secondary Structure Using Causal Networks (Extended Abstract)",
"abstract": "In this paper we present a probabilistic approach to analysis and prediction of protein structure. We argue that this approach provides a flexible and convenient mechanism to perform general scientific data analysis in molecular biology. We apply our approach to an important problem in molecular biology--predicting the secondary structure of proteins--and obtain experimental results comparable to several other methods. The causal networks that we use provide a very convenient medium for the scientist to experiment with different empirical models and obtain possibly important insights about the problem being studied.",
"corpus_id": 8303407,
"score": 1
},
{
"doc_id": "205451192",
"title": "Genetic wrappers for feature selection in decision tree induction and variable ordering in Bayesian network structure learning",
"abstract": "In this paper, we address the automated tuning of input specification for supervised inductive learning and develop combinatorial optimization solutions for two such tuning problems. First, we present a framework for selection and reordering of input variables to reduce generalization error in classification and probabilistic inference. One purpose of selection is to control overfitting using validation set accuracy as a criterion for relevance. Similarly, some inductive learning algorithms, such as greedy algorithms for learning probabilistic networks, are sensitive to the evaluation order of variables. We design a generic fitness function for validation of input specification, then use it to develop two genetic algorithm wrappers: one for the variable selection problem for decision tree inducers and one for the variable ordering problem for Bayesian network structure learning. We evaluate the wrappers, using real-world data for the selection wrapper and synthetic data for both, and discuss their limitations and generalizability to other inducers.",
"corpus_id": 205451192,
"score": 1
},
{
"doc_id": "38525575",
"title": "Parameter significance estimation and financial prediction",
"abstract": "This paper deals with the problem of parameter significance estimation, and its application to currency exchange rate prediction. The basic problem is that over the years, practitioners in the field of financial engineering have developed dozens of technical and fundamental indicators on the basis of which they try to predict financial time series. The practitioners are now faced with the problem of finding out which combinations of those indicators are most significant or relevant, and how their significance changes over time. The authors propose a novel neural architecture calledSupNet for estimating the significance of various parameters. The methodology is based on the principle of penalizing those features that are the largest contributors to the error term. Two algorithms based on this principle are proposed. This approach is different from related methodologies, which are based on the principle of removing parameters with the least significance. The proposed methodology is demonstrated on the next day returns of the DM-US currency exchange rate, and promising results are obtained.",
"corpus_id": 38525575,
"score": 0
},
{
"doc_id": "39997128",
"title": "The Clusnet Algorithm And Time Series Prediction",
"abstract": "This paper describes a novel neural network architecture named ClusNet. This network is designed to study the trade-offs between the simplicity of instance-based methods and the accuracy of the more computational intensive learning methods. The features that make this network different from existing learning algorithms are outlined. A simple proof of convergence of the ClusNet algorithm is given. Experimental results showing the convergence of the algorithm on a specific problem is also presented. In this paper, ClusNet is applied to predict the temporal continuation of the Mackey-Glass chaotic time series. A comparison between the results obtained with ClusNet and other neural network algorithms is made. For example, ClusNet requires one-tenth the computing resources of the instance-based local linear method for this application while achieving comparable accuracy in this task. The sensitivity of ClusNet prediction accuracies on specific clustering algorithms is examined for an application. The simplicity and fast convergence of ClusNet makes it ideal as a rapid prototyping tool for applications where on-line learning is required.",
"corpus_id": 39997128,
"score": 0
}
] |
arnetminer | {
"doc_id": "17680761",
"title": "Bi-relational Network Analysis Using a Fast Random Walk with Restart",
"abstract": "Identification of nodes relevant to a given node in a relational network is a basic problem in network analysis with great practical importance. Most existing network analysis algorithms utilize one single relation to define relevancy among nodes. However, in real world applications multiple relationships exist between nodes in a network. Therefore, network analysis algorithms that can make use of more than one relation to identify the relevance set for a node are needed. In this paper, we show how the Random Walk with Restart (RWR) approach can be used to study relevancy in a bi-relational network from the bibliographic domain, and show that making use of two relations results in better results as compared to approaches that use a single relation. As relational networks can be very large, we also propose a fast implementation for RWR by adapting an existing Iterative Aggregation and Disaggregation (IAD) approach. The IAD-based RWR exploits the block-wise structure of real world networks. Experimental results show significant increase in running time for the IAD-based RWR compared to the traditional power method based RWR.",
"corpus_id": 17680761
} | [
{
"doc_id": "205451192",
"title": "Genetic wrappers for feature selection in decision tree induction and variable ordering in Bayesian network structure learning",
"abstract": "In this paper, we address the automated tuning of input specification for supervised inductive learning and develop combinatorial optimization solutions for two such tuning problems. First, we present a framework for selection and reordering of input variables to reduce generalization error in classification and probabilistic inference. One purpose of selection is to control overfitting using validation set accuracy as a criterion for relevance. Similarly, some inductive learning algorithms, such as greedy algorithms for learning probabilistic networks, are sensitive to the evaluation order of variables. We design a generic fitness function for validation of input specification, then use it to develop two genetic algorithm wrappers: one for the variable selection problem for decision tree inducers and one for the variable ordering problem for Bayesian network structure learning. We evaluate the wrappers, using real-world data for the selection wrapper and synthetic data for both, and discuss their limitations and generalizability to other inducers.",
"corpus_id": 205451192,
"score": 1
},
{
"doc_id": "11098618",
"title": "Relational Graphical Models of Computational Workflows for Data Mining",
"abstract": "Collaborative recommendation is the problem of analyzing the content of an information retrieval system and actions of its users, to predict additional topics or products a new user may find useful. Developing this capability poses several challenges to machine learning and reasoning under uncertainty. Recent systems such as CiteSeer [1] have succeeded in providing some specialized but comprehensive indices of full documents, but the kind of probabilistic models used in such indexing do not extend easily to information Grid databases and computational Grid workflows. The collection of user data from Grid portals [4] provides a test bed for the underlying IR technology, including learning and inference systems. To model workflows created using the TAVERNA editor [3] and SCUFL description language, the DESCRIBER system, shown in Figure 1, applies score-based structure learning algorithms, including Bayesian model selection and greedy search (cf. the K2 algorithm) adapted to relational graphical models. Figure 2 illustrates how the decision support front-end of DESCRIBER interacts with modules that learn and reason using probabilistic relational models,. The purpose is to discover interrelationships among, and thereby recommend, components used in workflows developed by other Grid users.",
"corpus_id": 11098618,
"score": 1
},
{
"doc_id": "28678366",
"title": "Genetic Wrappers for Constructive Induction in High-Performance Data Mining",
"abstract": "We present an application of genetic algorithm-based design to configuration of high-level optimization systems, or wrappers, for relevance determination and constructive induction. Our system combines genetic wrappers with elicited knowledge on attribute relevance and synthesis. We discuss decision support issues in a large-scale commercial data mining project (cost prediction for multiple automobile insurance markets), and report experiments using D2K, a Java-based visual programming system for data mining and information visualization, and several commercial and research tools. Our GA system, Jenesis [HWRC00], is deployed on several network-of-workstation systems (Beowulf clusters). It achieves a linear speedup, due to a high degree of task parallelism, and improved test set accuracy, compared to decision tree learning with only constructive induction and state-space search-based wrappers [KJ97].",
"corpus_id": 28678366,
"score": 1
},
{
"doc_id": "8303407",
"title": "Probabilistic Prediction of Protein Secondary Structure Using Causal Networks (Extended Abstract)",
"abstract": "In this paper we present a probabilistic approach to analysis and prediction of protein structure. We argue that this approach provides a flexible and convenient mechanism to perform general scientific data analysis in molecular biology. We apply our approach to an important problem in molecular biology--predicting the secondary structure of proteins--and obtain experimental results comparable to several other methods. The causal networks that we use provide a very convenient medium for the scientist to experiment with different empirical models and obtain possibly important insights about the problem being studied.",
"corpus_id": 8303407,
"score": 1
},
{
"doc_id": "5663752",
"title": "Text Extraction from the Web via Text-to-Tag Ratio",
"abstract": "We describe a method to extract content text from diverse Web pages by using the HTML document's text-to-tag ratio rather than specific HTML cues that may not be constant across various Web pages. We describe how to compute the text-to-tag ratio on a line-by-line basis and then cluster the results into content and non-content areas. With this approach we then show surprisingly high levels of recall for all levels of precision, and a large space savings.",
"corpus_id": 5663752,
"score": 1
},
{
"doc_id": "38525575",
"title": "Parameter significance estimation and financial prediction",
"abstract": "This paper deals with the problem of parameter significance estimation, and its application to currency exchange rate prediction. The basic problem is that over the years, practitioners in the field of financial engineering have developed dozens of technical and fundamental indicators on the basis of which they try to predict financial time series. The practitioners are now faced with the problem of finding out which combinations of those indicators are most significant or relevant, and how their significance changes over time. The authors propose a novel neural architecture calledSupNet for estimating the significance of various parameters. The methodology is based on the principle of penalizing those features that are the largest contributors to the error term. Two algorithms based on this principle are proposed. This approach is different from related methodologies, which are based on the principle of removing parameters with the least significance. The proposed methodology is demonstrated on the next day returns of the DM-US currency exchange rate, and promising results are obtained.",
"corpus_id": 38525575,
"score": 0
},
{
"doc_id": "39997128",
"title": "The Clusnet Algorithm And Time Series Prediction",
"abstract": "This paper describes a novel neural network architecture named ClusNet. This network is designed to study the trade-offs between the simplicity of instance-based methods and the accuracy of the more computational intensive learning methods. The features that make this network different from existing learning algorithms are outlined. A simple proof of convergence of the ClusNet algorithm is given. Experimental results showing the convergence of the algorithm on a specific problem is also presented. In this paper, ClusNet is applied to predict the temporal continuation of the Mackey-Glass chaotic time series. A comparison between the results obtained with ClusNet and other neural network algorithms is made. For example, ClusNet requires one-tenth the computing resources of the instance-based local linear method for this application while achieving comparable accuracy in this task. The sensitivity of ClusNet prediction accuracies on specific clustering algorithms is examined for an application. The simplicity and fast convergence of ClusNet makes it ideal as a rapid prototyping tool for applications where on-line learning is required.",
"corpus_id": 39997128,
"score": 0
}
] |
arnetminer | {
"doc_id": "11098618",
"title": "Relational Graphical Models of Computational Workflows for Data Mining",
"abstract": "Collaborative recommendation is the problem of analyzing the content of an information retrieval system and actions of its users, to predict additional topics or products a new user may find useful. Developing this capability poses several challenges to machine learning and reasoning under uncertainty. Recent systems such as CiteSeer [1] have succeeded in providing some specialized but comprehensive indices of full documents, but the kind of probabilistic models used in such indexing do not extend easily to information Grid databases and computational Grid workflows. The collection of user data from Grid portals [4] provides a test bed for the underlying IR technology, including learning and inference systems. To model workflows created using the TAVERNA editor [3] and SCUFL description language, the DESCRIBER system, shown in Figure 1, applies score-based structure learning algorithms, including Bayesian model selection and greedy search (cf. the K2 algorithm) adapted to relational graphical models. Figure 2 illustrates how the decision support front-end of DESCRIBER interacts with modules that learn and reason using probabilistic relational models,. The purpose is to discover interrelationships among, and thereby recommend, components used in workflows developed by other Grid users.",
"corpus_id": 11098618
} | [
{
"doc_id": "205451192",
"title": "Genetic wrappers for feature selection in decision tree induction and variable ordering in Bayesian network structure learning",
"abstract": "In this paper, we address the automated tuning of input specification for supervised inductive learning and develop combinatorial optimization solutions for two such tuning problems. First, we present a framework for selection and reordering of input variables to reduce generalization error in classification and probabilistic inference. One purpose of selection is to control overfitting using validation set accuracy as a criterion for relevance. Similarly, some inductive learning algorithms, such as greedy algorithms for learning probabilistic networks, are sensitive to the evaluation order of variables. We design a generic fitness function for validation of input specification, then use it to develop two genetic algorithm wrappers: one for the variable selection problem for decision tree inducers and one for the variable ordering problem for Bayesian network structure learning. We evaluate the wrappers, using real-world data for the selection wrapper and synthetic data for both, and discuss their limitations and generalizability to other inducers.",
"corpus_id": 205451192,
"score": 1
},
{
"doc_id": "8590688",
"title": "Evolutionary tree genetic programming",
"abstract": "We introduce a clustering-based method of subpopulation management in genetic programming (GP) called Evolutionary Tree Genetic Programming (ETGP). The biological motivation behind this work is the observation that the natural evolution follows a tree-like phylogenetic pattern. Our goal is to simulate similar behavior in artificial evolutionary systems such as GP. To test our model we use three common GP benchmarks: the Ant Algorithm, 11-Multiplexer, and Parity problems.The performance of the ETGP system is empirically compared to those of the GP system. Code size and variance are consistently reduced by a small but statistically significant percentage, resulting in a slight speedup in the Ant and 11-Multiplexer problems, while the same comparisons on the Parity problem are inconclusive.",
"corpus_id": 8590688,
"score": 1
},
{
"doc_id": "28678366",
"title": "Genetic Wrappers for Constructive Induction in High-Performance Data Mining",
"abstract": "We present an application of genetic algorithm-based design to configuration of high-level optimization systems, or wrappers, for relevance determination and constructive induction. Our system combines genetic wrappers with elicited knowledge on attribute relevance and synthesis. We discuss decision support issues in a large-scale commercial data mining project (cost prediction for multiple automobile insurance markets), and report experiments using D2K, a Java-based visual programming system for data mining and information visualization, and several commercial and research tools. Our GA system, Jenesis [HWRC00], is deployed on several network-of-workstation systems (Beowulf clusters). It achieves a linear speedup, due to a high degree of task parallelism, and improved test set accuracy, compared to decision tree learning with only constructive induction and state-space search-based wrappers [KJ97].",
"corpus_id": 28678366,
"score": 1
},
{
"doc_id": "5663752",
"title": "Text Extraction from the Web via Text-to-Tag Ratio",
"abstract": "We describe a method to extract content text from diverse Web pages by using the HTML document's text-to-tag ratio rather than specific HTML cues that may not be constant across various Web pages. We describe how to compute the text-to-tag ratio on a line-by-line basis and then cluster the results into content and non-content areas. With this approach we then show surprisingly high levels of recall for all levels of precision, and a large space savings.",
"corpus_id": 5663752,
"score": 1
},
{
"doc_id": "8303407",
"title": "Probabilistic Prediction of Protein Secondary Structure Using Causal Networks (Extended Abstract)",
"abstract": "In this paper we present a probabilistic approach to analysis and prediction of protein structure. We argue that this approach provides a flexible and convenient mechanism to perform general scientific data analysis in molecular biology. We apply our approach to an important problem in molecular biology--predicting the secondary structure of proteins--and obtain experimental results comparable to several other methods. The causal networks that we use provide a very convenient medium for the scientist to experiment with different empirical models and obtain possibly important insights about the problem being studied.",
"corpus_id": 8303407,
"score": 1
},
{
"doc_id": "38525575",
"title": "Parameter significance estimation and financial prediction",
"abstract": "This paper deals with the problem of parameter significance estimation, and its application to currency exchange rate prediction. The basic problem is that over the years, practitioners in the field of financial engineering have developed dozens of technical and fundamental indicators on the basis of which they try to predict financial time series. The practitioners are now faced with the problem of finding out which combinations of those indicators are most significant or relevant, and how their significance changes over time. The authors propose a novel neural architecture calledSupNet for estimating the significance of various parameters. The methodology is based on the principle of penalizing those features that are the largest contributors to the error term. Two algorithms based on this principle are proposed. This approach is different from related methodologies, which are based on the principle of removing parameters with the least significance. The proposed methodology is demonstrated on the next day returns of the DM-US currency exchange rate, and promising results are obtained.",
"corpus_id": 38525575,
"score": 0
},
{
"doc_id": "39997128",
"title": "The Clusnet Algorithm And Time Series Prediction",
"abstract": "This paper describes a novel neural network architecture named ClusNet. This network is designed to study the trade-offs between the simplicity of instance-based methods and the accuracy of the more computational intensive learning methods. The features that make this network different from existing learning algorithms are outlined. A simple proof of convergence of the ClusNet algorithm is given. Experimental results showing the convergence of the algorithm on a specific problem is also presented. In this paper, ClusNet is applied to predict the temporal continuation of the Mackey-Glass chaotic time series. A comparison between the results obtained with ClusNet and other neural network algorithms is made. For example, ClusNet requires one-tenth the computing resources of the instance-based local linear method for this application while achieving comparable accuracy in this task. The sensitivity of ClusNet prediction accuracies on specific clustering algorithms is examined for an application. The simplicity and fast convergence of ClusNet makes it ideal as a rapid prototyping tool for applications where on-line learning is required.",
"corpus_id": 39997128,
"score": 0
}
] |
arnetminer | {
"doc_id": "44832335",
"title": "Dynamic System Prediction using Temporal Artificial Neural Networks and Multi-Objective Genetic Algorithms",
"abstract": "We investigate the problem of learning to predict dynamical systems that exhibit switching behavior as a function of exogenous variables. The family of dynamical systems we present is significant to the modeling of gene expression and organismal response to environmental conditions and change. We first develop a framework for learning to predict events such as state or phase changes as a function of multiple dynamic variables. Next, we consider the more challenging problem of identifying parameters and the functional form of the dynamical systems ab initio. We then survey several applicable representations and inductive learning techniques for each task. We then describe a comparative experiment in learning a particular instantiation of the dynamical system for a plant genome modeling application. Finally, we evaluate the results using predictive accuracy of the differential equation parameters or accuracy on the event prediction task; consider the ramifications for modeling the metabolic processes of living systems; and outline future challenges such as multi-objective optimization and finding relevant exogenous and latent variables.",
"corpus_id": 44832335
} | [
{
"doc_id": "5663752",
"title": "Text Extraction from the Web via Text-to-Tag Ratio",
"abstract": "We describe a method to extract content text from diverse Web pages by using the HTML document's text-to-tag ratio rather than specific HTML cues that may not be constant across various Web pages. We describe how to compute the text-to-tag ratio on a line-by-line basis and then cluster the results into content and non-content areas. With this approach we then show surprisingly high levels of recall for all levels of precision, and a large space savings.",
"corpus_id": 5663752,
"score": 1
},
{
"doc_id": "11098618",
"title": "Relational Graphical Models of Computational Workflows for Data Mining",
"abstract": "Collaborative recommendation is the problem of analyzing the content of an information retrieval system and actions of its users, to predict additional topics or products a new user may find useful. Developing this capability poses several challenges to machine learning and reasoning under uncertainty. Recent systems such as CiteSeer [1] have succeeded in providing some specialized but comprehensive indices of full documents, but the kind of probabilistic models used in such indexing do not extend easily to information Grid databases and computational Grid workflows. The collection of user data from Grid portals [4] provides a test bed for the underlying IR technology, including learning and inference systems. To model workflows created using the TAVERNA editor [3] and SCUFL description language, the DESCRIBER system, shown in Figure 1, applies score-based structure learning algorithms, including Bayesian model selection and greedy search (cf. the K2 algorithm) adapted to relational graphical models. Figure 2 illustrates how the decision support front-end of DESCRIBER interacts with modules that learn and reason using probabilistic relational models,. The purpose is to discover interrelationships among, and thereby recommend, components used in workflows developed by other Grid users.",
"corpus_id": 11098618,
"score": 1
},
{
"doc_id": "205451192",
"title": "Genetic wrappers for feature selection in decision tree induction and variable ordering in Bayesian network structure learning",
"abstract": "In this paper, we address the automated tuning of input specification for supervised inductive learning and develop combinatorial optimization solutions for two such tuning problems. First, we present a framework for selection and reordering of input variables to reduce generalization error in classification and probabilistic inference. One purpose of selection is to control overfitting using validation set accuracy as a criterion for relevance. Similarly, some inductive learning algorithms, such as greedy algorithms for learning probabilistic networks, are sensitive to the evaluation order of variables. We design a generic fitness function for validation of input specification, then use it to develop two genetic algorithm wrappers: one for the variable selection problem for decision tree inducers and one for the variable ordering problem for Bayesian network structure learning. We evaluate the wrappers, using real-world data for the selection wrapper and synthetic data for both, and discuss their limitations and generalizability to other inducers.",
"corpus_id": 205451192,
"score": 1
},
{
"doc_id": "8303407",
"title": "Probabilistic Prediction of Protein Secondary Structure Using Causal Networks (Extended Abstract)",
"abstract": "In this paper we present a probabilistic approach to analysis and prediction of protein structure. We argue that this approach provides a flexible and convenient mechanism to perform general scientific data analysis in molecular biology. We apply our approach to an important problem in molecular biology--predicting the secondary structure of proteins--and obtain experimental results comparable to several other methods. The causal networks that we use provide a very convenient medium for the scientist to experiment with different empirical models and obtain possibly important insights about the problem being studied.",
"corpus_id": 8303407,
"score": 1
},
{
"doc_id": "28678366",
"title": "Genetic Wrappers for Constructive Induction in High-Performance Data Mining",
"abstract": "We present an application of genetic algorithm-based design to configuration of high-level optimization systems, or wrappers, for relevance determination and constructive induction. Our system combines genetic wrappers with elicited knowledge on attribute relevance and synthesis. We discuss decision support issues in a large-scale commercial data mining project (cost prediction for multiple automobile insurance markets), and report experiments using D2K, a Java-based visual programming system for data mining and information visualization, and several commercial and research tools. Our GA system, Jenesis [HWRC00], is deployed on several network-of-workstation systems (Beowulf clusters). It achieves a linear speedup, due to a high degree of task parallelism, and improved test set accuracy, compared to decision tree learning with only constructive induction and state-space search-based wrappers [KJ97].",
"corpus_id": 28678366,
"score": 1
},
{
"doc_id": "39997128",
"title": "The Clusnet Algorithm And Time Series Prediction",
"abstract": "This paper describes a novel neural network architecture named ClusNet. This network is designed to study the trade-offs between the simplicity of instance-based methods and the accuracy of the more computational intensive learning methods. The features that make this network different from existing learning algorithms are outlined. A simple proof of convergence of the ClusNet algorithm is given. Experimental results showing the convergence of the algorithm on a specific problem is also presented. In this paper, ClusNet is applied to predict the temporal continuation of the Mackey-Glass chaotic time series. A comparison between the results obtained with ClusNet and other neural network algorithms is made. For example, ClusNet requires one-tenth the computing resources of the instance-based local linear method for this application while achieving comparable accuracy in this task. The sensitivity of ClusNet prediction accuracies on specific clustering algorithms is examined for an application. The simplicity and fast convergence of ClusNet makes it ideal as a rapid prototyping tool for applications where on-line learning is required.",
"corpus_id": 39997128,
"score": 0
},
{
"doc_id": "38525575",
"title": "Parameter significance estimation and financial prediction",
"abstract": "This paper deals with the problem of parameter significance estimation, and its application to currency exchange rate prediction. The basic problem is that over the years, practitioners in the field of financial engineering have developed dozens of technical and fundamental indicators on the basis of which they try to predict financial time series. The practitioners are now faced with the problem of finding out which combinations of those indicators are most significant or relevant, and how their significance changes over time. The authors propose a novel neural architecture calledSupNet for estimating the significance of various parameters. The methodology is based on the principle of penalizing those features that are the largest contributors to the error term. Two algorithms based on this principle are proposed. This approach is different from related methodologies, which are based on the principle of removing parameters with the least significance. The proposed methodology is demonstrated on the next day returns of the DM-US currency exchange rate, and promising results are obtained.",
"corpus_id": 38525575,
"score": 0
}
] |
arnetminer | {
"doc_id": "37164529",
"title": "Protein Secondary-Structure Modeling with Probabilistic Networks",
"abstract": "In this paper we study the performance of probabilistic networks in the context of protein sequence analysis in molecular biology. Specifically, we report the results of our initial experiments applying this framework to the problem of protein secondary structure prediction. One of the main advantages of the probabilistic approach we describe here is our ability to perform detailed experiments where we can experiment with different models. We can easily perform local substitutions (mutations) and measure (probabilistically) their effect on the global structure. Window-based methods do not support such experimentation as readily. Our method is efficient both during training and during prediction, which is important in order to be able to perform many experiments with different networks. We believe that probabilistic methods are comparable to other methods in prediction quality. In addition, the predictions generated by our methods have precise quantitative semantics which is not shared by other classification methods. Specifically, all the causal and statistical independence assumptions are made explicit in our networks thereby allowing biologists to study and experiment with different causal models in a convenient manner.",
"corpus_id": 37164529
} | [
{
"doc_id": "28678366",
"title": "Genetic Wrappers for Constructive Induction in High-Performance Data Mining",
"abstract": "We present an application of genetic algorithm-based design to configuration of high-level optimization systems, or wrappers, for relevance determination and constructive induction. Our system combines genetic wrappers with elicited knowledge on attribute relevance and synthesis. We discuss decision support issues in a large-scale commercial data mining project (cost prediction for multiple automobile insurance markets), and report experiments using D2K, a Java-based visual programming system for data mining and information visualization, and several commercial and research tools. Our GA system, Jenesis [HWRC00], is deployed on several network-of-workstation systems (Beowulf clusters). It achieves a linear speedup, due to a high degree of task parallelism, and improved test set accuracy, compared to decision tree learning with only constructive induction and state-space search-based wrappers [KJ97].",
"corpus_id": 28678366,
"score": 1
},
{
"doc_id": "5663752",
"title": "Text Extraction from the Web via Text-to-Tag Ratio",
"abstract": "We describe a method to extract content text from diverse Web pages by using the HTML document's text-to-tag ratio rather than specific HTML cues that may not be constant across various Web pages. We describe how to compute the text-to-tag ratio on a line-by-line basis and then cluster the results into content and non-content areas. With this approach we then show surprisingly high levels of recall for all levels of precision, and a large space savings.",
"corpus_id": 5663752,
"score": 1
},
{
"doc_id": "11098618",
"title": "Relational Graphical Models of Computational Workflows for Data Mining",
"abstract": "Collaborative recommendation is the problem of analyzing the content of an information retrieval system and actions of its users, to predict additional topics or products a new user may find useful. Developing this capability poses several challenges to machine learning and reasoning under uncertainty. Recent systems such as CiteSeer [1] have succeeded in providing some specialized but comprehensive indices of full documents, but the kind of probabilistic models used in such indexing do not extend easily to information Grid databases and computational Grid workflows. The collection of user data from Grid portals [4] provides a test bed for the underlying IR technology, including learning and inference systems. To model workflows created using the TAVERNA editor [3] and SCUFL description language, the DESCRIBER system, shown in Figure 1, applies score-based structure learning algorithms, including Bayesian model selection and greedy search (cf. the K2 algorithm) adapted to relational graphical models. Figure 2 illustrates how the decision support front-end of DESCRIBER interacts with modules that learn and reason using probabilistic relational models,. The purpose is to discover interrelationships among, and thereby recommend, components used in workflows developed by other Grid users.",
"corpus_id": 11098618,
"score": 1
},
{
"doc_id": "205451192",
"title": "Genetic wrappers for feature selection in decision tree induction and variable ordering in Bayesian network structure learning",
"abstract": "In this paper, we address the automated tuning of input specification for supervised inductive learning and develop combinatorial optimization solutions for two such tuning problems. First, we present a framework for selection and reordering of input variables to reduce generalization error in classification and probabilistic inference. One purpose of selection is to control overfitting using validation set accuracy as a criterion for relevance. Similarly, some inductive learning algorithms, such as greedy algorithms for learning probabilistic networks, are sensitive to the evaluation order of variables. We design a generic fitness function for validation of input specification, then use it to develop two genetic algorithm wrappers: one for the variable selection problem for decision tree inducers and one for the variable ordering problem for Bayesian network structure learning. We evaluate the wrappers, using real-world data for the selection wrapper and synthetic data for both, and discuss their limitations and generalizability to other inducers.",
"corpus_id": 205451192,
"score": 1
},
{
"doc_id": "8303407",
"title": "Probabilistic Prediction of Protein Secondary Structure Using Causal Networks (Extended Abstract)",
"abstract": "In this paper we present a probabilistic approach to analysis and prediction of protein structure. We argue that this approach provides a flexible and convenient mechanism to perform general scientific data analysis in molecular biology. We apply our approach to an important problem in molecular biology--predicting the secondary structure of proteins--and obtain experimental results comparable to several other methods. The causal networks that we use provide a very convenient medium for the scientist to experiment with different empirical models and obtain possibly important insights about the problem being studied.",
"corpus_id": 8303407,
"score": 1
},
{
"doc_id": "39997128",
"title": "The Clusnet Algorithm And Time Series Prediction",
"abstract": "This paper describes a novel neural network architecture named ClusNet. This network is designed to study the trade-offs between the simplicity of instance-based methods and the accuracy of the more computational intensive learning methods. The features that make this network different from existing learning algorithms are outlined. A simple proof of convergence of the ClusNet algorithm is given. Experimental results showing the convergence of the algorithm on a specific problem is also presented. In this paper, ClusNet is applied to predict the temporal continuation of the Mackey-Glass chaotic time series. A comparison between the results obtained with ClusNet and other neural network algorithms is made. For example, ClusNet requires one-tenth the computing resources of the instance-based local linear method for this application while achieving comparable accuracy in this task. The sensitivity of ClusNet prediction accuracies on specific clustering algorithms is examined for an application. The simplicity and fast convergence of ClusNet makes it ideal as a rapid prototyping tool for applications where on-line learning is required.",
"corpus_id": 39997128,
"score": 0
},
{
"doc_id": "38525575",
"title": "Parameter significance estimation and financial prediction",
"abstract": "This paper deals with the problem of parameter significance estimation, and its application to currency exchange rate prediction. The basic problem is that over the years, practitioners in the field of financial engineering have developed dozens of technical and fundamental indicators on the basis of which they try to predict financial time series. The practitioners are now faced with the problem of finding out which combinations of those indicators are most significant or relevant, and how their significance changes over time. The authors propose a novel neural architecture calledSupNet for estimating the significance of various parameters. The methodology is based on the principle of penalizing those features that are the largest contributors to the error term. Two algorithms based on this principle are proposed. This approach is different from related methodologies, which are based on the principle of removing parameters with the least significance. The proposed methodology is demonstrated on the next day returns of the DM-US currency exchange rate, and promising results are obtained.",
"corpus_id": 38525575,
"score": 0
}
] |
arnetminer | {
"doc_id": "6700416",
"title": "A Permutation Genetic Algorithm For Variable Ordering In Learning Bayesian Networks From Data",
"abstract": "Greedy score-based algorithms for learning the structure of Bayesian networks may produce very different models depending on the order in which variables are scored, These models often vary significantly in quality when applied to inference, Unfortunately, finding the optimal ordering of inputs entails search through the permutation space of variables, Furthermore, in real-world applications of structure learning, the gold standard network is typically unknown, In this paper, we first present a genetic algorithm (GA) that uses a well-known greedy algorithm for structure learning (K2) and approximate inference by importance sampling as primitives in searching this permutation space, We then develop a flexible fitness measure based upon inferential loss given a specification of evidence, Finally, we evaluate this GA wrapper using the well-known networks Asia and ALARM and show that it is competitive with exhaustive enumeration in finding good orderings for K2, resulting in structures with low inferential loss under importance sampling.",
"corpus_id": 6700416
} | [
{
"doc_id": "8303407",
"title": "Probabilistic Prediction of Protein Secondary Structure Using Causal Networks (Extended Abstract)",
"abstract": "In this paper we present a probabilistic approach to analysis and prediction of protein structure. We argue that this approach provides a flexible and convenient mechanism to perform general scientific data analysis in molecular biology. We apply our approach to an important problem in molecular biology--predicting the secondary structure of proteins--and obtain experimental results comparable to several other methods. The causal networks that we use provide a very convenient medium for the scientist to experiment with different empirical models and obtain possibly important insights about the problem being studied.",
"corpus_id": 8303407,
"score": 1
},
{
"doc_id": "5663752",
"title": "Text Extraction from the Web via Text-to-Tag Ratio",
"abstract": "We describe a method to extract content text from diverse Web pages by using the HTML document's text-to-tag ratio rather than specific HTML cues that may not be constant across various Web pages. We describe how to compute the text-to-tag ratio on a line-by-line basis and then cluster the results into content and non-content areas. With this approach we then show surprisingly high levels of recall for all levels of precision, and a large space savings.",
"corpus_id": 5663752,
"score": 1
},
{
"doc_id": "205451192",
"title": "Genetic wrappers for feature selection in decision tree induction and variable ordering in Bayesian network structure learning",
"abstract": "In this paper, we address the automated tuning of input specification for supervised inductive learning and develop combinatorial optimization solutions for two such tuning problems. First, we present a framework for selection and reordering of input variables to reduce generalization error in classification and probabilistic inference. One purpose of selection is to control overfitting using validation set accuracy as a criterion for relevance. Similarly, some inductive learning algorithms, such as greedy algorithms for learning probabilistic networks, are sensitive to the evaluation order of variables. We design a generic fitness function for validation of input specification, then use it to develop two genetic algorithm wrappers: one for the variable selection problem for decision tree inducers and one for the variable ordering problem for Bayesian network structure learning. We evaluate the wrappers, using real-world data for the selection wrapper and synthetic data for both, and discuss their limitations and generalizability to other inducers.",
"corpus_id": 205451192,
"score": 1
},
{
"doc_id": "11098618",
"title": "Relational Graphical Models of Computational Workflows for Data Mining",
"abstract": "Collaborative recommendation is the problem of analyzing the content of an information retrieval system and actions of its users, to predict additional topics or products a new user may find useful. Developing this capability poses several challenges to machine learning and reasoning under uncertainty. Recent systems such as CiteSeer [1] have succeeded in providing some specialized but comprehensive indices of full documents, but the kind of probabilistic models used in such indexing do not extend easily to information Grid databases and computational Grid workflows. The collection of user data from Grid portals [4] provides a test bed for the underlying IR technology, including learning and inference systems. To model workflows created using the TAVERNA editor [3] and SCUFL description language, the DESCRIBER system, shown in Figure 1, applies score-based structure learning algorithms, including Bayesian model selection and greedy search (cf. the K2 algorithm) adapted to relational graphical models. Figure 2 illustrates how the decision support front-end of DESCRIBER interacts with modules that learn and reason using probabilistic relational models,. The purpose is to discover interrelationships among, and thereby recommend, components used in workflows developed by other Grid users.",
"corpus_id": 11098618,
"score": 1
},
{
"doc_id": "28678366",
"title": "Genetic Wrappers for Constructive Induction in High-Performance Data Mining",
"abstract": "We present an application of genetic algorithm-based design to configuration of high-level optimization systems, or wrappers, for relevance determination and constructive induction. Our system combines genetic wrappers with elicited knowledge on attribute relevance and synthesis. We discuss decision support issues in a large-scale commercial data mining project (cost prediction for multiple automobile insurance markets), and report experiments using D2K, a Java-based visual programming system for data mining and information visualization, and several commercial and research tools. Our GA system, Jenesis [HWRC00], is deployed on several network-of-workstation systems (Beowulf clusters). It achieves a linear speedup, due to a high degree of task parallelism, and improved test set accuracy, compared to decision tree learning with only constructive induction and state-space search-based wrappers [KJ97].",
"corpus_id": 28678366,
"score": 1
},
{
"doc_id": "38525575",
"title": "Parameter significance estimation and financial prediction",
"abstract": "This paper deals with the problem of parameter significance estimation, and its application to currency exchange rate prediction. The basic problem is that over the years, practitioners in the field of financial engineering have developed dozens of technical and fundamental indicators on the basis of which they try to predict financial time series. The practitioners are now faced with the problem of finding out which combinations of those indicators are most significant or relevant, and how their significance changes over time. The authors propose a novel neural architecture calledSupNet for estimating the significance of various parameters. The methodology is based on the principle of penalizing those features that are the largest contributors to the error term. Two algorithms based on this principle are proposed. This approach is different from related methodologies, which are based on the principle of removing parameters with the least significance. The proposed methodology is demonstrated on the next day returns of the DM-US currency exchange rate, and promising results are obtained.",
"corpus_id": 38525575,
"score": 0
},
{
"doc_id": "39997128",
"title": "The Clusnet Algorithm And Time Series Prediction",
"abstract": "This paper describes a novel neural network architecture named ClusNet. This network is designed to study the trade-offs between the simplicity of instance-based methods and the accuracy of the more computational intensive learning methods. The features that make this network different from existing learning algorithms are outlined. A simple proof of convergence of the ClusNet algorithm is given. Experimental results showing the convergence of the algorithm on a specific problem is also presented. In this paper, ClusNet is applied to predict the temporal continuation of the Mackey-Glass chaotic time series. A comparison between the results obtained with ClusNet and other neural network algorithms is made. For example, ClusNet requires one-tenth the computing resources of the instance-based local linear method for this application while achieving comparable accuracy in this task. The sensitivity of ClusNet prediction accuracies on specific clustering algorithms is examined for an application. The simplicity and fast convergence of ClusNet makes it ideal as a rapid prototyping tool for applications where on-line learning is required.",
"corpus_id": 39997128,
"score": 0
}
] |
arnetminer | {
"doc_id": "18185082",
"title": "A Learning-Based Algorithm Selection Meta-reasoner for the Real-Time MPE Problem",
"abstract": "The algorithm selection problem aims to select the best algorithm for an input problem instance according to some characteristics of the instance This paper presents a learning-based inductive approach to build a predictive algorithm selection system from empirical algorithm performance data of the Most Probable Explanation(MPE) problem The learned model can serve as an algorithm selection meta-reasoner for the real-time MPE problem Experimental results show that the learned algorithm selection models can help integrate multiple MPE algorithms to gain a better overall performance of reasoning.",
"corpus_id": 18185082
} | [
{
"doc_id": "205451192",
"title": "Genetic wrappers for feature selection in decision tree induction and variable ordering in Bayesian network structure learning",
"abstract": "In this paper, we address the automated tuning of input specification for supervised inductive learning and develop combinatorial optimization solutions for two such tuning problems. First, we present a framework for selection and reordering of input variables to reduce generalization error in classification and probabilistic inference. One purpose of selection is to control overfitting using validation set accuracy as a criterion for relevance. Similarly, some inductive learning algorithms, such as greedy algorithms for learning probabilistic networks, are sensitive to the evaluation order of variables. We design a generic fitness function for validation of input specification, then use it to develop two genetic algorithm wrappers: one for the variable selection problem for decision tree inducers and one for the variable ordering problem for Bayesian network structure learning. We evaluate the wrappers, using real-world data for the selection wrapper and synthetic data for both, and discuss their limitations and generalizability to other inducers.",
"corpus_id": 205451192,
"score": 1
},
{
"doc_id": "28678366",
"title": "Genetic Wrappers for Constructive Induction in High-Performance Data Mining",
"abstract": "We present an application of genetic algorithm-based design to configuration of high-level optimization systems, or wrappers, for relevance determination and constructive induction. Our system combines genetic wrappers with elicited knowledge on attribute relevance and synthesis. We discuss decision support issues in a large-scale commercial data mining project (cost prediction for multiple automobile insurance markets), and report experiments using D2K, a Java-based visual programming system for data mining and information visualization, and several commercial and research tools. Our GA system, Jenesis [HWRC00], is deployed on several network-of-workstation systems (Beowulf clusters). It achieves a linear speedup, due to a high degree of task parallelism, and improved test set accuracy, compared to decision tree learning with only constructive induction and state-space search-based wrappers [KJ97].",
"corpus_id": 28678366,
"score": 1
},
{
"doc_id": "11098618",
"title": "Relational Graphical Models of Computational Workflows for Data Mining",
"abstract": "Collaborative recommendation is the problem of analyzing the content of an information retrieval system and actions of its users, to predict additional topics or products a new user may find useful. Developing this capability poses several challenges to machine learning and reasoning under uncertainty. Recent systems such as CiteSeer [1] have succeeded in providing some specialized but comprehensive indices of full documents, but the kind of probabilistic models used in such indexing do not extend easily to information Grid databases and computational Grid workflows. The collection of user data from Grid portals [4] provides a test bed for the underlying IR technology, including learning and inference systems. To model workflows created using the TAVERNA editor [3] and SCUFL description language, the DESCRIBER system, shown in Figure 1, applies score-based structure learning algorithms, including Bayesian model selection and greedy search (cf. the K2 algorithm) adapted to relational graphical models. Figure 2 illustrates how the decision support front-end of DESCRIBER interacts with modules that learn and reason using probabilistic relational models,. The purpose is to discover interrelationships among, and thereby recommend, components used in workflows developed by other Grid users.",
"corpus_id": 11098618,
"score": 1
},
{
"doc_id": "5663752",
"title": "Text Extraction from the Web via Text-to-Tag Ratio",
"abstract": "We describe a method to extract content text from diverse Web pages by using the HTML document's text-to-tag ratio rather than specific HTML cues that may not be constant across various Web pages. We describe how to compute the text-to-tag ratio on a line-by-line basis and then cluster the results into content and non-content areas. With this approach we then show surprisingly high levels of recall for all levels of precision, and a large space savings.",
"corpus_id": 5663752,
"score": 1
},
{
"doc_id": "8303407",
"title": "Probabilistic Prediction of Protein Secondary Structure Using Causal Networks (Extended Abstract)",
"abstract": "In this paper we present a probabilistic approach to analysis and prediction of protein structure. We argue that this approach provides a flexible and convenient mechanism to perform general scientific data analysis in molecular biology. We apply our approach to an important problem in molecular biology--predicting the secondary structure of proteins--and obtain experimental results comparable to several other methods. The causal networks that we use provide a very convenient medium for the scientist to experiment with different empirical models and obtain possibly important insights about the problem being studied.",
"corpus_id": 8303407,
"score": 1
},
{
"doc_id": "39997128",
"title": "The Clusnet Algorithm And Time Series Prediction",
"abstract": "This paper describes a novel neural network architecture named ClusNet. This network is designed to study the trade-offs between the simplicity of instance-based methods and the accuracy of the more computational intensive learning methods. The features that make this network different from existing learning algorithms are outlined. A simple proof of convergence of the ClusNet algorithm is given. Experimental results showing the convergence of the algorithm on a specific problem is also presented. In this paper, ClusNet is applied to predict the temporal continuation of the Mackey-Glass chaotic time series. A comparison between the results obtained with ClusNet and other neural network algorithms is made. For example, ClusNet requires one-tenth the computing resources of the instance-based local linear method for this application while achieving comparable accuracy in this task. The sensitivity of ClusNet prediction accuracies on specific clustering algorithms is examined for an application. The simplicity and fast convergence of ClusNet makes it ideal as a rapid prototyping tool for applications where on-line learning is required.",
"corpus_id": 39997128,
"score": 0
},
{
"doc_id": "38525575",
"title": "Parameter significance estimation and financial prediction",
"abstract": "This paper deals with the problem of parameter significance estimation, and its application to currency exchange rate prediction. The basic problem is that over the years, practitioners in the field of financial engineering have developed dozens of technical and fundamental indicators on the basis of which they try to predict financial time series. The practitioners are now faced with the problem of finding out which combinations of those indicators are most significant or relevant, and how their significance changes over time. The authors propose a novel neural architecture calledSupNet for estimating the significance of various parameters. The methodology is based on the principle of penalizing those features that are the largest contributors to the error term. Two algorithms based on this principle are proposed. This approach is different from related methodologies, which are based on the principle of removing parameters with the least significance. The proposed methodology is demonstrated on the next day returns of the DM-US currency exchange rate, and promising results are obtained.",
"corpus_id": 38525575,
"score": 0
}
] |
arnetminer | {
"doc_id": "11407252",
"title": "A genetic algorithm for tuning variable orderings in Bayesian network structure learning",
"abstract": "In the last two decades or so, Bayesian networks (BNs) [Pe88] have become a prevalent method for uncertain knowledge representation and reasoning. BNs are directed acyclic graphs (DAGs) where nodes represent random variables, and edges represent conditional dependence between random variables. Each node has a conditional probabilistic table (CPT) that contains probabilities of that node being a specific value given the values of its parents. The problem of learning a BN from data is important but hard. Finding the optimal structure of a BN from data has been shown to be NP-hard [HGC95], even without considering unobserved or irrelevant variables. In recent years, many Bayesian network learning algorithms have been developed. Generally these algorithms fall into two groups, score-based search and dependency analysis (conditional independence tests and constraint solving). Many previous approaches require that a node ordering is available before learning. Unfortunately, this is usually not the case in many real-world applications. To make greedy search usable when node orderings are unknown, we have developed a permutation genetic algorithm (GA) wrapper to tune the variable ordering given as input to K2 [CH92], a score-based BN learning algorithm. In our continuing project, we have used a probabilistic inference criterion as the GA’s fitness function and we are also trying some other criterion to evaluate the learning result such as the learning fixed-point property.",
"corpus_id": 11407252
} | [
{
"doc_id": "28678366",
"title": "Genetic Wrappers for Constructive Induction in High-Performance Data Mining",
"abstract": "We present an application of genetic algorithm-based design to configuration of high-level optimization systems, or wrappers, for relevance determination and constructive induction. Our system combines genetic wrappers with elicited knowledge on attribute relevance and synthesis. We discuss decision support issues in a large-scale commercial data mining project (cost prediction for multiple automobile insurance markets), and report experiments using D2K, a Java-based visual programming system for data mining and information visualization, and several commercial and research tools. Our GA system, Jenesis [HWRC00], is deployed on several network-of-workstation systems (Beowulf clusters). It achieves a linear speedup, due to a high degree of task parallelism, and improved test set accuracy, compared to decision tree learning with only constructive induction and state-space search-based wrappers [KJ97].",
"corpus_id": 28678366,
"score": 1
},
{
"doc_id": "11098618",
"title": "Relational Graphical Models of Computational Workflows for Data Mining",
"abstract": "Collaborative recommendation is the problem of analyzing the content of an information retrieval system and actions of its users, to predict additional topics or products a new user may find useful. Developing this capability poses several challenges to machine learning and reasoning under uncertainty. Recent systems such as CiteSeer [1] have succeeded in providing some specialized but comprehensive indices of full documents, but the kind of probabilistic models used in such indexing do not extend easily to information Grid databases and computational Grid workflows. The collection of user data from Grid portals [4] provides a test bed for the underlying IR technology, including learning and inference systems. To model workflows created using the TAVERNA editor [3] and SCUFL description language, the DESCRIBER system, shown in Figure 1, applies score-based structure learning algorithms, including Bayesian model selection and greedy search (cf. the K2 algorithm) adapted to relational graphical models. Figure 2 illustrates how the decision support front-end of DESCRIBER interacts with modules that learn and reason using probabilistic relational models,. The purpose is to discover interrelationships among, and thereby recommend, components used in workflows developed by other Grid users.",
"corpus_id": 11098618,
"score": 1
},
{
"doc_id": "8303407",
"title": "Probabilistic Prediction of Protein Secondary Structure Using Causal Networks (Extended Abstract)",
"abstract": "In this paper we present a probabilistic approach to analysis and prediction of protein structure. We argue that this approach provides a flexible and convenient mechanism to perform general scientific data analysis in molecular biology. We apply our approach to an important problem in molecular biology--predicting the secondary structure of proteins--and obtain experimental results comparable to several other methods. The causal networks that we use provide a very convenient medium for the scientist to experiment with different empirical models and obtain possibly important insights about the problem being studied.",
"corpus_id": 8303407,
"score": 1
},
{
"doc_id": "5663752",
"title": "Text Extraction from the Web via Text-to-Tag Ratio",
"abstract": "We describe a method to extract content text from diverse Web pages by using the HTML document's text-to-tag ratio rather than specific HTML cues that may not be constant across various Web pages. We describe how to compute the text-to-tag ratio on a line-by-line basis and then cluster the results into content and non-content areas. With this approach we then show surprisingly high levels of recall for all levels of precision, and a large space savings.",
"corpus_id": 5663752,
"score": 1
},
{
"doc_id": "205451192",
"title": "Genetic wrappers for feature selection in decision tree induction and variable ordering in Bayesian network structure learning",
"abstract": "In this paper, we address the automated tuning of input specification for supervised inductive learning and develop combinatorial optimization solutions for two such tuning problems. First, we present a framework for selection and reordering of input variables to reduce generalization error in classification and probabilistic inference. One purpose of selection is to control overfitting using validation set accuracy as a criterion for relevance. Similarly, some inductive learning algorithms, such as greedy algorithms for learning probabilistic networks, are sensitive to the evaluation order of variables. We design a generic fitness function for validation of input specification, then use it to develop two genetic algorithm wrappers: one for the variable selection problem for decision tree inducers and one for the variable ordering problem for Bayesian network structure learning. We evaluate the wrappers, using real-world data for the selection wrapper and synthetic data for both, and discuss their limitations and generalizability to other inducers.",
"corpus_id": 205451192,
"score": 1
},
{
"doc_id": "39997128",
"title": "The Clusnet Algorithm And Time Series Prediction",
"abstract": "This paper describes a novel neural network architecture named ClusNet. This network is designed to study the trade-offs between the simplicity of instance-based methods and the accuracy of the more computational intensive learning methods. The features that make this network different from existing learning algorithms are outlined. A simple proof of convergence of the ClusNet algorithm is given. Experimental results showing the convergence of the algorithm on a specific problem is also presented. In this paper, ClusNet is applied to predict the temporal continuation of the Mackey-Glass chaotic time series. A comparison between the results obtained with ClusNet and other neural network algorithms is made. For example, ClusNet requires one-tenth the computing resources of the instance-based local linear method for this application while achieving comparable accuracy in this task. The sensitivity of ClusNet prediction accuracies on specific clustering algorithms is examined for an application. The simplicity and fast convergence of ClusNet makes it ideal as a rapid prototyping tool for applications where on-line learning is required.",
"corpus_id": 39997128,
"score": 0
},
{
"doc_id": "38525575",
"title": "Parameter significance estimation and financial prediction",
"abstract": "This paper deals with the problem of parameter significance estimation, and its application to currency exchange rate prediction. The basic problem is that over the years, practitioners in the field of financial engineering have developed dozens of technical and fundamental indicators on the basis of which they try to predict financial time series. The practitioners are now faced with the problem of finding out which combinations of those indicators are most significant or relevant, and how their significance changes over time. The authors propose a novel neural architecture calledSupNet for estimating the significance of various parameters. The methodology is based on the principle of penalizing those features that are the largest contributors to the error term. Two algorithms based on this principle are proposed. This approach is different from related methodologies, which are based on the principle of removing parameters with the least significance. The proposed methodology is demonstrated on the next day returns of the DM-US currency exchange rate, and promising results are obtained.",
"corpus_id": 38525575,
"score": 0
}
] |
arnetminer | {
"doc_id": "35290659",
"title": "Automatic Synthesis of Compression Techniques for Heterogeneous",
"abstract": "We present a compression technique for heterogeneous files, those files which contain multiple types of data such as text, images, binary, audio, or animation. The system uses statistical methods to determine the best algorithm to use in compressing each block of data in a file (possibly a different algorithm for each block). The file is then compressed by applying the appropriate algorithm to each block. We obtain better savings than possible by using a single algorithm for compressing the file. The implementation of a working version of this heterogeneous compressor is described, along with examples of its value toward improving compression both in theoretical and applied contexts. We compare our results with those obtained using four commercially available compression programs, PKZIP, Unix compress, StuffIt, and Compact Pro, and show that our system provides better space savings.",
"corpus_id": 35290659
} | [
{
"doc_id": "205451192",
"title": "Genetic wrappers for feature selection in decision tree induction and variable ordering in Bayesian network structure learning",
"abstract": "In this paper, we address the automated tuning of input specification for supervised inductive learning and develop combinatorial optimization solutions for two such tuning problems. First, we present a framework for selection and reordering of input variables to reduce generalization error in classification and probabilistic inference. One purpose of selection is to control overfitting using validation set accuracy as a criterion for relevance. Similarly, some inductive learning algorithms, such as greedy algorithms for learning probabilistic networks, are sensitive to the evaluation order of variables. We design a generic fitness function for validation of input specification, then use it to develop two genetic algorithm wrappers: one for the variable selection problem for decision tree inducers and one for the variable ordering problem for Bayesian network structure learning. We evaluate the wrappers, using real-world data for the selection wrapper and synthetic data for both, and discuss their limitations and generalizability to other inducers.",
"corpus_id": 205451192,
"score": 1
},
{
"doc_id": "5663752",
"title": "Text Extraction from the Web via Text-to-Tag Ratio",
"abstract": "We describe a method to extract content text from diverse Web pages by using the HTML document's text-to-tag ratio rather than specific HTML cues that may not be constant across various Web pages. We describe how to compute the text-to-tag ratio on a line-by-line basis and then cluster the results into content and non-content areas. With this approach we then show surprisingly high levels of recall for all levels of precision, and a large space savings.",
"corpus_id": 5663752,
"score": 1
},
{
"doc_id": "11098618",
"title": "Relational Graphical Models of Computational Workflows for Data Mining",
"abstract": "Collaborative recommendation is the problem of analyzing the content of an information retrieval system and actions of its users, to predict additional topics or products a new user may find useful. Developing this capability poses several challenges to machine learning and reasoning under uncertainty. Recent systems such as CiteSeer [1] have succeeded in providing some specialized but comprehensive indices of full documents, but the kind of probabilistic models used in such indexing do not extend easily to information Grid databases and computational Grid workflows. The collection of user data from Grid portals [4] provides a test bed for the underlying IR technology, including learning and inference systems. To model workflows created using the TAVERNA editor [3] and SCUFL description language, the DESCRIBER system, shown in Figure 1, applies score-based structure learning algorithms, including Bayesian model selection and greedy search (cf. the K2 algorithm) adapted to relational graphical models. Figure 2 illustrates how the decision support front-end of DESCRIBER interacts with modules that learn and reason using probabilistic relational models,. The purpose is to discover interrelationships among, and thereby recommend, components used in workflows developed by other Grid users.",
"corpus_id": 11098618,
"score": 1
},
{
"doc_id": "8303407",
"title": "Probabilistic Prediction of Protein Secondary Structure Using Causal Networks (Extended Abstract)",
"abstract": "In this paper we present a probabilistic approach to analysis and prediction of protein structure. We argue that this approach provides a flexible and convenient mechanism to perform general scientific data analysis in molecular biology. We apply our approach to an important problem in molecular biology--predicting the secondary structure of proteins--and obtain experimental results comparable to several other methods. The causal networks that we use provide a very convenient medium for the scientist to experiment with different empirical models and obtain possibly important insights about the problem being studied.",
"corpus_id": 8303407,
"score": 1
},
{
"doc_id": "28678366",
"title": "Genetic Wrappers for Constructive Induction in High-Performance Data Mining",
"abstract": "We present an application of genetic algorithm-based design to configuration of high-level optimization systems, or wrappers, for relevance determination and constructive induction. Our system combines genetic wrappers with elicited knowledge on attribute relevance and synthesis. We discuss decision support issues in a large-scale commercial data mining project (cost prediction for multiple automobile insurance markets), and report experiments using D2K, a Java-based visual programming system for data mining and information visualization, and several commercial and research tools. Our GA system, Jenesis [HWRC00], is deployed on several network-of-workstation systems (Beowulf clusters). It achieves a linear speedup, due to a high degree of task parallelism, and improved test set accuracy, compared to decision tree learning with only constructive induction and state-space search-based wrappers [KJ97].",
"corpus_id": 28678366,
"score": 1
},
{
"doc_id": "38525575",
"title": "Parameter significance estimation and financial prediction",
"abstract": "This paper deals with the problem of parameter significance estimation, and its application to currency exchange rate prediction. The basic problem is that over the years, practitioners in the field of financial engineering have developed dozens of technical and fundamental indicators on the basis of which they try to predict financial time series. The practitioners are now faced with the problem of finding out which combinations of those indicators are most significant or relevant, and how their significance changes over time. The authors propose a novel neural architecture calledSupNet for estimating the significance of various parameters. The methodology is based on the principle of penalizing those features that are the largest contributors to the error term. Two algorithms based on this principle are proposed. This approach is different from related methodologies, which are based on the principle of removing parameters with the least significance. The proposed methodology is demonstrated on the next day returns of the DM-US currency exchange rate, and promising results are obtained.",
"corpus_id": 38525575,
"score": 0
},
{
"doc_id": "39997128",
"title": "The Clusnet Algorithm And Time Series Prediction",
"abstract": "This paper describes a novel neural network architecture named ClusNet. This network is designed to study the trade-offs between the simplicity of instance-based methods and the accuracy of the more computational intensive learning methods. The features that make this network different from existing learning algorithms are outlined. A simple proof of convergence of the ClusNet algorithm is given. Experimental results showing the convergence of the algorithm on a specific problem is also presented. In this paper, ClusNet is applied to predict the temporal continuation of the Mackey-Glass chaotic time series. A comparison between the results obtained with ClusNet and other neural network algorithms is made. For example, ClusNet requires one-tenth the computing resources of the instance-based local linear method for this application while achieving comparable accuracy in this task. The sensitivity of ClusNet prediction accuracies on specific clustering algorithms is examined for an application. The simplicity and fast convergence of ClusNet makes it ideal as a rapid prototyping tool for applications where on-line learning is required.",
"corpus_id": 39997128,
"score": 0
}
] |
arnetminer | {
"doc_id": "205451192",
"title": "Genetic wrappers for feature selection in decision tree induction and variable ordering in Bayesian network structure learning",
"abstract": "In this paper, we address the automated tuning of input specification for supervised inductive learning and develop combinatorial optimization solutions for two such tuning problems. First, we present a framework for selection and reordering of input variables to reduce generalization error in classification and probabilistic inference. One purpose of selection is to control overfitting using validation set accuracy as a criterion for relevance. Similarly, some inductive learning algorithms, such as greedy algorithms for learning probabilistic networks, are sensitive to the evaluation order of variables. We design a generic fitness function for validation of input specification, then use it to develop two genetic algorithm wrappers: one for the variable selection problem for decision tree inducers and one for the variable ordering problem for Bayesian network structure learning. We evaluate the wrappers, using real-world data for the selection wrapper and synthetic data for both, and discuss their limitations and generalizability to other inducers.",
"corpus_id": 205451192
} | [
{
"doc_id": "2749970",
"title": "Layered Learning in Genetic Programming for a Cooperative Robot Soccer Problem",
"abstract": "We present an alternative to standard genetic programming (GP) that applies layered learning techniques to decompose a problem. GP is applied to subproblems sequentially, where the population in the last generation of a subproblem is used as the initial population of the next subproblem. This method is applied to evolve agents to play keep-away soccer, a subproblem of robotic soccer that requires cooperation among multiple agents in a dynnamic environment. The layered learning paradigm allows GP to evolve better solutions faster than standard GP. Results show that the layered learning GP outperforms standard GP by evolving a lower fitness faster and an overall better fitness. Results indicate a wide area of future research with layered learning in GP.",
"corpus_id": 2749970,
"score": 1
},
{
"doc_id": "5663752",
"title": "Text Extraction from the Web via Text-to-Tag Ratio",
"abstract": "We describe a method to extract content text from diverse Web pages by using the HTML document's text-to-tag ratio rather than specific HTML cues that may not be constant across various Web pages. We describe how to compute the text-to-tag ratio on a line-by-line basis and then cluster the results into content and non-content areas. With this approach we then show surprisingly high levels of recall for all levels of precision, and a large space savings.",
"corpus_id": 5663752,
"score": 1
},
{
"doc_id": "28678366",
"title": "Genetic Wrappers for Constructive Induction in High-Performance Data Mining",
"abstract": "We present an application of genetic algorithm-based design to configuration of high-level optimization systems, or wrappers, for relevance determination and constructive induction. Our system combines genetic wrappers with elicited knowledge on attribute relevance and synthesis. We discuss decision support issues in a large-scale commercial data mining project (cost prediction for multiple automobile insurance markets), and report experiments using D2K, a Java-based visual programming system for data mining and information visualization, and several commercial and research tools. Our GA system, Jenesis [HWRC00], is deployed on several network-of-workstation systems (Beowulf clusters). It achieves a linear speedup, due to a high degree of task parallelism, and improved test set accuracy, compared to decision tree learning with only constructive induction and state-space search-based wrappers [KJ97].",
"corpus_id": 28678366,
"score": 1
},
{
"doc_id": "8303407",
"title": "Probabilistic Prediction of Protein Secondary Structure Using Causal Networks (Extended Abstract)",
"abstract": "In this paper we present a probabilistic approach to analysis and prediction of protein structure. We argue that this approach provides a flexible and convenient mechanism to perform general scientific data analysis in molecular biology. We apply our approach to an important problem in molecular biology--predicting the secondary structure of proteins--and obtain experimental results comparable to several other methods. The causal networks that we use provide a very convenient medium for the scientist to experiment with different empirical models and obtain possibly important insights about the problem being studied.",
"corpus_id": 8303407,
"score": 1
},
{
"doc_id": "8590688",
"title": "Evolutionary tree genetic programming",
"abstract": "We introduce a clustering-based method of subpopulation management in genetic programming (GP) called Evolutionary Tree Genetic Programming (ETGP). The biological motivation behind this work is the observation that the natural evolution follows a tree-like phylogenetic pattern. Our goal is to simulate similar behavior in artificial evolutionary systems such as GP. To test our model we use three common GP benchmarks: the Ant Algorithm, 11-Multiplexer, and Parity problems.The performance of the ETGP system is empirically compared to those of the GP system. Code size and variance are consistently reduced by a small but statistically significant percentage, resulting in a slight speedup in the Ant and 11-Multiplexer problems, while the same comparisons on the Parity problem are inconclusive.",
"corpus_id": 8590688,
"score": 1
},
{
"doc_id": "39997128",
"title": "The Clusnet Algorithm And Time Series Prediction",
"abstract": "This paper describes a novel neural network architecture named ClusNet. This network is designed to study the trade-offs between the simplicity of instance-based methods and the accuracy of the more computational intensive learning methods. The features that make this network different from existing learning algorithms are outlined. A simple proof of convergence of the ClusNet algorithm is given. Experimental results showing the convergence of the algorithm on a specific problem is also presented. In this paper, ClusNet is applied to predict the temporal continuation of the Mackey-Glass chaotic time series. A comparison between the results obtained with ClusNet and other neural network algorithms is made. For example, ClusNet requires one-tenth the computing resources of the instance-based local linear method for this application while achieving comparable accuracy in this task. The sensitivity of ClusNet prediction accuracies on specific clustering algorithms is examined for an application. The simplicity and fast convergence of ClusNet makes it ideal as a rapid prototyping tool for applications where on-line learning is required.",
"corpus_id": 39997128,
"score": 0
},
{
"doc_id": "38525575",
"title": "Parameter significance estimation and financial prediction",
"abstract": "This paper deals with the problem of parameter significance estimation, and its application to currency exchange rate prediction. The basic problem is that over the years, practitioners in the field of financial engineering have developed dozens of technical and fundamental indicators on the basis of which they try to predict financial time series. The practitioners are now faced with the problem of finding out which combinations of those indicators are most significant or relevant, and how their significance changes over time. The authors propose a novel neural architecture calledSupNet for estimating the significance of various parameters. The methodology is based on the principle of penalizing those features that are the largest contributors to the error term. Two algorithms based on this principle are proposed. This approach is different from related methodologies, which are based on the principle of removing parameters with the least significance. The proposed methodology is demonstrated on the next day returns of the DM-US currency exchange rate, and promising results are obtained.",
"corpus_id": 38525575,
"score": 0
}
] |
arnetminer | {
"doc_id": "39997128",
"title": "The Clusnet Algorithm And Time Series Prediction",
"abstract": "This paper describes a novel neural network architecture named ClusNet. This network is designed to study the trade-offs between the simplicity of instance-based methods and the accuracy of the more computational intensive learning methods. The features that make this network different from existing learning algorithms are outlined. A simple proof of convergence of the ClusNet algorithm is given. Experimental results showing the convergence of the algorithm on a specific problem is also presented. In this paper, ClusNet is applied to predict the temporal continuation of the Mackey-Glass chaotic time series. A comparison between the results obtained with ClusNet and other neural network algorithms is made. For example, ClusNet requires one-tenth the computing resources of the instance-based local linear method for this application while achieving comparable accuracy in this task. The sensitivity of ClusNet prediction accuracies on specific clustering algorithms is examined for an application. The simplicity and fast convergence of ClusNet makes it ideal as a rapid prototyping tool for applications where on-line learning is required.",
"corpus_id": 39997128
} | [
{
"doc_id": "38525575",
"title": "Parameter significance estimation and financial prediction",
"abstract": "This paper deals with the problem of parameter significance estimation, and its application to currency exchange rate prediction. The basic problem is that over the years, practitioners in the field of financial engineering have developed dozens of technical and fundamental indicators on the basis of which they try to predict financial time series. The practitioners are now faced with the problem of finding out which combinations of those indicators are most significant or relevant, and how their significance changes over time. The authors propose a novel neural architecture calledSupNet for estimating the significance of various parameters. The methodology is based on the principle of penalizing those features that are the largest contributors to the error term. Two algorithms based on this principle are proposed. This approach is different from related methodologies, which are based on the principle of removing parameters with the least significance. The proposed methodology is demonstrated on the next day returns of the DM-US currency exchange rate, and promising results are obtained.",
"corpus_id": 38525575,
"score": 1
},
{
"doc_id": "1797464",
"title": "Genetic Programming And Multi-agent Layered Learning By Reinforcements",
"abstract": "We present an adaptation of the standard genetic program (GP) to hierarchically decomposable, multi-agent learning problems. To break down a problem that requires cooperation of multiple agents, we use the team objective function to derive a simpler, intermediate objective function for pairs of cooperating agents. We apply GP to optimize first for the intermediate, then for the team objective function, using the final population from the earlier GP as the initial seed population for the next. This layered learning approach facilitates the discovery of primitive behaviors that can be reused and adapted towards complex objectives based on a shared team goal. We use this method to evolve agents to play a subproblem of robotic soccer (keep-away soccer). Finally, we show how layered learning GP evolves better agents than standard GP, including GP with automatically defined functions, and how the problem decomposition results in a significant learning-speed increase.",
"corpus_id": 1797464,
"score": 0
},
{
"doc_id": "11049498",
"title": "An Ant Colony Approach For The Steiner Tree Problem",
"abstract": "One ant is placed initially at each of the given terminal vertices that are to be connected. In each iteration, an ant is moved to a new location via an edge, determined stochastically, but biased in such a manner that the ants get drawn to the paths traced out by one another. Each ant maintains its own separate list of vertices already visited to avoid revisiting it. When any ant collides with another ant, or even with the path of another, it merges into the latter. An antm , currently at a vertexi , selects a vertex j not in its tabu list ) (m T , to move to, only if E j i ∈ ) , ( . In order to ensure that the ants merge with one another as quickly as possible, we define a potential for each vertex j in V , with respect to an ant m as follows,",
"corpus_id": 11049498,
"score": 0
}
] |
arnetminer | {
"doc_id": "8303407",
"title": "Probabilistic Prediction of Protein Secondary Structure Using Causal Networks (Extended Abstract)",
"abstract": "In this paper we present a probabilistic approach to analysis and prediction of protein structure. We argue that this approach provides a flexible and convenient mechanism to perform general scientific data analysis in molecular biology. We apply our approach to an important problem in molecular biology--predicting the secondary structure of proteins--and obtain experimental results comparable to several other methods. The causal networks that we use provide a very convenient medium for the scientist to experiment with different empirical models and obtain possibly important insights about the problem being studied.",
"corpus_id": 8303407
} | [
{
"doc_id": "8590688",
"title": "Evolutionary tree genetic programming",
"abstract": "We introduce a clustering-based method of subpopulation management in genetic programming (GP) called Evolutionary Tree Genetic Programming (ETGP). The biological motivation behind this work is the observation that the natural evolution follows a tree-like phylogenetic pattern. Our goal is to simulate similar behavior in artificial evolutionary systems such as GP. To test our model we use three common GP benchmarks: the Ant Algorithm, 11-Multiplexer, and Parity problems.The performance of the ETGP system is empirically compared to those of the GP system. Code size and variance are consistently reduced by a small but statistically significant percentage, resulting in a slight speedup in the Ant and 11-Multiplexer problems, while the same comparisons on the Parity problem are inconclusive.",
"corpus_id": 8590688,
"score": 1
},
{
"doc_id": "2749970",
"title": "Layered Learning in Genetic Programming for a Cooperative Robot Soccer Problem",
"abstract": "We present an alternative to standard genetic programming (GP) that applies layered learning techniques to decompose a problem. GP is applied to subproblems sequentially, where the population in the last generation of a subproblem is used as the initial population of the next subproblem. This method is applied to evolve agents to play keep-away soccer, a subproblem of robotic soccer that requires cooperation among multiple agents in a dynnamic environment. The layered learning paradigm allows GP to evolve better solutions faster than standard GP. Results show that the layered learning GP outperforms standard GP by evolving a lower fitness faster and an overall better fitness. Results indicate a wide area of future research with layered learning in GP.",
"corpus_id": 2749970,
"score": 1
},
{
"doc_id": "5663752",
"title": "Text Extraction from the Web via Text-to-Tag Ratio",
"abstract": "We describe a method to extract content text from diverse Web pages by using the HTML document's text-to-tag ratio rather than specific HTML cues that may not be constant across various Web pages. We describe how to compute the text-to-tag ratio on a line-by-line basis and then cluster the results into content and non-content areas. With this approach we then show surprisingly high levels of recall for all levels of precision, and a large space savings.",
"corpus_id": 5663752,
"score": 1
},
{
"doc_id": "28678366",
"title": "Genetic Wrappers for Constructive Induction in High-Performance Data Mining",
"abstract": "We present an application of genetic algorithm-based design to configuration of high-level optimization systems, or wrappers, for relevance determination and constructive induction. Our system combines genetic wrappers with elicited knowledge on attribute relevance and synthesis. We discuss decision support issues in a large-scale commercial data mining project (cost prediction for multiple automobile insurance markets), and report experiments using D2K, a Java-based visual programming system for data mining and information visualization, and several commercial and research tools. Our GA system, Jenesis [HWRC00], is deployed on several network-of-workstation systems (Beowulf clusters). It achieves a linear speedup, due to a high degree of task parallelism, and improved test set accuracy, compared to decision tree learning with only constructive induction and state-space search-based wrappers [KJ97].",
"corpus_id": 28678366,
"score": 1
},
{
"doc_id": "39278459",
"title": "Genetic Algorithms for Reformulation of Large-Scale KDD Problems with Many Irrelevant Attributes",
"abstract": "The goal of this research is to apply genetic implementations of algorithms for selection, partitioning, and synthesis of attributes in large-scale data mining problems. Domain knowledge about these operators has been shown to reduce the number of fitness evaluations for candidate attributes. We report results on genetic optimization of attribute selection problems and current work on attribute partitioning, synthesis specifications, and the encoding of domain knowledge about operators in a fitness function. The purpose of this approach is to reduce overfitting in inductive learning and produce more general genetic versions of existing search-based algorithms (or wrappers) for KDD performance tuning [KS98, HG00]. Several GA implementations of alternative attribute synthesis algorithms are applied to concept learning problems in military and commercial KDD applications. One of these, Jenesis, is deployed on several network-of-workstation clusters. It is shown to achieve strongly improved test set accuracy, compared to unwrapped decision tree learning and search-based wrappers [KS98].",
"corpus_id": 39278459,
"score": 1
},
{
"doc_id": "38525575",
"title": "Parameter significance estimation and financial prediction",
"abstract": "This paper deals with the problem of parameter significance estimation, and its application to currency exchange rate prediction. The basic problem is that over the years, practitioners in the field of financial engineering have developed dozens of technical and fundamental indicators on the basis of which they try to predict financial time series. The practitioners are now faced with the problem of finding out which combinations of those indicators are most significant or relevant, and how their significance changes over time. The authors propose a novel neural architecture calledSupNet for estimating the significance of various parameters. The methodology is based on the principle of penalizing those features that are the largest contributors to the error term. Two algorithms based on this principle are proposed. This approach is different from related methodologies, which are based on the principle of removing parameters with the least significance. The proposed methodology is demonstrated on the next day returns of the DM-US currency exchange rate, and promising results are obtained.",
"corpus_id": 38525575,
"score": 0
},
{
"doc_id": "39997128",
"title": "The Clusnet Algorithm And Time Series Prediction",
"abstract": "This paper describes a novel neural network architecture named ClusNet. This network is designed to study the trade-offs between the simplicity of instance-based methods and the accuracy of the more computational intensive learning methods. The features that make this network different from existing learning algorithms are outlined. A simple proof of convergence of the ClusNet algorithm is given. Experimental results showing the convergence of the algorithm on a specific problem is also presented. In this paper, ClusNet is applied to predict the temporal continuation of the Mackey-Glass chaotic time series. A comparison between the results obtained with ClusNet and other neural network algorithms is made. For example, ClusNet requires one-tenth the computing resources of the instance-based local linear method for this application while achieving comparable accuracy in this task. The sensitivity of ClusNet prediction accuracies on specific clustering algorithms is examined for an application. The simplicity and fast convergence of ClusNet makes it ideal as a rapid prototyping tool for applications where on-line learning is required.",
"corpus_id": 39997128,
"score": 0
}
] |
arnetminer | {
"doc_id": "29616313",
"title": "An Ant Colony Algorithm for Steiner Trees: New Results",
"abstract": null,
"corpus_id": 29616313
} | [
{
"doc_id": "28678366",
"title": "Genetic Wrappers for Constructive Induction in High-Performance Data Mining",
"abstract": "We present an application of genetic algorithm-based design to configuration of high-level optimization systems, or wrappers, for relevance determination and constructive induction. Our system combines genetic wrappers with elicited knowledge on attribute relevance and synthesis. We discuss decision support issues in a large-scale commercial data mining project (cost prediction for multiple automobile insurance markets), and report experiments using D2K, a Java-based visual programming system for data mining and information visualization, and several commercial and research tools. Our GA system, Jenesis [HWRC00], is deployed on several network-of-workstation systems (Beowulf clusters). It achieves a linear speedup, due to a high degree of task parallelism, and improved test set accuracy, compared to decision tree learning with only constructive induction and state-space search-based wrappers [KJ97].",
"corpus_id": 28678366,
"score": 1
},
{
"doc_id": "11098618",
"title": "Relational Graphical Models of Computational Workflows for Data Mining",
"abstract": "Collaborative recommendation is the problem of analyzing the content of an information retrieval system and actions of its users, to predict additional topics or products a new user may find useful. Developing this capability poses several challenges to machine learning and reasoning under uncertainty. Recent systems such as CiteSeer [1] have succeeded in providing some specialized but comprehensive indices of full documents, but the kind of probabilistic models used in such indexing do not extend easily to information Grid databases and computational Grid workflows. The collection of user data from Grid portals [4] provides a test bed for the underlying IR technology, including learning and inference systems. To model workflows created using the TAVERNA editor [3] and SCUFL description language, the DESCRIBER system, shown in Figure 1, applies score-based structure learning algorithms, including Bayesian model selection and greedy search (cf. the K2 algorithm) adapted to relational graphical models. Figure 2 illustrates how the decision support front-end of DESCRIBER interacts with modules that learn and reason using probabilistic relational models,. The purpose is to discover interrelationships among, and thereby recommend, components used in workflows developed by other Grid users.",
"corpus_id": 11098618,
"score": 1
},
{
"doc_id": "5663752",
"title": "Text Extraction from the Web via Text-to-Tag Ratio",
"abstract": "We describe a method to extract content text from diverse Web pages by using the HTML document's text-to-tag ratio rather than specific HTML cues that may not be constant across various Web pages. We describe how to compute the text-to-tag ratio on a line-by-line basis and then cluster the results into content and non-content areas. With this approach we then show surprisingly high levels of recall for all levels of precision, and a large space savings.",
"corpus_id": 5663752,
"score": 1
},
{
"doc_id": "205451192",
"title": "Genetic wrappers for feature selection in decision tree induction and variable ordering in Bayesian network structure learning",
"abstract": "In this paper, we address the automated tuning of input specification for supervised inductive learning and develop combinatorial optimization solutions for two such tuning problems. First, we present a framework for selection and reordering of input variables to reduce generalization error in classification and probabilistic inference. One purpose of selection is to control overfitting using validation set accuracy as a criterion for relevance. Similarly, some inductive learning algorithms, such as greedy algorithms for learning probabilistic networks, are sensitive to the evaluation order of variables. We design a generic fitness function for validation of input specification, then use it to develop two genetic algorithm wrappers: one for the variable selection problem for decision tree inducers and one for the variable ordering problem for Bayesian network structure learning. We evaluate the wrappers, using real-world data for the selection wrapper and synthetic data for both, and discuss their limitations and generalizability to other inducers.",
"corpus_id": 205451192,
"score": 1
},
{
"doc_id": "8303407",
"title": "Probabilistic Prediction of Protein Secondary Structure Using Causal Networks (Extended Abstract)",
"abstract": "In this paper we present a probabilistic approach to analysis and prediction of protein structure. We argue that this approach provides a flexible and convenient mechanism to perform general scientific data analysis in molecular biology. We apply our approach to an important problem in molecular biology--predicting the secondary structure of proteins--and obtain experimental results comparable to several other methods. The causal networks that we use provide a very convenient medium for the scientist to experiment with different empirical models and obtain possibly important insights about the problem being studied.",
"corpus_id": 8303407,
"score": 1
},
{
"doc_id": "38525575",
"title": "Parameter significance estimation and financial prediction",
"abstract": "This paper deals with the problem of parameter significance estimation, and its application to currency exchange rate prediction. The basic problem is that over the years, practitioners in the field of financial engineering have developed dozens of technical and fundamental indicators on the basis of which they try to predict financial time series. The practitioners are now faced with the problem of finding out which combinations of those indicators are most significant or relevant, and how their significance changes over time. The authors propose a novel neural architecture calledSupNet for estimating the significance of various parameters. The methodology is based on the principle of penalizing those features that are the largest contributors to the error term. Two algorithms based on this principle are proposed. This approach is different from related methodologies, which are based on the principle of removing parameters with the least significance. The proposed methodology is demonstrated on the next day returns of the DM-US currency exchange rate, and promising results are obtained.",
"corpus_id": 38525575,
"score": 0
},
{
"doc_id": "39997128",
"title": "The Clusnet Algorithm And Time Series Prediction",
"abstract": "This paper describes a novel neural network architecture named ClusNet. This network is designed to study the trade-offs between the simplicity of instance-based methods and the accuracy of the more computational intensive learning methods. The features that make this network different from existing learning algorithms are outlined. A simple proof of convergence of the ClusNet algorithm is given. Experimental results showing the convergence of the algorithm on a specific problem is also presented. In this paper, ClusNet is applied to predict the temporal continuation of the Mackey-Glass chaotic time series. A comparison between the results obtained with ClusNet and other neural network algorithms is made. For example, ClusNet requires one-tenth the computing resources of the instance-based local linear method for this application while achieving comparable accuracy in this task. The sensitivity of ClusNet prediction accuracies on specific clustering algorithms is examined for an application. The simplicity and fast convergence of ClusNet makes it ideal as a rapid prototyping tool for applications where on-line learning is required.",
"corpus_id": 39997128,
"score": 0
}
] |
arnetminer | {
"doc_id": "5663752",
"title": "Text Extraction from the Web via Text-to-Tag Ratio",
"abstract": "We describe a method to extract content text from diverse Web pages by using the HTML document's text-to-tag ratio rather than specific HTML cues that may not be constant across various Web pages. We describe how to compute the text-to-tag ratio on a line-by-line basis and then cluster the results into content and non-content areas. With this approach we then show surprisingly high levels of recall for all levels of precision, and a large space savings.",
"corpus_id": 5663752
} | [
{
"doc_id": "8590688",
"title": "Evolutionary tree genetic programming",
"abstract": "We introduce a clustering-based method of subpopulation management in genetic programming (GP) called Evolutionary Tree Genetic Programming (ETGP). The biological motivation behind this work is the observation that the natural evolution follows a tree-like phylogenetic pattern. Our goal is to simulate similar behavior in artificial evolutionary systems such as GP. To test our model we use three common GP benchmarks: the Ant Algorithm, 11-Multiplexer, and Parity problems.The performance of the ETGP system is empirically compared to those of the GP system. Code size and variance are consistently reduced by a small but statistically significant percentage, resulting in a slight speedup in the Ant and 11-Multiplexer problems, while the same comparisons on the Parity problem are inconclusive.",
"corpus_id": 8590688,
"score": 1
},
{
"doc_id": "2749970",
"title": "Layered Learning in Genetic Programming for a Cooperative Robot Soccer Problem",
"abstract": "We present an alternative to standard genetic programming (GP) that applies layered learning techniques to decompose a problem. GP is applied to subproblems sequentially, where the population in the last generation of a subproblem is used as the initial population of the next subproblem. This method is applied to evolve agents to play keep-away soccer, a subproblem of robotic soccer that requires cooperation among multiple agents in a dynnamic environment. The layered learning paradigm allows GP to evolve better solutions faster than standard GP. Results show that the layered learning GP outperforms standard GP by evolving a lower fitness faster and an overall better fitness. Results indicate a wide area of future research with layered learning in GP.",
"corpus_id": 2749970,
"score": 1
},
{
"doc_id": "28678366",
"title": "Genetic Wrappers for Constructive Induction in High-Performance Data Mining",
"abstract": "We present an application of genetic algorithm-based design to configuration of high-level optimization systems, or wrappers, for relevance determination and constructive induction. Our system combines genetic wrappers with elicited knowledge on attribute relevance and synthesis. We discuss decision support issues in a large-scale commercial data mining project (cost prediction for multiple automobile insurance markets), and report experiments using D2K, a Java-based visual programming system for data mining and information visualization, and several commercial and research tools. Our GA system, Jenesis [HWRC00], is deployed on several network-of-workstation systems (Beowulf clusters). It achieves a linear speedup, due to a high degree of task parallelism, and improved test set accuracy, compared to decision tree learning with only constructive induction and state-space search-based wrappers [KJ97].",
"corpus_id": 28678366,
"score": 1
},
{
"doc_id": "39278459",
"title": "Genetic Algorithms for Reformulation of Large-Scale KDD Problems with Many Irrelevant Attributes",
"abstract": "The goal of this research is to apply genetic implementations of algorithms for selection, partitioning, and synthesis of attributes in large-scale data mining problems. Domain knowledge about these operators has been shown to reduce the number of fitness evaluations for candidate attributes. We report results on genetic optimization of attribute selection problems and current work on attribute partitioning, synthesis specifications, and the encoding of domain knowledge about operators in a fitness function. The purpose of this approach is to reduce overfitting in inductive learning and produce more general genetic versions of existing search-based algorithms (or wrappers) for KDD performance tuning [KS98, HG00]. Several GA implementations of alternative attribute synthesis algorithms are applied to concept learning problems in military and commercial KDD applications. One of these, Jenesis, is deployed on several network-of-workstation clusters. It is shown to achieve strongly improved test set accuracy, compared to unwrapped decision tree learning and search-based wrappers [KS98].",
"corpus_id": 39278459,
"score": 1
},
{
"doc_id": "6398168",
"title": "Probabilistic Learning in Bayesian and Stochastic Neural Networks",
"abstract": "The goal of this research is to integrate aspects of artificial neural networks (ANNs) with symbolic machine learning methods in a probabilistic reasoning framework. Improved understanding of the semantics of neural nets supports principled integration efforts between seminumerical (so-called \"subsymbolic\") and symbolic intelligent systems. My dissertation focuses on learning of spatiotemporal (ST) sequences. In recent work, I have investigated architectures for modeling of ST sequences, and dualities between Bayesian networks and ANNs that expose their probabilistic and information theoretic foundations. In addition, I am developing algorithms for automated construction of Bayesian networks (and hybrid models); metrics for comparison of Bayesian networks across architectures; and a quantitative theory of feature construction (in the spirit of the PAC formalism from computational learning theory) for this learning environment. (Haussler 1988) Such methods for pattern prediction will be useful for building advanced knowledge based systems, with diagnostic applications such as intelligent monitoring tools.",
"corpus_id": 6398168,
"score": 1
},
{
"doc_id": "39997128",
"title": "The Clusnet Algorithm And Time Series Prediction",
"abstract": "This paper describes a novel neural network architecture named ClusNet. This network is designed to study the trade-offs between the simplicity of instance-based methods and the accuracy of the more computational intensive learning methods. The features that make this network different from existing learning algorithms are outlined. A simple proof of convergence of the ClusNet algorithm is given. Experimental results showing the convergence of the algorithm on a specific problem is also presented. In this paper, ClusNet is applied to predict the temporal continuation of the Mackey-Glass chaotic time series. A comparison between the results obtained with ClusNet and other neural network algorithms is made. For example, ClusNet requires one-tenth the computing resources of the instance-based local linear method for this application while achieving comparable accuracy in this task. The sensitivity of ClusNet prediction accuracies on specific clustering algorithms is examined for an application. The simplicity and fast convergence of ClusNet makes it ideal as a rapid prototyping tool for applications where on-line learning is required.",
"corpus_id": 39997128,
"score": 0
},
{
"doc_id": "38525575",
"title": "Parameter significance estimation and financial prediction",
"abstract": "This paper deals with the problem of parameter significance estimation, and its application to currency exchange rate prediction. The basic problem is that over the years, practitioners in the field of financial engineering have developed dozens of technical and fundamental indicators on the basis of which they try to predict financial time series. The practitioners are now faced with the problem of finding out which combinations of those indicators are most significant or relevant, and how their significance changes over time. The authors propose a novel neural architecture calledSupNet for estimating the significance of various parameters. The methodology is based on the principle of penalizing those features that are the largest contributors to the error term. Two algorithms based on this principle are proposed. This approach is different from related methodologies, which are based on the principle of removing parameters with the least significance. The proposed methodology is demonstrated on the next day returns of the DM-US currency exchange rate, and promising results are obtained.",
"corpus_id": 38525575,
"score": 0
}
] |
arnetminer | {
"doc_id": "16368379",
"title": "Self-Organized-Expert Modular Network for Classification of Spatiotemporal Sequences",
"abstract": "In this paper, we investigate a form of modular neural network for classification with a pre-separated input vectors entering its specialist expert networks, b specialist networks which are self-organized radial-basis function or self-targeted feedforward type and c which fuses or integrates the specialists with a single-layer net. When the modular architecture is applied to spatiotemporal sequences, the Specialist Nets are recurrent; specifically, we use the Input Recurrent type.The Specialist Networks SNs learn to divide their input space into a number of equivalence classes defined by self-organized clustering and learning using the statistical properties of the input domain. Once the specialists have settled in their training, the Fusion Network is trained by any supervised method to map to the semantic classes.We discuss the fact that this architecture and its training is quite distinct from the hierarchical mixture of experts HME type as well as from stacked generalization.Because the equivalence classes to which the SNs map the input vectors are determined by the natural clustering of the input data, the SNs learn rapidly and accurately. The fusion network also trains rapidly by reason of its simplicity.We argue, on theoretical grounds, that the accuracy of the system should be positively correlated to the product of the number of equivalence classes for all of the SNs.This network was applied, as an empirical test case, to the classification of melodies presented as direct audio events temporal sequences played by a human and subject, therefore, to biological variations. The audio input was divided into two modes: a frequency or pitch variation and b rhythm, both as functions of time. The results and observations show the technique to be very robust and support the theoretical deductions concerning accuracy.",
"corpus_id": 16368379
} | [
{
"doc_id": "28678366",
"title": "Genetic Wrappers for Constructive Induction in High-Performance Data Mining",
"abstract": "We present an application of genetic algorithm-based design to configuration of high-level optimization systems, or wrappers, for relevance determination and constructive induction. Our system combines genetic wrappers with elicited knowledge on attribute relevance and synthesis. We discuss decision support issues in a large-scale commercial data mining project (cost prediction for multiple automobile insurance markets), and report experiments using D2K, a Java-based visual programming system for data mining and information visualization, and several commercial and research tools. Our GA system, Jenesis [HWRC00], is deployed on several network-of-workstation systems (Beowulf clusters). It achieves a linear speedup, due to a high degree of task parallelism, and improved test set accuracy, compared to decision tree learning with only constructive induction and state-space search-based wrappers [KJ97].",
"corpus_id": 28678366,
"score": 1
},
{
"doc_id": "5663752",
"title": "Text Extraction from the Web via Text-to-Tag Ratio",
"abstract": "We describe a method to extract content text from diverse Web pages by using the HTML document's text-to-tag ratio rather than specific HTML cues that may not be constant across various Web pages. We describe how to compute the text-to-tag ratio on a line-by-line basis and then cluster the results into content and non-content areas. With this approach we then show surprisingly high levels of recall for all levels of precision, and a large space savings.",
"corpus_id": 5663752,
"score": 1
},
{
"doc_id": "11098618",
"title": "Relational Graphical Models of Computational Workflows for Data Mining",
"abstract": "Collaborative recommendation is the problem of analyzing the content of an information retrieval system and actions of its users, to predict additional topics or products a new user may find useful. Developing this capability poses several challenges to machine learning and reasoning under uncertainty. Recent systems such as CiteSeer [1] have succeeded in providing some specialized but comprehensive indices of full documents, but the kind of probabilistic models used in such indexing do not extend easily to information Grid databases and computational Grid workflows. The collection of user data from Grid portals [4] provides a test bed for the underlying IR technology, including learning and inference systems. To model workflows created using the TAVERNA editor [3] and SCUFL description language, the DESCRIBER system, shown in Figure 1, applies score-based structure learning algorithms, including Bayesian model selection and greedy search (cf. the K2 algorithm) adapted to relational graphical models. Figure 2 illustrates how the decision support front-end of DESCRIBER interacts with modules that learn and reason using probabilistic relational models,. The purpose is to discover interrelationships among, and thereby recommend, components used in workflows developed by other Grid users.",
"corpus_id": 11098618,
"score": 1
},
{
"doc_id": "8303407",
"title": "Probabilistic Prediction of Protein Secondary Structure Using Causal Networks (Extended Abstract)",
"abstract": "In this paper we present a probabilistic approach to analysis and prediction of protein structure. We argue that this approach provides a flexible and convenient mechanism to perform general scientific data analysis in molecular biology. We apply our approach to an important problem in molecular biology--predicting the secondary structure of proteins--and obtain experimental results comparable to several other methods. The causal networks that we use provide a very convenient medium for the scientist to experiment with different empirical models and obtain possibly important insights about the problem being studied.",
"corpus_id": 8303407,
"score": 1
},
{
"doc_id": "205451192",
"title": "Genetic wrappers for feature selection in decision tree induction and variable ordering in Bayesian network structure learning",
"abstract": "In this paper, we address the automated tuning of input specification for supervised inductive learning and develop combinatorial optimization solutions for two such tuning problems. First, we present a framework for selection and reordering of input variables to reduce generalization error in classification and probabilistic inference. One purpose of selection is to control overfitting using validation set accuracy as a criterion for relevance. Similarly, some inductive learning algorithms, such as greedy algorithms for learning probabilistic networks, are sensitive to the evaluation order of variables. We design a generic fitness function for validation of input specification, then use it to develop two genetic algorithm wrappers: one for the variable selection problem for decision tree inducers and one for the variable ordering problem for Bayesian network structure learning. We evaluate the wrappers, using real-world data for the selection wrapper and synthetic data for both, and discuss their limitations and generalizability to other inducers.",
"corpus_id": 205451192,
"score": 1
},
{
"doc_id": "38525575",
"title": "Parameter significance estimation and financial prediction",
"abstract": "This paper deals with the problem of parameter significance estimation, and its application to currency exchange rate prediction. The basic problem is that over the years, practitioners in the field of financial engineering have developed dozens of technical and fundamental indicators on the basis of which they try to predict financial time series. The practitioners are now faced with the problem of finding out which combinations of those indicators are most significant or relevant, and how their significance changes over time. The authors propose a novel neural architecture calledSupNet for estimating the significance of various parameters. The methodology is based on the principle of penalizing those features that are the largest contributors to the error term. Two algorithms based on this principle are proposed. This approach is different from related methodologies, which are based on the principle of removing parameters with the least significance. The proposed methodology is demonstrated on the next day returns of the DM-US currency exchange rate, and promising results are obtained.",
"corpus_id": 38525575,
"score": 0
},
{
"doc_id": "39997128",
"title": "The Clusnet Algorithm And Time Series Prediction",
"abstract": "This paper describes a novel neural network architecture named ClusNet. This network is designed to study the trade-offs between the simplicity of instance-based methods and the accuracy of the more computational intensive learning methods. The features that make this network different from existing learning algorithms are outlined. A simple proof of convergence of the ClusNet algorithm is given. Experimental results showing the convergence of the algorithm on a specific problem is also presented. In this paper, ClusNet is applied to predict the temporal continuation of the Mackey-Glass chaotic time series. A comparison between the results obtained with ClusNet and other neural network algorithms is made. For example, ClusNet requires one-tenth the computing resources of the instance-based local linear method for this application while achieving comparable accuracy in this task. The sensitivity of ClusNet prediction accuracies on specific clustering algorithms is examined for an application. The simplicity and fast convergence of ClusNet makes it ideal as a rapid prototyping tool for applications where on-line learning is required.",
"corpus_id": 39997128,
"score": 0
}
] |
arnetminer | {
"doc_id": "6398168",
"title": "Probabilistic Learning in Bayesian and Stochastic Neural Networks",
"abstract": "The goal of this research is to integrate aspects of artificial neural networks (ANNs) with symbolic machine learning methods in a probabilistic reasoning framework. Improved understanding of the semantics of neural nets supports principled integration efforts between seminumerical (so-called \"subsymbolic\") and symbolic intelligent systems. My dissertation focuses on learning of spatiotemporal (ST) sequences. In recent work, I have investigated architectures for modeling of ST sequences, and dualities between Bayesian networks and ANNs that expose their probabilistic and information theoretic foundations. In addition, I am developing algorithms for automated construction of Bayesian networks (and hybrid models); metrics for comparison of Bayesian networks across architectures; and a quantitative theory of feature construction (in the spirit of the PAC formalism from computational learning theory) for this learning environment. (Haussler 1988) Such methods for pattern prediction will be useful for building advanced knowledge based systems, with diagnostic applications such as intelligent monitoring tools.",
"corpus_id": 6398168
} | [
{
"doc_id": "28678366",
"title": "Genetic Wrappers for Constructive Induction in High-Performance Data Mining",
"abstract": "We present an application of genetic algorithm-based design to configuration of high-level optimization systems, or wrappers, for relevance determination and constructive induction. Our system combines genetic wrappers with elicited knowledge on attribute relevance and synthesis. We discuss decision support issues in a large-scale commercial data mining project (cost prediction for multiple automobile insurance markets), and report experiments using D2K, a Java-based visual programming system for data mining and information visualization, and several commercial and research tools. Our GA system, Jenesis [HWRC00], is deployed on several network-of-workstation systems (Beowulf clusters). It achieves a linear speedup, due to a high degree of task parallelism, and improved test set accuracy, compared to decision tree learning with only constructive induction and state-space search-based wrappers [KJ97].",
"corpus_id": 28678366,
"score": 1
},
{
"doc_id": "205451192",
"title": "Genetic wrappers for feature selection in decision tree induction and variable ordering in Bayesian network structure learning",
"abstract": "In this paper, we address the automated tuning of input specification for supervised inductive learning and develop combinatorial optimization solutions for two such tuning problems. First, we present a framework for selection and reordering of input variables to reduce generalization error in classification and probabilistic inference. One purpose of selection is to control overfitting using validation set accuracy as a criterion for relevance. Similarly, some inductive learning algorithms, such as greedy algorithms for learning probabilistic networks, are sensitive to the evaluation order of variables. We design a generic fitness function for validation of input specification, then use it to develop two genetic algorithm wrappers: one for the variable selection problem for decision tree inducers and one for the variable ordering problem for Bayesian network structure learning. We evaluate the wrappers, using real-world data for the selection wrapper and synthetic data for both, and discuss their limitations and generalizability to other inducers.",
"corpus_id": 205451192,
"score": 1
},
{
"doc_id": "8303407",
"title": "Probabilistic Prediction of Protein Secondary Structure Using Causal Networks (Extended Abstract)",
"abstract": "In this paper we present a probabilistic approach to analysis and prediction of protein structure. We argue that this approach provides a flexible and convenient mechanism to perform general scientific data analysis in molecular biology. We apply our approach to an important problem in molecular biology--predicting the secondary structure of proteins--and obtain experimental results comparable to several other methods. The causal networks that we use provide a very convenient medium for the scientist to experiment with different empirical models and obtain possibly important insights about the problem being studied.",
"corpus_id": 8303407,
"score": 1
},
{
"doc_id": "8590688",
"title": "Evolutionary tree genetic programming",
"abstract": "We introduce a clustering-based method of subpopulation management in genetic programming (GP) called Evolutionary Tree Genetic Programming (ETGP). The biological motivation behind this work is the observation that the natural evolution follows a tree-like phylogenetic pattern. Our goal is to simulate similar behavior in artificial evolutionary systems such as GP. To test our model we use three common GP benchmarks: the Ant Algorithm, 11-Multiplexer, and Parity problems.The performance of the ETGP system is empirically compared to those of the GP system. Code size and variance are consistently reduced by a small but statistically significant percentage, resulting in a slight speedup in the Ant and 11-Multiplexer problems, while the same comparisons on the Parity problem are inconclusive.",
"corpus_id": 8590688,
"score": 1
},
{
"doc_id": "11098618",
"title": "Relational Graphical Models of Computational Workflows for Data Mining",
"abstract": "Collaborative recommendation is the problem of analyzing the content of an information retrieval system and actions of its users, to predict additional topics or products a new user may find useful. Developing this capability poses several challenges to machine learning and reasoning under uncertainty. Recent systems such as CiteSeer [1] have succeeded in providing some specialized but comprehensive indices of full documents, but the kind of probabilistic models used in such indexing do not extend easily to information Grid databases and computational Grid workflows. The collection of user data from Grid portals [4] provides a test bed for the underlying IR technology, including learning and inference systems. To model workflows created using the TAVERNA editor [3] and SCUFL description language, the DESCRIBER system, shown in Figure 1, applies score-based structure learning algorithms, including Bayesian model selection and greedy search (cf. the K2 algorithm) adapted to relational graphical models. Figure 2 illustrates how the decision support front-end of DESCRIBER interacts with modules that learn and reason using probabilistic relational models,. The purpose is to discover interrelationships among, and thereby recommend, components used in workflows developed by other Grid users.",
"corpus_id": 11098618,
"score": 1
},
{
"doc_id": "38525575",
"title": "Parameter significance estimation and financial prediction",
"abstract": "This paper deals with the problem of parameter significance estimation, and its application to currency exchange rate prediction. The basic problem is that over the years, practitioners in the field of financial engineering have developed dozens of technical and fundamental indicators on the basis of which they try to predict financial time series. The practitioners are now faced with the problem of finding out which combinations of those indicators are most significant or relevant, and how their significance changes over time. The authors propose a novel neural architecture calledSupNet for estimating the significance of various parameters. The methodology is based on the principle of penalizing those features that are the largest contributors to the error term. Two algorithms based on this principle are proposed. This approach is different from related methodologies, which are based on the principle of removing parameters with the least significance. The proposed methodology is demonstrated on the next day returns of the DM-US currency exchange rate, and promising results are obtained.",
"corpus_id": 38525575,
"score": 0
},
{
"doc_id": "39997128",
"title": "The Clusnet Algorithm And Time Series Prediction",
"abstract": "This paper describes a novel neural network architecture named ClusNet. This network is designed to study the trade-offs between the simplicity of instance-based methods and the accuracy of the more computational intensive learning methods. The features that make this network different from existing learning algorithms are outlined. A simple proof of convergence of the ClusNet algorithm is given. Experimental results showing the convergence of the algorithm on a specific problem is also presented. In this paper, ClusNet is applied to predict the temporal continuation of the Mackey-Glass chaotic time series. A comparison between the results obtained with ClusNet and other neural network algorithms is made. For example, ClusNet requires one-tenth the computing resources of the instance-based local linear method for this application while achieving comparable accuracy in this task. The sensitivity of ClusNet prediction accuracies on specific clustering algorithms is examined for an application. The simplicity and fast convergence of ClusNet makes it ideal as a rapid prototyping tool for applications where on-line learning is required.",
"corpus_id": 39997128,
"score": 0
}
] |
arnetminer | {
"doc_id": "16016769",
"title": "Structural Prediction of Protein-Protein Interactions in Saccharomyces cerevisiae",
"abstract": "Protein-protein interactions (PPI) refer to the associations between proteins and the study of these associations. Several approaches have been used to address the problem of predicting PPI. Some of them are based on biological features extracted from a protein sequence (such as, amino acid composition, GO terms, etc.); others use relational and structural features extracted from the PPI network, which can be represented as a graph. Our approach falls in the second category. We adapt a general approach to graph feature extraction that has previously been applied to collaborative recommendation of friends in social networks. Several structural features are identified based on the PPI graph and used to learn classifiers for predicting new interactions. Two datasets containing Saccharomyces cerevisiae PPI are used to test the proposed approach. Both these datasets were assembled from the Database of Interacting Proteins (DIP). We assembled the first data set directly from DIP in April 2006, while the second data set has been used in previous studies, thus making it easy to compare our approach with previous approaches. Several classifiers are trained using the structural features extracted from the interactions graph. The results show good performance (accuracy, sensitivity and specificity), proving that the structural features are highly predictive with respect to PPI.",
"corpus_id": 16016769
} | [
{
"doc_id": "205451192",
"title": "Genetic wrappers for feature selection in decision tree induction and variable ordering in Bayesian network structure learning",
"abstract": "In this paper, we address the automated tuning of input specification for supervised inductive learning and develop combinatorial optimization solutions for two such tuning problems. First, we present a framework for selection and reordering of input variables to reduce generalization error in classification and probabilistic inference. One purpose of selection is to control overfitting using validation set accuracy as a criterion for relevance. Similarly, some inductive learning algorithms, such as greedy algorithms for learning probabilistic networks, are sensitive to the evaluation order of variables. We design a generic fitness function for validation of input specification, then use it to develop two genetic algorithm wrappers: one for the variable selection problem for decision tree inducers and one for the variable ordering problem for Bayesian network structure learning. We evaluate the wrappers, using real-world data for the selection wrapper and synthetic data for both, and discuss their limitations and generalizability to other inducers.",
"corpus_id": 205451192,
"score": 1
},
{
"doc_id": "11098618",
"title": "Relational Graphical Models of Computational Workflows for Data Mining",
"abstract": "Collaborative recommendation is the problem of analyzing the content of an information retrieval system and actions of its users, to predict additional topics or products a new user may find useful. Developing this capability poses several challenges to machine learning and reasoning under uncertainty. Recent systems such as CiteSeer [1] have succeeded in providing some specialized but comprehensive indices of full documents, but the kind of probabilistic models used in such indexing do not extend easily to information Grid databases and computational Grid workflows. The collection of user data from Grid portals [4] provides a test bed for the underlying IR technology, including learning and inference systems. To model workflows created using the TAVERNA editor [3] and SCUFL description language, the DESCRIBER system, shown in Figure 1, applies score-based structure learning algorithms, including Bayesian model selection and greedy search (cf. the K2 algorithm) adapted to relational graphical models. Figure 2 illustrates how the decision support front-end of DESCRIBER interacts with modules that learn and reason using probabilistic relational models,. The purpose is to discover interrelationships among, and thereby recommend, components used in workflows developed by other Grid users.",
"corpus_id": 11098618,
"score": 1
},
{
"doc_id": "5663752",
"title": "Text Extraction from the Web via Text-to-Tag Ratio",
"abstract": "We describe a method to extract content text from diverse Web pages by using the HTML document's text-to-tag ratio rather than specific HTML cues that may not be constant across various Web pages. We describe how to compute the text-to-tag ratio on a line-by-line basis and then cluster the results into content and non-content areas. With this approach we then show surprisingly high levels of recall for all levels of precision, and a large space savings.",
"corpus_id": 5663752,
"score": 1
},
{
"doc_id": "28678366",
"title": "Genetic Wrappers for Constructive Induction in High-Performance Data Mining",
"abstract": "We present an application of genetic algorithm-based design to configuration of high-level optimization systems, or wrappers, for relevance determination and constructive induction. Our system combines genetic wrappers with elicited knowledge on attribute relevance and synthesis. We discuss decision support issues in a large-scale commercial data mining project (cost prediction for multiple automobile insurance markets), and report experiments using D2K, a Java-based visual programming system for data mining and information visualization, and several commercial and research tools. Our GA system, Jenesis [HWRC00], is deployed on several network-of-workstation systems (Beowulf clusters). It achieves a linear speedup, due to a high degree of task parallelism, and improved test set accuracy, compared to decision tree learning with only constructive induction and state-space search-based wrappers [KJ97].",
"corpus_id": 28678366,
"score": 1
},
{
"doc_id": "8303407",
"title": "Probabilistic Prediction of Protein Secondary Structure Using Causal Networks (Extended Abstract)",
"abstract": "In this paper we present a probabilistic approach to analysis and prediction of protein structure. We argue that this approach provides a flexible and convenient mechanism to perform general scientific data analysis in molecular biology. We apply our approach to an important problem in molecular biology--predicting the secondary structure of proteins--and obtain experimental results comparable to several other methods. The causal networks that we use provide a very convenient medium for the scientist to experiment with different empirical models and obtain possibly important insights about the problem being studied.",
"corpus_id": 8303407,
"score": 1
},
{
"doc_id": "38525575",
"title": "Parameter significance estimation and financial prediction",
"abstract": "This paper deals with the problem of parameter significance estimation, and its application to currency exchange rate prediction. The basic problem is that over the years, practitioners in the field of financial engineering have developed dozens of technical and fundamental indicators on the basis of which they try to predict financial time series. The practitioners are now faced with the problem of finding out which combinations of those indicators are most significant or relevant, and how their significance changes over time. The authors propose a novel neural architecture calledSupNet for estimating the significance of various parameters. The methodology is based on the principle of penalizing those features that are the largest contributors to the error term. Two algorithms based on this principle are proposed. This approach is different from related methodologies, which are based on the principle of removing parameters with the least significance. The proposed methodology is demonstrated on the next day returns of the DM-US currency exchange rate, and promising results are obtained.",
"corpus_id": 38525575,
"score": 0
},
{
"doc_id": "39997128",
"title": "The Clusnet Algorithm And Time Series Prediction",
"abstract": "This paper describes a novel neural network architecture named ClusNet. This network is designed to study the trade-offs between the simplicity of instance-based methods and the accuracy of the more computational intensive learning methods. The features that make this network different from existing learning algorithms are outlined. A simple proof of convergence of the ClusNet algorithm is given. Experimental results showing the convergence of the algorithm on a specific problem is also presented. In this paper, ClusNet is applied to predict the temporal continuation of the Mackey-Glass chaotic time series. A comparison between the results obtained with ClusNet and other neural network algorithms is made. For example, ClusNet requires one-tenth the computing resources of the instance-based local linear method for this application while achieving comparable accuracy in this task. The sensitivity of ClusNet prediction accuracies on specific clustering algorithms is examined for an application. The simplicity and fast convergence of ClusNet makes it ideal as a rapid prototyping tool for applications where on-line learning is required.",
"corpus_id": 39997128,
"score": 0
}
] |
arnetminer | {
"doc_id": "1763805",
"title": "A Multistrategy Approach to Classifier Learning from Time Series",
"abstract": "We present an approach to inductive concept learning using multiple models for time series. Our objective is to improve the efficiency and accuracy of concept learning by decomposing learning tasks that admit multiple types of learning architectures and mixture estimation methods. The decomposition method adapts attribute subset selection and constructive induction (cluster definition) to define new subproblems. To these problem definitions, we can apply metric-based model selection to select from a database of learning components, thereby producing a specification for supervised learning using a mixture model. We report positive learning results using temporal artificial neural networks (ANNs), on a synthetic, multiattribute learning problem and on a real-world time series monitoring application.",
"corpus_id": 1763805
} | [
{
"doc_id": "8303407",
"title": "Probabilistic Prediction of Protein Secondary Structure Using Causal Networks (Extended Abstract)",
"abstract": "In this paper we present a probabilistic approach to analysis and prediction of protein structure. We argue that this approach provides a flexible and convenient mechanism to perform general scientific data analysis in molecular biology. We apply our approach to an important problem in molecular biology--predicting the secondary structure of proteins--and obtain experimental results comparable to several other methods. The causal networks that we use provide a very convenient medium for the scientist to experiment with different empirical models and obtain possibly important insights about the problem being studied.",
"corpus_id": 8303407,
"score": 1
},
{
"doc_id": "11098618",
"title": "Relational Graphical Models of Computational Workflows for Data Mining",
"abstract": "Collaborative recommendation is the problem of analyzing the content of an information retrieval system and actions of its users, to predict additional topics or products a new user may find useful. Developing this capability poses several challenges to machine learning and reasoning under uncertainty. Recent systems such as CiteSeer [1] have succeeded in providing some specialized but comprehensive indices of full documents, but the kind of probabilistic models used in such indexing do not extend easily to information Grid databases and computational Grid workflows. The collection of user data from Grid portals [4] provides a test bed for the underlying IR technology, including learning and inference systems. To model workflows created using the TAVERNA editor [3] and SCUFL description language, the DESCRIBER system, shown in Figure 1, applies score-based structure learning algorithms, including Bayesian model selection and greedy search (cf. the K2 algorithm) adapted to relational graphical models. Figure 2 illustrates how the decision support front-end of DESCRIBER interacts with modules that learn and reason using probabilistic relational models,. The purpose is to discover interrelationships among, and thereby recommend, components used in workflows developed by other Grid users.",
"corpus_id": 11098618,
"score": 1
},
{
"doc_id": "205451192",
"title": "Genetic wrappers for feature selection in decision tree induction and variable ordering in Bayesian network structure learning",
"abstract": "In this paper, we address the automated tuning of input specification for supervised inductive learning and develop combinatorial optimization solutions for two such tuning problems. First, we present a framework for selection and reordering of input variables to reduce generalization error in classification and probabilistic inference. One purpose of selection is to control overfitting using validation set accuracy as a criterion for relevance. Similarly, some inductive learning algorithms, such as greedy algorithms for learning probabilistic networks, are sensitive to the evaluation order of variables. We design a generic fitness function for validation of input specification, then use it to develop two genetic algorithm wrappers: one for the variable selection problem for decision tree inducers and one for the variable ordering problem for Bayesian network structure learning. We evaluate the wrappers, using real-world data for the selection wrapper and synthetic data for both, and discuss their limitations and generalizability to other inducers.",
"corpus_id": 205451192,
"score": 1
},
{
"doc_id": "28678366",
"title": "Genetic Wrappers for Constructive Induction in High-Performance Data Mining",
"abstract": "We present an application of genetic algorithm-based design to configuration of high-level optimization systems, or wrappers, for relevance determination and constructive induction. Our system combines genetic wrappers with elicited knowledge on attribute relevance and synthesis. We discuss decision support issues in a large-scale commercial data mining project (cost prediction for multiple automobile insurance markets), and report experiments using D2K, a Java-based visual programming system for data mining and information visualization, and several commercial and research tools. Our GA system, Jenesis [HWRC00], is deployed on several network-of-workstation systems (Beowulf clusters). It achieves a linear speedup, due to a high degree of task parallelism, and improved test set accuracy, compared to decision tree learning with only constructive induction and state-space search-based wrappers [KJ97].",
"corpus_id": 28678366,
"score": 1
},
{
"doc_id": "5663752",
"title": "Text Extraction from the Web via Text-to-Tag Ratio",
"abstract": "We describe a method to extract content text from diverse Web pages by using the HTML document's text-to-tag ratio rather than specific HTML cues that may not be constant across various Web pages. We describe how to compute the text-to-tag ratio on a line-by-line basis and then cluster the results into content and non-content areas. With this approach we then show surprisingly high levels of recall for all levels of precision, and a large space savings.",
"corpus_id": 5663752,
"score": 1
},
{
"doc_id": "39997128",
"title": "The Clusnet Algorithm And Time Series Prediction",
"abstract": "This paper describes a novel neural network architecture named ClusNet. This network is designed to study the trade-offs between the simplicity of instance-based methods and the accuracy of the more computational intensive learning methods. The features that make this network different from existing learning algorithms are outlined. A simple proof of convergence of the ClusNet algorithm is given. Experimental results showing the convergence of the algorithm on a specific problem is also presented. In this paper, ClusNet is applied to predict the temporal continuation of the Mackey-Glass chaotic time series. A comparison between the results obtained with ClusNet and other neural network algorithms is made. For example, ClusNet requires one-tenth the computing resources of the instance-based local linear method for this application while achieving comparable accuracy in this task. The sensitivity of ClusNet prediction accuracies on specific clustering algorithms is examined for an application. The simplicity and fast convergence of ClusNet makes it ideal as a rapid prototyping tool for applications where on-line learning is required.",
"corpus_id": 39997128,
"score": 0
},
{
"doc_id": "38525575",
"title": "Parameter significance estimation and financial prediction",
"abstract": "This paper deals with the problem of parameter significance estimation, and its application to currency exchange rate prediction. The basic problem is that over the years, practitioners in the field of financial engineering have developed dozens of technical and fundamental indicators on the basis of which they try to predict financial time series. The practitioners are now faced with the problem of finding out which combinations of those indicators are most significant or relevant, and how their significance changes over time. The authors propose a novel neural architecture calledSupNet for estimating the significance of various parameters. The methodology is based on the principle of penalizing those features that are the largest contributors to the error term. Two algorithms based on this principle are proposed. This approach is different from related methodologies, which are based on the principle of removing parameters with the least significance. The proposed methodology is demonstrated on the next day returns of the DM-US currency exchange rate, and promising results are obtained.",
"corpus_id": 38525575,
"score": 0
}
] |
arnetminer | {
"doc_id": "8952809",
"title": "High-Performance Commercial Data Mining: A Multistrategy Machine Learning Application",
"abstract": "We present an application of inductive concept learning and interactive visualization techniques to a large-scale commercial data mining project. This paper focuses on design and configuration of high-level optimization systems (wrappers) for relevance determination and constructive induction, and on integrating these wrappers with elicited knowledge on attribute relevance and synthesis. In particular, we discuss decision support issues for the application (cost prediction for automobile insurance markets in several states) and report experiments using D2K, a Java-based visual programming system for data mining and information visualization, and several commercial and research tools. We describe exploratory clustering, descriptive statistics, and supervised decision tree learning in this application, focusing on a parallel genetic algorithm (GA) system, Jenesis, which is used to implement relevance determination (attribute subset selection). Deployed on several high-performance network-of-workstation systems (Beowulf clusters), Jenesis achieves a linear speedup, due to a high degree of task parallelism. Its test set accuracy is significantly higher than that of decision tree inducers alone and is comparable to that of the best extant search-space based wrappers.",
"corpus_id": 8952809
} | [
{
"doc_id": "8303407",
"title": "Probabilistic Prediction of Protein Secondary Structure Using Causal Networks (Extended Abstract)",
"abstract": "In this paper we present a probabilistic approach to analysis and prediction of protein structure. We argue that this approach provides a flexible and convenient mechanism to perform general scientific data analysis in molecular biology. We apply our approach to an important problem in molecular biology--predicting the secondary structure of proteins--and obtain experimental results comparable to several other methods. The causal networks that we use provide a very convenient medium for the scientist to experiment with different empirical models and obtain possibly important insights about the problem being studied.",
"corpus_id": 8303407,
"score": 1
},
{
"doc_id": "28678366",
"title": "Genetic Wrappers for Constructive Induction in High-Performance Data Mining",
"abstract": "We present an application of genetic algorithm-based design to configuration of high-level optimization systems, or wrappers, for relevance determination and constructive induction. Our system combines genetic wrappers with elicited knowledge on attribute relevance and synthesis. We discuss decision support issues in a large-scale commercial data mining project (cost prediction for multiple automobile insurance markets), and report experiments using D2K, a Java-based visual programming system for data mining and information visualization, and several commercial and research tools. Our GA system, Jenesis [HWRC00], is deployed on several network-of-workstation systems (Beowulf clusters). It achieves a linear speedup, due to a high degree of task parallelism, and improved test set accuracy, compared to decision tree learning with only constructive induction and state-space search-based wrappers [KJ97].",
"corpus_id": 28678366,
"score": 1
},
{
"doc_id": "5663752",
"title": "Text Extraction from the Web via Text-to-Tag Ratio",
"abstract": "We describe a method to extract content text from diverse Web pages by using the HTML document's text-to-tag ratio rather than specific HTML cues that may not be constant across various Web pages. We describe how to compute the text-to-tag ratio on a line-by-line basis and then cluster the results into content and non-content areas. With this approach we then show surprisingly high levels of recall for all levels of precision, and a large space savings.",
"corpus_id": 5663752,
"score": 1
},
{
"doc_id": "205451192",
"title": "Genetic wrappers for feature selection in decision tree induction and variable ordering in Bayesian network structure learning",
"abstract": "In this paper, we address the automated tuning of input specification for supervised inductive learning and develop combinatorial optimization solutions for two such tuning problems. First, we present a framework for selection and reordering of input variables to reduce generalization error in classification and probabilistic inference. One purpose of selection is to control overfitting using validation set accuracy as a criterion for relevance. Similarly, some inductive learning algorithms, such as greedy algorithms for learning probabilistic networks, are sensitive to the evaluation order of variables. We design a generic fitness function for validation of input specification, then use it to develop two genetic algorithm wrappers: one for the variable selection problem for decision tree inducers and one for the variable ordering problem for Bayesian network structure learning. We evaluate the wrappers, using real-world data for the selection wrapper and synthetic data for both, and discuss their limitations and generalizability to other inducers.",
"corpus_id": 205451192,
"score": 1
},
{
"doc_id": "11098618",
"title": "Relational Graphical Models of Computational Workflows for Data Mining",
"abstract": "Collaborative recommendation is the problem of analyzing the content of an information retrieval system and actions of its users, to predict additional topics or products a new user may find useful. Developing this capability poses several challenges to machine learning and reasoning under uncertainty. Recent systems such as CiteSeer [1] have succeeded in providing some specialized but comprehensive indices of full documents, but the kind of probabilistic models used in such indexing do not extend easily to information Grid databases and computational Grid workflows. The collection of user data from Grid portals [4] provides a test bed for the underlying IR technology, including learning and inference systems. To model workflows created using the TAVERNA editor [3] and SCUFL description language, the DESCRIBER system, shown in Figure 1, applies score-based structure learning algorithms, including Bayesian model selection and greedy search (cf. the K2 algorithm) adapted to relational graphical models. Figure 2 illustrates how the decision support front-end of DESCRIBER interacts with modules that learn and reason using probabilistic relational models,. The purpose is to discover interrelationships among, and thereby recommend, components used in workflows developed by other Grid users.",
"corpus_id": 11098618,
"score": 1
},
{
"doc_id": "38525575",
"title": "Parameter significance estimation and financial prediction",
"abstract": "This paper deals with the problem of parameter significance estimation, and its application to currency exchange rate prediction. The basic problem is that over the years, practitioners in the field of financial engineering have developed dozens of technical and fundamental indicators on the basis of which they try to predict financial time series. The practitioners are now faced with the problem of finding out which combinations of those indicators are most significant or relevant, and how their significance changes over time. The authors propose a novel neural architecture calledSupNet for estimating the significance of various parameters. The methodology is based on the principle of penalizing those features that are the largest contributors to the error term. Two algorithms based on this principle are proposed. This approach is different from related methodologies, which are based on the principle of removing parameters with the least significance. The proposed methodology is demonstrated on the next day returns of the DM-US currency exchange rate, and promising results are obtained.",
"corpus_id": 38525575,
"score": 0
},
{
"doc_id": "39997128",
"title": "The Clusnet Algorithm And Time Series Prediction",
"abstract": "This paper describes a novel neural network architecture named ClusNet. This network is designed to study the trade-offs between the simplicity of instance-based methods and the accuracy of the more computational intensive learning methods. The features that make this network different from existing learning algorithms are outlined. A simple proof of convergence of the ClusNet algorithm is given. Experimental results showing the convergence of the algorithm on a specific problem is also presented. In this paper, ClusNet is applied to predict the temporal continuation of the Mackey-Glass chaotic time series. A comparison between the results obtained with ClusNet and other neural network algorithms is made. For example, ClusNet requires one-tenth the computing resources of the instance-based local linear method for this application while achieving comparable accuracy in this task. The sensitivity of ClusNet prediction accuracies on specific clustering algorithms is examined for an application. The simplicity and fast convergence of ClusNet makes it ideal as a rapid prototyping tool for applications where on-line learning is required.",
"corpus_id": 39997128,
"score": 0
}
] |
arnetminer | {
"doc_id": "19127972",
"title": "Genetic Algorithm Wrappers For Feature Subset Selection In Supervised Inductive Learning",
"abstract": "1. Inferential loss: Quality of the model produced by an inducer as detected through inferential loss evaluated over a holdout validation data set Dval ≡ D \\ Dtrain 2. Model loss: “Size” of the model under a specified coding or representation 3. Ordering loss: Inference/classificationindependent and model-independent measure of data quality given only training and validation dataD and hyperparameters ÿ",
"corpus_id": 19127972
} | [
{
"doc_id": "5663752",
"title": "Text Extraction from the Web via Text-to-Tag Ratio",
"abstract": "We describe a method to extract content text from diverse Web pages by using the HTML document's text-to-tag ratio rather than specific HTML cues that may not be constant across various Web pages. We describe how to compute the text-to-tag ratio on a line-by-line basis and then cluster the results into content and non-content areas. With this approach we then show surprisingly high levels of recall for all levels of precision, and a large space savings.",
"corpus_id": 5663752,
"score": 1
},
{
"doc_id": "28678366",
"title": "Genetic Wrappers for Constructive Induction in High-Performance Data Mining",
"abstract": "We present an application of genetic algorithm-based design to configuration of high-level optimization systems, or wrappers, for relevance determination and constructive induction. Our system combines genetic wrappers with elicited knowledge on attribute relevance and synthesis. We discuss decision support issues in a large-scale commercial data mining project (cost prediction for multiple automobile insurance markets), and report experiments using D2K, a Java-based visual programming system for data mining and information visualization, and several commercial and research tools. Our GA system, Jenesis [HWRC00], is deployed on several network-of-workstation systems (Beowulf clusters). It achieves a linear speedup, due to a high degree of task parallelism, and improved test set accuracy, compared to decision tree learning with only constructive induction and state-space search-based wrappers [KJ97].",
"corpus_id": 28678366,
"score": 1
},
{
"doc_id": "8303407",
"title": "Probabilistic Prediction of Protein Secondary Structure Using Causal Networks (Extended Abstract)",
"abstract": "In this paper we present a probabilistic approach to analysis and prediction of protein structure. We argue that this approach provides a flexible and convenient mechanism to perform general scientific data analysis in molecular biology. We apply our approach to an important problem in molecular biology--predicting the secondary structure of proteins--and obtain experimental results comparable to several other methods. The causal networks that we use provide a very convenient medium for the scientist to experiment with different empirical models and obtain possibly important insights about the problem being studied.",
"corpus_id": 8303407,
"score": 1
},
{
"doc_id": "11098618",
"title": "Relational Graphical Models of Computational Workflows for Data Mining",
"abstract": "Collaborative recommendation is the problem of analyzing the content of an information retrieval system and actions of its users, to predict additional topics or products a new user may find useful. Developing this capability poses several challenges to machine learning and reasoning under uncertainty. Recent systems such as CiteSeer [1] have succeeded in providing some specialized but comprehensive indices of full documents, but the kind of probabilistic models used in such indexing do not extend easily to information Grid databases and computational Grid workflows. The collection of user data from Grid portals [4] provides a test bed for the underlying IR technology, including learning and inference systems. To model workflows created using the TAVERNA editor [3] and SCUFL description language, the DESCRIBER system, shown in Figure 1, applies score-based structure learning algorithms, including Bayesian model selection and greedy search (cf. the K2 algorithm) adapted to relational graphical models. Figure 2 illustrates how the decision support front-end of DESCRIBER interacts with modules that learn and reason using probabilistic relational models,. The purpose is to discover interrelationships among, and thereby recommend, components used in workflows developed by other Grid users.",
"corpus_id": 11098618,
"score": 1
},
{
"doc_id": "205451192",
"title": "Genetic wrappers for feature selection in decision tree induction and variable ordering in Bayesian network structure learning",
"abstract": "In this paper, we address the automated tuning of input specification for supervised inductive learning and develop combinatorial optimization solutions for two such tuning problems. First, we present a framework for selection and reordering of input variables to reduce generalization error in classification and probabilistic inference. One purpose of selection is to control overfitting using validation set accuracy as a criterion for relevance. Similarly, some inductive learning algorithms, such as greedy algorithms for learning probabilistic networks, are sensitive to the evaluation order of variables. We design a generic fitness function for validation of input specification, then use it to develop two genetic algorithm wrappers: one for the variable selection problem for decision tree inducers and one for the variable ordering problem for Bayesian network structure learning. We evaluate the wrappers, using real-world data for the selection wrapper and synthetic data for both, and discuss their limitations and generalizability to other inducers.",
"corpus_id": 205451192,
"score": 1
},
{
"doc_id": "39997128",
"title": "The Clusnet Algorithm And Time Series Prediction",
"abstract": "This paper describes a novel neural network architecture named ClusNet. This network is designed to study the trade-offs between the simplicity of instance-based methods and the accuracy of the more computational intensive learning methods. The features that make this network different from existing learning algorithms are outlined. A simple proof of convergence of the ClusNet algorithm is given. Experimental results showing the convergence of the algorithm on a specific problem is also presented. In this paper, ClusNet is applied to predict the temporal continuation of the Mackey-Glass chaotic time series. A comparison between the results obtained with ClusNet and other neural network algorithms is made. For example, ClusNet requires one-tenth the computing resources of the instance-based local linear method for this application while achieving comparable accuracy in this task. The sensitivity of ClusNet prediction accuracies on specific clustering algorithms is examined for an application. The simplicity and fast convergence of ClusNet makes it ideal as a rapid prototyping tool for applications where on-line learning is required.",
"corpus_id": 39997128,
"score": 0
},
{
"doc_id": "38525575",
"title": "Parameter significance estimation and financial prediction",
"abstract": "This paper deals with the problem of parameter significance estimation, and its application to currency exchange rate prediction. The basic problem is that over the years, practitioners in the field of financial engineering have developed dozens of technical and fundamental indicators on the basis of which they try to predict financial time series. The practitioners are now faced with the problem of finding out which combinations of those indicators are most significant or relevant, and how their significance changes over time. The authors propose a novel neural architecture calledSupNet for estimating the significance of various parameters. The methodology is based on the principle of penalizing those features that are the largest contributors to the error term. Two algorithms based on this principle are proposed. This approach is different from related methodologies, which are based on the principle of removing parameters with the least significance. The proposed methodology is demonstrated on the next day returns of the DM-US currency exchange rate, and promising results are obtained.",
"corpus_id": 38525575,
"score": 0
}
] |
arnetminer | {
"doc_id": "8590688",
"title": "Evolutionary tree genetic programming",
"abstract": "We introduce a clustering-based method of subpopulation management in genetic programming (GP) called Evolutionary Tree Genetic Programming (ETGP). The biological motivation behind this work is the observation that the natural evolution follows a tree-like phylogenetic pattern. Our goal is to simulate similar behavior in artificial evolutionary systems such as GP. To test our model we use three common GP benchmarks: the Ant Algorithm, 11-Multiplexer, and Parity problems.The performance of the ETGP system is empirically compared to those of the GP system. Code size and variance are consistently reduced by a small but statistically significant percentage, resulting in a slight speedup in the Ant and 11-Multiplexer problems, while the same comparisons on the Parity problem are inconclusive.",
"corpus_id": 8590688
} | [
{
"doc_id": "39278459",
"title": "Genetic Algorithms for Reformulation of Large-Scale KDD Problems with Many Irrelevant Attributes",
"abstract": "The goal of this research is to apply genetic implementations of algorithms for selection, partitioning, and synthesis of attributes in large-scale data mining problems. Domain knowledge about these operators has been shown to reduce the number of fitness evaluations for candidate attributes. We report results on genetic optimization of attribute selection problems and current work on attribute partitioning, synthesis specifications, and the encoding of domain knowledge about operators in a fitness function. The purpose of this approach is to reduce overfitting in inductive learning and produce more general genetic versions of existing search-based algorithms (or wrappers) for KDD performance tuning [KS98, HG00]. Several GA implementations of alternative attribute synthesis algorithms are applied to concept learning problems in military and commercial KDD applications. One of these, Jenesis, is deployed on several network-of-workstation clusters. It is shown to achieve strongly improved test set accuracy, compared to unwrapped decision tree learning and search-based wrappers [KS98].",
"corpus_id": 39278459,
"score": 1
},
{
"doc_id": "18622837",
"title": "Graph Drawing Heuristics for Path Finding in Large Dimensionless Graphs",
"abstract": "This paper presents a heuristic for guiding A*search for approximating the shortest path between two vertices in arbitrarily-sized dimensionless graphs. First we discuss methods by which these dimensionless graphs are laid out into Euclidean drawings. Next, two heuristics are computed based on drawings of the graphs. We compare the performance of an A*-search using these heuristics with breadth-first search on graphs with various topological properties. The results show a large savings in the number of vertices expanded for large graphs.",
"corpus_id": 18622837,
"score": 1
},
{
"doc_id": "5371240",
"title": "A machine learning approach to algorithm selection for \n$\\mathcal{NP}$\n-hard optimization problems: a case study on the MPE problem",
"abstract": "Abstract\nGiven one instance of an \n$\\mathcal{NP}$\n-hard optimization problem, can we tell in advance whether it is exactly solvable or not? If it is not, can we predict which approximate algorithm is the best to solve it? Since the behavior of most approximate, randomized, and heuristic search algorithms for \n$\\mathcal{NP}$\n-hard problems is usually very difficult to characterize analytically, researchers have turned to experimental methods in order to answer these questions. In this paper we present a machine learning-based approach to address the above questions. Models induced from algorithmic performance data can represent the knowledge of how algorithmic performance depends on some easy-to-compute problem instance characteristics. Using these models, we can estimate approximately whether an input instance is exactly solvable or not. Furthermore, when it is classified as exactly unsolvable, we can select the best approximate algorithm for it among a list of candidates. In this paper we use the MPE (most probable explanation) problem in probabilistic inference as a case study to validate the proposed methodology. Our experimental results show that the machine learning-based algorithm selection system can integrate both exact and inexact algorithms and provide the best overall performance comparing to any single candidate algorithm.\n",
"corpus_id": 5371240,
"score": 1
},
{
"doc_id": "2749970",
"title": "Layered Learning in Genetic Programming for a Cooperative Robot Soccer Problem",
"abstract": "We present an alternative to standard genetic programming (GP) that applies layered learning techniques to decompose a problem. GP is applied to subproblems sequentially, where the population in the last generation of a subproblem is used as the initial population of the next subproblem. This method is applied to evolve agents to play keep-away soccer, a subproblem of robotic soccer that requires cooperation among multiple agents in a dynnamic environment. The layered learning paradigm allows GP to evolve better solutions faster than standard GP. Results show that the layered learning GP outperforms standard GP by evolving a lower fitness faster and an overall better fitness. Results indicate a wide area of future research with layered learning in GP.",
"corpus_id": 2749970,
"score": 1
},
{
"doc_id": "28678366",
"title": "Genetic Wrappers for Constructive Induction in High-Performance Data Mining",
"abstract": "We present an application of genetic algorithm-based design to configuration of high-level optimization systems, or wrappers, for relevance determination and constructive induction. Our system combines genetic wrappers with elicited knowledge on attribute relevance and synthesis. We discuss decision support issues in a large-scale commercial data mining project (cost prediction for multiple automobile insurance markets), and report experiments using D2K, a Java-based visual programming system for data mining and information visualization, and several commercial and research tools. Our GA system, Jenesis [HWRC00], is deployed on several network-of-workstation systems (Beowulf clusters). It achieves a linear speedup, due to a high degree of task parallelism, and improved test set accuracy, compared to decision tree learning with only constructive induction and state-space search-based wrappers [KJ97].",
"corpus_id": 28678366,
"score": 1
},
{
"doc_id": "38525575",
"title": "Parameter significance estimation and financial prediction",
"abstract": "This paper deals with the problem of parameter significance estimation, and its application to currency exchange rate prediction. The basic problem is that over the years, practitioners in the field of financial engineering have developed dozens of technical and fundamental indicators on the basis of which they try to predict financial time series. The practitioners are now faced with the problem of finding out which combinations of those indicators are most significant or relevant, and how their significance changes over time. The authors propose a novel neural architecture calledSupNet for estimating the significance of various parameters. The methodology is based on the principle of penalizing those features that are the largest contributors to the error term. Two algorithms based on this principle are proposed. This approach is different from related methodologies, which are based on the principle of removing parameters with the least significance. The proposed methodology is demonstrated on the next day returns of the DM-US currency exchange rate, and promising results are obtained.",
"corpus_id": 38525575,
"score": 0
},
{
"doc_id": "39997128",
"title": "The Clusnet Algorithm And Time Series Prediction",
"abstract": "This paper describes a novel neural network architecture named ClusNet. This network is designed to study the trade-offs between the simplicity of instance-based methods and the accuracy of the more computational intensive learning methods. The features that make this network different from existing learning algorithms are outlined. A simple proof of convergence of the ClusNet algorithm is given. Experimental results showing the convergence of the algorithm on a specific problem is also presented. In this paper, ClusNet is applied to predict the temporal continuation of the Mackey-Glass chaotic time series. A comparison between the results obtained with ClusNet and other neural network algorithms is made. For example, ClusNet requires one-tenth the computing resources of the instance-based local linear method for this application while achieving comparable accuracy in this task. The sensitivity of ClusNet prediction accuracies on specific clustering algorithms is examined for an application. The simplicity and fast convergence of ClusNet makes it ideal as a rapid prototyping tool for applications where on-line learning is required.",
"corpus_id": 39997128,
"score": 0
}
] |
arnetminer | {
"doc_id": "39278459",
"title": "Genetic Algorithms for Reformulation of Large-Scale KDD Problems with Many Irrelevant Attributes",
"abstract": "The goal of this research is to apply genetic implementations of algorithms for selection, partitioning, and synthesis of attributes in large-scale data mining problems. Domain knowledge about these operators has been shown to reduce the number of fitness evaluations for candidate attributes. We report results on genetic optimization of attribute selection problems and current work on attribute partitioning, synthesis specifications, and the encoding of domain knowledge about operators in a fitness function. The purpose of this approach is to reduce overfitting in inductive learning and produce more general genetic versions of existing search-based algorithms (or wrappers) for KDD performance tuning [KS98, HG00]. Several GA implementations of alternative attribute synthesis algorithms are applied to concept learning problems in military and commercial KDD applications. One of these, Jenesis, is deployed on several network-of-workstation clusters. It is shown to achieve strongly improved test set accuracy, compared to unwrapped decision tree learning and search-based wrappers [KS98].",
"corpus_id": 39278459
} | [
{
"doc_id": "11098618",
"title": "Relational Graphical Models of Computational Workflows for Data Mining",
"abstract": "Collaborative recommendation is the problem of analyzing the content of an information retrieval system and actions of its users, to predict additional topics or products a new user may find useful. Developing this capability poses several challenges to machine learning and reasoning under uncertainty. Recent systems such as CiteSeer [1] have succeeded in providing some specialized but comprehensive indices of full documents, but the kind of probabilistic models used in such indexing do not extend easily to information Grid databases and computational Grid workflows. The collection of user data from Grid portals [4] provides a test bed for the underlying IR technology, including learning and inference systems. To model workflows created using the TAVERNA editor [3] and SCUFL description language, the DESCRIBER system, shown in Figure 1, applies score-based structure learning algorithms, including Bayesian model selection and greedy search (cf. the K2 algorithm) adapted to relational graphical models. Figure 2 illustrates how the decision support front-end of DESCRIBER interacts with modules that learn and reason using probabilistic relational models,. The purpose is to discover interrelationships among, and thereby recommend, components used in workflows developed by other Grid users.",
"corpus_id": 11098618,
"score": 1
},
{
"doc_id": "2749970",
"title": "Layered Learning in Genetic Programming for a Cooperative Robot Soccer Problem",
"abstract": "We present an alternative to standard genetic programming (GP) that applies layered learning techniques to decompose a problem. GP is applied to subproblems sequentially, where the population in the last generation of a subproblem is used as the initial population of the next subproblem. This method is applied to evolve agents to play keep-away soccer, a subproblem of robotic soccer that requires cooperation among multiple agents in a dynnamic environment. The layered learning paradigm allows GP to evolve better solutions faster than standard GP. Results show that the layered learning GP outperforms standard GP by evolving a lower fitness faster and an overall better fitness. Results indicate a wide area of future research with layered learning in GP.",
"corpus_id": 2749970,
"score": 1
},
{
"doc_id": "28678366",
"title": "Genetic Wrappers for Constructive Induction in High-Performance Data Mining",
"abstract": "We present an application of genetic algorithm-based design to configuration of high-level optimization systems, or wrappers, for relevance determination and constructive induction. Our system combines genetic wrappers with elicited knowledge on attribute relevance and synthesis. We discuss decision support issues in a large-scale commercial data mining project (cost prediction for multiple automobile insurance markets), and report experiments using D2K, a Java-based visual programming system for data mining and information visualization, and several commercial and research tools. Our GA system, Jenesis [HWRC00], is deployed on several network-of-workstation systems (Beowulf clusters). It achieves a linear speedup, due to a high degree of task parallelism, and improved test set accuracy, compared to decision tree learning with only constructive induction and state-space search-based wrappers [KJ97].",
"corpus_id": 28678366,
"score": 1
},
{
"doc_id": "5371240",
"title": "A machine learning approach to algorithm selection for \n$\\mathcal{NP}$\n-hard optimization problems: a case study on the MPE problem",
"abstract": "Abstract\nGiven one instance of an \n$\\mathcal{NP}$\n-hard optimization problem, can we tell in advance whether it is exactly solvable or not? If it is not, can we predict which approximate algorithm is the best to solve it? Since the behavior of most approximate, randomized, and heuristic search algorithms for \n$\\mathcal{NP}$\n-hard problems is usually very difficult to characterize analytically, researchers have turned to experimental methods in order to answer these questions. In this paper we present a machine learning-based approach to address the above questions. Models induced from algorithmic performance data can represent the knowledge of how algorithmic performance depends on some easy-to-compute problem instance characteristics. Using these models, we can estimate approximately whether an input instance is exactly solvable or not. Furthermore, when it is classified as exactly unsolvable, we can select the best approximate algorithm for it among a list of candidates. In this paper we use the MPE (most probable explanation) problem in probabilistic inference as a case study to validate the proposed methodology. Our experimental results show that the machine learning-based algorithm selection system can integrate both exact and inexact algorithms and provide the best overall performance comparing to any single candidate algorithm.\n",
"corpus_id": 5371240,
"score": 1
},
{
"doc_id": "205451192",
"title": "Genetic wrappers for feature selection in decision tree induction and variable ordering in Bayesian network structure learning",
"abstract": "In this paper, we address the automated tuning of input specification for supervised inductive learning and develop combinatorial optimization solutions for two such tuning problems. First, we present a framework for selection and reordering of input variables to reduce generalization error in classification and probabilistic inference. One purpose of selection is to control overfitting using validation set accuracy as a criterion for relevance. Similarly, some inductive learning algorithms, such as greedy algorithms for learning probabilistic networks, are sensitive to the evaluation order of variables. We design a generic fitness function for validation of input specification, then use it to develop two genetic algorithm wrappers: one for the variable selection problem for decision tree inducers and one for the variable ordering problem for Bayesian network structure learning. We evaluate the wrappers, using real-world data for the selection wrapper and synthetic data for both, and discuss their limitations and generalizability to other inducers.",
"corpus_id": 205451192,
"score": 1
},
{
"doc_id": "39997128",
"title": "The Clusnet Algorithm And Time Series Prediction",
"abstract": "This paper describes a novel neural network architecture named ClusNet. This network is designed to study the trade-offs between the simplicity of instance-based methods and the accuracy of the more computational intensive learning methods. The features that make this network different from existing learning algorithms are outlined. A simple proof of convergence of the ClusNet algorithm is given. Experimental results showing the convergence of the algorithm on a specific problem is also presented. In this paper, ClusNet is applied to predict the temporal continuation of the Mackey-Glass chaotic time series. A comparison between the results obtained with ClusNet and other neural network algorithms is made. For example, ClusNet requires one-tenth the computing resources of the instance-based local linear method for this application while achieving comparable accuracy in this task. The sensitivity of ClusNet prediction accuracies on specific clustering algorithms is examined for an application. The simplicity and fast convergence of ClusNet makes it ideal as a rapid prototyping tool for applications where on-line learning is required.",
"corpus_id": 39997128,
"score": 0
},
{
"doc_id": "38525575",
"title": "Parameter significance estimation and financial prediction",
"abstract": "This paper deals with the problem of parameter significance estimation, and its application to currency exchange rate prediction. The basic problem is that over the years, practitioners in the field of financial engineering have developed dozens of technical and fundamental indicators on the basis of which they try to predict financial time series. The practitioners are now faced with the problem of finding out which combinations of those indicators are most significant or relevant, and how their significance changes over time. The authors propose a novel neural architecture calledSupNet for estimating the significance of various parameters. The methodology is based on the principle of penalizing those features that are the largest contributors to the error term. Two algorithms based on this principle are proposed. This approach is different from related methodologies, which are based on the principle of removing parameters with the least significance. The proposed methodology is demonstrated on the next day returns of the DM-US currency exchange rate, and promising results are obtained.",
"corpus_id": 38525575,
"score": 0
}
] |
arnetminer | {
"doc_id": "2749970",
"title": "Layered Learning in Genetic Programming for a Cooperative Robot Soccer Problem",
"abstract": "We present an alternative to standard genetic programming (GP) that applies layered learning techniques to decompose a problem. GP is applied to subproblems sequentially, where the population in the last generation of a subproblem is used as the initial population of the next subproblem. This method is applied to evolve agents to play keep-away soccer, a subproblem of robotic soccer that requires cooperation among multiple agents in a dynnamic environment. The layered learning paradigm allows GP to evolve better solutions faster than standard GP. Results show that the layered learning GP outperforms standard GP by evolving a lower fitness faster and an overall better fitness. Results indicate a wide area of future research with layered learning in GP.",
"corpus_id": 2749970
} | [
{
"doc_id": "6398168",
"title": "Probabilistic Learning in Bayesian and Stochastic Neural Networks",
"abstract": "The goal of this research is to integrate aspects of artificial neural networks (ANNs) with symbolic machine learning methods in a probabilistic reasoning framework. Improved understanding of the semantics of neural nets supports principled integration efforts between seminumerical (so-called \"subsymbolic\") and symbolic intelligent systems. My dissertation focuses on learning of spatiotemporal (ST) sequences. In recent work, I have investigated architectures for modeling of ST sequences, and dualities between Bayesian networks and ANNs that expose their probabilistic and information theoretic foundations. In addition, I am developing algorithms for automated construction of Bayesian networks (and hybrid models); metrics for comparison of Bayesian networks across architectures; and a quantitative theory of feature construction (in the spirit of the PAC formalism from computational learning theory) for this learning environment. (Haussler 1988) Such methods for pattern prediction will be useful for building advanced knowledge based systems, with diagnostic applications such as intelligent monitoring tools.",
"corpus_id": 6398168,
"score": 1
},
{
"doc_id": "28678366",
"title": "Genetic Wrappers for Constructive Induction in High-Performance Data Mining",
"abstract": "We present an application of genetic algorithm-based design to configuration of high-level optimization systems, or wrappers, for relevance determination and constructive induction. Our system combines genetic wrappers with elicited knowledge on attribute relevance and synthesis. We discuss decision support issues in a large-scale commercial data mining project (cost prediction for multiple automobile insurance markets), and report experiments using D2K, a Java-based visual programming system for data mining and information visualization, and several commercial and research tools. Our GA system, Jenesis [HWRC00], is deployed on several network-of-workstation systems (Beowulf clusters). It achieves a linear speedup, due to a high degree of task parallelism, and improved test set accuracy, compared to decision tree learning with only constructive induction and state-space search-based wrappers [KJ97].",
"corpus_id": 28678366,
"score": 1
},
{
"doc_id": "18622837",
"title": "Graph Drawing Heuristics for Path Finding in Large Dimensionless Graphs",
"abstract": "This paper presents a heuristic for guiding A*search for approximating the shortest path between two vertices in arbitrarily-sized dimensionless graphs. First we discuss methods by which these dimensionless graphs are laid out into Euclidean drawings. Next, two heuristics are computed based on drawings of the graphs. We compare the performance of an A*-search using these heuristics with breadth-first search on graphs with various topological properties. The results show a large savings in the number of vertices expanded for large graphs.",
"corpus_id": 18622837,
"score": 1
},
{
"doc_id": "5371240",
"title": "A machine learning approach to algorithm selection for \n$\\mathcal{NP}$\n-hard optimization problems: a case study on the MPE problem",
"abstract": "Abstract\nGiven one instance of an \n$\\mathcal{NP}$\n-hard optimization problem, can we tell in advance whether it is exactly solvable or not? If it is not, can we predict which approximate algorithm is the best to solve it? Since the behavior of most approximate, randomized, and heuristic search algorithms for \n$\\mathcal{NP}$\n-hard problems is usually very difficult to characterize analytically, researchers have turned to experimental methods in order to answer these questions. In this paper we present a machine learning-based approach to address the above questions. Models induced from algorithmic performance data can represent the knowledge of how algorithmic performance depends on some easy-to-compute problem instance characteristics. Using these models, we can estimate approximately whether an input instance is exactly solvable or not. Furthermore, when it is classified as exactly unsolvable, we can select the best approximate algorithm for it among a list of candidates. In this paper we use the MPE (most probable explanation) problem in probabilistic inference as a case study to validate the proposed methodology. Our experimental results show that the machine learning-based algorithm selection system can integrate both exact and inexact algorithms and provide the best overall performance comparing to any single candidate algorithm.\n",
"corpus_id": 5371240,
"score": 1
},
{
"doc_id": "11098618",
"title": "Relational Graphical Models of Computational Workflows for Data Mining",
"abstract": "Collaborative recommendation is the problem of analyzing the content of an information retrieval system and actions of its users, to predict additional topics or products a new user may find useful. Developing this capability poses several challenges to machine learning and reasoning under uncertainty. Recent systems such as CiteSeer [1] have succeeded in providing some specialized but comprehensive indices of full documents, but the kind of probabilistic models used in such indexing do not extend easily to information Grid databases and computational Grid workflows. The collection of user data from Grid portals [4] provides a test bed for the underlying IR technology, including learning and inference systems. To model workflows created using the TAVERNA editor [3] and SCUFL description language, the DESCRIBER system, shown in Figure 1, applies score-based structure learning algorithms, including Bayesian model selection and greedy search (cf. the K2 algorithm) adapted to relational graphical models. Figure 2 illustrates how the decision support front-end of DESCRIBER interacts with modules that learn and reason using probabilistic relational models,. The purpose is to discover interrelationships among, and thereby recommend, components used in workflows developed by other Grid users.",
"corpus_id": 11098618,
"score": 1
},
{
"doc_id": "39997128",
"title": "The Clusnet Algorithm And Time Series Prediction",
"abstract": "This paper describes a novel neural network architecture named ClusNet. This network is designed to study the trade-offs between the simplicity of instance-based methods and the accuracy of the more computational intensive learning methods. The features that make this network different from existing learning algorithms are outlined. A simple proof of convergence of the ClusNet algorithm is given. Experimental results showing the convergence of the algorithm on a specific problem is also presented. In this paper, ClusNet is applied to predict the temporal continuation of the Mackey-Glass chaotic time series. A comparison between the results obtained with ClusNet and other neural network algorithms is made. For example, ClusNet requires one-tenth the computing resources of the instance-based local linear method for this application while achieving comparable accuracy in this task. The sensitivity of ClusNet prediction accuracies on specific clustering algorithms is examined for an application. The simplicity and fast convergence of ClusNet makes it ideal as a rapid prototyping tool for applications where on-line learning is required.",
"corpus_id": 39997128,
"score": 0
}
] |
arnetminer | {
"doc_id": "28678366",
"title": "Genetic Wrappers for Constructive Induction in High-Performance Data Mining",
"abstract": "We present an application of genetic algorithm-based design to configuration of high-level optimization systems, or wrappers, for relevance determination and constructive induction. Our system combines genetic wrappers with elicited knowledge on attribute relevance and synthesis. We discuss decision support issues in a large-scale commercial data mining project (cost prediction for multiple automobile insurance markets), and report experiments using D2K, a Java-based visual programming system for data mining and information visualization, and several commercial and research tools. Our GA system, Jenesis [HWRC00], is deployed on several network-of-workstation systems (Beowulf clusters). It achieves a linear speedup, due to a high degree of task parallelism, and improved test set accuracy, compared to decision tree learning with only constructive induction and state-space search-based wrappers [KJ97].",
"corpus_id": 28678366
} | [
{
"doc_id": "7095064",
"title": "An evolutionary approach to constructive induction for link discovery",
"abstract": "This paper presents a genetic programming-based symbolic regression approach to the construction of relational features in link analysis applications. Specifically, we consider the problems of predicting, classifying and annotating friends relations in friends networks, based upon features constructed from network structure and user profile data. We explain how the problem of classifying a user pair in a social network, as directly connected or not, poses the problem of selecting and constructing relevant features. We use genetic programming to construct features, represented by multiple symbol trees with base features as their leaves. In this manner, the genetic program selects and constructs features that may not have been originally considered, but possess better predictive properties than the base features. Finally, we present classification results and compare these results with those of the control and similar approaches.",
"corpus_id": 7095064,
"score": 1
},
{
"doc_id": "13504862",
"title": "Control of inductive bias in supervised learning using evolutionary computation: a wrapper-based approach",
"abstract": "In this chapter, I discuss the problem of feature subset selection for supervised inductive learning approaches to knowledge discovery in databases (KDD), and examine this and related problems in the context of controlling inductive bias. I survey several combinatorial search and optimization approaches to this problem, focusing on data-driven, validation-based techniques. In particular, I present a wrapper approach that uses genetic algorithms for the search component, using a validation criterion based upon model accuracy and problem complexity, as the fitness measure. Next, I focus on design and configuration of high-level optimization systems (wrappers) for relevance determination and constructive induction, and on integrating these wrappers with elicited knowledge on attribute relevance and synthesis. I then discuss the relationship between this model selection criterion and those from the minimum description length (MDL) family of learning criteria. I then present results on several synthetic problems on task-decomposable machine learning and on two large-scale commercial data-mining and decision-support projects: crop condition monitoring, and loss prediction for insurance pricing. Finally, I report experiments using the Machine Learning in Java (MLJ) and Data to Knowledge (D2K) Java-based visual programming systems for data mining and information visualization, and several commercial and research tools. Test set accuracy using a genetic wrapper is significantly higher than that of decision tree inducers alone and is comparable to that of the best extant search-space based wrappers.",
"corpus_id": 13504862,
"score": 1
},
{
"doc_id": "1797464",
"title": "Genetic Programming And Multi-agent Layered Learning By Reinforcements",
"abstract": "We present an adaptation of the standard genetic program (GP) to hierarchically decomposable, multi-agent learning problems. To break down a problem that requires cooperation of multiple agents, we use the team objective function to derive a simpler, intermediate objective function for pairs of cooperating agents. We apply GP to optimize first for the intermediate, then for the team objective function, using the final population from the earlier GP as the initial seed population for the next. This layered learning approach facilitates the discovery of primitive behaviors that can be reused and adapted towards complex objectives based on a shared team goal. We use this method to evolve agents to play a subproblem of robotic soccer (keep-away soccer). Finally, we show how layered learning GP evolves better agents than standard GP, including GP with automatically defined functions, and how the problem decomposition results in a significant learning-speed increase.",
"corpus_id": 1797464,
"score": 1
},
{
"doc_id": "1482302",
"title": "GA-Hardness Revisited",
"abstract": "Ever since the invention of Genetic Algorithms (GAs), researchers have put a lot of efforts into understanding what makes a function or problem instance hard for GAs to optimize. Many measures have been proposed to distinguish so- called GA-hard from GA-easy problems. None of these, however, has yet achieved the goal of being a reliable predictive GA-hardness measure. In this paper, we first present a general, abstract theoretical framework of instance hardness and algorithm performance based on Kolmogorov complexity. We then list several major misconceptions of GA-hardness research in the context of this theory. Finally, we propose some future directions.",
"corpus_id": 1482302,
"score": 1
},
{
"doc_id": "11049498",
"title": "An Ant Colony Approach For The Steiner Tree Problem",
"abstract": "One ant is placed initially at each of the given terminal vertices that are to be connected. In each iteration, an ant is moved to a new location via an edge, determined stochastically, but biased in such a manner that the ants get drawn to the paths traced out by one another. Each ant maintains its own separate list of vertices already visited to avoid revisiting it. When any ant collides with another ant, or even with the path of another, it merges into the latter. An antm , currently at a vertexi , selects a vertex j not in its tabu list ) (m T , to move to, only if E j i ∈ ) , ( . In order to ensure that the ants merge with one another as quickly as possible, we define a potential for each vertex j in V , with respect to an ant m as follows,",
"corpus_id": 11049498,
"score": 1
},
{
"doc_id": "38525575",
"title": "Parameter significance estimation and financial prediction",
"abstract": "This paper deals with the problem of parameter significance estimation, and its application to currency exchange rate prediction. The basic problem is that over the years, practitioners in the field of financial engineering have developed dozens of technical and fundamental indicators on the basis of which they try to predict financial time series. The practitioners are now faced with the problem of finding out which combinations of those indicators are most significant or relevant, and how their significance changes over time. The authors propose a novel neural architecture calledSupNet for estimating the significance of various parameters. The methodology is based on the principle of penalizing those features that are the largest contributors to the error term. Two algorithms based on this principle are proposed. This approach is different from related methodologies, which are based on the principle of removing parameters with the least significance. The proposed methodology is demonstrated on the next day returns of the DM-US currency exchange rate, and promising results are obtained.",
"corpus_id": 38525575,
"score": 0
}
] |
arnetminer | {
"doc_id": "13504862",
"title": "Control of inductive bias in supervised learning using evolutionary computation: a wrapper-based approach",
"abstract": "In this chapter, I discuss the problem of feature subset selection for supervised inductive learning approaches to knowledge discovery in databases (KDD), and examine this and related problems in the context of controlling inductive bias. I survey several combinatorial search and optimization approaches to this problem, focusing on data-driven, validation-based techniques. In particular, I present a wrapper approach that uses genetic algorithms for the search component, using a validation criterion based upon model accuracy and problem complexity, as the fitness measure. Next, I focus on design and configuration of high-level optimization systems (wrappers) for relevance determination and constructive induction, and on integrating these wrappers with elicited knowledge on attribute relevance and synthesis. I then discuss the relationship between this model selection criterion and those from the minimum description length (MDL) family of learning criteria. I then present results on several synthetic problems on task-decomposable machine learning and on two large-scale commercial data-mining and decision-support projects: crop condition monitoring, and loss prediction for insurance pricing. Finally, I report experiments using the Machine Learning in Java (MLJ) and Data to Knowledge (D2K) Java-based visual programming systems for data mining and information visualization, and several commercial and research tools. Test set accuracy using a genetic wrapper is significantly higher than that of decision tree inducers alone and is comparable to that of the best extant search-space based wrappers.",
"corpus_id": 13504862
} | [
{
"doc_id": "8590688",
"title": "Evolutionary tree genetic programming",
"abstract": "We introduce a clustering-based method of subpopulation management in genetic programming (GP) called Evolutionary Tree Genetic Programming (ETGP). The biological motivation behind this work is the observation that the natural evolution follows a tree-like phylogenetic pattern. Our goal is to simulate similar behavior in artificial evolutionary systems such as GP. To test our model we use three common GP benchmarks: the Ant Algorithm, 11-Multiplexer, and Parity problems.The performance of the ETGP system is empirically compared to those of the GP system. Code size and variance are consistently reduced by a small but statistically significant percentage, resulting in a slight speedup in the Ant and 11-Multiplexer problems, while the same comparisons on the Parity problem are inconclusive.",
"corpus_id": 8590688,
"score": 1
},
{
"doc_id": "8303407",
"title": "Probabilistic Prediction of Protein Secondary Structure Using Causal Networks (Extended Abstract)",
"abstract": "In this paper we present a probabilistic approach to analysis and prediction of protein structure. We argue that this approach provides a flexible and convenient mechanism to perform general scientific data analysis in molecular biology. We apply our approach to an important problem in molecular biology--predicting the secondary structure of proteins--and obtain experimental results comparable to several other methods. The causal networks that we use provide a very convenient medium for the scientist to experiment with different empirical models and obtain possibly important insights about the problem being studied.",
"corpus_id": 8303407,
"score": 1
},
{
"doc_id": "11098618",
"title": "Relational Graphical Models of Computational Workflows for Data Mining",
"abstract": "Collaborative recommendation is the problem of analyzing the content of an information retrieval system and actions of its users, to predict additional topics or products a new user may find useful. Developing this capability poses several challenges to machine learning and reasoning under uncertainty. Recent systems such as CiteSeer [1] have succeeded in providing some specialized but comprehensive indices of full documents, but the kind of probabilistic models used in such indexing do not extend easily to information Grid databases and computational Grid workflows. The collection of user data from Grid portals [4] provides a test bed for the underlying IR technology, including learning and inference systems. To model workflows created using the TAVERNA editor [3] and SCUFL description language, the DESCRIBER system, shown in Figure 1, applies score-based structure learning algorithms, including Bayesian model selection and greedy search (cf. the K2 algorithm) adapted to relational graphical models. Figure 2 illustrates how the decision support front-end of DESCRIBER interacts with modules that learn and reason using probabilistic relational models,. The purpose is to discover interrelationships among, and thereby recommend, components used in workflows developed by other Grid users.",
"corpus_id": 11098618,
"score": 1
},
{
"doc_id": "205451192",
"title": "Genetic wrappers for feature selection in decision tree induction and variable ordering in Bayesian network structure learning",
"abstract": "In this paper, we address the automated tuning of input specification for supervised inductive learning and develop combinatorial optimization solutions for two such tuning problems. First, we present a framework for selection and reordering of input variables to reduce generalization error in classification and probabilistic inference. One purpose of selection is to control overfitting using validation set accuracy as a criterion for relevance. Similarly, some inductive learning algorithms, such as greedy algorithms for learning probabilistic networks, are sensitive to the evaluation order of variables. We design a generic fitness function for validation of input specification, then use it to develop two genetic algorithm wrappers: one for the variable selection problem for decision tree inducers and one for the variable ordering problem for Bayesian network structure learning. We evaluate the wrappers, using real-world data for the selection wrapper and synthetic data for both, and discuss their limitations and generalizability to other inducers.",
"corpus_id": 205451192,
"score": 1
},
{
"doc_id": "5663752",
"title": "Text Extraction from the Web via Text-to-Tag Ratio",
"abstract": "We describe a method to extract content text from diverse Web pages by using the HTML document's text-to-tag ratio rather than specific HTML cues that may not be constant across various Web pages. We describe how to compute the text-to-tag ratio on a line-by-line basis and then cluster the results into content and non-content areas. With this approach we then show surprisingly high levels of recall for all levels of precision, and a large space savings.",
"corpus_id": 5663752,
"score": 1
},
{
"doc_id": "38525575",
"title": "Parameter significance estimation and financial prediction",
"abstract": "This paper deals with the problem of parameter significance estimation, and its application to currency exchange rate prediction. The basic problem is that over the years, practitioners in the field of financial engineering have developed dozens of technical and fundamental indicators on the basis of which they try to predict financial time series. The practitioners are now faced with the problem of finding out which combinations of those indicators are most significant or relevant, and how their significance changes over time. The authors propose a novel neural architecture calledSupNet for estimating the significance of various parameters. The methodology is based on the principle of penalizing those features that are the largest contributors to the error term. Two algorithms based on this principle are proposed. This approach is different from related methodologies, which are based on the principle of removing parameters with the least significance. The proposed methodology is demonstrated on the next day returns of the DM-US currency exchange rate, and promising results are obtained.",
"corpus_id": 38525575,
"score": 0
}
] |
arnetminer | {
"doc_id": "7095064",
"title": "An evolutionary approach to constructive induction for link discovery",
"abstract": "This paper presents a genetic programming-based symbolic regression approach to the construction of relational features in link analysis applications. Specifically, we consider the problems of predicting, classifying and annotating friends relations in friends networks, based upon features constructed from network structure and user profile data. We explain how the problem of classifying a user pair in a social network, as directly connected or not, poses the problem of selecting and constructing relevant features. We use genetic programming to construct features, represented by multiple symbol trees with base features as their leaves. In this manner, the genetic program selects and constructs features that may not have been originally considered, but possess better predictive properties than the base features. Finally, we present classification results and compare these results with those of the control and similar approaches.",
"corpus_id": 7095064
} | [
{
"doc_id": "5663752",
"title": "Text Extraction from the Web via Text-to-Tag Ratio",
"abstract": "We describe a method to extract content text from diverse Web pages by using the HTML document's text-to-tag ratio rather than specific HTML cues that may not be constant across various Web pages. We describe how to compute the text-to-tag ratio on a line-by-line basis and then cluster the results into content and non-content areas. With this approach we then show surprisingly high levels of recall for all levels of precision, and a large space savings.",
"corpus_id": 5663752,
"score": 1
},
{
"doc_id": "8590688",
"title": "Evolutionary tree genetic programming",
"abstract": "We introduce a clustering-based method of subpopulation management in genetic programming (GP) called Evolutionary Tree Genetic Programming (ETGP). The biological motivation behind this work is the observation that the natural evolution follows a tree-like phylogenetic pattern. Our goal is to simulate similar behavior in artificial evolutionary systems such as GP. To test our model we use three common GP benchmarks: the Ant Algorithm, 11-Multiplexer, and Parity problems.The performance of the ETGP system is empirically compared to those of the GP system. Code size and variance are consistently reduced by a small but statistically significant percentage, resulting in a slight speedup in the Ant and 11-Multiplexer problems, while the same comparisons on the Parity problem are inconclusive.",
"corpus_id": 8590688,
"score": 1
},
{
"doc_id": "8303407",
"title": "Probabilistic Prediction of Protein Secondary Structure Using Causal Networks (Extended Abstract)",
"abstract": "In this paper we present a probabilistic approach to analysis and prediction of protein structure. We argue that this approach provides a flexible and convenient mechanism to perform general scientific data analysis in molecular biology. We apply our approach to an important problem in molecular biology--predicting the secondary structure of proteins--and obtain experimental results comparable to several other methods. The causal networks that we use provide a very convenient medium for the scientist to experiment with different empirical models and obtain possibly important insights about the problem being studied.",
"corpus_id": 8303407,
"score": 1
},
{
"doc_id": "205451192",
"title": "Genetic wrappers for feature selection in decision tree induction and variable ordering in Bayesian network structure learning",
"abstract": "In this paper, we address the automated tuning of input specification for supervised inductive learning and develop combinatorial optimization solutions for two such tuning problems. First, we present a framework for selection and reordering of input variables to reduce generalization error in classification and probabilistic inference. One purpose of selection is to control overfitting using validation set accuracy as a criterion for relevance. Similarly, some inductive learning algorithms, such as greedy algorithms for learning probabilistic networks, are sensitive to the evaluation order of variables. We design a generic fitness function for validation of input specification, then use it to develop two genetic algorithm wrappers: one for the variable selection problem for decision tree inducers and one for the variable ordering problem for Bayesian network structure learning. We evaluate the wrappers, using real-world data for the selection wrapper and synthetic data for both, and discuss their limitations and generalizability to other inducers.",
"corpus_id": 205451192,
"score": 1
},
{
"doc_id": "11098618",
"title": "Relational Graphical Models of Computational Workflows for Data Mining",
"abstract": "Collaborative recommendation is the problem of analyzing the content of an information retrieval system and actions of its users, to predict additional topics or products a new user may find useful. Developing this capability poses several challenges to machine learning and reasoning under uncertainty. Recent systems such as CiteSeer [1] have succeeded in providing some specialized but comprehensive indices of full documents, but the kind of probabilistic models used in such indexing do not extend easily to information Grid databases and computational Grid workflows. The collection of user data from Grid portals [4] provides a test bed for the underlying IR technology, including learning and inference systems. To model workflows created using the TAVERNA editor [3] and SCUFL description language, the DESCRIBER system, shown in Figure 1, applies score-based structure learning algorithms, including Bayesian model selection and greedy search (cf. the K2 algorithm) adapted to relational graphical models. Figure 2 illustrates how the decision support front-end of DESCRIBER interacts with modules that learn and reason using probabilistic relational models,. The purpose is to discover interrelationships among, and thereby recommend, components used in workflows developed by other Grid users.",
"corpus_id": 11098618,
"score": 1
},
{
"doc_id": "39997128",
"title": "The Clusnet Algorithm And Time Series Prediction",
"abstract": "This paper describes a novel neural network architecture named ClusNet. This network is designed to study the trade-offs between the simplicity of instance-based methods and the accuracy of the more computational intensive learning methods. The features that make this network different from existing learning algorithms are outlined. A simple proof of convergence of the ClusNet algorithm is given. Experimental results showing the convergence of the algorithm on a specific problem is also presented. In this paper, ClusNet is applied to predict the temporal continuation of the Mackey-Glass chaotic time series. A comparison between the results obtained with ClusNet and other neural network algorithms is made. For example, ClusNet requires one-tenth the computing resources of the instance-based local linear method for this application while achieving comparable accuracy in this task. The sensitivity of ClusNet prediction accuracies on specific clustering algorithms is examined for an application. The simplicity and fast convergence of ClusNet makes it ideal as a rapid prototyping tool for applications where on-line learning is required.",
"corpus_id": 39997128,
"score": 0
}
] |
arnetminer | {
"doc_id": "1797464",
"title": "Genetic Programming And Multi-agent Layered Learning By Reinforcements",
"abstract": "We present an adaptation of the standard genetic program (GP) to hierarchically decomposable, multi-agent learning problems. To break down a problem that requires cooperation of multiple agents, we use the team objective function to derive a simpler, intermediate objective function for pairs of cooperating agents. We apply GP to optimize first for the intermediate, then for the team objective function, using the final population from the earlier GP as the initial seed population for the next. This layered learning approach facilitates the discovery of primitive behaviors that can be reused and adapted towards complex objectives based on a shared team goal. We use this method to evolve agents to play a subproblem of robotic soccer (keep-away soccer). Finally, we show how layered learning GP evolves better agents than standard GP, including GP with automatically defined functions, and how the problem decomposition results in a significant learning-speed increase.",
"corpus_id": 1797464
} | [
{
"doc_id": "11098618",
"title": "Relational Graphical Models of Computational Workflows for Data Mining",
"abstract": "Collaborative recommendation is the problem of analyzing the content of an information retrieval system and actions of its users, to predict additional topics or products a new user may find useful. Developing this capability poses several challenges to machine learning and reasoning under uncertainty. Recent systems such as CiteSeer [1] have succeeded in providing some specialized but comprehensive indices of full documents, but the kind of probabilistic models used in such indexing do not extend easily to information Grid databases and computational Grid workflows. The collection of user data from Grid portals [4] provides a test bed for the underlying IR technology, including learning and inference systems. To model workflows created using the TAVERNA editor [3] and SCUFL description language, the DESCRIBER system, shown in Figure 1, applies score-based structure learning algorithms, including Bayesian model selection and greedy search (cf. the K2 algorithm) adapted to relational graphical models. Figure 2 illustrates how the decision support front-end of DESCRIBER interacts with modules that learn and reason using probabilistic relational models,. The purpose is to discover interrelationships among, and thereby recommend, components used in workflows developed by other Grid users.",
"corpus_id": 11098618,
"score": 1
},
{
"doc_id": "8303407",
"title": "Probabilistic Prediction of Protein Secondary Structure Using Causal Networks (Extended Abstract)",
"abstract": "In this paper we present a probabilistic approach to analysis and prediction of protein structure. We argue that this approach provides a flexible and convenient mechanism to perform general scientific data analysis in molecular biology. We apply our approach to an important problem in molecular biology--predicting the secondary structure of proteins--and obtain experimental results comparable to several other methods. The causal networks that we use provide a very convenient medium for the scientist to experiment with different empirical models and obtain possibly important insights about the problem being studied.",
"corpus_id": 8303407,
"score": 1
},
{
"doc_id": "205451192",
"title": "Genetic wrappers for feature selection in decision tree induction and variable ordering in Bayesian network structure learning",
"abstract": "In this paper, we address the automated tuning of input specification for supervised inductive learning and develop combinatorial optimization solutions for two such tuning problems. First, we present a framework for selection and reordering of input variables to reduce generalization error in classification and probabilistic inference. One purpose of selection is to control overfitting using validation set accuracy as a criterion for relevance. Similarly, some inductive learning algorithms, such as greedy algorithms for learning probabilistic networks, are sensitive to the evaluation order of variables. We design a generic fitness function for validation of input specification, then use it to develop two genetic algorithm wrappers: one for the variable selection problem for decision tree inducers and one for the variable ordering problem for Bayesian network structure learning. We evaluate the wrappers, using real-world data for the selection wrapper and synthetic data for both, and discuss their limitations and generalizability to other inducers.",
"corpus_id": 205451192,
"score": 1
},
{
"doc_id": "5663752",
"title": "Text Extraction from the Web via Text-to-Tag Ratio",
"abstract": "We describe a method to extract content text from diverse Web pages by using the HTML document's text-to-tag ratio rather than specific HTML cues that may not be constant across various Web pages. We describe how to compute the text-to-tag ratio on a line-by-line basis and then cluster the results into content and non-content areas. With this approach we then show surprisingly high levels of recall for all levels of precision, and a large space savings.",
"corpus_id": 5663752,
"score": 1
},
{
"doc_id": "8590688",
"title": "Evolutionary tree genetic programming",
"abstract": "We introduce a clustering-based method of subpopulation management in genetic programming (GP) called Evolutionary Tree Genetic Programming (ETGP). The biological motivation behind this work is the observation that the natural evolution follows a tree-like phylogenetic pattern. Our goal is to simulate similar behavior in artificial evolutionary systems such as GP. To test our model we use three common GP benchmarks: the Ant Algorithm, 11-Multiplexer, and Parity problems.The performance of the ETGP system is empirically compared to those of the GP system. Code size and variance are consistently reduced by a small but statistically significant percentage, resulting in a slight speedup in the Ant and 11-Multiplexer problems, while the same comparisons on the Parity problem are inconclusive.",
"corpus_id": 8590688,
"score": 1
},
{
"doc_id": "38525575",
"title": "Parameter significance estimation and financial prediction",
"abstract": "This paper deals with the problem of parameter significance estimation, and its application to currency exchange rate prediction. The basic problem is that over the years, practitioners in the field of financial engineering have developed dozens of technical and fundamental indicators on the basis of which they try to predict financial time series. The practitioners are now faced with the problem of finding out which combinations of those indicators are most significant or relevant, and how their significance changes over time. The authors propose a novel neural architecture calledSupNet for estimating the significance of various parameters. The methodology is based on the principle of penalizing those features that are the largest contributors to the error term. Two algorithms based on this principle are proposed. This approach is different from related methodologies, which are based on the principle of removing parameters with the least significance. The proposed methodology is demonstrated on the next day returns of the DM-US currency exchange rate, and promising results are obtained.",
"corpus_id": 38525575,
"score": 0
}
] |
arnetminer | {
"doc_id": "1482302",
"title": "GA-Hardness Revisited",
"abstract": "Ever since the invention of Genetic Algorithms (GAs), researchers have put a lot of efforts into understanding what makes a function or problem instance hard for GAs to optimize. Many measures have been proposed to distinguish so- called GA-hard from GA-easy problems. None of these, however, has yet achieved the goal of being a reliable predictive GA-hardness measure. In this paper, we first present a general, abstract theoretical framework of instance hardness and algorithm performance based on Kolmogorov complexity. We then list several major misconceptions of GA-hardness research in the context of this theory. Finally, we propose some future directions.",
"corpus_id": 1482302
} | [
{
"doc_id": "8303407",
"title": "Probabilistic Prediction of Protein Secondary Structure Using Causal Networks (Extended Abstract)",
"abstract": "In this paper we present a probabilistic approach to analysis and prediction of protein structure. We argue that this approach provides a flexible and convenient mechanism to perform general scientific data analysis in molecular biology. We apply our approach to an important problem in molecular biology--predicting the secondary structure of proteins--and obtain experimental results comparable to several other methods. The causal networks that we use provide a very convenient medium for the scientist to experiment with different empirical models and obtain possibly important insights about the problem being studied.",
"corpus_id": 8303407,
"score": 1
},
{
"doc_id": "8590688",
"title": "Evolutionary tree genetic programming",
"abstract": "We introduce a clustering-based method of subpopulation management in genetic programming (GP) called Evolutionary Tree Genetic Programming (ETGP). The biological motivation behind this work is the observation that the natural evolution follows a tree-like phylogenetic pattern. Our goal is to simulate similar behavior in artificial evolutionary systems such as GP. To test our model we use three common GP benchmarks: the Ant Algorithm, 11-Multiplexer, and Parity problems.The performance of the ETGP system is empirically compared to those of the GP system. Code size and variance are consistently reduced by a small but statistically significant percentage, resulting in a slight speedup in the Ant and 11-Multiplexer problems, while the same comparisons on the Parity problem are inconclusive.",
"corpus_id": 8590688,
"score": 1
},
{
"doc_id": "205451192",
"title": "Genetic wrappers for feature selection in decision tree induction and variable ordering in Bayesian network structure learning",
"abstract": "In this paper, we address the automated tuning of input specification for supervised inductive learning and develop combinatorial optimization solutions for two such tuning problems. First, we present a framework for selection and reordering of input variables to reduce generalization error in classification and probabilistic inference. One purpose of selection is to control overfitting using validation set accuracy as a criterion for relevance. Similarly, some inductive learning algorithms, such as greedy algorithms for learning probabilistic networks, are sensitive to the evaluation order of variables. We design a generic fitness function for validation of input specification, then use it to develop two genetic algorithm wrappers: one for the variable selection problem for decision tree inducers and one for the variable ordering problem for Bayesian network structure learning. We evaluate the wrappers, using real-world data for the selection wrapper and synthetic data for both, and discuss their limitations and generalizability to other inducers.",
"corpus_id": 205451192,
"score": 1
},
{
"doc_id": "11098618",
"title": "Relational Graphical Models of Computational Workflows for Data Mining",
"abstract": "Collaborative recommendation is the problem of analyzing the content of an information retrieval system and actions of its users, to predict additional topics or products a new user may find useful. Developing this capability poses several challenges to machine learning and reasoning under uncertainty. Recent systems such as CiteSeer [1] have succeeded in providing some specialized but comprehensive indices of full documents, but the kind of probabilistic models used in such indexing do not extend easily to information Grid databases and computational Grid workflows. The collection of user data from Grid portals [4] provides a test bed for the underlying IR technology, including learning and inference systems. To model workflows created using the TAVERNA editor [3] and SCUFL description language, the DESCRIBER system, shown in Figure 1, applies score-based structure learning algorithms, including Bayesian model selection and greedy search (cf. the K2 algorithm) adapted to relational graphical models. Figure 2 illustrates how the decision support front-end of DESCRIBER interacts with modules that learn and reason using probabilistic relational models,. The purpose is to discover interrelationships among, and thereby recommend, components used in workflows developed by other Grid users.",
"corpus_id": 11098618,
"score": 1
},
{
"doc_id": "5663752",
"title": "Text Extraction from the Web via Text-to-Tag Ratio",
"abstract": "We describe a method to extract content text from diverse Web pages by using the HTML document's text-to-tag ratio rather than specific HTML cues that may not be constant across various Web pages. We describe how to compute the text-to-tag ratio on a line-by-line basis and then cluster the results into content and non-content areas. With this approach we then show surprisingly high levels of recall for all levels of precision, and a large space savings.",
"corpus_id": 5663752,
"score": 1
},
{
"doc_id": "39997128",
"title": "The Clusnet Algorithm And Time Series Prediction",
"abstract": "This paper describes a novel neural network architecture named ClusNet. This network is designed to study the trade-offs between the simplicity of instance-based methods and the accuracy of the more computational intensive learning methods. The features that make this network different from existing learning algorithms are outlined. A simple proof of convergence of the ClusNet algorithm is given. Experimental results showing the convergence of the algorithm on a specific problem is also presented. In this paper, ClusNet is applied to predict the temporal continuation of the Mackey-Glass chaotic time series. A comparison between the results obtained with ClusNet and other neural network algorithms is made. For example, ClusNet requires one-tenth the computing resources of the instance-based local linear method for this application while achieving comparable accuracy in this task. The sensitivity of ClusNet prediction accuracies on specific clustering algorithms is examined for an application. The simplicity and fast convergence of ClusNet makes it ideal as a rapid prototyping tool for applications where on-line learning is required.",
"corpus_id": 39997128,
"score": 0
}
] |
arnetminer | {
"doc_id": "11049498",
"title": "An Ant Colony Approach For The Steiner Tree Problem",
"abstract": "One ant is placed initially at each of the given terminal vertices that are to be connected. In each iteration, an ant is moved to a new location via an edge, determined stochastically, but biased in such a manner that the ants get drawn to the paths traced out by one another. Each ant maintains its own separate list of vertices already visited to avoid revisiting it. When any ant collides with another ant, or even with the path of another, it merges into the latter. An antm , currently at a vertexi , selects a vertex j not in its tabu list ) (m T , to move to, only if E j i ∈ ) , ( . In order to ensure that the ants merge with one another as quickly as possible, we define a potential for each vertex j in V , with respect to an ant m as follows,",
"corpus_id": 11049498
} | [
{
"doc_id": "11098618",
"title": "Relational Graphical Models of Computational Workflows for Data Mining",
"abstract": "Collaborative recommendation is the problem of analyzing the content of an information retrieval system and actions of its users, to predict additional topics or products a new user may find useful. Developing this capability poses several challenges to machine learning and reasoning under uncertainty. Recent systems such as CiteSeer [1] have succeeded in providing some specialized but comprehensive indices of full documents, but the kind of probabilistic models used in such indexing do not extend easily to information Grid databases and computational Grid workflows. The collection of user data from Grid portals [4] provides a test bed for the underlying IR technology, including learning and inference systems. To model workflows created using the TAVERNA editor [3] and SCUFL description language, the DESCRIBER system, shown in Figure 1, applies score-based structure learning algorithms, including Bayesian model selection and greedy search (cf. the K2 algorithm) adapted to relational graphical models. Figure 2 illustrates how the decision support front-end of DESCRIBER interacts with modules that learn and reason using probabilistic relational models,. The purpose is to discover interrelationships among, and thereby recommend, components used in workflows developed by other Grid users.",
"corpus_id": 11098618,
"score": 1
},
{
"doc_id": "8303407",
"title": "Probabilistic Prediction of Protein Secondary Structure Using Causal Networks (Extended Abstract)",
"abstract": "In this paper we present a probabilistic approach to analysis and prediction of protein structure. We argue that this approach provides a flexible and convenient mechanism to perform general scientific data analysis in molecular biology. We apply our approach to an important problem in molecular biology--predicting the secondary structure of proteins--and obtain experimental results comparable to several other methods. The causal networks that we use provide a very convenient medium for the scientist to experiment with different empirical models and obtain possibly important insights about the problem being studied.",
"corpus_id": 8303407,
"score": 1
},
{
"doc_id": "5663752",
"title": "Text Extraction from the Web via Text-to-Tag Ratio",
"abstract": "We describe a method to extract content text from diverse Web pages by using the HTML document's text-to-tag ratio rather than specific HTML cues that may not be constant across various Web pages. We describe how to compute the text-to-tag ratio on a line-by-line basis and then cluster the results into content and non-content areas. With this approach we then show surprisingly high levels of recall for all levels of precision, and a large space savings.",
"corpus_id": 5663752,
"score": 1
},
{
"doc_id": "8590688",
"title": "Evolutionary tree genetic programming",
"abstract": "We introduce a clustering-based method of subpopulation management in genetic programming (GP) called Evolutionary Tree Genetic Programming (ETGP). The biological motivation behind this work is the observation that the natural evolution follows a tree-like phylogenetic pattern. Our goal is to simulate similar behavior in artificial evolutionary systems such as GP. To test our model we use three common GP benchmarks: the Ant Algorithm, 11-Multiplexer, and Parity problems.The performance of the ETGP system is empirically compared to those of the GP system. Code size and variance are consistently reduced by a small but statistically significant percentage, resulting in a slight speedup in the Ant and 11-Multiplexer problems, while the same comparisons on the Parity problem are inconclusive.",
"corpus_id": 8590688,
"score": 1
},
{
"doc_id": "205451192",
"title": "Genetic wrappers for feature selection in decision tree induction and variable ordering in Bayesian network structure learning",
"abstract": "In this paper, we address the automated tuning of input specification for supervised inductive learning and develop combinatorial optimization solutions for two such tuning problems. First, we present a framework for selection and reordering of input variables to reduce generalization error in classification and probabilistic inference. One purpose of selection is to control overfitting using validation set accuracy as a criterion for relevance. Similarly, some inductive learning algorithms, such as greedy algorithms for learning probabilistic networks, are sensitive to the evaluation order of variables. We design a generic fitness function for validation of input specification, then use it to develop two genetic algorithm wrappers: one for the variable selection problem for decision tree inducers and one for the variable ordering problem for Bayesian network structure learning. We evaluate the wrappers, using real-world data for the selection wrapper and synthetic data for both, and discuss their limitations and generalizability to other inducers.",
"corpus_id": 205451192,
"score": 1
},
{
"doc_id": "38525575",
"title": "Parameter significance estimation and financial prediction",
"abstract": "This paper deals with the problem of parameter significance estimation, and its application to currency exchange rate prediction. The basic problem is that over the years, practitioners in the field of financial engineering have developed dozens of technical and fundamental indicators on the basis of which they try to predict financial time series. The practitioners are now faced with the problem of finding out which combinations of those indicators are most significant or relevant, and how their significance changes over time. The authors propose a novel neural architecture calledSupNet for estimating the significance of various parameters. The methodology is based on the principle of penalizing those features that are the largest contributors to the error term. Two algorithms based on this principle are proposed. This approach is different from related methodologies, which are based on the principle of removing parameters with the least significance. The proposed methodology is demonstrated on the next day returns of the DM-US currency exchange rate, and promising results are obtained.",
"corpus_id": 38525575,
"score": 0
}
] |
arnetminer | {
"doc_id": "14589917",
"title": "Frequent value compression in packet-based NoC architectures",
"abstract": "The proliferation of Chip Multiprocessors (CMPs) has led to the integration of large on-chip caches. For scalability reasons, a large on-chip cache is often divided into smaller banks that are interconnected through packet-based Network-on-Chip (NoC). With increasing number of cores and cache banks integrated on a single die, the on-chip network introduces significant communication latency and power consumption. In this paper, we propose a novel scheme that exploits Frequent Value compression to optimize the power and performance of NoC. Our experimental results show that the proposed scheme reduces the router power by up to 16.7%, with CPI reduction as much as 23.5% in our setting. Comparing to the recent zero pattern compression scheme, the frequent value scheme saves up to 11.0% more router power and has up to 14.5% more CPI reduction. Hardware design of the FV table and its overhead are also presented.",
"corpus_id": 14589917
} | [
{
"doc_id": "13798358",
"title": "A durable and energy efficient main memory using phase change memory technology",
"abstract": "Using nonvolatile memories in memory hierarchy has been investigated to reduce its energy consumption because nonvolatile memories consume zero leakage power in memory cells. One of the difficulties is, however, that the endurance of most nonvolatile memory technologies is much shorter than the conventional SRAM and DRAM technology. This has limited its usage to only the low levels of a memory hierarchy, e.g., disks, that is far from the CPU.\n In this paper, we study the use of a new type of nonvolatile memories -- the Phase Change Memory (PCM) as the main memory for a 3D stacked chip. The main challenges we face are the limited PCM endurance, longer access latencies, and higher dynamic power compared to the conventional DRAM technology. We propose techniques to extend the endurance of the PCM to an average of 13 (for MLC PCM cell) to 22 (for SLC PCM) years. We also study the design choices of implementing PCM to achieve the best tradeoff between energy and performance. Our design reduced the total energy of an already low-power DRAM main memory of the same capacity by 65%, and energy-delay2 product by 60%. These results indicate that it is feasible to use PCM technology in place of DRAM in the main memory for better energy efficiency.",
"corpus_id": 13798358,
"score": 1
},
{
"doc_id": "11399662",
"title": "Adaptive Buffer Management for Efficient Code Dissemination in Multi-Application Wireless Sensor Networks",
"abstract": "Future wireless sensor networks (WSNs) are projected to run multiple applications in the same network infrastructure. While such multi-application WSNs (MA-WSNs) are economically more efficient and adapt better to the changing environments than traditional single-application WSNs, they usually require frequent code redistribution on wireless sensors, making it critical to design energy efficient post-deployment code dissemination protocols in MA-WSNs. Different applications in MA-WSNs often share some common code segments. Therefore when there is a need to disseminate a new application from the sink node, it is possible to disseminate its shared code segments from peer sensors instead of disseminating everything from the sink node. While dissemination protocols have been proposed to handle code of each single type, it is challenging to achieve energy efficiency when the code contains both types and needs simultaneous dissemination. In this paper we utilize an adaptive buffer management approach to achieve efficient code dissemination in MA-WSNs. Our experimental results show that adaptive buffer management can reduce the completion time and the message overhead up to 10% and 20% respectively.",
"corpus_id": 11399662,
"score": 1
},
{
"doc_id": "2961354",
"title": "On Adding Link Dimensional Dynamism to CSMA/CA Based MAC Protocols",
"abstract": "Though the popular IEEE 802.11 DCF is designed primarily for wireless LAN (WLAN) environments, today it is being widely used for wide area wireless mesh networking. The protocol parameters of IEEE 802.11 such as timeout values, interframe spaces, and slot durations, which are sufficient for a general WLAN environment need to be modified in order to efficiently operate in wide area wireless mesh networks. The current wide area wireless mesh network deployments use manual configuration of these parameters to the upper limit which essentially makes the networks operate at lower system efficiency. In this paper, we propose d802.11 (dynamic 802.11) which dynamically adapts the protocol parameters in order to operate at varying link distances. We present three strategies, (i) multiplicative timer back-off (MTB), (ii) additive timer back-off (ATB), and (iii) link RTT memorization (LRM), to adapt the ACK_TIMEOUT in d802.11 in order to provide better adaptation for varying link dimensions. Through extensive simulation experiments we observed significant performance improvement for the proposed strategies. We also theoretically modeled the maximum throughput as a function of the link dimension for the proposed system. Our results show that the LRM technique provides the best adaptation compared to all other schemes.",
"corpus_id": 2961354,
"score": 0
},
{
"doc_id": "16634499",
"title": "An Approximate Controller for Nonlinear Networked Control Systems with Time Delay",
"abstract": "An approximation controller is developed to obtain an optimal control for nonlinear networked control systems (NCS) with time delay. In the approach only the non-linear compensating term, solution of a sequence of adjoint vector differential equations, is required iteration. By taking the finite iteration of non-linear compensating term of optimal solution sequence, a suboptimal control law for NCS can be obtained.",
"corpus_id": 16634499,
"score": 0
},
{
"doc_id": "25182935",
"title": "An efficient random access scheme for OFDMA systems with implicit message transmission",
"abstract": "Random access channel (RACH) is usually used for initial channel access, bandwidth request, etc. In this paper, we analyze the throughput and access delay performance of the RACH in an orthogonal frequency division multiple access (OFDMA) system. A closed-form expression of the RACH throughput is presented and its mean access delays under both binary exponential and uniform backoff policies have also been derived. Moreover, to further improve the utilization of RACH in an OFDMA system, we propose a novel message transmission scheme in a shared RACH. The message in the proposed scheme can be sent implicitly in the cyclic-shifted preambles which make the transmission much more reliable in a contention based RACH. Meanwhile, performance of the preamble detection has also been improved by the redundant information conveyed in messages. Finally, simulation results will demonstrate performance gains of the proposed scheme.",
"corpus_id": 25182935,
"score": 0
},
{
"doc_id": "16952040",
"title": "A Speaker Verification System Based on EMD",
"abstract": "Most of the speech utterance feature extraction methods are based on the assumptions: utterance signal is short-term stable and independent between each other adjacent frames. This approach ignores the dynamic characteristics of speech signal. For the time-varying characteristics of the speech utterance, we propose a new feature extraction method based on empirical mode decomposition EMD. We can extract the LPCC feature parameters from different stages of IMFs which are decomposed by EMD process. A speaker verification system based on EMD is proposed and it is shown that it is better performance than the one with traditional LPCC features based on short-term process.",
"corpus_id": 16952040,
"score": 0
},
{
"doc_id": "6645019",
"title": "An Energy Reporting Aggregation Method Based on EAFM and DTRM in WSN",
"abstract": "Analyzed the energy consumption disciplinarian of the nodes in WSN, the node’s Energy Attenuation Forecast Model(EAFM) can be established. A Difference-threshold Reporting Mechanism (DTRM) is used to report the residual energy of nodes. The energy collection mechanism based on EAFM and DTRM can reduce energy data reporting times significantly,improve the efficient of energy data collection, save the node's energy at the same time. The experiments in the platform of telosb nodes show that the predicable rate is between 70% and85%, and this method can extend the node’s life by 1% ~ 4.5%.",
"corpus_id": 6645019,
"score": 0
}
] |
arnetminer | {
"doc_id": "9228850",
"title": "Formal specification and compositional verification of an atomic broadcast protocol",
"abstract": "We apply a formal method based on assertions to specify and verify an atomic broadcast protocol. The protocol is implemented by replicating a server process on all processors in a network. We show that the verification of the protocol can be done compositionally by using specifications in which timing is expressed by local clock values. First the requirements of the protocol are formally described. Next the underlying communication mechanism, the assumptions about local clocks, and the failure assumptions are axiomatized. Also the server process is represented by a formal specification. Then we verify that parallel execution of the server processes leads to the desired properties by proving that the conjunction of all server specifications and the axioms about the system implies the requirements of the protocol.",
"corpus_id": 9228850
} | [
{
"doc_id": "2745933",
"title": "Compositional verification of real-time systems with Explicit Clock Temporal Logic",
"abstract": "To specify and verify real-time systems, we consider a real-time version of temporal logic called Explicit Clock Temporal Logic. Timing properties are specified by extending the classical framework of temporal logic with a special variable which explicitly refers to a global notion of time. Programs are written in an Occam-like real-time language with synchronous message passing. To show that a program satisfies a specification, we formulate a proof system which is proved to be sound and relatively complete. The proof system is compositional, which makes it possible to decompose the design of a large system into the design of subsystems. This is shown by the verification of a small part of an avionics system.",
"corpus_id": 2745933,
"score": 1
},
{
"doc_id": "40982114",
"title": "A proof theory for asynchronously communicating real-time systems",
"abstract": "A compositional proof system is presented to axiomatize the real-time behavior of asynchronously communicating processes. Programs are written in a real-time version of CSP where processes asynchronously send and receive messages along channels that are capable of buffering an arbitrary number of messages. Timing properties are expressed in explicitly clock temporal logic, which extends linear temporal logic with a special time variable, referring to a global clock.<<ETX>>",
"corpus_id": 40982114,
"score": 1
},
{
"doc_id": "46289706",
"title": "Horseshoe in the hyperchaotic Mck Circuit",
"abstract": "The well-known Matsumoto–Chua–Kobayashi (MCK) circuit is of significance for studying hyperchaos, since it was the first experimental observation of hyperchaos from a real physical system. In this paper, we discuss the existence of hyperchaos in this circuit by virtue of topological horseshoe theory. The two disjoint compact subsets producing a horseshoe found in a specific 3D cross-section, both expand in two directions under the fourth Poincare return map, this fact means that there exists hyperchaos in the circuit.",
"corpus_id": 46289706,
"score": 0
},
{
"doc_id": "14589917",
"title": "Frequent value compression in packet-based NoC architectures",
"abstract": "The proliferation of Chip Multiprocessors (CMPs) has led to the integration of large on-chip caches. For scalability reasons, a large on-chip cache is often divided into smaller banks that are interconnected through packet-based Network-on-Chip (NoC). With increasing number of cores and cache banks integrated on a single die, the on-chip network introduces significant communication latency and power consumption. In this paper, we propose a novel scheme that exploits Frequent Value compression to optimize the power and performance of NoC. Our experimental results show that the proposed scheme reduces the router power by up to 16.7%, with CPI reduction as much as 23.5% in our setting. Comparing to the recent zero pattern compression scheme, the frequent value scheme saves up to 11.0% more router power and has up to 14.5% more CPI reduction. Hardware design of the FV table and its overhead are also presented.",
"corpus_id": 14589917,
"score": 0
},
{
"doc_id": "62584254",
"title": "A Comparison between the China Scientific and Technical Papers and Citations Database and the Science Citation Index in terms of journal hierarchies and inter-journal citation relations",
"abstract": "A wearable device and method are provided for reporting the time based on a wrist-related trigger. In one implementation, a wearable apparatus for providing time information to a user includes a wearable image sensor configured to capture real-time image data from an environment of a user of the wearable apparatus. The wearable apparatus also includes at least one processing device programmed to identify in the image data a wrist-related trigger associated with the user. The processing device is also programmed to provide an output to the user, the output including the time information, based on at least the identification of the wrist-related trigger.",
"corpus_id": 62584254,
"score": 0
},
{
"doc_id": "15353452",
"title": "Hybrid Intelligent System for Supervisory Control of Mineral Grinding Process",
"abstract": "The particle size is the important technical performance index of the grinding process, which closely related to the overall performance of the mineral processing. In this paper, we mainly concern on the determination of the particle size for the supervisory control of the grinding process by the technical performance index decision system. The overall structure of the system and introduce of every part are given briefly. The experiment results and its compare with the neural network method show its validity and efficiency",
"corpus_id": 15353452,
"score": 0
},
{
"doc_id": "16952040",
"title": "A Speaker Verification System Based on EMD",
"abstract": "Most of the speech utterance feature extraction methods are based on the assumptions: utterance signal is short-term stable and independent between each other adjacent frames. This approach ignores the dynamic characteristics of speech signal. For the time-varying characteristics of the speech utterance, we propose a new feature extraction method based on empirical mode decomposition EMD. We can extract the LPCC feature parameters from different stages of IMFs which are decomposed by EMD process. A speaker verification system based on EMD is proposed and it is shown that it is better performance than the one with traditional LPCC features based on short-term process.",
"corpus_id": 16952040,
"score": 0
}
] |
arnetminer | {
"doc_id": "10867701",
"title": "Asymptotic Capacity of Infrastructure Wireless Mesh Networks",
"abstract": "An infrastructure wireless mesh network (WMN) is a hierarchical network consisting of mesh clients, mesh routers and gateways. Mesh routers constitute a wireless mesh backbone, to which mesh clients are connected as a star topology, and gateways are chosen among mesh routers providing Internet access. In this paper, the throughput capacity of infrastructure WMNs is studied. For such a network with Nc randomly distributed mesh clients, Nr regularly placed mesh routers and Ng gateways, assuming that each mesh router can transmit at W bits/s, the per-client throughput capacity has been derived as a function of Nc , Nr , Ng and W . The result illustrates that, in order to achieve high capacity performance, the number of mesh routers and the number of gateways must be properly chosen. It also reveals that an infrastructure WMN can achieve the same asymptotic throughput capacity as that of a hybrid ad hoc network by choosing only a small number of mesh routers as gateways. This property makes WMNs a very promising solution for future wireless networking.",
"corpus_id": 10867701
} | [
{
"doc_id": "2961354",
"title": "On Adding Link Dimensional Dynamism to CSMA/CA Based MAC Protocols",
"abstract": "Though the popular IEEE 802.11 DCF is designed primarily for wireless LAN (WLAN) environments, today it is being widely used for wide area wireless mesh networking. The protocol parameters of IEEE 802.11 such as timeout values, interframe spaces, and slot durations, which are sufficient for a general WLAN environment need to be modified in order to efficiently operate in wide area wireless mesh networks. The current wide area wireless mesh network deployments use manual configuration of these parameters to the upper limit which essentially makes the networks operate at lower system efficiency. In this paper, we propose d802.11 (dynamic 802.11) which dynamically adapts the protocol parameters in order to operate at varying link distances. We present three strategies, (i) multiplicative timer back-off (MTB), (ii) additive timer back-off (ATB), and (iii) link RTT memorization (LRM), to adapt the ACK_TIMEOUT in d802.11 in order to provide better adaptation for varying link dimensions. Through extensive simulation experiments we observed significant performance improvement for the proposed strategies. We also theoretically modeled the maximum throughput as a function of the link dimension for the proposed system. Our results show that the LRM technique provides the best adaptation compared to all other schemes.",
"corpus_id": 2961354,
"score": 1
},
{
"doc_id": "37285882",
"title": "Dynamic adaptation of CSMA/CA MAC protocol for wide area wireless mesh networks",
"abstract": "Though the popular IEEE 802.11 DCF is designed primarily for Wireless LAN (WLAN) environments, today it is being widely used for wide area wireless mesh networking. The protocol parameters of IEEE 802.11 such as timeout values, interframe spaces, and slot durations, sufficient for a general WLAN environment, need to be modified in order to efficiently operate in wide area wireless mesh networks. The current wide area wireless mesh network deployments use manual configuration of these parameters to the upper limit which essentially makes the networks operate at lower system efficiency. In this paper, we propose d802.11 (dynamic 802.11) which dynamically adapts the protocol parameters in order to operate at varying link distances. In fact, in 802.11, a transmitter can face ACK/CTS timeout even when it started receiving ACK/CTS packet before the timeout value. We present three strategies, (i) multiplicative timer backoff (MTB), (ii) additive timer backoff (ATB), and (iii) link RTT memoization (LRM), to adapt the ACK_TIMEOUT in d802.11 in order to provide better adaptation for varying link dimensions. Through extensive simulation experiments we observed significant performance improvement for the proposed strategies. We also theoretically modeled the maximum link throughput as a function of the link dimension for the proposed system. Our results show that the LRM technique provides the best adaptation compared to all other schemes.",
"corpus_id": 37285882,
"score": 1
},
{
"doc_id": "22092303",
"title": "Nanotechnology as a field of science: Its delineation in terms of journals and patents",
"abstract": "The Journal Citation Reports of the Science Citation Index 2004 were used to delineate a core set of nanotechnology journals and a nanotechnology-relevant set. In comparison with 2003, the core set has grown and the relevant set has decreased. This suggests a higher degree of codification in the field of nanotechnology: the field has become more focused in terms of citation practices. Using the citing patterns among journals at the aggregate level, a core group of ten nanotechnology journals in the vector space can be delineated on the criterion of betweenness centrality. National contributions to this core group of journals are evaluated for the years 2003, 2004, and 2005. Additionally, the specific class of nanotechnology patents in the database of the U. S. Patent and Trade Office (USPTO) is analyzed to determine if non-patent literature references can be used as a source for the delineation of the knowledge base in terms of scientific journals. The references are primarily to general science journals and letters, and therefore not specific enough for the purpose of delineating a journal set.",
"corpus_id": 22092303,
"score": 0
},
{
"doc_id": "206599167",
"title": "A Fuzzy Logic Expert System for Fault Diagnosis and Security Assessment of Power Transformers",
"abstract": null,
"corpus_id": 206599167,
"score": 0
},
{
"doc_id": "11399662",
"title": "Adaptive Buffer Management for Efficient Code Dissemination in Multi-Application Wireless Sensor Networks",
"abstract": "Future wireless sensor networks (WSNs) are projected to run multiple applications in the same network infrastructure. While such multi-application WSNs (MA-WSNs) are economically more efficient and adapt better to the changing environments than traditional single-application WSNs, they usually require frequent code redistribution on wireless sensors, making it critical to design energy efficient post-deployment code dissemination protocols in MA-WSNs. Different applications in MA-WSNs often share some common code segments. Therefore when there is a need to disseminate a new application from the sink node, it is possible to disseminate its shared code segments from peer sensors instead of disseminating everything from the sink node. While dissemination protocols have been proposed to handle code of each single type, it is challenging to achieve energy efficiency when the code contains both types and needs simultaneous dissemination. In this paper we utilize an adaptive buffer management approach to achieve efficient code dissemination in MA-WSNs. Our experimental results show that adaptive buffer management can reduce the completion time and the message overhead up to 10% and 20% respectively.",
"corpus_id": 11399662,
"score": 0
},
{
"doc_id": "14589917",
"title": "Frequent value compression in packet-based NoC architectures",
"abstract": "The proliferation of Chip Multiprocessors (CMPs) has led to the integration of large on-chip caches. For scalability reasons, a large on-chip cache is often divided into smaller banks that are interconnected through packet-based Network-on-Chip (NoC). With increasing number of cores and cache banks integrated on a single die, the on-chip network introduces significant communication latency and power consumption. In this paper, we propose a novel scheme that exploits Frequent Value compression to optimize the power and performance of NoC. Our experimental results show that the proposed scheme reduces the router power by up to 16.7%, with CPI reduction as much as 23.5% in our setting. Comparing to the recent zero pattern compression scheme, the frequent value scheme saves up to 11.0% more router power and has up to 14.5% more CPI reduction. Hardware design of the FV table and its overhead are also presented.",
"corpus_id": 14589917,
"score": 0
},
{
"doc_id": "16952040",
"title": "A Speaker Verification System Based on EMD",
"abstract": "Most of the speech utterance feature extraction methods are based on the assumptions: utterance signal is short-term stable and independent between each other adjacent frames. This approach ignores the dynamic characteristics of speech signal. For the time-varying characteristics of the speech utterance, we propose a new feature extraction method based on empirical mode decomposition EMD. We can extract the LPCC feature parameters from different stages of IMFs which are decomposed by EMD process. A speaker verification system based on EMD is proposed and it is shown that it is better performance than the one with traditional LPCC features based on short-term process.",
"corpus_id": 16952040,
"score": 0
}
] |
arnetminer | {
"doc_id": "7114460",
"title": "A High Performance k-NN Classifier Using a Binary Correlation Matrix Memory",
"abstract": "This paper presents a novel and fast k-NN classifier that is based on a binary CMM (Correlation Matrix Memory) neural network. A robust encoding method is developed to meet CMM input requirements. A hardware implementation of the CMM is described, which gives over 200 times the speed of a current mid-range workstation, and is scaleable to very large problems. When tested on several benchmarks and compared with a simple k-NN method, the CMM classifier gave less than 1% lower accuracy and over 4 and 12 times speed-up in software and hardware respectively.",
"corpus_id": 7114460
} | [
{
"doc_id": "6413061",
"title": "A Binary Correlation Matrix Memory k-NN Classifier with Hardware Implementation",
"abstract": "This paper describes a generic and fast classifier that uses a binary CMM (Correlation Matrix Memory) neural network for storing and matching a large amount of patterns efficiently, and a k-NN rule for classification. To meet CMM input requirements, a robust encoding method is proposed to convert numerical inputs into binary ones with the maximally achievable uniformity. To reduce the execution bottleneck, a hardware implementation of the CMM is described, which shows the network with on-board training and testing operates at over 200 times the speed of a current mid-range workstation, and is scaleable to very large problems. The CMM classifier has been tested on several benchmarks and, comparing with a simple k-NN classifier, it gave less than 1% lower accuracy and over 4 and 12 times speed-ups in software and hardware respectively.",
"corpus_id": 6413061,
"score": 1
},
{
"doc_id": "6088061",
"title": "Analysis of Welding Defects in Spot Welding Process U-I Curves",
"abstract": "High speed collecting and managing system of spot welding signal can achieve the A/D conversion and data collection. The data of welding current, electrode voltage as well as welding cycle has been collected. Subsequently, they were processed with data acquisition and memory module, wavelet filtering of digit signal module, U-I curves and energy analyzing module, respectively. Based on the collections of welding current and electrode voltage, a criterion splash occurrence and the defective of loose weld by U-I curves change and acreage were proposed. The results prove that the criterion can decipher the welding splash and defective of loose weld accurately.",
"corpus_id": 6088061,
"score": 0
},
{
"doc_id": "20984696",
"title": "Available bit rate (ABR) source control and delay estimation",
"abstract": "The problem of regulating the transmission rate of an available bit rate (ABR) traffic source in an ATM network is examined. Of particular interest is linear quadratic (LQ) rate regulation based on estimates of the round-trip propagation delay. The round-trip delay is estimated using a nonlinear least mean square (NLMS) algorithm. Simulation results are used to demonstrate the method.",
"corpus_id": 20984696,
"score": 0
},
{
"doc_id": "46289706",
"title": "Horseshoe in the hyperchaotic Mck Circuit",
"abstract": "The well-known Matsumoto–Chua–Kobayashi (MCK) circuit is of significance for studying hyperchaos, since it was the first experimental observation of hyperchaos from a real physical system. In this paper, we discuss the existence of hyperchaos in this circuit by virtue of topological horseshoe theory. The two disjoint compact subsets producing a horseshoe found in a specific 3D cross-section, both expand in two directions under the fourth Poincare return map, this fact means that there exists hyperchaos in the circuit.",
"corpus_id": 46289706,
"score": 0
},
{
"doc_id": "37285882",
"title": "Dynamic adaptation of CSMA/CA MAC protocol for wide area wireless mesh networks",
"abstract": "Though the popular IEEE 802.11 DCF is designed primarily for Wireless LAN (WLAN) environments, today it is being widely used for wide area wireless mesh networking. The protocol parameters of IEEE 802.11 such as timeout values, interframe spaces, and slot durations, sufficient for a general WLAN environment, need to be modified in order to efficiently operate in wide area wireless mesh networks. The current wide area wireless mesh network deployments use manual configuration of these parameters to the upper limit which essentially makes the networks operate at lower system efficiency. In this paper, we propose d802.11 (dynamic 802.11) which dynamically adapts the protocol parameters in order to operate at varying link distances. In fact, in 802.11, a transmitter can face ACK/CTS timeout even when it started receiving ACK/CTS packet before the timeout value. We present three strategies, (i) multiplicative timer backoff (MTB), (ii) additive timer backoff (ATB), and (iii) link RTT memoization (LRM), to adapt the ACK_TIMEOUT in d802.11 in order to provide better adaptation for varying link dimensions. Through extensive simulation experiments we observed significant performance improvement for the proposed strategies. We also theoretically modeled the maximum link throughput as a function of the link dimension for the proposed system. Our results show that the LRM technique provides the best adaptation compared to all other schemes.",
"corpus_id": 37285882,
"score": 0
},
{
"doc_id": "6645019",
"title": "An Energy Reporting Aggregation Method Based on EAFM and DTRM in WSN",
"abstract": "Analyzed the energy consumption disciplinarian of the nodes in WSN, the node’s Energy Attenuation Forecast Model(EAFM) can be established. A Difference-threshold Reporting Mechanism (DTRM) is used to report the residual energy of nodes. The energy collection mechanism based on EAFM and DTRM can reduce energy data reporting times significantly,improve the efficient of energy data collection, save the node's energy at the same time. The experiments in the platform of telosb nodes show that the predicable rate is between 70% and85%, and this method can extend the node’s life by 1% ~ 4.5%.",
"corpus_id": 6645019,
"score": 0
}
] |
arnetminer | {
"doc_id": "11399662",
"title": "Adaptive Buffer Management for Efficient Code Dissemination in Multi-Application Wireless Sensor Networks",
"abstract": "Future wireless sensor networks (WSNs) are projected to run multiple applications in the same network infrastructure. While such multi-application WSNs (MA-WSNs) are economically more efficient and adapt better to the changing environments than traditional single-application WSNs, they usually require frequent code redistribution on wireless sensors, making it critical to design energy efficient post-deployment code dissemination protocols in MA-WSNs. Different applications in MA-WSNs often share some common code segments. Therefore when there is a need to disseminate a new application from the sink node, it is possible to disseminate its shared code segments from peer sensors instead of disseminating everything from the sink node. While dissemination protocols have been proposed to handle code of each single type, it is challenging to achieve energy efficiency when the code contains both types and needs simultaneous dissemination. In this paper we utilize an adaptive buffer management approach to achieve efficient code dissemination in MA-WSNs. Our experimental results show that adaptive buffer management can reduce the completion time and the message overhead up to 10% and 20% respectively.",
"corpus_id": 11399662
} | [
{
"doc_id": "13798358",
"title": "A durable and energy efficient main memory using phase change memory technology",
"abstract": "Using nonvolatile memories in memory hierarchy has been investigated to reduce its energy consumption because nonvolatile memories consume zero leakage power in memory cells. One of the difficulties is, however, that the endurance of most nonvolatile memory technologies is much shorter than the conventional SRAM and DRAM technology. This has limited its usage to only the low levels of a memory hierarchy, e.g., disks, that is far from the CPU.\n In this paper, we study the use of a new type of nonvolatile memories -- the Phase Change Memory (PCM) as the main memory for a 3D stacked chip. The main challenges we face are the limited PCM endurance, longer access latencies, and higher dynamic power compared to the conventional DRAM technology. We propose techniques to extend the endurance of the PCM to an average of 13 (for MLC PCM cell) to 22 (for SLC PCM) years. We also study the design choices of implementing PCM to achieve the best tradeoff between energy and performance. Our design reduced the total energy of an already low-power DRAM main memory of the same capacity by 65%, and energy-delay2 product by 60%. These results indicate that it is feasible to use PCM technology in place of DRAM in the main memory for better energy efficiency.",
"corpus_id": 13798358,
"score": 1
},
{
"doc_id": "37285882",
"title": "Dynamic adaptation of CSMA/CA MAC protocol for wide area wireless mesh networks",
"abstract": "Though the popular IEEE 802.11 DCF is designed primarily for Wireless LAN (WLAN) environments, today it is being widely used for wide area wireless mesh networking. The protocol parameters of IEEE 802.11 such as timeout values, interframe spaces, and slot durations, sufficient for a general WLAN environment, need to be modified in order to efficiently operate in wide area wireless mesh networks. The current wide area wireless mesh network deployments use manual configuration of these parameters to the upper limit which essentially makes the networks operate at lower system efficiency. In this paper, we propose d802.11 (dynamic 802.11) which dynamically adapts the protocol parameters in order to operate at varying link distances. In fact, in 802.11, a transmitter can face ACK/CTS timeout even when it started receiving ACK/CTS packet before the timeout value. We present three strategies, (i) multiplicative timer backoff (MTB), (ii) additive timer backoff (ATB), and (iii) link RTT memoization (LRM), to adapt the ACK_TIMEOUT in d802.11 in order to provide better adaptation for varying link dimensions. Through extensive simulation experiments we observed significant performance improvement for the proposed strategies. We also theoretically modeled the maximum link throughput as a function of the link dimension for the proposed system. Our results show that the LRM technique provides the best adaptation compared to all other schemes.",
"corpus_id": 37285882,
"score": 0
},
{
"doc_id": "40982114",
"title": "A proof theory for asynchronously communicating real-time systems",
"abstract": "A compositional proof system is presented to axiomatize the real-time behavior of asynchronously communicating processes. Programs are written in a real-time version of CSP where processes asynchronously send and receive messages along channels that are capable of buffering an arbitrary number of messages. Timing properties are expressed in explicitly clock temporal logic, which extends linear temporal logic with a special time variable, referring to a global clock.<<ETX>>",
"corpus_id": 40982114,
"score": 0
},
{
"doc_id": "25182935",
"title": "An efficient random access scheme for OFDMA systems with implicit message transmission",
"abstract": "Random access channel (RACH) is usually used for initial channel access, bandwidth request, etc. In this paper, we analyze the throughput and access delay performance of the RACH in an orthogonal frequency division multiple access (OFDMA) system. A closed-form expression of the RACH throughput is presented and its mean access delays under both binary exponential and uniform backoff policies have also been derived. Moreover, to further improve the utilization of RACH in an OFDMA system, we propose a novel message transmission scheme in a shared RACH. The message in the proposed scheme can be sent implicitly in the cyclic-shifted preambles which make the transmission much more reliable in a contention based RACH. Meanwhile, performance of the preamble detection has also been improved by the redundant information conveyed in messages. Finally, simulation results will demonstrate performance gains of the proposed scheme.",
"corpus_id": 25182935,
"score": 0
},
{
"doc_id": "2961354",
"title": "On Adding Link Dimensional Dynamism to CSMA/CA Based MAC Protocols",
"abstract": "Though the popular IEEE 802.11 DCF is designed primarily for wireless LAN (WLAN) environments, today it is being widely used for wide area wireless mesh networking. The protocol parameters of IEEE 802.11 such as timeout values, interframe spaces, and slot durations, which are sufficient for a general WLAN environment need to be modified in order to efficiently operate in wide area wireless mesh networks. The current wide area wireless mesh network deployments use manual configuration of these parameters to the upper limit which essentially makes the networks operate at lower system efficiency. In this paper, we propose d802.11 (dynamic 802.11) which dynamically adapts the protocol parameters in order to operate at varying link distances. We present three strategies, (i) multiplicative timer back-off (MTB), (ii) additive timer back-off (ATB), and (iii) link RTT memorization (LRM), to adapt the ACK_TIMEOUT in d802.11 in order to provide better adaptation for varying link dimensions. Through extensive simulation experiments we observed significant performance improvement for the proposed strategies. We also theoretically modeled the maximum throughput as a function of the link dimension for the proposed system. Our results show that the LRM technique provides the best adaptation compared to all other schemes.",
"corpus_id": 2961354,
"score": 0
},
{
"doc_id": "7515235",
"title": "On the finite sum representations of the Lauricella functions FD",
"abstract": "Abstract\nBy using divided differences, we derive two different ways of representing the Lauricella function of n variables FD(n)(a,b1,b2,. . .,bn;c;x1,x2,. . .,xn) as a finite sum, for b1,b2,. . .,bn positive integers, and a,c both positive integers or both positive rational numbers with c−a a positive integer.\n",
"corpus_id": 7515235,
"score": 0
}
] |
arnetminer | {
"doc_id": "601474",
"title": "Co-word Analysis using the Chinese Character Set",
"abstract": "Until recently, Chinese texts could not be studied using co-word analysis because the words are not separated by spaces in Chinese (and Japanese). A word can be composed of one or more characters. The online availability of programs that separate Chinese texts makes it possible to analyze them using semantic maps. Chinese characters contain not only information, but also meaning. This may enhance the readability of semantic maps. In this study, we analyze 58 words which occur ten or more times in the 1652 journal titles of the China Scientific and Technical Papers and Citations Database. The word occurrence matrix is visualized and factor-analyzed.",
"corpus_id": 601474
} | [
{
"doc_id": "22092303",
"title": "Nanotechnology as a field of science: Its delineation in terms of journals and patents",
"abstract": "The Journal Citation Reports of the Science Citation Index 2004 were used to delineate a core set of nanotechnology journals and a nanotechnology-relevant set. In comparison with 2003, the core set has grown and the relevant set has decreased. This suggests a higher degree of codification in the field of nanotechnology: the field has become more focused in terms of citation practices. Using the citing patterns among journals at the aggregate level, a core group of ten nanotechnology journals in the vector space can be delineated on the criterion of betweenness centrality. National contributions to this core group of journals are evaluated for the years 2003, 2004, and 2005. Additionally, the specific class of nanotechnology patents in the database of the U. S. Patent and Trade Office (USPTO) is analyzed to determine if non-patent literature references can be used as a source for the delineation of the knowledge base in terms of scientific journals. The references are primarily to general science journals and letters, and therefore not specific enough for the purpose of delineating a journal set.",
"corpus_id": 22092303,
"score": 1
},
{
"doc_id": "34974507",
"title": "The citation impacts and citation environments of Chinese journals in mathematics",
"abstract": "Based on the citation data of journals covered by the China Scientific and Technical Papers and Citations Database (CSTPCD), we obtained aggregated journal-journal citation environments by applying routines developed specifically for this purpose. Local citation impact of journals is defined as the share of the total citations in a local citation environment, which is expressed as a ratio and can be visualized by the size of the nodes. The vertical size of the nodes varies proportionally to a journal’s total citation share, while the horizontal size of the nodes is used to provide citation information after correction for the within-journal (self-) citations. In the “citing” environment, the equivalent of the local citation performance can also be considered as a citation activity index. Using the “citing” patterns as variables one is able to map how the relevant journal environments are perceived by the collective of authors of a journal, while the “cited” environment reflects the impact of journals in a local environment. In this study, we analyze citation impacts of three Chinese journals in mathematics and compare local citation impacts with impact factors. Local citation impacts reflect a journal’s status and function better than (global) impact factors. We also found that authors in Chinese journals prefer international instead of domestic ones as sources for their citations.",
"corpus_id": 34974507,
"score": 1
},
{
"doc_id": "19223951",
"title": "A comparison between the China Scientific and Technical Papers and Citations Database and the Science Citation Index in terms of journal hierarchies and interjournal citation relations",
"abstract": "The journal structure in the China Scientific and Technical Papers and Citations Database (CSTPCD) is analysed from three perspectives: the database level, the specialty level and the institutional level (i.e., university journals versus journals issued by the Chinese Academy of Sciences). The results are compared with those for (Chinese) journals included in the Science Citation Index. The frequency of journal-journal citation relations in the CSTPCD is an order of magnitude lower than in the SCI. Chinese journals, especially high-quality journals, prefer to cite international journals rather than domestic ones. However, Chinese journals do not get an equivalent reception from their international counterparts. The international visibility of Chinese journals is low, but varies among fields of science. Journals of the Chinese Academy of Sciences (CAS) have a better reception in the international scientific community than university journals.",
"corpus_id": 19223951,
"score": 1
},
{
"doc_id": "6453262",
"title": "Are the contributions of China and Korea upsetting the world system of science?",
"abstract": "SummaryInstitutions and their aggregates are not the right units of analysis for developing a science policy with cognitive goals in view. Institutions, however, can be compared in terms of their performance with reference to their previous stages. King's (2004) 'The scientific impact of nations' has provided the data for this comparison. Evaluation of the data from this perspective along the time axis leads to completely different and hitherto overlooked conclusions: a new dynamic can be revealed which points to a group of emerging nations. These nations do not increase their contributions marginally, but their national science systems grow endogenously. In addition to publications, their citation rates keep pace with the exponential growth patterns, albeit with a delay. The center of gravity of the world system of science may be changing accordingly.",
"corpus_id": 6453262,
"score": 1
},
{
"doc_id": "62584254",
"title": "A Comparison between the China Scientific and Technical Papers and Citations Database and the Science Citation Index in terms of journal hierarchies and inter-journal citation relations",
"abstract": "A wearable device and method are provided for reporting the time based on a wrist-related trigger. In one implementation, a wearable apparatus for providing time information to a user includes a wearable image sensor configured to capture real-time image data from an environment of a user of the wearable apparatus. The wearable apparatus also includes at least one processing device programmed to identify in the image data a wrist-related trigger associated with the user. The processing device is also programmed to provide an output to the user, the output including the time information, based on at least the identification of the wrist-related trigger.",
"corpus_id": 62584254,
"score": 1
},
{
"doc_id": "2745933",
"title": "Compositional verification of real-time systems with Explicit Clock Temporal Logic",
"abstract": "To specify and verify real-time systems, we consider a real-time version of temporal logic called Explicit Clock Temporal Logic. Timing properties are specified by extending the classical framework of temporal logic with a special variable which explicitly refers to a global notion of time. Programs are written in an Occam-like real-time language with synchronous message passing. To show that a program satisfies a specification, we formulate a proof system which is proved to be sound and relatively complete. The proof system is compositional, which makes it possible to decompose the design of a large system into the design of subsystems. This is shown by the verification of a small part of an avionics system.",
"corpus_id": 2745933,
"score": 0
},
{
"doc_id": "7114460",
"title": "A High Performance k-NN Classifier Using a Binary Correlation Matrix Memory",
"abstract": "This paper presents a novel and fast k-NN classifier that is based on a binary CMM (Correlation Matrix Memory) neural network. A robust encoding method is developed to meet CMM input requirements. A hardware implementation of the CMM is described, which gives over 200 times the speed of a current mid-range workstation, and is scaleable to very large problems. When tested on several benchmarks and compared with a simple k-NN method, the CMM classifier gave less than 1% lower accuracy and over 4 and 12 times speed-up in software and hardware respectively.",
"corpus_id": 7114460,
"score": 0
},
{
"doc_id": "2961354",
"title": "On Adding Link Dimensional Dynamism to CSMA/CA Based MAC Protocols",
"abstract": "Though the popular IEEE 802.11 DCF is designed primarily for wireless LAN (WLAN) environments, today it is being widely used for wide area wireless mesh networking. The protocol parameters of IEEE 802.11 such as timeout values, interframe spaces, and slot durations, which are sufficient for a general WLAN environment need to be modified in order to efficiently operate in wide area wireless mesh networks. The current wide area wireless mesh network deployments use manual configuration of these parameters to the upper limit which essentially makes the networks operate at lower system efficiency. In this paper, we propose d802.11 (dynamic 802.11) which dynamically adapts the protocol parameters in order to operate at varying link distances. We present three strategies, (i) multiplicative timer back-off (MTB), (ii) additive timer back-off (ATB), and (iii) link RTT memorization (LRM), to adapt the ACK_TIMEOUT in d802.11 in order to provide better adaptation for varying link dimensions. Through extensive simulation experiments we observed significant performance improvement for the proposed strategies. We also theoretically modeled the maximum throughput as a function of the link dimension for the proposed system. Our results show that the LRM technique provides the best adaptation compared to all other schemes.",
"corpus_id": 2961354,
"score": 0
},
{
"doc_id": "6088061",
"title": "Analysis of Welding Defects in Spot Welding Process U-I Curves",
"abstract": "High speed collecting and managing system of spot welding signal can achieve the A/D conversion and data collection. The data of welding current, electrode voltage as well as welding cycle has been collected. Subsequently, they were processed with data acquisition and memory module, wavelet filtering of digit signal module, U-I curves and energy analyzing module, respectively. Based on the collections of welding current and electrode voltage, a criterion splash occurrence and the defective of loose weld by U-I curves change and acreage were proposed. The results prove that the criterion can decipher the welding splash and defective of loose weld accurately.",
"corpus_id": 6088061,
"score": 0
},
{
"doc_id": "206599167",
"title": "A Fuzzy Logic Expert System for Fault Diagnosis and Security Assessment of Power Transformers",
"abstract": null,
"corpus_id": 206599167,
"score": 0
}
] |
arnetminer | {
"doc_id": "37285882",
"title": "Dynamic adaptation of CSMA/CA MAC protocol for wide area wireless mesh networks",
"abstract": "Though the popular IEEE 802.11 DCF is designed primarily for Wireless LAN (WLAN) environments, today it is being widely used for wide area wireless mesh networking. The protocol parameters of IEEE 802.11 such as timeout values, interframe spaces, and slot durations, sufficient for a general WLAN environment, need to be modified in order to efficiently operate in wide area wireless mesh networks. The current wide area wireless mesh network deployments use manual configuration of these parameters to the upper limit which essentially makes the networks operate at lower system efficiency. In this paper, we propose d802.11 (dynamic 802.11) which dynamically adapts the protocol parameters in order to operate at varying link distances. In fact, in 802.11, a transmitter can face ACK/CTS timeout even when it started receiving ACK/CTS packet before the timeout value. We present three strategies, (i) multiplicative timer backoff (MTB), (ii) additive timer backoff (ATB), and (iii) link RTT memoization (LRM), to adapt the ACK_TIMEOUT in d802.11 in order to provide better adaptation for varying link dimensions. Through extensive simulation experiments we observed significant performance improvement for the proposed strategies. We also theoretically modeled the maximum link throughput as a function of the link dimension for the proposed system. Our results show that the LRM technique provides the best adaptation compared to all other schemes.",
"corpus_id": 37285882
} | [
{
"doc_id": "2961354",
"title": "On Adding Link Dimensional Dynamism to CSMA/CA Based MAC Protocols",
"abstract": "Though the popular IEEE 802.11 DCF is designed primarily for wireless LAN (WLAN) environments, today it is being widely used for wide area wireless mesh networking. The protocol parameters of IEEE 802.11 such as timeout values, interframe spaces, and slot durations, which are sufficient for a general WLAN environment need to be modified in order to efficiently operate in wide area wireless mesh networks. The current wide area wireless mesh network deployments use manual configuration of these parameters to the upper limit which essentially makes the networks operate at lower system efficiency. In this paper, we propose d802.11 (dynamic 802.11) which dynamically adapts the protocol parameters in order to operate at varying link distances. We present three strategies, (i) multiplicative timer back-off (MTB), (ii) additive timer back-off (ATB), and (iii) link RTT memorization (LRM), to adapt the ACK_TIMEOUT in d802.11 in order to provide better adaptation for varying link dimensions. Through extensive simulation experiments we observed significant performance improvement for the proposed strategies. We also theoretically modeled the maximum throughput as a function of the link dimension for the proposed system. Our results show that the LRM technique provides the best adaptation compared to all other schemes.",
"corpus_id": 2961354,
"score": 1
},
{
"doc_id": "13798358",
"title": "A durable and energy efficient main memory using phase change memory technology",
"abstract": "Using nonvolatile memories in memory hierarchy has been investigated to reduce its energy consumption because nonvolatile memories consume zero leakage power in memory cells. One of the difficulties is, however, that the endurance of most nonvolatile memory technologies is much shorter than the conventional SRAM and DRAM technology. This has limited its usage to only the low levels of a memory hierarchy, e.g., disks, that is far from the CPU.\n In this paper, we study the use of a new type of nonvolatile memories -- the Phase Change Memory (PCM) as the main memory for a 3D stacked chip. The main challenges we face are the limited PCM endurance, longer access latencies, and higher dynamic power compared to the conventional DRAM technology. We propose techniques to extend the endurance of the PCM to an average of 13 (for MLC PCM cell) to 22 (for SLC PCM) years. We also study the design choices of implementing PCM to achieve the best tradeoff between energy and performance. Our design reduced the total energy of an already low-power DRAM main memory of the same capacity by 65%, and energy-delay2 product by 60%. These results indicate that it is feasible to use PCM technology in place of DRAM in the main memory for better energy efficiency.",
"corpus_id": 13798358,
"score": 0
},
{
"doc_id": "19739278",
"title": "A Framework for Web Usage Mining in Electronic Government",
"abstract": "Web usage mining has been a major component of management strategy to enhance organizational analysis and decision. The literature on Web usage mining that deals with strategies and technologies for effectively employing Web usage mining is quite vast. In recent years, E-government has received much attention from researchers and practitioners. Huge amounts of user access data are produced in Electronic government Web site everyday. The role of these data in the success of government management cannot be overstated because they affect government analysis, prediction, strategies, tactical, operational planning and control. Web usage miming in E-government has an important role to play in setting government objectives, discovering citizen behavior, and determining future courses of actions. Web usage mining in E-government has not received adequate attention from researchers or practitioners. We developed a framework to promote a better understanding of the importance of Web usage mining in E-government. Using the current literature, we developed the framework presented herein, in hopes that it would stimulate more interest in this important area.",
"corpus_id": 19739278,
"score": 0
},
{
"doc_id": "25182935",
"title": "An efficient random access scheme for OFDMA systems with implicit message transmission",
"abstract": "Random access channel (RACH) is usually used for initial channel access, bandwidth request, etc. In this paper, we analyze the throughput and access delay performance of the RACH in an orthogonal frequency division multiple access (OFDMA) system. A closed-form expression of the RACH throughput is presented and its mean access delays under both binary exponential and uniform backoff policies have also been derived. Moreover, to further improve the utilization of RACH in an OFDMA system, we propose a novel message transmission scheme in a shared RACH. The message in the proposed scheme can be sent implicitly in the cyclic-shifted preambles which make the transmission much more reliable in a contention based RACH. Meanwhile, performance of the preamble detection has also been improved by the redundant information conveyed in messages. Finally, simulation results will demonstrate performance gains of the proposed scheme.",
"corpus_id": 25182935,
"score": 0
},
{
"doc_id": "120217870",
"title": "Multivariate Padé approximants to a meromorphic function",
"abstract": "Abstract We explicitly construct the general multivariate Pade approximants to the function G q (x,y) ≔ ∑ j=1 ∞ 1 xy+q j x+q 2j , |q|>1, q∈ C by using the residue theorem and the functional equation method. Then we prove some convergence properties of the approximants.",
"corpus_id": 120217870,
"score": 0
},
{
"doc_id": "14589917",
"title": "Frequent value compression in packet-based NoC architectures",
"abstract": "The proliferation of Chip Multiprocessors (CMPs) has led to the integration of large on-chip caches. For scalability reasons, a large on-chip cache is often divided into smaller banks that are interconnected through packet-based Network-on-Chip (NoC). With increasing number of cores and cache banks integrated on a single die, the on-chip network introduces significant communication latency and power consumption. In this paper, we propose a novel scheme that exploits Frequent Value compression to optimize the power and performance of NoC. Our experimental results show that the proposed scheme reduces the router power by up to 16.7%, with CPI reduction as much as 23.5% in our setting. Comparing to the recent zero pattern compression scheme, the frequent value scheme saves up to 11.0% more router power and has up to 14.5% more CPI reduction. Hardware design of the FV table and its overhead are also presented.",
"corpus_id": 14589917,
"score": 0
}
] |
arnetminer | {
"doc_id": "120217870",
"title": "Multivariate Padé approximants to a meromorphic function",
"abstract": "Abstract We explicitly construct the general multivariate Pade approximants to the function G q (x,y) ≔ ∑ j=1 ∞ 1 xy+q j x+q 2j , |q|>1, q∈ C by using the residue theorem and the functional equation method. Then we prove some convergence properties of the approximants.",
"corpus_id": 120217870
} | [
{
"doc_id": "123438379",
"title": "Explicit Construction of Multivariate Padé Approximants for a q-Logarithm Function",
"abstract": "We explicitly construct the non-homogeneous multivariate Pade approximants to a two variable version of the q-logarithm functionL\"q(x, y)@[email protected]?i, j=0~(q-1)x^iy^jq^i^+^j^+^1-1, for |q|>1 and |x|, |y|<|q|, by using the residue theorem and functional equation method.",
"corpus_id": 123438379,
"score": 1
},
{
"doc_id": "7515235",
"title": "On the finite sum representations of the Lauricella functions FD",
"abstract": "Abstract\nBy using divided differences, we derive two different ways of representing the Lauricella function of n variables FD(n)(a,b1,b2,. . .,bn;c;x1,x2,. . .,xn) as a finite sum, for b1,b2,. . .,bn positive integers, and a,c both positive integers or both positive rational numbers with c−a a positive integer.\n",
"corpus_id": 7515235,
"score": 0
},
{
"doc_id": "20984696",
"title": "Available bit rate (ABR) source control and delay estimation",
"abstract": "The problem of regulating the transmission rate of an available bit rate (ABR) traffic source in an ATM network is examined. Of particular interest is linear quadratic (LQ) rate regulation based on estimates of the round-trip propagation delay. The round-trip delay is estimated using a nonlinear least mean square (NLMS) algorithm. Simulation results are used to demonstrate the method.",
"corpus_id": 20984696,
"score": 0
},
{
"doc_id": "16952040",
"title": "A Speaker Verification System Based on EMD",
"abstract": "Most of the speech utterance feature extraction methods are based on the assumptions: utterance signal is short-term stable and independent between each other adjacent frames. This approach ignores the dynamic characteristics of speech signal. For the time-varying characteristics of the speech utterance, we propose a new feature extraction method based on empirical mode decomposition EMD. We can extract the LPCC feature parameters from different stages of IMFs which are decomposed by EMD process. A speaker verification system based on EMD is proposed and it is shown that it is better performance than the one with traditional LPCC features based on short-term process.",
"corpus_id": 16952040,
"score": 0
},
{
"doc_id": "34974507",
"title": "The citation impacts and citation environments of Chinese journals in mathematics",
"abstract": "Based on the citation data of journals covered by the China Scientific and Technical Papers and Citations Database (CSTPCD), we obtained aggregated journal-journal citation environments by applying routines developed specifically for this purpose. Local citation impact of journals is defined as the share of the total citations in a local citation environment, which is expressed as a ratio and can be visualized by the size of the nodes. The vertical size of the nodes varies proportionally to a journal’s total citation share, while the horizontal size of the nodes is used to provide citation information after correction for the within-journal (self-) citations. In the “citing” environment, the equivalent of the local citation performance can also be considered as a citation activity index. Using the “citing” patterns as variables one is able to map how the relevant journal environments are perceived by the collective of authors of a journal, while the “cited” environment reflects the impact of journals in a local environment. In this study, we analyze citation impacts of three Chinese journals in mathematics and compare local citation impacts with impact factors. Local citation impacts reflect a journal’s status and function better than (global) impact factors. We also found that authors in Chinese journals prefer international instead of domestic ones as sources for their citations.",
"corpus_id": 34974507,
"score": 0
},
{
"doc_id": "10867701",
"title": "Asymptotic Capacity of Infrastructure Wireless Mesh Networks",
"abstract": "An infrastructure wireless mesh network (WMN) is a hierarchical network consisting of mesh clients, mesh routers and gateways. Mesh routers constitute a wireless mesh backbone, to which mesh clients are connected as a star topology, and gateways are chosen among mesh routers providing Internet access. In this paper, the throughput capacity of infrastructure WMNs is studied. For such a network with Nc randomly distributed mesh clients, Nr regularly placed mesh routers and Ng gateways, assuming that each mesh router can transmit at W bits/s, the per-client throughput capacity has been derived as a function of Nc , Nr , Ng and W . The result illustrates that, in order to achieve high capacity performance, the number of mesh routers and the number of gateways must be properly chosen. It also reveals that an infrastructure WMN can achieve the same asymptotic throughput capacity as that of a hybrid ad hoc network by choosing only a small number of mesh routers as gateways. This property makes WMNs a very promising solution for future wireless networking.",
"corpus_id": 10867701,
"score": 0
}
] |
arnetminer | {
"doc_id": "2745933",
"title": "Compositional verification of real-time systems with Explicit Clock Temporal Logic",
"abstract": "To specify and verify real-time systems, we consider a real-time version of temporal logic called Explicit Clock Temporal Logic. Timing properties are specified by extending the classical framework of temporal logic with a special variable which explicitly refers to a global notion of time. Programs are written in an Occam-like real-time language with synchronous message passing. To show that a program satisfies a specification, we formulate a proof system which is proved to be sound and relatively complete. The proof system is compositional, which makes it possible to decompose the design of a large system into the design of subsystems. This is shown by the verification of a small part of an avionics system.",
"corpus_id": 2745933
} | [
{
"doc_id": "40982114",
"title": "A proof theory for asynchronously communicating real-time systems",
"abstract": "A compositional proof system is presented to axiomatize the real-time behavior of asynchronously communicating processes. Programs are written in a real-time version of CSP where processes asynchronously send and receive messages along channels that are capable of buffering an arbitrary number of messages. Timing properties are expressed in explicitly clock temporal logic, which extends linear temporal logic with a special time variable, referring to a global clock.<<ETX>>",
"corpus_id": 40982114,
"score": 1
},
{
"doc_id": "37285882",
"title": "Dynamic adaptation of CSMA/CA MAC protocol for wide area wireless mesh networks",
"abstract": "Though the popular IEEE 802.11 DCF is designed primarily for Wireless LAN (WLAN) environments, today it is being widely used for wide area wireless mesh networking. The protocol parameters of IEEE 802.11 such as timeout values, interframe spaces, and slot durations, sufficient for a general WLAN environment, need to be modified in order to efficiently operate in wide area wireless mesh networks. The current wide area wireless mesh network deployments use manual configuration of these parameters to the upper limit which essentially makes the networks operate at lower system efficiency. In this paper, we propose d802.11 (dynamic 802.11) which dynamically adapts the protocol parameters in order to operate at varying link distances. In fact, in 802.11, a transmitter can face ACK/CTS timeout even when it started receiving ACK/CTS packet before the timeout value. We present three strategies, (i) multiplicative timer backoff (MTB), (ii) additive timer backoff (ATB), and (iii) link RTT memoization (LRM), to adapt the ACK_TIMEOUT in d802.11 in order to provide better adaptation for varying link dimensions. Through extensive simulation experiments we observed significant performance improvement for the proposed strategies. We also theoretically modeled the maximum link throughput as a function of the link dimension for the proposed system. Our results show that the LRM technique provides the best adaptation compared to all other schemes.",
"corpus_id": 37285882,
"score": 0
},
{
"doc_id": "123438379",
"title": "Explicit Construction of Multivariate Padé Approximants for a q-Logarithm Function",
"abstract": "We explicitly construct the non-homogeneous multivariate Pade approximants to a two variable version of the q-logarithm functionL\"q(x, y)@[email protected]?i, j=0~(q-1)x^iy^jq^i^+^j^+^1-1, for |q|>1 and |x|, |y|<|q|, by using the residue theorem and functional equation method.",
"corpus_id": 123438379,
"score": 0
},
{
"doc_id": "16634499",
"title": "An Approximate Controller for Nonlinear Networked Control Systems with Time Delay",
"abstract": "An approximation controller is developed to obtain an optimal control for nonlinear networked control systems (NCS) with time delay. In the approach only the non-linear compensating term, solution of a sequence of adjoint vector differential equations, is required iteration. By taking the finite iteration of non-linear compensating term of optimal solution sequence, a suboptimal control law for NCS can be obtained.",
"corpus_id": 16634499,
"score": 0
},
{
"doc_id": "19223951",
"title": "A comparison between the China Scientific and Technical Papers and Citations Database and the Science Citation Index in terms of journal hierarchies and interjournal citation relations",
"abstract": "The journal structure in the China Scientific and Technical Papers and Citations Database (CSTPCD) is analysed from three perspectives: the database level, the specialty level and the institutional level (i.e., university journals versus journals issued by the Chinese Academy of Sciences). The results are compared with those for (Chinese) journals included in the Science Citation Index. The frequency of journal-journal citation relations in the CSTPCD is an order of magnitude lower than in the SCI. Chinese journals, especially high-quality journals, prefer to cite international journals rather than domestic ones. However, Chinese journals do not get an equivalent reception from their international counterparts. The international visibility of Chinese journals is low, but varies among fields of science. Journals of the Chinese Academy of Sciences (CAS) have a better reception in the international scientific community than university journals.",
"corpus_id": 19223951,
"score": 0
},
{
"doc_id": "16952040",
"title": "A Speaker Verification System Based on EMD",
"abstract": "Most of the speech utterance feature extraction methods are based on the assumptions: utterance signal is short-term stable and independent between each other adjacent frames. This approach ignores the dynamic characteristics of speech signal. For the time-varying characteristics of the speech utterance, we propose a new feature extraction method based on empirical mode decomposition EMD. We can extract the LPCC feature parameters from different stages of IMFs which are decomposed by EMD process. A speaker verification system based on EMD is proposed and it is shown that it is better performance than the one with traditional LPCC features based on short-term process.",
"corpus_id": 16952040,
"score": 0
}
] |
arnetminer | {
"doc_id": "62584254",
"title": "A Comparison between the China Scientific and Technical Papers and Citations Database and the Science Citation Index in terms of journal hierarchies and inter-journal citation relations",
"abstract": "A wearable device and method are provided for reporting the time based on a wrist-related trigger. In one implementation, a wearable apparatus for providing time information to a user includes a wearable image sensor configured to capture real-time image data from an environment of a user of the wearable apparatus. The wearable apparatus also includes at least one processing device programmed to identify in the image data a wrist-related trigger associated with the user. The processing device is also programmed to provide an output to the user, the output including the time information, based on at least the identification of the wrist-related trigger.",
"corpus_id": 62584254
} | [
{
"doc_id": "19223951",
"title": "A comparison between the China Scientific and Technical Papers and Citations Database and the Science Citation Index in terms of journal hierarchies and interjournal citation relations",
"abstract": "The journal structure in the China Scientific and Technical Papers and Citations Database (CSTPCD) is analysed from three perspectives: the database level, the specialty level and the institutional level (i.e., university journals versus journals issued by the Chinese Academy of Sciences). The results are compared with those for (Chinese) journals included in the Science Citation Index. The frequency of journal-journal citation relations in the CSTPCD is an order of magnitude lower than in the SCI. Chinese journals, especially high-quality journals, prefer to cite international journals rather than domestic ones. However, Chinese journals do not get an equivalent reception from their international counterparts. The international visibility of Chinese journals is low, but varies among fields of science. Journals of the Chinese Academy of Sciences (CAS) have a better reception in the international scientific community than university journals.",
"corpus_id": 19223951,
"score": 1
},
{
"doc_id": "22092303",
"title": "Nanotechnology as a field of science: Its delineation in terms of journals and patents",
"abstract": "The Journal Citation Reports of the Science Citation Index 2004 were used to delineate a core set of nanotechnology journals and a nanotechnology-relevant set. In comparison with 2003, the core set has grown and the relevant set has decreased. This suggests a higher degree of codification in the field of nanotechnology: the field has become more focused in terms of citation practices. Using the citing patterns among journals at the aggregate level, a core group of ten nanotechnology journals in the vector space can be delineated on the criterion of betweenness centrality. National contributions to this core group of journals are evaluated for the years 2003, 2004, and 2005. Additionally, the specific class of nanotechnology patents in the database of the U. S. Patent and Trade Office (USPTO) is analyzed to determine if non-patent literature references can be used as a source for the delineation of the knowledge base in terms of scientific journals. The references are primarily to general science journals and letters, and therefore not specific enough for the purpose of delineating a journal set.",
"corpus_id": 22092303,
"score": 1
},
{
"doc_id": "34974507",
"title": "The citation impacts and citation environments of Chinese journals in mathematics",
"abstract": "Based on the citation data of journals covered by the China Scientific and Technical Papers and Citations Database (CSTPCD), we obtained aggregated journal-journal citation environments by applying routines developed specifically for this purpose. Local citation impact of journals is defined as the share of the total citations in a local citation environment, which is expressed as a ratio and can be visualized by the size of the nodes. The vertical size of the nodes varies proportionally to a journal’s total citation share, while the horizontal size of the nodes is used to provide citation information after correction for the within-journal (self-) citations. In the “citing” environment, the equivalent of the local citation performance can also be considered as a citation activity index. Using the “citing” patterns as variables one is able to map how the relevant journal environments are perceived by the collective of authors of a journal, while the “cited” environment reflects the impact of journals in a local environment. In this study, we analyze citation impacts of three Chinese journals in mathematics and compare local citation impacts with impact factors. Local citation impacts reflect a journal’s status and function better than (global) impact factors. We also found that authors in Chinese journals prefer international instead of domestic ones as sources for their citations.",
"corpus_id": 34974507,
"score": 1
},
{
"doc_id": "6453262",
"title": "Are the contributions of China and Korea upsetting the world system of science?",
"abstract": "SummaryInstitutions and their aggregates are not the right units of analysis for developing a science policy with cognitive goals in view. Institutions, however, can be compared in terms of their performance with reference to their previous stages. King's (2004) 'The scientific impact of nations' has provided the data for this comparison. Evaluation of the data from this perspective along the time axis leads to completely different and hitherto overlooked conclusions: a new dynamic can be revealed which points to a group of emerging nations. These nations do not increase their contributions marginally, but their national science systems grow endogenously. In addition to publications, their citation rates keep pace with the exponential growth patterns, albeit with a delay. The center of gravity of the world system of science may be changing accordingly.",
"corpus_id": 6453262,
"score": 1
},
{
"doc_id": "120217870",
"title": "Multivariate Padé approximants to a meromorphic function",
"abstract": "Abstract We explicitly construct the general multivariate Pade approximants to the function G q (x,y) ≔ ∑ j=1 ∞ 1 xy+q j x+q 2j , |q|>1, q∈ C by using the residue theorem and the functional equation method. Then we prove some convergence properties of the approximants.",
"corpus_id": 120217870,
"score": 0
},
{
"doc_id": "123438379",
"title": "Explicit Construction of Multivariate Padé Approximants for a q-Logarithm Function",
"abstract": "We explicitly construct the non-homogeneous multivariate Pade approximants to a two variable version of the q-logarithm functionL\"q(x, y)@[email protected]?i, j=0~(q-1)x^iy^jq^i^+^j^+^1-1, for |q|>1 and |x|, |y|<|q|, by using the residue theorem and functional equation method.",
"corpus_id": 123438379,
"score": 0
},
{
"doc_id": "16952040",
"title": "A Speaker Verification System Based on EMD",
"abstract": "Most of the speech utterance feature extraction methods are based on the assumptions: utterance signal is short-term stable and independent between each other adjacent frames. This approach ignores the dynamic characteristics of speech signal. For the time-varying characteristics of the speech utterance, we propose a new feature extraction method based on empirical mode decomposition EMD. We can extract the LPCC feature parameters from different stages of IMFs which are decomposed by EMD process. A speaker verification system based on EMD is proposed and it is shown that it is better performance than the one with traditional LPCC features based on short-term process.",
"corpus_id": 16952040,
"score": 0
},
{
"doc_id": "5738095",
"title": "Local Elastic Registration of Multimodal Medical Image Using Robust Point Matching and Compact Support RBF",
"abstract": "A novel local elastic registration of multimodal medical image method is proposed in this paper. At first, local deformation regions are detected by evaluating the variation of mutual information in re-quantified gray space of images. The re-quantified image retains anatomical structure of the organ well and reduces the gray levels greatly. Mutual information performs better in the quantification space and can be used to detect whether the deformation happens in small sampling images. Next, edges of the local deformation regions are detected. Fuzzy clustering method is performed on edge points and the clustering centers are chosen as candidate landmarks. Robust point matching is used to estimate landmarks correspondence in the local deformation regions. Finally, a new compact support radial basis function CSTPF has been adopted to deform image, which cost less bending energy than other RBFs. Local registration experiments of multimodal medical images show the feasibility of our method.",
"corpus_id": 5738095,
"score": 0
},
{
"doc_id": "7114460",
"title": "A High Performance k-NN Classifier Using a Binary Correlation Matrix Memory",
"abstract": "This paper presents a novel and fast k-NN classifier that is based on a binary CMM (Correlation Matrix Memory) neural network. A robust encoding method is developed to meet CMM input requirements. A hardware implementation of the CMM is described, which gives over 200 times the speed of a current mid-range workstation, and is scaleable to very large problems. When tested on several benchmarks and compared with a simple k-NN method, the CMM classifier gave less than 1% lower accuracy and over 4 and 12 times speed-up in software and hardware respectively.",
"corpus_id": 7114460,
"score": 0
}
] |
arnetminer | {
"doc_id": "22092303",
"title": "Nanotechnology as a field of science: Its delineation in terms of journals and patents",
"abstract": "The Journal Citation Reports of the Science Citation Index 2004 were used to delineate a core set of nanotechnology journals and a nanotechnology-relevant set. In comparison with 2003, the core set has grown and the relevant set has decreased. This suggests a higher degree of codification in the field of nanotechnology: the field has become more focused in terms of citation practices. Using the citing patterns among journals at the aggregate level, a core group of ten nanotechnology journals in the vector space can be delineated on the criterion of betweenness centrality. National contributions to this core group of journals are evaluated for the years 2003, 2004, and 2005. Additionally, the specific class of nanotechnology patents in the database of the U. S. Patent and Trade Office (USPTO) is analyzed to determine if non-patent literature references can be used as a source for the delineation of the knowledge base in terms of scientific journals. The references are primarily to general science journals and letters, and therefore not specific enough for the purpose of delineating a journal set.",
"corpus_id": 22092303
} | [
{
"doc_id": "6453262",
"title": "Are the contributions of China and Korea upsetting the world system of science?",
"abstract": "SummaryInstitutions and their aggregates are not the right units of analysis for developing a science policy with cognitive goals in view. Institutions, however, can be compared in terms of their performance with reference to their previous stages. King's (2004) 'The scientific impact of nations' has provided the data for this comparison. Evaluation of the data from this perspective along the time axis leads to completely different and hitherto overlooked conclusions: a new dynamic can be revealed which points to a group of emerging nations. These nations do not increase their contributions marginally, but their national science systems grow endogenously. In addition to publications, their citation rates keep pace with the exponential growth patterns, albeit with a delay. The center of gravity of the world system of science may be changing accordingly.",
"corpus_id": 6453262,
"score": 1
},
{
"doc_id": "34974507",
"title": "The citation impacts and citation environments of Chinese journals in mathematics",
"abstract": "Based on the citation data of journals covered by the China Scientific and Technical Papers and Citations Database (CSTPCD), we obtained aggregated journal-journal citation environments by applying routines developed specifically for this purpose. Local citation impact of journals is defined as the share of the total citations in a local citation environment, which is expressed as a ratio and can be visualized by the size of the nodes. The vertical size of the nodes varies proportionally to a journal’s total citation share, while the horizontal size of the nodes is used to provide citation information after correction for the within-journal (self-) citations. In the “citing” environment, the equivalent of the local citation performance can also be considered as a citation activity index. Using the “citing” patterns as variables one is able to map how the relevant journal environments are perceived by the collective of authors of a journal, while the “cited” environment reflects the impact of journals in a local environment. In this study, we analyze citation impacts of three Chinese journals in mathematics and compare local citation impacts with impact factors. Local citation impacts reflect a journal’s status and function better than (global) impact factors. We also found that authors in Chinese journals prefer international instead of domestic ones as sources for their citations.",
"corpus_id": 34974507,
"score": 1
},
{
"doc_id": "19223951",
"title": "A comparison between the China Scientific and Technical Papers and Citations Database and the Science Citation Index in terms of journal hierarchies and interjournal citation relations",
"abstract": "The journal structure in the China Scientific and Technical Papers and Citations Database (CSTPCD) is analysed from three perspectives: the database level, the specialty level and the institutional level (i.e., university journals versus journals issued by the Chinese Academy of Sciences). The results are compared with those for (Chinese) journals included in the Science Citation Index. The frequency of journal-journal citation relations in the CSTPCD is an order of magnitude lower than in the SCI. Chinese journals, especially high-quality journals, prefer to cite international journals rather than domestic ones. However, Chinese journals do not get an equivalent reception from their international counterparts. The international visibility of Chinese journals is low, but varies among fields of science. Journals of the Chinese Academy of Sciences (CAS) have a better reception in the international scientific community than university journals.",
"corpus_id": 19223951,
"score": 1
},
{
"doc_id": "16952040",
"title": "A Speaker Verification System Based on EMD",
"abstract": "Most of the speech utterance feature extraction methods are based on the assumptions: utterance signal is short-term stable and independent between each other adjacent frames. This approach ignores the dynamic characteristics of speech signal. For the time-varying characteristics of the speech utterance, we propose a new feature extraction method based on empirical mode decomposition EMD. We can extract the LPCC feature parameters from different stages of IMFs which are decomposed by EMD process. A speaker verification system based on EMD is proposed and it is shown that it is better performance than the one with traditional LPCC features based on short-term process.",
"corpus_id": 16952040,
"score": 0
},
{
"doc_id": "20984696",
"title": "Available bit rate (ABR) source control and delay estimation",
"abstract": "The problem of regulating the transmission rate of an available bit rate (ABR) traffic source in an ATM network is examined. Of particular interest is linear quadratic (LQ) rate regulation based on estimates of the round-trip propagation delay. The round-trip delay is estimated using a nonlinear least mean square (NLMS) algorithm. Simulation results are used to demonstrate the method.",
"corpus_id": 20984696,
"score": 0
},
{
"doc_id": "2961354",
"title": "On Adding Link Dimensional Dynamism to CSMA/CA Based MAC Protocols",
"abstract": "Though the popular IEEE 802.11 DCF is designed primarily for wireless LAN (WLAN) environments, today it is being widely used for wide area wireless mesh networking. The protocol parameters of IEEE 802.11 such as timeout values, interframe spaces, and slot durations, which are sufficient for a general WLAN environment need to be modified in order to efficiently operate in wide area wireless mesh networks. The current wide area wireless mesh network deployments use manual configuration of these parameters to the upper limit which essentially makes the networks operate at lower system efficiency. In this paper, we propose d802.11 (dynamic 802.11) which dynamically adapts the protocol parameters in order to operate at varying link distances. We present three strategies, (i) multiplicative timer back-off (MTB), (ii) additive timer back-off (ATB), and (iii) link RTT memorization (LRM), to adapt the ACK_TIMEOUT in d802.11 in order to provide better adaptation for varying link dimensions. Through extensive simulation experiments we observed significant performance improvement for the proposed strategies. We also theoretically modeled the maximum throughput as a function of the link dimension for the proposed system. Our results show that the LRM technique provides the best adaptation compared to all other schemes.",
"corpus_id": 2961354,
"score": 0
},
{
"doc_id": "123438379",
"title": "Explicit Construction of Multivariate Padé Approximants for a q-Logarithm Function",
"abstract": "We explicitly construct the non-homogeneous multivariate Pade approximants to a two variable version of the q-logarithm functionL\"q(x, y)@[email protected]?i, j=0~(q-1)x^iy^jq^i^+^j^+^1-1, for |q|>1 and |x|, |y|<|q|, by using the residue theorem and functional equation method.",
"corpus_id": 123438379,
"score": 0
},
{
"doc_id": "5738095",
"title": "Local Elastic Registration of Multimodal Medical Image Using Robust Point Matching and Compact Support RBF",
"abstract": "A novel local elastic registration of multimodal medical image method is proposed in this paper. At first, local deformation regions are detected by evaluating the variation of mutual information in re-quantified gray space of images. The re-quantified image retains anatomical structure of the organ well and reduces the gray levels greatly. Mutual information performs better in the quantification space and can be used to detect whether the deformation happens in small sampling images. Next, edges of the local deformation regions are detected. Fuzzy clustering method is performed on edge points and the clustering centers are chosen as candidate landmarks. Robust point matching is used to estimate landmarks correspondence in the local deformation regions. Finally, a new compact support radial basis function CSTPF has been adopted to deform image, which cost less bending energy than other RBFs. Local registration experiments of multimodal medical images show the feasibility of our method.",
"corpus_id": 5738095,
"score": 0
}
] |
arnetminer | {
"doc_id": "34974507",
"title": "The citation impacts and citation environments of Chinese journals in mathematics",
"abstract": "Based on the citation data of journals covered by the China Scientific and Technical Papers and Citations Database (CSTPCD), we obtained aggregated journal-journal citation environments by applying routines developed specifically for this purpose. Local citation impact of journals is defined as the share of the total citations in a local citation environment, which is expressed as a ratio and can be visualized by the size of the nodes. The vertical size of the nodes varies proportionally to a journal’s total citation share, while the horizontal size of the nodes is used to provide citation information after correction for the within-journal (self-) citations. In the “citing” environment, the equivalent of the local citation performance can also be considered as a citation activity index. Using the “citing” patterns as variables one is able to map how the relevant journal environments are perceived by the collective of authors of a journal, while the “cited” environment reflects the impact of journals in a local environment. In this study, we analyze citation impacts of three Chinese journals in mathematics and compare local citation impacts with impact factors. Local citation impacts reflect a journal’s status and function better than (global) impact factors. We also found that authors in Chinese journals prefer international instead of domestic ones as sources for their citations.",
"corpus_id": 34974507
} | [
{
"doc_id": "19223951",
"title": "A comparison between the China Scientific and Technical Papers and Citations Database and the Science Citation Index in terms of journal hierarchies and interjournal citation relations",
"abstract": "The journal structure in the China Scientific and Technical Papers and Citations Database (CSTPCD) is analysed from three perspectives: the database level, the specialty level and the institutional level (i.e., university journals versus journals issued by the Chinese Academy of Sciences). The results are compared with those for (Chinese) journals included in the Science Citation Index. The frequency of journal-journal citation relations in the CSTPCD is an order of magnitude lower than in the SCI. Chinese journals, especially high-quality journals, prefer to cite international journals rather than domestic ones. However, Chinese journals do not get an equivalent reception from their international counterparts. The international visibility of Chinese journals is low, but varies among fields of science. Journals of the Chinese Academy of Sciences (CAS) have a better reception in the international scientific community than university journals.",
"corpus_id": 19223951,
"score": 1
},
{
"doc_id": "6453262",
"title": "Are the contributions of China and Korea upsetting the world system of science?",
"abstract": "SummaryInstitutions and their aggregates are not the right units of analysis for developing a science policy with cognitive goals in view. Institutions, however, can be compared in terms of their performance with reference to their previous stages. King's (2004) 'The scientific impact of nations' has provided the data for this comparison. Evaluation of the data from this perspective along the time axis leads to completely different and hitherto overlooked conclusions: a new dynamic can be revealed which points to a group of emerging nations. These nations do not increase their contributions marginally, but their national science systems grow endogenously. In addition to publications, their citation rates keep pace with the exponential growth patterns, albeit with a delay. The center of gravity of the world system of science may be changing accordingly.",
"corpus_id": 6453262,
"score": 1
},
{
"doc_id": "6088061",
"title": "Analysis of Welding Defects in Spot Welding Process U-I Curves",
"abstract": "High speed collecting and managing system of spot welding signal can achieve the A/D conversion and data collection. The data of welding current, electrode voltage as well as welding cycle has been collected. Subsequently, they were processed with data acquisition and memory module, wavelet filtering of digit signal module, U-I curves and energy analyzing module, respectively. Based on the collections of welding current and electrode voltage, a criterion splash occurrence and the defective of loose weld by U-I curves change and acreage were proposed. The results prove that the criterion can decipher the welding splash and defective of loose weld accurately.",
"corpus_id": 6088061,
"score": 0
},
{
"doc_id": "14589917",
"title": "Frequent value compression in packet-based NoC architectures",
"abstract": "The proliferation of Chip Multiprocessors (CMPs) has led to the integration of large on-chip caches. For scalability reasons, a large on-chip cache is often divided into smaller banks that are interconnected through packet-based Network-on-Chip (NoC). With increasing number of cores and cache banks integrated on a single die, the on-chip network introduces significant communication latency and power consumption. In this paper, we propose a novel scheme that exploits Frequent Value compression to optimize the power and performance of NoC. Our experimental results show that the proposed scheme reduces the router power by up to 16.7%, with CPI reduction as much as 23.5% in our setting. Comparing to the recent zero pattern compression scheme, the frequent value scheme saves up to 11.0% more router power and has up to 14.5% more CPI reduction. Hardware design of the FV table and its overhead are also presented.",
"corpus_id": 14589917,
"score": 0
},
{
"doc_id": "6645019",
"title": "An Energy Reporting Aggregation Method Based on EAFM and DTRM in WSN",
"abstract": "Analyzed the energy consumption disciplinarian of the nodes in WSN, the node’s Energy Attenuation Forecast Model(EAFM) can be established. A Difference-threshold Reporting Mechanism (DTRM) is used to report the residual energy of nodes. The energy collection mechanism based on EAFM and DTRM can reduce energy data reporting times significantly,improve the efficient of energy data collection, save the node's energy at the same time. The experiments in the platform of telosb nodes show that the predicable rate is between 70% and85%, and this method can extend the node’s life by 1% ~ 4.5%.",
"corpus_id": 6645019,
"score": 0
},
{
"doc_id": "13798358",
"title": "A durable and energy efficient main memory using phase change memory technology",
"abstract": "Using nonvolatile memories in memory hierarchy has been investigated to reduce its energy consumption because nonvolatile memories consume zero leakage power in memory cells. One of the difficulties is, however, that the endurance of most nonvolatile memory technologies is much shorter than the conventional SRAM and DRAM technology. This has limited its usage to only the low levels of a memory hierarchy, e.g., disks, that is far from the CPU.\n In this paper, we study the use of a new type of nonvolatile memories -- the Phase Change Memory (PCM) as the main memory for a 3D stacked chip. The main challenges we face are the limited PCM endurance, longer access latencies, and higher dynamic power compared to the conventional DRAM technology. We propose techniques to extend the endurance of the PCM to an average of 13 (for MLC PCM cell) to 22 (for SLC PCM) years. We also study the design choices of implementing PCM to achieve the best tradeoff between energy and performance. Our design reduced the total energy of an already low-power DRAM main memory of the same capacity by 65%, and energy-delay2 product by 60%. These results indicate that it is feasible to use PCM technology in place of DRAM in the main memory for better energy efficiency.",
"corpus_id": 13798358,
"score": 0
},
{
"doc_id": "2961354",
"title": "On Adding Link Dimensional Dynamism to CSMA/CA Based MAC Protocols",
"abstract": "Though the popular IEEE 802.11 DCF is designed primarily for wireless LAN (WLAN) environments, today it is being widely used for wide area wireless mesh networking. The protocol parameters of IEEE 802.11 such as timeout values, interframe spaces, and slot durations, which are sufficient for a general WLAN environment need to be modified in order to efficiently operate in wide area wireless mesh networks. The current wide area wireless mesh network deployments use manual configuration of these parameters to the upper limit which essentially makes the networks operate at lower system efficiency. In this paper, we propose d802.11 (dynamic 802.11) which dynamically adapts the protocol parameters in order to operate at varying link distances. We present three strategies, (i) multiplicative timer back-off (MTB), (ii) additive timer back-off (ATB), and (iii) link RTT memorization (LRM), to adapt the ACK_TIMEOUT in d802.11 in order to provide better adaptation for varying link dimensions. Through extensive simulation experiments we observed significant performance improvement for the proposed strategies. We also theoretically modeled the maximum throughput as a function of the link dimension for the proposed system. Our results show that the LRM technique provides the best adaptation compared to all other schemes.",
"corpus_id": 2961354,
"score": 0
}
] |
arnetminer | {
"doc_id": "6088061",
"title": "Analysis of Welding Defects in Spot Welding Process U-I Curves",
"abstract": "High speed collecting and managing system of spot welding signal can achieve the A/D conversion and data collection. The data of welding current, electrode voltage as well as welding cycle has been collected. Subsequently, they were processed with data acquisition and memory module, wavelet filtering of digit signal module, U-I curves and energy analyzing module, respectively. Based on the collections of welding current and electrode voltage, a criterion splash occurrence and the defective of loose weld by U-I curves change and acreage were proposed. The results prove that the criterion can decipher the welding splash and defective of loose weld accurately.",
"corpus_id": 6088061
} | [
{
"doc_id": "16952040",
"title": "A Speaker Verification System Based on EMD",
"abstract": "Most of the speech utterance feature extraction methods are based on the assumptions: utterance signal is short-term stable and independent between each other adjacent frames. This approach ignores the dynamic characteristics of speech signal. For the time-varying characteristics of the speech utterance, we propose a new feature extraction method based on empirical mode decomposition EMD. We can extract the LPCC feature parameters from different stages of IMFs which are decomposed by EMD process. A speaker verification system based on EMD is proposed and it is shown that it is better performance than the one with traditional LPCC features based on short-term process.",
"corpus_id": 16952040,
"score": 1
},
{
"doc_id": "6645019",
"title": "An Energy Reporting Aggregation Method Based on EAFM and DTRM in WSN",
"abstract": "Analyzed the energy consumption disciplinarian of the nodes in WSN, the node’s Energy Attenuation Forecast Model(EAFM) can be established. A Difference-threshold Reporting Mechanism (DTRM) is used to report the residual energy of nodes. The energy collection mechanism based on EAFM and DTRM can reduce energy data reporting times significantly,improve the efficient of energy data collection, save the node's energy at the same time. The experiments in the platform of telosb nodes show that the predicable rate is between 70% and85%, and this method can extend the node’s life by 1% ~ 4.5%.",
"corpus_id": 6645019,
"score": 0
},
{
"doc_id": "16634499",
"title": "An Approximate Controller for Nonlinear Networked Control Systems with Time Delay",
"abstract": "An approximation controller is developed to obtain an optimal control for nonlinear networked control systems (NCS) with time delay. In the approach only the non-linear compensating term, solution of a sequence of adjoint vector differential equations, is required iteration. By taking the finite iteration of non-linear compensating term of optimal solution sequence, a suboptimal control law for NCS can be obtained.",
"corpus_id": 16634499,
"score": 0
},
{
"doc_id": "19223951",
"title": "A comparison between the China Scientific and Technical Papers and Citations Database and the Science Citation Index in terms of journal hierarchies and interjournal citation relations",
"abstract": "The journal structure in the China Scientific and Technical Papers and Citations Database (CSTPCD) is analysed from three perspectives: the database level, the specialty level and the institutional level (i.e., university journals versus journals issued by the Chinese Academy of Sciences). The results are compared with those for (Chinese) journals included in the Science Citation Index. The frequency of journal-journal citation relations in the CSTPCD is an order of magnitude lower than in the SCI. Chinese journals, especially high-quality journals, prefer to cite international journals rather than domestic ones. However, Chinese journals do not get an equivalent reception from their international counterparts. The international visibility of Chinese journals is low, but varies among fields of science. Journals of the Chinese Academy of Sciences (CAS) have a better reception in the international scientific community than university journals.",
"corpus_id": 19223951,
"score": 0
},
{
"doc_id": "22092303",
"title": "Nanotechnology as a field of science: Its delineation in terms of journals and patents",
"abstract": "The Journal Citation Reports of the Science Citation Index 2004 were used to delineate a core set of nanotechnology journals and a nanotechnology-relevant set. In comparison with 2003, the core set has grown and the relevant set has decreased. This suggests a higher degree of codification in the field of nanotechnology: the field has become more focused in terms of citation practices. Using the citing patterns among journals at the aggregate level, a core group of ten nanotechnology journals in the vector space can be delineated on the criterion of betweenness centrality. National contributions to this core group of journals are evaluated for the years 2003, 2004, and 2005. Additionally, the specific class of nanotechnology patents in the database of the U. S. Patent and Trade Office (USPTO) is analyzed to determine if non-patent literature references can be used as a source for the delineation of the knowledge base in terms of scientific journals. The references are primarily to general science journals and letters, and therefore not specific enough for the purpose of delineating a journal set.",
"corpus_id": 22092303,
"score": 0
},
{
"doc_id": "6453262",
"title": "Are the contributions of China and Korea upsetting the world system of science?",
"abstract": "SummaryInstitutions and their aggregates are not the right units of analysis for developing a science policy with cognitive goals in view. Institutions, however, can be compared in terms of their performance with reference to their previous stages. King's (2004) 'The scientific impact of nations' has provided the data for this comparison. Evaluation of the data from this perspective along the time axis leads to completely different and hitherto overlooked conclusions: a new dynamic can be revealed which points to a group of emerging nations. These nations do not increase their contributions marginally, but their national science systems grow endogenously. In addition to publications, their citation rates keep pace with the exponential growth patterns, albeit with a delay. The center of gravity of the world system of science may be changing accordingly.",
"corpus_id": 6453262,
"score": 0
}
] |
arnetminer | {
"doc_id": "6453262",
"title": "Are the contributions of China and Korea upsetting the world system of science?",
"abstract": "SummaryInstitutions and their aggregates are not the right units of analysis for developing a science policy with cognitive goals in view. Institutions, however, can be compared in terms of their performance with reference to their previous stages. King's (2004) 'The scientific impact of nations' has provided the data for this comparison. Evaluation of the data from this perspective along the time axis leads to completely different and hitherto overlooked conclusions: a new dynamic can be revealed which points to a group of emerging nations. These nations do not increase their contributions marginally, but their national science systems grow endogenously. In addition to publications, their citation rates keep pace with the exponential growth patterns, albeit with a delay. The center of gravity of the world system of science may be changing accordingly.",
"corpus_id": 6453262
} | [
{
"doc_id": "19223951",
"title": "A comparison between the China Scientific and Technical Papers and Citations Database and the Science Citation Index in terms of journal hierarchies and interjournal citation relations",
"abstract": "The journal structure in the China Scientific and Technical Papers and Citations Database (CSTPCD) is analysed from three perspectives: the database level, the specialty level and the institutional level (i.e., university journals versus journals issued by the Chinese Academy of Sciences). The results are compared with those for (Chinese) journals included in the Science Citation Index. The frequency of journal-journal citation relations in the CSTPCD is an order of magnitude lower than in the SCI. Chinese journals, especially high-quality journals, prefer to cite international journals rather than domestic ones. However, Chinese journals do not get an equivalent reception from their international counterparts. The international visibility of Chinese journals is low, but varies among fields of science. Journals of the Chinese Academy of Sciences (CAS) have a better reception in the international scientific community than university journals.",
"corpus_id": 19223951,
"score": 1
},
{
"doc_id": "15353452",
"title": "Hybrid Intelligent System for Supervisory Control of Mineral Grinding Process",
"abstract": "The particle size is the important technical performance index of the grinding process, which closely related to the overall performance of the mineral processing. In this paper, we mainly concern on the determination of the particle size for the supervisory control of the grinding process by the technical performance index decision system. The overall structure of the system and introduce of every part are given briefly. The experiment results and its compare with the neural network method show its validity and efficiency",
"corpus_id": 15353452,
"score": 0
},
{
"doc_id": "5738095",
"title": "Local Elastic Registration of Multimodal Medical Image Using Robust Point Matching and Compact Support RBF",
"abstract": "A novel local elastic registration of multimodal medical image method is proposed in this paper. At first, local deformation regions are detected by evaluating the variation of mutual information in re-quantified gray space of images. The re-quantified image retains anatomical structure of the organ well and reduces the gray levels greatly. Mutual information performs better in the quantification space and can be used to detect whether the deformation happens in small sampling images. Next, edges of the local deformation regions are detected. Fuzzy clustering method is performed on edge points and the clustering centers are chosen as candidate landmarks. Robust point matching is used to estimate landmarks correspondence in the local deformation regions. Finally, a new compact support radial basis function CSTPF has been adopted to deform image, which cost less bending energy than other RBFs. Local registration experiments of multimodal medical images show the feasibility of our method.",
"corpus_id": 5738095,
"score": 0
},
{
"doc_id": "11399662",
"title": "Adaptive Buffer Management for Efficient Code Dissemination in Multi-Application Wireless Sensor Networks",
"abstract": "Future wireless sensor networks (WSNs) are projected to run multiple applications in the same network infrastructure. While such multi-application WSNs (MA-WSNs) are economically more efficient and adapt better to the changing environments than traditional single-application WSNs, they usually require frequent code redistribution on wireless sensors, making it critical to design energy efficient post-deployment code dissemination protocols in MA-WSNs. Different applications in MA-WSNs often share some common code segments. Therefore when there is a need to disseminate a new application from the sink node, it is possible to disseminate its shared code segments from peer sensors instead of disseminating everything from the sink node. While dissemination protocols have been proposed to handle code of each single type, it is challenging to achieve energy efficiency when the code contains both types and needs simultaneous dissemination. In this paper we utilize an adaptive buffer management approach to achieve efficient code dissemination in MA-WSNs. Our experimental results show that adaptive buffer management can reduce the completion time and the message overhead up to 10% and 20% respectively.",
"corpus_id": 11399662,
"score": 0
},
{
"doc_id": "6413061",
"title": "A Binary Correlation Matrix Memory k-NN Classifier with Hardware Implementation",
"abstract": "This paper describes a generic and fast classifier that uses a binary CMM (Correlation Matrix Memory) neural network for storing and matching a large amount of patterns efficiently, and a k-NN rule for classification. To meet CMM input requirements, a robust encoding method is proposed to convert numerical inputs into binary ones with the maximally achievable uniformity. To reduce the execution bottleneck, a hardware implementation of the CMM is described, which shows the network with on-board training and testing operates at over 200 times the speed of a current mid-range workstation, and is scaleable to very large problems. The CMM classifier has been tested on several benchmarks and, comparing with a simple k-NN classifier, it gave less than 1% lower accuracy and over 4 and 12 times speed-ups in software and hardware respectively.",
"corpus_id": 6413061,
"score": 0
},
{
"doc_id": "206599167",
"title": "A Fuzzy Logic Expert System for Fault Diagnosis and Security Assessment of Power Transformers",
"abstract": null,
"corpus_id": 206599167,
"score": 0
}
] |
arnetminer | {
"doc_id": "207163195",
"title": "Review spam detection",
"abstract": "It is now a common practice for e-commerce Web sites to enable their customers to write reviews of products that they have purchased. Such reviews provide valuable sources of information on these products. They are used by potential customers to find opinions of existing users before deciding to purchase a product. They are also used by product manufacturers to identify problems of their products and to find competitive intelligence information about their competitors. Unfortunately, this importance of reviews also gives good incentive for spam, which contains false positive or malicious negative opinions. In this paper, we make an attempt to study review spam and spam detection. To the best of our knowledge, there is still no reported study on this problem.",
"corpus_id": 207163195
} | [
{
"doc_id": "12241191",
"title": "Finding Actionable Knowledge via Automated Comparison",
"abstract": "The problem of finding interesting and actionable patterns is a major challenge in data mining. It has been studied by many data mining researchers. The issue is that data mining algorithms often generate too many patterns, which make it very hard for the user to find those truly useful ones. Over the years many techniques have been proposed. However, few have made it to real-life applications. At the end of 2005, we built a data mining system for Motorola (called Opportunity Map) to enable the user to explore the space of a large number of rules in order to find actionable knowledge. The approach is based on the concept of rule cubes and operations on rule cubes. A rule cube is similar to a data cube, but stores rules. Since its deployment, some issues have also been identified during the regular use of the system in Motorola. One of the key issues is that although the operations on rule cubes are flexible, each operation is primitive and has to be initiated by the user. Finding a piece of actionable knowledge typically involves many operations and intense visual inspections, which are labor-intensive and time-consuming. From interactions with our users, we identified a generic problem that is crucial for finding actionable knowledge. The problem involves extensive comparison of sub-populations and identification of the cause of their differences. This paper first defines the problem and then proposes an effective method to solve the problem automatically. To the best of our knowledge, there is no reported study of this problem. The new method has been added to the Opportunity Map system and is now in daily use in Motorola.",
"corpus_id": 12241191,
"score": 1
},
{
"doc_id": "8786674",
"title": "Entity discovery and assignment for opinion mining applications",
"abstract": "Opinion mining became an important topic of study in recent years due to its wide range of applications. There are also many companies offering opinion mining services. One problem that has not been studied so far is the assignment of entities that have been talked about in each sentence. Let us use forum discussions about products as an example to make the problem concrete. In a typical discussion post, the author may give opinions on multiple products and also compare them. The issue is how to detect what products have been talked about in each sentence. If the sentence contains the product names, they need to be identified. We call this problem entity discovery. If the product names are not explicitly mentioned in the sentence but are implied due to the use of pronouns and language conventions, we need to infer the products. We call this problem entity assignment. These problems are important because without knowing what products each sentence talks about the opinion mined from the sentence is of little use. In this paper, we study these problems and propose two effective methods to solve the problems. Entity discovery is based on pattern discovery and entity assignment is based on mining of comparative sentences. Experimental results using a large number of forum posts demonstrate the effectiveness of the technique. Our system has also been successfully tested in a commercial setting.",
"corpus_id": 8786674,
"score": 1
},
{
"doc_id": "2182731",
"title": "Speed-up iterative frequent itemset mining with constraint changes",
"abstract": "Mining of frequent itemsets is a fundamental data mining task. Past research has proposed many efficient algorithms for this purpose. Recent work also highlighted the importance of using constraints to focus the mining process to mine only those relevant itemsets. In practice, data mining is often an interactive and iterative process. The user typically changes constraints and runs the mining algorithm many times before being satisfied with the final results. This interactive process is very time consuming. Existing mining algorithms are unable to take advantage of this iterative process to use previous mining results to speed up the current mining process. This results in an enormous waste of time and computation. In this paper, we propose an efficient technique to utilize previous mining results to improve the efficiency of current mining when constraints are changed. We first introduce the concept of tree boundary to summarize useful information available from previous mining. We then show that the tree boundary provides an effective and efficient framework for the new mining. The proposed technique has been implemented in the context of two existing frequent itemset mining algorithms, FP-tree and tree projection. Experiment results on both synthetic and real-life datasets show that the proposed approach achieves a dramatic saving of computation.",
"corpus_id": 2182731,
"score": 1
},
{
"doc_id": "61155076",
"title": "Opinion Mining",
"abstract": "Описанл процедуру формування інтегрального показника об’єкта згідно з відгуками користувачів під час застосування методів оцінювання опінії текстової інформації у web-документах. Запропоновано використання лінгвістичних змінних та застосування вагових коефіцієнтів для достовірнішого результату оцінювання емоційного забарвлення текстової інформації. Ключові слова: опінія, емоційне забарвлення, об’єкт, інтегральний показник, лінгвістична змінна, ваговий коефіцієнт.",
"corpus_id": 61155076,
"score": 1
},
{
"doc_id": "6193650",
"title": "Dealing with different distributions in learning from",
"abstract": "In the problem of learning with positive and unlabeled examples, existing research all assumes that positive examples P and the hidden positive examples in the unlabeled set U are generated from the same distribution. This assumption may be violated in practice. In such cases, existing methods perform poorly. This paper proposes a novel technique A-EM to deal with the problem. Experimental results with product page classification demonstrate the effectiveness of the proposed technique.",
"corpus_id": 6193650,
"score": 1
},
{
"doc_id": "3413319",
"title": "Filtering Spam in Social Tagging System with Dynamic Behavior Analysis",
"abstract": "Spam in social tagging systems introduced by some malicious participants has become a serious problem for its global popularizing. Some studies which can be deduced to static user data analysis have been presented to combat tag spam, but either they do not give an exact evaluation or the algorithms’ performances are not good enough. In this paper, we proposed a novel method based on analysis of dynamic user behavior data for the notion that users’ behaviors in social tagging system can reflect the quality of tags more accurately. Through modeling the different categories of participants’ behaviors, we extract tag-associated actions which can be used to estimate whether tag is spam, and then present our algorithm that can filter the tag spam in the results of social search. The experiment results show that our method indeed outperforms the existing methods based on static data and effectively defends against the tag spam in various spam attacks.",
"corpus_id": 3413319,
"score": 0
},
{
"doc_id": "6106184",
"title": "Substitution effect on the geometry and electronic structure of the ferrocene",
"abstract": "The substitution effects on the geometry and the electronic structure of the ferrocene are systematically and comparatively studied using the density functional theory. It is found that -NH(2) and -OH substituents exert different influence on the geometry from -CH(3), -SiH(3), -PH(2), and -SH substituents. The topological analysis shows that all the C-C bonds in a-g are typical opened-shell interactions while the Fe-C bonds are typical closed-shell interactions. NBO analysis indicates that the cooperated interaction of d --> pi* and feedback pi --> d + 4s enhances the Fe-ligand interaction. The energy partitioning analysis demonstrates that the substituents with the second row elements lead to stronger iron-ligand interactions than those with the third row elements. The molecular electrostatic potential predicts that the electrophiles are expected to attack preferably the N, O, P, or S atoms in Fer-NH(2), Fer-OH, Fer-PH(2), and Fer-SH, and attack the ring C atoms in Fer-SiH(3) and Fer-CH(3). In turn, the nucleophiles are supposed to interact predominantly by attacking the hydrogen atoms. The simulated theoretical excitation spectra show that the maximum absorption peaks are red-shifted when the substituents going from second row elements to the third row elements.",
"corpus_id": 6106184,
"score": 0
},
{
"doc_id": "13160546",
"title": "A Population-Based Incremental Learning Algorithm with Elitist Strategy",
"abstract": "The population-based incremental learning (PBIL) is a novel evolutionary algorithm combined the mechanisms of the Genetic Algorithm with competitive learning. In this paper, the influence of the number of selected best solutions on the convergence speed of the PBIL is studied by experiment. Based on experimental results, a PBIL algorithm with elitist strategy, named Double Learning PBIL (DLPBIL), is proposed. The new algorithm learns both the selected best solutions in current population and the optimal solution found so far in the algorithm at same time. Experimental results show that the DLPBIL out-performs the standard PBIL. Both the convergence speed and the solution quality are improved.",
"corpus_id": 13160546,
"score": 0
},
{
"doc_id": "17435545",
"title": "Improved spiral sense reconstruction using a multiscale wavelet model",
"abstract": "SENSE has been widely accepted and extensively studied in the community of parallel MRI. Although many regularization approaches have been developed to address the ill-conditioning problem for Cartesian SENSE, fewer efforts have been made to address this problem when the sampling trajectory is non-Cartesian. For non-Cartesian SENSE using the iterative conjugate gradient method, ill- conditioning can degrade not only the signal-to-noise ratio, but also the convergence behavior. This paper proposes a regularization technique for non-Cartesian SENSE using a multiscale wavelet model. The technique models the desired image as a random field whose wavelet transform coefficients obey a generalized Gaussian distribution. The effectiveness of the proposed method has been validated by in vivo experiments.",
"corpus_id": 17435545,
"score": 0
},
{
"doc_id": "6775430",
"title": "ARDB—Antibiotic Resistance Genes Database",
"abstract": "The treatment of infections is increasingly compromised by the ability of bacteria to develop resistance to antibiotics through mutations or through the acquisition of resistance genes. Antibiotic resistance genes also have the potential to be used for bio-terror purposes through genetically modified organisms. In order to facilitate the identification and characterization of these genes, we have created a manually curated database—the Antibiotic Resistance Genes Database (ARDB)—unifying most of the publicly available information on antibiotic resistance. Each gene and resistance type is annotated with rich information, including resistance profile, mechanism of action, ontology, COG and CDD annotations, as well as external links to sequence and protein databases. Our database also supports sequence similarity searches and implements an initial version of a tool for characterizing common mutations that confer antibiotic resistance. The information we provide can be used as compendium of antibiotic resistance factors as well as to identify the resistance genes of newly sequenced genes, genomes, or metagenomes. Currently, ARDB contains resistance information for 13 293 genes, 377 types, 257 antibiotics, 632 genomes, 933 species and 124 genera. ARDB is available at http://ardb.cbcb.umd.edu/.",
"corpus_id": 6775430,
"score": 0
}
] |
arnetminer | {
"doc_id": "2490147",
"title": "Discovering Overlapping Communities of Named Entities",
"abstract": "Although community discovery based on social network analysis has been studied extensively in the Web hyperlink environment, limited research has been done in the case of named entities in text documents. The co-occurrence of entities in documents usually implies some connections among them. Investigating such connections can reveal important patterns. In this paper, we mine communities among named entities in Web documents and text corpus. Most existing works on community discovery generate a partition of the entity network, assuming each entity belongs to one community. However, in the scenario of named entities, an entity may participate in several communities. For example, a person is in the communities of his/her family, colleagues, and friends. In this paper, we propose a novel technique to mine overlapping communities of named entities. This technique is based on triangle formation, expansion, and clustering with content similarity. Our experimental results show that the proposed technique is highly effective.",
"corpus_id": 2490147
} | [
{
"doc_id": "61155076",
"title": "Opinion Mining",
"abstract": "Описанл процедуру формування інтегрального показника об’єкта згідно з відгуками користувачів під час застосування методів оцінювання опінії текстової інформації у web-документах. Запропоновано використання лінгвістичних змінних та застосування вагових коефіцієнтів для достовірнішого результату оцінювання емоційного забарвлення текстової інформації. Ключові слова: опінія, емоційне забарвлення, об’єкт, інтегральний показник, лінгвістична змінна, ваговий коефіцієнт.",
"corpus_id": 61155076,
"score": 1
},
{
"doc_id": "2182731",
"title": "Speed-up iterative frequent itemset mining with constraint changes",
"abstract": "Mining of frequent itemsets is a fundamental data mining task. Past research has proposed many efficient algorithms for this purpose. Recent work also highlighted the importance of using constraints to focus the mining process to mine only those relevant itemsets. In practice, data mining is often an interactive and iterative process. The user typically changes constraints and runs the mining algorithm many times before being satisfied with the final results. This interactive process is very time consuming. Existing mining algorithms are unable to take advantage of this iterative process to use previous mining results to speed up the current mining process. This results in an enormous waste of time and computation. In this paper, we propose an efficient technique to utilize previous mining results to improve the efficiency of current mining when constraints are changed. We first introduce the concept of tree boundary to summarize useful information available from previous mining. We then show that the tree boundary provides an effective and efficient framework for the new mining. The proposed technique has been implemented in the context of two existing frequent itemset mining algorithms, FP-tree and tree projection. Experiment results on both synthetic and real-life datasets show that the proposed approach achieves a dramatic saving of computation.",
"corpus_id": 2182731,
"score": 1
},
{
"doc_id": "6193650",
"title": "Dealing with different distributions in learning from",
"abstract": "In the problem of learning with positive and unlabeled examples, existing research all assumes that positive examples P and the hidden positive examples in the unlabeled set U are generated from the same distribution. This assumption may be violated in practice. In such cases, existing methods perform poorly. This paper proposes a novel technique A-EM to deal with the problem. Experimental results with product page classification demonstrate the effectiveness of the proposed technique.",
"corpus_id": 6193650,
"score": 1
},
{
"doc_id": "8786674",
"title": "Entity discovery and assignment for opinion mining applications",
"abstract": "Opinion mining became an important topic of study in recent years due to its wide range of applications. There are also many companies offering opinion mining services. One problem that has not been studied so far is the assignment of entities that have been talked about in each sentence. Let us use forum discussions about products as an example to make the problem concrete. In a typical discussion post, the author may give opinions on multiple products and also compare them. The issue is how to detect what products have been talked about in each sentence. If the sentence contains the product names, they need to be identified. We call this problem entity discovery. If the product names are not explicitly mentioned in the sentence but are implied due to the use of pronouns and language conventions, we need to infer the products. We call this problem entity assignment. These problems are important because without knowing what products each sentence talks about the opinion mined from the sentence is of little use. In this paper, we study these problems and propose two effective methods to solve the problems. Entity discovery is based on pattern discovery and entity assignment is based on mining of comparative sentences. Experimental results using a large number of forum posts demonstrate the effectiveness of the technique. Our system has also been successfully tested in a commercial setting.",
"corpus_id": 8786674,
"score": 1
},
{
"doc_id": "12241191",
"title": "Finding Actionable Knowledge via Automated Comparison",
"abstract": "The problem of finding interesting and actionable patterns is a major challenge in data mining. It has been studied by many data mining researchers. The issue is that data mining algorithms often generate too many patterns, which make it very hard for the user to find those truly useful ones. Over the years many techniques have been proposed. However, few have made it to real-life applications. At the end of 2005, we built a data mining system for Motorola (called Opportunity Map) to enable the user to explore the space of a large number of rules in order to find actionable knowledge. The approach is based on the concept of rule cubes and operations on rule cubes. A rule cube is similar to a data cube, but stores rules. Since its deployment, some issues have also been identified during the regular use of the system in Motorola. One of the key issues is that although the operations on rule cubes are flexible, each operation is primitive and has to be initiated by the user. Finding a piece of actionable knowledge typically involves many operations and intense visual inspections, which are labor-intensive and time-consuming. From interactions with our users, we identified a generic problem that is crucial for finding actionable knowledge. The problem involves extensive comparison of sub-populations and identification of the cause of their differences. This paper first defines the problem and then proposes an effective method to solve the problem automatically. To the best of our knowledge, there is no reported study of this problem. The new method has been added to the Opportunity Map system and is now in daily use in Motorola.",
"corpus_id": 12241191,
"score": 1
},
{
"doc_id": "7630190",
"title": "Theoretical study on the OH + CH3NHC(O)OCH3 reaction",
"abstract": "The multiple-channel reactions OH + CH3NHC(O)OCH3 --> products are investigated by direct dynamics method. The optimized geometries, frequencies, and minimum energy path are all obtained at the MP2/6-311+G(d,p) level, and energetic information is further refined by the BMC-CCSD (single-point) method. The rate constants for every reaction channels, R1, R2, R3, and R4, are calculated by canonical variational transition state theory with small-curvature tunneling correction over the temperature range 200-1000 K. The total rate constants are in good agreement with the available experimental data and the two-parameter expression k(T) = 3.95 x 10(-12) exp(15.41/T) cm3 molecule(-1) s(-1) over the temperature range 200-1000 K is given. Our calculations indicate that hydrogen abstraction channels R1 and R2 are the major channels due to the smaller barrier height among four channels considered, and the other two channels to yield CH3NC(O)OCH3 + H2O and CH3NHC(O)(OH)OCH3 + H2O are minor channels over the whole temperature range.",
"corpus_id": 7630190,
"score": 0
},
{
"doc_id": "5989425",
"title": "A New Performance Evaluation Model and AHP-Based Analysis Method in Service-Oriented Workflow",
"abstract": "In service-oriented architecture, services and workflows are closely related so that the research on service-oriented workflow attracts the attention of academia. Because of the loosely-coupled, autonomic and dynamic nature of service, the operation and performance evaluation of workflow meet some challenges, such as how to judge the quality of service (QoS) and what is the relation between QoS and workflow performance. In this paper we are going to address these challenges. First the definition of service is proposed, and the characteristics and operation mechanism of service-oriented workflow are presented. Then a service-oriented workflow performance evaluation model is described which combines the performance of the business system and IT system. The key performance indicators (KPI) are also depicted with their formal representation. Finally the improved Analytic Hierarchy Process is brought forward to analyze the correlation between different KPIs and select services.",
"corpus_id": 5989425,
"score": 0
},
{
"doc_id": "30880806",
"title": "Particle swarm optimization for function optimization in noisy environment",
"abstract": "As a novel evolutionary searching technique, particle swarm optimization (PSO) has gained wide research and effective applications in the field of function optimization. However, to the best of our knowledge, most studies based on PSO are aimed at deterministic optimization problems. In this paper, the performance of PSO for function optimization in noisy environment is investigated, and an effective hybrid PSO approach named PSOOHT is proposed. In the PSOOHT, the population-based search mechanism of PSO is applied for well exploration and exploitation, and the optimal computing budget allocation (OCBA) technique is used to allocate limited sampling budgets to provide reliable evaluation and identification for good particles. Meanwhile, hypothesis test (HT) is also applied in the hybrid approach to reserve good particles and to maintain the diversity of the swarm as well. Numerical simulations based on several well-known function benchmarks with noise are carried out, and the effect of noise magnitude is also investigated as well. The results and comparisons demonstrate the superiority of PSOOHT in terms of searching quality and robustness.",
"corpus_id": 30880806,
"score": 0
},
{
"doc_id": "3413319",
"title": "Filtering Spam in Social Tagging System with Dynamic Behavior Analysis",
"abstract": "Spam in social tagging systems introduced by some malicious participants has become a serious problem for its global popularizing. Some studies which can be deduced to static user data analysis have been presented to combat tag spam, but either they do not give an exact evaluation or the algorithms’ performances are not good enough. In this paper, we proposed a novel method based on analysis of dynamic user behavior data for the notion that users’ behaviors in social tagging system can reflect the quality of tags more accurately. Through modeling the different categories of participants’ behaviors, we extract tag-associated actions which can be used to estimate whether tag is spam, and then present our algorithm that can filter the tag spam in the results of social search. The experiment results show that our method indeed outperforms the existing methods based on static data and effectively defends against the tag spam in various spam attacks.",
"corpus_id": 3413319,
"score": 0
},
{
"doc_id": "9933281",
"title": "A Learning Process Using SVMs for Multi-agents Decision Classification",
"abstract": "In order to resolve decision classification problem in multiple agents system, this paper first introduces the architecture of multiple agents system. It then proposes a support vector machines based assessment approach, which has the ability to learn the rules form previous assessment results from domain experts. Finally, the experiment are conducted on the artificially dataset to illustrate how the proposed works, and the results show the proposed method has effective learning ability for decision classification problems.",
"corpus_id": 9933281,
"score": 0
}
] |
arnetminer | {
"doc_id": "9654945",
"title": "Discovering unexpected information from your competitors' web sites",
"abstract": "Ever since the beginning of the Web, finding useful information from the Web has been an important problem. Existing approaches include keyword-based search, wrapper-based information extraction, Web query and user preferences. These approaches essentially find information that matches the user's explicit specifications. This paper argues that this is insufficient. There is another type of information that is also of great interest, i.e., unexpected information, which is unanticipated by the user. Finding unexpected information is useful in many applications. For example, it is useful for a company to find unexpected information bout its competitors, e.g., unexpected services and products that its competitors offer. With this information, the company can learn from its competitors and/or design counter measures to improve its competitiveness. Since the number of pages of a typical commercial site is very large and there are also many relevant sites (competitors), it is very difficult for a human user to view each page to discover the unexpected information. Automated assistance is needed. In this paper, we propose a number of methods to help the user find various types of unexpected information from his/her competitors' Web sites. Experiment results show that these techniques are very useful in practice and also efficient.",
"corpus_id": 9654945
} | [
{
"doc_id": "12241191",
"title": "Finding Actionable Knowledge via Automated Comparison",
"abstract": "The problem of finding interesting and actionable patterns is a major challenge in data mining. It has been studied by many data mining researchers. The issue is that data mining algorithms often generate too many patterns, which make it very hard for the user to find those truly useful ones. Over the years many techniques have been proposed. However, few have made it to real-life applications. At the end of 2005, we built a data mining system for Motorola (called Opportunity Map) to enable the user to explore the space of a large number of rules in order to find actionable knowledge. The approach is based on the concept of rule cubes and operations on rule cubes. A rule cube is similar to a data cube, but stores rules. Since its deployment, some issues have also been identified during the regular use of the system in Motorola. One of the key issues is that although the operations on rule cubes are flexible, each operation is primitive and has to be initiated by the user. Finding a piece of actionable knowledge typically involves many operations and intense visual inspections, which are labor-intensive and time-consuming. From interactions with our users, we identified a generic problem that is crucial for finding actionable knowledge. The problem involves extensive comparison of sub-populations and identification of the cause of their differences. This paper first defines the problem and then proposes an effective method to solve the problem automatically. To the best of our knowledge, there is no reported study of this problem. The new method has been added to the Opportunity Map system and is now in daily use in Motorola.",
"corpus_id": 12241191,
"score": 1
},
{
"doc_id": "8786674",
"title": "Entity discovery and assignment for opinion mining applications",
"abstract": "Opinion mining became an important topic of study in recent years due to its wide range of applications. There are also many companies offering opinion mining services. One problem that has not been studied so far is the assignment of entities that have been talked about in each sentence. Let us use forum discussions about products as an example to make the problem concrete. In a typical discussion post, the author may give opinions on multiple products and also compare them. The issue is how to detect what products have been talked about in each sentence. If the sentence contains the product names, they need to be identified. We call this problem entity discovery. If the product names are not explicitly mentioned in the sentence but are implied due to the use of pronouns and language conventions, we need to infer the products. We call this problem entity assignment. These problems are important because without knowing what products each sentence talks about the opinion mined from the sentence is of little use. In this paper, we study these problems and propose two effective methods to solve the problems. Entity discovery is based on pattern discovery and entity assignment is based on mining of comparative sentences. Experimental results using a large number of forum posts demonstrate the effectiveness of the technique. Our system has also been successfully tested in a commercial setting.",
"corpus_id": 8786674,
"score": 1
},
{
"doc_id": "61155076",
"title": "Opinion Mining",
"abstract": "Описанл процедуру формування інтегрального показника об’єкта згідно з відгуками користувачів під час застосування методів оцінювання опінії текстової інформації у web-документах. Запропоновано використання лінгвістичних змінних та застосування вагових коефіцієнтів для достовірнішого результату оцінювання емоційного забарвлення текстової інформації. Ключові слова: опінія, емоційне забарвлення, об’єкт, інтегральний показник, лінгвістична змінна, ваговий коефіцієнт.",
"corpus_id": 61155076,
"score": 1
},
{
"doc_id": "2182731",
"title": "Speed-up iterative frequent itemset mining with constraint changes",
"abstract": "Mining of frequent itemsets is a fundamental data mining task. Past research has proposed many efficient algorithms for this purpose. Recent work also highlighted the importance of using constraints to focus the mining process to mine only those relevant itemsets. In practice, data mining is often an interactive and iterative process. The user typically changes constraints and runs the mining algorithm many times before being satisfied with the final results. This interactive process is very time consuming. Existing mining algorithms are unable to take advantage of this iterative process to use previous mining results to speed up the current mining process. This results in an enormous waste of time and computation. In this paper, we propose an efficient technique to utilize previous mining results to improve the efficiency of current mining when constraints are changed. We first introduce the concept of tree boundary to summarize useful information available from previous mining. We then show that the tree boundary provides an effective and efficient framework for the new mining. The proposed technique has been implemented in the context of two existing frequent itemset mining algorithms, FP-tree and tree projection. Experiment results on both synthetic and real-life datasets show that the proposed approach achieves a dramatic saving of computation.",
"corpus_id": 2182731,
"score": 1
},
{
"doc_id": "6193650",
"title": "Dealing with different distributions in learning from",
"abstract": "In the problem of learning with positive and unlabeled examples, existing research all assumes that positive examples P and the hidden positive examples in the unlabeled set U are generated from the same distribution. This assumption may be violated in practice. In such cases, existing methods perform poorly. This paper proposes a novel technique A-EM to deal with the problem. Experimental results with product page classification demonstrate the effectiveness of the proposed technique.",
"corpus_id": 6193650,
"score": 1
},
{
"doc_id": "14570653",
"title": "A New Parallel Segmentation Model Based on Dictionary and Mutual Information",
"abstract": "It is difficult to compute the word frequency for mutual information segmentation. Statistic of word frequency of parallel mutual information is integrated with dictionary segmentation to improve efficiency in this paper. The parallel model and dispatching policy are presented, the paper also gives the speed up ratio of parallel model at the same time, periods pattern string and non periods pattern string are optimized in parallel model. Experiment show that the algorithm is available. The parallel model also can use for other segmentation algorithms that base on statistic of word frequency.",
"corpus_id": 14570653,
"score": 0
},
{
"doc_id": "205926081",
"title": "Solvability of multi-point boundary value problem at resonance (III)",
"abstract": "In this paper, we consider the following second order ordinary differential equation(1.1)x^'^'=f(t,x(t),x^'(t))+e(t),t@?(0,1),subject to one of the following boundary value conditions: (1.2)x(0)=@?\"i\"=\"1^m^-^2@a\"ix(@x\"i),x(1)=@?\"j\"=\"1^n^-^2@b\"jx(@h\"j),(1.3)x(0)=@?\"i\"=\"1^m^-^2@a\"ix(@x\"i),x^'(1)=@?\"j\"=\"1^n^-^2@b\"jx^'(@h\"j),(1.4)x^'(0)=@?\"i\"=\"1^m^-^2@a\"ix^'(@x\"i),x(1)=@?\"j\"=\"1^n^-^2@b\"jx(@h\"j),where @a\"i(1=",
"corpus_id": 205926081,
"score": 0
},
{
"doc_id": "7063097",
"title": "Accommodating colorblind users in image search",
"abstract": "There are about 8% of men and 0.8% of women suffering from colorblindness. Due to certain loss of color information, the existing image search techniques may not provide satisfactory results for these users. In this demonstration, we show an image search system that can accommodate colorblind users. It can help these special users find and enjoy what they want by providing multiple services for them, including search results reranking, image recoloring and color indication.",
"corpus_id": 7063097,
"score": 0
},
{
"doc_id": "5516091",
"title": "An New Global Dynamic Scheduling Algorithm with Multi-Hop Path Splitting and Multi-Pathing Using GridFTP",
"abstract": null,
"corpus_id": 5516091,
"score": 0
},
{
"doc_id": "10223976",
"title": "A combinational feature selection and ensemble neural network method for classification of gene expression data",
"abstract": "BackgroundMicroarray experiments are becoming a powerful tool for clinical diagnosis, as they have the potential to discover gene expression patterns that are characteristic for a particular disease. To date, this problem has received most attention in the context of cancer research, especially in tumor classification. Various feature selection methods and classifier design strategies also have been generally used and compared. However, most published articles on tumor classification have applied a certain technique to a certain dataset, and recently several researchers compared these techniques based on several public datasets. But, it has been verified that differently selected features reflect different aspects of the dataset and some selected features can obtain better solutions on some certain problems. At the same time, faced with a large amount of microarray data with little knowledge, it is difficult to find the intrinsic characteristics using traditional methods. In this paper, we attempt to introduce a combinational feature selection method in conjunction with ensemble neural networks to generally improve the accuracy and robustness of sample classification.ResultsWe validate our new method on several recent publicly available datasets both with predictive accuracy of testing samples and through cross validation. Compared with the best performance of other current methods, remarkably improved results can be obtained using our new strategy on a wide range of different datasets.ConclusionsThus, we conclude that our methods can obtain more information in microarray data to get more accurate classification and also can help to extract the latent marker genes of the diseases for better diagnosis and treatment.",
"corpus_id": 10223976,
"score": 0
}
] |
arnetminer | {
"doc_id": "5571905",
"title": "Classification rule discovery with ant colony optimization",
"abstract": "Ant-based algorithms or ant colony optimization (ACO) algorithms have been applied successfully to combinatorial optimization problems. More recently, Parpinelli and colleagues applied ACO to data mining classification problems, where they introduced a classification algorithm called Ant/spl I.bar/Miner. In this paper, we present an improvement to Ant/spl I.bar/Miner (we call it Ant/spl I.bar/Miner3). The proposed version was tested on two standard problems and performed better than the original Ant/spl I.bar/Miner algorithm.",
"corpus_id": 5571905
} | [
{
"doc_id": "120066926",
"title": "Entropy-based metrics in swarm clustering",
"abstract": "Ant-based clustering methods have received significant attention as robust methods for clustering. Most ant-based algorithms use local density as a metric for determining the ants' propensities to pick up or deposit a data item; however, a number of authors in classical clustering methods have pointed out the advantages of entropy-based metrics for clustering. We introduced an entropy metric into an ant-based clustering algorithm and compared it with other closely related algorithms using local density. The results strongly support the value of entropy metrics, obtaining faster and more accurate results. Entropy governs the pickup and drop behaviors, while movement is guided by the density gradient. Entropy measures also require fewer training parameters than density-based clustering. The remaining parameters are subjected to robustness studies, and a detailed analysis is performed. \n \nIn the second phase of the study, we further investigated Ramos and Abraham's (In: Proc 2003 IEEE Congr Evol Comput, Hoboken, NJ: IEEE Press; 2003. pp 1370–1375) contention that ant-based methods are particularly suited to incremental clustering. Contrary to expectations, we did not find substantial differences between the efficiencies of incremental and nonincremental approaches to data clustering. © 2009 Wiley Periodicals, Inc.",
"corpus_id": 120066926,
"score": 1
},
{
"doc_id": "15650153",
"title": "Incremental Clustering Based on Swarm Intelligence",
"abstract": "We propose methods for incrementally constructing a knowledge model for a dynamically changing database, using a swarm of special agents (ie an ant colony) and imitating their natural cluster-forming behavior. We use information-theoretic metrics to overcome some inherent problems of ant-based clustering, obtaining faster and more accurate results. Entropy governs the pick-up and drop behaviors, while movement is guided by pheromones. The primary benefits are fast clustering, and a reduced parameter set. We compared the method both with static clustering (repeatedly applied), and with the previous dynamic approaches of other authors. It generated clusters of similar quality to the static method, at significantly reduced computational cost, so that it can be used in dynamic situations where the static method is infeasible. It gave better results than previous dynamic approaches, with a much-reduced tuning parameter set. It is simple to use, and applicable to continuously- and batch-updated databases.",
"corpus_id": 15650153,
"score": 1
},
{
"doc_id": "11706749",
"title": "A Graph-Based Algorithm for Mining Maximal Frequent Itemsets",
"abstract": "Association rule mining is an important research branch of data mining, and computing frequent itemsets is the main problem. The paper is designed to find maximal frequent itemsets only. It presents an algorithm based on a frequent pattern graph, which can find maximal frequent itemsets quickly. A breadth-first-search and a depth-first-search techniques are used to produce all maximal frequent itemsets of a database. The paper also analyzes the complexity of the algorithm, and explains the computation procedure by examples. It has high time efficiency and less space complexity for computing maximal frequent itemsets.",
"corpus_id": 11706749,
"score": 1
},
{
"doc_id": "232928",
"title": "Integrating Classification and Association Rule Mining",
"abstract": "Classification rule mining aims to discover a small set of rules in the database that forms an accurate classifier. Association rule mining finds all the rules existing in the database that satisfy some minimum support and minimum confidence constraints. For association rule mining, the target of discovery is not pre-determined, while for classification rule mining there is one and only one predetermined target. In this paper, we propose to integrate these two mining techniques. The integration is done by focusing on mining a special subset of association rules, called class association rules (CARs). An efficient algorithm is also given for building a classifier based on the set of discovered CARs. Experimental results show that the classifier built this way is, in general, more accurate than that produced by the state-of-the-art classification system C4.5. In addition, this integration helps to solve a number of problems that exist in the current classification systems.",
"corpus_id": 232928,
"score": 0
},
{
"doc_id": "2646885",
"title": "NET - A System for Extracting Web Data from Flat and Nested Data Records",
"abstract": "This paper studies automatic extraction of structured data from Web pages. Each of such pages may contain several groups of structured data records. Existing automatic methods still have several limitations. In this paper, we propose a more effective method for the task. Given a page, our method first builds a tag tree based on visual information. It then performs a post-order traversal of the tree and matches subtrees in the process using a tree edit distance method and visual cues. After the process ends, data records are found and data items in them are aligned and extracted. The method can extract data from both flat and nested data records. Experimental evaluation shows that the method performs the extraction task accurately.",
"corpus_id": 2646885,
"score": 0
},
{
"doc_id": "18339174",
"title": "Robot Obstacle Avoidance based on an Improved Ant Colony Algorithm",
"abstract": "Obstacle Avoidance for mobile robot is a nondeterministic polynomial hard (NP-hard) problem. Ant algorithm is the bionic algorithm which simulated ants foraging behavior, which can effectively solve the problems of this kind. In this paper, we propose an improved ant colony algorithm for robot obstacle avoidance, in which heuristic information is adjusted at run-time during the searching process. The proposed algorithm can effectively alleviate the local optimum problem, global optimal path can be robustly found in our experiments.",
"corpus_id": 18339174,
"score": 0
},
{
"doc_id": "2750727",
"title": "Stability Analysis of Social Foraging Swarm with Interaction Time Delays",
"abstract": "This paper considers a swarm model with an attraction-repulsion function involving variable communication time lags and an attractant/repellent. It is proved that for quadratic attractant/repellent profiles the members of the swarm with time delays will aggregate and form a cohesive cluster of finite size in a finite time. Moreover, all the swarm members will converge to more favorable areas of the quadratic attractant/repellent profiles under certain conditions in the presence of communication delays.",
"corpus_id": 2750727,
"score": 0
},
{
"doc_id": "6106184",
"title": "Substitution effect on the geometry and electronic structure of the ferrocene",
"abstract": "The substitution effects on the geometry and the electronic structure of the ferrocene are systematically and comparatively studied using the density functional theory. It is found that -NH(2) and -OH substituents exert different influence on the geometry from -CH(3), -SiH(3), -PH(2), and -SH substituents. The topological analysis shows that all the C-C bonds in a-g are typical opened-shell interactions while the Fe-C bonds are typical closed-shell interactions. NBO analysis indicates that the cooperated interaction of d --> pi* and feedback pi --> d + 4s enhances the Fe-ligand interaction. The energy partitioning analysis demonstrates that the substituents with the second row elements lead to stronger iron-ligand interactions than those with the third row elements. The molecular electrostatic potential predicts that the electrophiles are expected to attack preferably the N, O, P, or S atoms in Fer-NH(2), Fer-OH, Fer-PH(2), and Fer-SH, and attack the ring C atoms in Fer-SiH(3) and Fer-CH(3). In turn, the nucleophiles are supposed to interact predominantly by attacking the hydrogen atoms. The simulated theoretical excitation spectra show that the maximum absorption peaks are red-shifted when the substituents going from second row elements to the third row elements.",
"corpus_id": 6106184,
"score": 0
}
] |
arnetminer | {
"doc_id": "23656778",
"title": "Theoretical study on the Br + CH3SCH3 reaction",
"abstract": "The multiple-channel reactions Br + CH(3)SCH(3) --> products are investigated by direct dynamics method. The optimized geometries, frequencies, and minimum energy path are all obtained at the MP2/6-31+G(d,p) level, and energetic information is further refined by the G3(MP2) (single-point) theory. The rate constants for every reaction channels, Br + CH(3)SCH(3) --> CH(3)SCH(2) + HBr (R1), Br + CH(3)SCH(3) --> CH(3)SBr + CH(3) (R2), and Br + CH(3)SCH(3) -->CH(3)S + CH(3)Br (R3), are calculated by canonical variational transition state theory with small-curvature tunneling correction over the temperature range 200-3000 K. The total rate constants are in good agreement with the available experimental data, and the two-parameter expression k(T) = 2.68 x 10(-12) exp(-1235.24/T) cm(3)/(molecule s) over the temperature range 200-3000 K is given. Our calculations indicate that hydrogen abstraction channel is the major channel due to the smallest barrier height among three channels considered, and the other two channels to yield CH(3)SBr + CH(3) and CH(3)S + CH(3)Br are minor channels over the whole temperature range.",
"corpus_id": 23656778
} | [
{
"doc_id": "6106184",
"title": "Substitution effect on the geometry and electronic structure of the ferrocene",
"abstract": "The substitution effects on the geometry and the electronic structure of the ferrocene are systematically and comparatively studied using the density functional theory. It is found that -NH(2) and -OH substituents exert different influence on the geometry from -CH(3), -SiH(3), -PH(2), and -SH substituents. The topological analysis shows that all the C-C bonds in a-g are typical opened-shell interactions while the Fe-C bonds are typical closed-shell interactions. NBO analysis indicates that the cooperated interaction of d --> pi* and feedback pi --> d + 4s enhances the Fe-ligand interaction. The energy partitioning analysis demonstrates that the substituents with the second row elements lead to stronger iron-ligand interactions than those with the third row elements. The molecular electrostatic potential predicts that the electrophiles are expected to attack preferably the N, O, P, or S atoms in Fer-NH(2), Fer-OH, Fer-PH(2), and Fer-SH, and attack the ring C atoms in Fer-SiH(3) and Fer-CH(3). In turn, the nucleophiles are supposed to interact predominantly by attacking the hydrogen atoms. The simulated theoretical excitation spectra show that the maximum absorption peaks are red-shifted when the substituents going from second row elements to the third row elements.",
"corpus_id": 6106184,
"score": 1
},
{
"doc_id": "21603768",
"title": "Theoretical study on the reaction of SiH(CH3)3 with SiH3 radical",
"abstract": "The multiple-channel reactions SiH(3) + SiH(CH(3))(3) --> products are investigated by direct dynamics method. The minimum energy path (MEP) is calculated at the MP2/6-31+G(d,p) level, and energetic information is further refined by the MC-QCISD (single-point) method. The rate constants for individual reaction channels are calculated by the improved canonical variational transition state theory with small-curvature tunneling correction over the temperature range of 200-2400 K. The theoretical three-parameter expression k(T) = 2.44 x 10(-23)T(3.94) exp(-4309.55/T) cm(3)/(molecule s) is given. Our calculations indicate that hydrogen abstraction channel R1 from SiH group is the major channel because of the smaller barrier height among five channels considered.",
"corpus_id": 21603768,
"score": 1
},
{
"doc_id": "7630190",
"title": "Theoretical study on the OH + CH3NHC(O)OCH3 reaction",
"abstract": "The multiple-channel reactions OH + CH3NHC(O)OCH3 --> products are investigated by direct dynamics method. The optimized geometries, frequencies, and minimum energy path are all obtained at the MP2/6-311+G(d,p) level, and energetic information is further refined by the BMC-CCSD (single-point) method. The rate constants for every reaction channels, R1, R2, R3, and R4, are calculated by canonical variational transition state theory with small-curvature tunneling correction over the temperature range 200-1000 K. The total rate constants are in good agreement with the available experimental data and the two-parameter expression k(T) = 3.95 x 10(-12) exp(15.41/T) cm3 molecule(-1) s(-1) over the temperature range 200-1000 K is given. Our calculations indicate that hydrogen abstraction channels R1 and R2 are the major channels due to the smaller barrier height among four channels considered, and the other two channels to yield CH3NC(O)OCH3 + H2O and CH3NHC(O)(OH)OCH3 + H2O are minor channels over the whole temperature range.",
"corpus_id": 7630190,
"score": 1
},
{
"doc_id": "19308334",
"title": "A Reputation Management Scheme Based on Global Trust Model for Peer-to-Peer Virtual Communities",
"abstract": "Peer-to-peer virtual communities are often established dynamically with peers that are unrelated and unknown to each other. Peers have to manage the risk involved with the transactions without prior knowledge about each other's reputation. SimiTrust, a reputation management scheme, is proposed for P2P virtual communities. A unique global trust value, computed by aggregating similarity-weighted recommendations of the peers who have interacted with him and reflecting the degree that the community as a whole trusts a peer, is assigned to each peer in the community. Different from previous global-trust based schemes, SimiTrust does not need any pre-trusted peers to ensure algorithm convergence and invalidates the assumption that the peers with high trust value will give the honest recommendation. Theoretical analyses and experiments show that the scheme is still robust under more general conditions where malicious peers cooperate in an attempt to deliberately subvert the system, converges more quickly and decreases the number of inauthentic files downloaded more effectively than previous schemes.",
"corpus_id": 19308334,
"score": 0
},
{
"doc_id": "195706110",
"title": "A Scalable Peer-to-Peer Overlay for Applications with Time Constraints",
"abstract": "With the development of Internet, p2p is increasingly receiving attention in research. Recently, a class of p2p applications with time constraints appear. These applications require a short time to locate the resource and(or) a low transit delay between the resource user and the resource holder, such as Skype, MSN. In this paper we propose a scalable p2p overlay for applications with time constraints. Our system provides supports for just two operations for uplayered p2p applications: (1) Given a resource key and the node's IP who holds the resource, it registers the resource information to the associated node in at most two overlay hops; and (2) Given a resource key and a time constraint(0 for no constraint), it returns if possible a path(one or two overlay hops) to the resource holder, and the transit delay of the path is lower than the time constraint. Results from theoretical analysis and simulations show that our system is viable and scalable.",
"corpus_id": 195706110,
"score": 0
},
{
"doc_id": "19924205",
"title": "Intelligent air travel and tourist information systems",
"abstract": null,
"corpus_id": 19924205,
"score": 0
},
{
"doc_id": "11696437",
"title": "Flocking of Multi-Vehicle Systems With A Leader",
"abstract": "We study the coordinated motion of a group of vehicles with a leader based on nearest neighbor rules. The leader may be a special vehicle in the group or an external signal (virtual leader) to steer the vehicle group. The control law consists of two parts: a potential field force that makes the vehicles attract to each other and at the same time avoid collision in the group, and an alignment force that makes all vehicles' headings and speeds to converge respectively to common values. We assume that the interaction patterns are either fixed or switching based on the information topologies of the leader and member vehicles. Our approach is based on the Lyapunov theory and basic graph theory. We also consider the effect of noise on the collective dynamics of the group. Numerical simulations are worked out to illustrate the analytical results",
"corpus_id": 11696437,
"score": 0
},
{
"doc_id": "32625268",
"title": "An Energy-Minimizing Mesh Parameterization",
"abstract": "In this paper, we propose a new energy-minimizing mesh parameterization method, which linearly combines two new energies EQ and EM. It not only avoids triangles overlap in the parameter domain, but also is invariant under rotation, translation and scale transformations. We first parameterize the original 3D mesh to the parameter plane by using the energy-minimizing parameterization, and get the optimal effect by optimizing the weights wij gradually. Experimental results indicate that this optimized energy-minimizing method has low distortion and good stability.",
"corpus_id": 32625268,
"score": 0
}
] |
arnetminer | {
"doc_id": "2750727",
"title": "Stability Analysis of Social Foraging Swarm with Interaction Time Delays",
"abstract": "This paper considers a swarm model with an attraction-repulsion function involving variable communication time lags and an attractant/repellent. It is proved that for quadratic attractant/repellent profiles the members of the swarm with time delays will aggregate and form a cohesive cluster of finite size in a finite time. Moreover, all the swarm members will converge to more favorable areas of the quadratic attractant/repellent profiles under certain conditions in the presence of communication delays.",
"corpus_id": 2750727
} | [
{
"doc_id": "429518",
"title": "Collective Behavior of Dynamic Swarm with General Topology and Complex Communication Time-Delays",
"abstract": "This paper presents a complex interaction delayed swarm model with a general topology to study the collective behavior of a group of autonomous agents using the nearest neighbor rulers in the presence of communication delays. It is proved that under certain conditions the swarm members will converge to a finite region around the swarm weighted center and move together in a cohesive cluster following the motion of the weighted center. For general cases, the delayed swarm may display more complex dynamics, including oscillation and divergence, depending on the delay values. This suggests that the time delay may have significant consequence in the collective dynamics of swarms.",
"corpus_id": 429518,
"score": 1
},
{
"doc_id": "11696437",
"title": "Flocking of Multi-Vehicle Systems With A Leader",
"abstract": "We study the coordinated motion of a group of vehicles with a leader based on nearest neighbor rules. The leader may be a special vehicle in the group or an external signal (virtual leader) to steer the vehicle group. The control law consists of two parts: a potential field force that makes the vehicles attract to each other and at the same time avoid collision in the group, and an alignment force that makes all vehicles' headings and speeds to converge respectively to common values. We assume that the interaction patterns are either fixed or switching based on the information topologies of the leader and member vehicles. Our approach is based on the Lyapunov theory and basic graph theory. We also consider the effect of noise on the collective dynamics of the group. Numerical simulations are worked out to illustrate the analytical results",
"corpus_id": 11696437,
"score": 1
},
{
"doc_id": "17391590",
"title": "Complex Analysis of Anisotropic Swarms",
"abstract": "This paper considers a continuous time swarm model with individuals moving with a nutrient profile (or an attractant/repellent) in an n-dimensional space. The swarm behavior is a result of a balance between inter-individual interplays as well as the interplays of the swarm agents with their environment. It is proved that the swarm members aggregate and eventually form a cohesive cluster of finite size around the swarm weighted center in a finite time under certain conditions.",
"corpus_id": 17391590,
"score": 1
},
{
"doc_id": "6447863",
"title": "Controllability of a Leader-Follower Dynamic Network with Interaction Time Delays",
"abstract": "This paper studies the controllability of a leader-follower network of dynamic agents in the presence of communication delays. The network dynamics is governed by nearest neighbor rules over a fixed communication topology. The leader is a particular agent acting as an external input to steer the other member agents. We derive sufficient conditions of the controllability of the dynamic network and give an example to illustrate the main results.",
"corpus_id": 6447863,
"score": 1
},
{
"doc_id": "11561145",
"title": "Collective Behavior Analysis of a Class of Social Foraging Swarms",
"abstract": "This paper considers an anisotropic swarm model that consists of a group of mobile autonomous agents with an attraction-repulsion function that can guarantee collision avoidance between agents and a Gaussian-type attractant/repellent nutrient profile. The swarm behavior is a result of a balance between inter-individual interplays as well as the interplays of the swarm individuals (agents) with their environment. It is proved that the members of a reciprocal swarm will aggregate and eventually form a cohesive cluster of finite size. It is shown that the swarm system is completely stable, that is, every solution converges to the equilibrium point set of the system. Moreover, it is also shown that all the swarm individuals will converge to more favorable areas of the Gaussian profile under certain conditions. The results of this paper provide further insight into the effect of the interaction pattern on self-organized motion for a Gaussian-type attractant/repellent nutrient profile in a swarm system.",
"corpus_id": 11561145,
"score": 1
},
{
"doc_id": "10143256",
"title": "An EM based training algorithm for cross-language text categorization",
"abstract": "Due to the globalization on the Web, many companies and institutions need to efficiently organize and search repositories containing multilingual documents. The management of these heterogeneous text collections increases the costs significantly because experts of different languages are required to organize these collections. Cross-language text categorization can provide techniques to extend existing automatic classification systems in one language to new languages without requiring additional intervention of human experts. In this paper, we propose a learning algorithm based on the EM scheme which can be used to train text classifiers in a multilingual environment. In particular, in the proposed approach, we assume that a predefined category set and a collection of labeled training data is available for a given language L/sub 1/. A classifier for a different language L/sub 2/ is trained by translating the available labeled training set for L/sub 1/ to L/sub 2/ and by using an additional set of unlabeled documents from L/sub 2/. This technique allows us to extract correct statistical properties of the language L/sub 2/ which are not completely available in automatically translated examples, because of the different characteristics of language L/sub 1/ and of the approximation of the translation process. Our experimental results show that the performance of the proposed method is very promising when applied on a test document set extracted from newsgroups in English and Italian.",
"corpus_id": 10143256,
"score": 0
},
{
"doc_id": "26305172",
"title": "Hybrid Particle Swarm Optimization for Flow Shop Scheduling with Stochastic Processing Time",
"abstract": "The stochastic flow shop scheduling with uncertain processing time is a typical NP-hard combinatorial optimization problem and represents an important area in production scheduling, which is difficult because of inaccurate objective estimation, huge search space, and multiple local minima. As a novel evolutionary technique, particle swarm optimization (PSO) has gained much attention and wide applications for both function and combinatorial problems, but there is no research on PSO for stochastic scheduling cases. In this paper, a class of PSO approach with simulated annealing (SA) and hypothesis test (HT), namely PSOSAHT is proposed for stochastic flow shop scheduling with uncertain processing time with respect to the makespan criterion (i.e. minimizing the maximum completion time). Simulation results demonstrate the feasibility, effectiveness and robustness of the proposed hybrid algorithm. Meanwhile, the effects of noise magnitude and number of evaluation on searching performances are also investigated.",
"corpus_id": 26305172,
"score": 0
},
{
"doc_id": "13900102",
"title": "Web Page Cleaning for Web Mining through Feature Weighting",
"abstract": "Unlike conventional data or text, Web pages typically contain a large amount of information that is not part of the main contents of the pages, e.g., banner ads, navigation bars, and copyright notices. Such irrelevant information (which we call Web page noise) in Web pages can seriously harm Web mining, e.g., clustering and classification. In this paper, we propose a novel feature weighting technique to deal with Web page noise to enhance Web mining. This method first builds a compressed structure tree to capture the common structure and comparable blocks in a set of Web pages. It then uses an information based measure to evaluate the importance of each node in the compressed structure tree. Based on the tree and its node importance values, our method assigns a weight to each word feature in its content block. The resulting weights are used in Web mining. We evaluated the proposed technique with two Web mining tasks, Web page clustering and Web page classification. Experimental results show that our weighting method is able to dramatically improve the mining results.",
"corpus_id": 13900102,
"score": 0
},
{
"doc_id": "20603294",
"title": "Development Of An Soi-Based Micro Check Valve",
"abstract": "This paper presents a bulk micromachined check valve with very high frequency and extremely low leak rates. The valve is designed to have a hexagonal orifice, a hexagonal membrane flap and three flexible tethers. The three elbow-shaped flexible tethers are used both to secure the membrane flap to the valve seat and to abtain a large flap displacement in the forward flow direction. SOI wafer and DRIE technology are used to implement this micro valve. A very simple farbication process has been developed, and only two photolithographic masks are employed. Preliminary testing on a 1.5 milimeters size check valve shows that a maximum flow rate (DI water) of 35.6ml/min was obtained at pressure drop of 65.5kPa and negligible leakage rate in the reverse flow direction observed at pressure up to 600kPa.",
"corpus_id": 20603294,
"score": 0
},
{
"doc_id": "17284590",
"title": "Performance Analysis of the HLLACF",
"abstract": "With the popular using of anonymous communication systems, security and overhead traffic attract more attention. HLLCAF was presented to improve the performance of anonymous communication systems. This paper analyzes HLLACF's security and an evaluation algorithm for security is presented. In the end, a simulation experiment and result analysis is given. The theoretic analysis and simulation experiment indicate that the HLLACF can prevent AS-Level passive attack and other similar attacks well while decreasing communication delay and HLLACF also scales well.",
"corpus_id": 17284590,
"score": 0
}
] |
arnetminer | {
"doc_id": "215471",
"title": "Nesting One-Against-One Algorithm Based on SVMs for Pattern Classification",
"abstract": "Support vector machines (SVMs), which were originally designed for binary classifications, are an excellent tool for machine learning. For the multiclass classifications, they are usually converted into binary ones before they can be used to classify the examples. In the one-against-one algorithm with SVMs, there exists an unclassifiable region where the data samples cannot be classified by its decision function. This paper extends the one-against-one algorithm to handle this problem. We also give the convergence and computational complexity analysis of the proposed method. Finally, one-against-one, fuzzy decision function (FDF), and decision-directed acyclic graph (DDAG) algorithms and our proposed method are compared using five University of California at Irvine (UCI) data sets. The results report that the proposed method can handle the unclassifiable region better than others.",
"corpus_id": 215471
} | [
{
"doc_id": "9933281",
"title": "A Learning Process Using SVMs for Multi-agents Decision Classification",
"abstract": "In order to resolve decision classification problem in multiple agents system, this paper first introduces the architecture of multiple agents system. It then proposes a support vector machines based assessment approach, which has the ability to learn the rules form previous assessment results from domain experts. Finally, the experiment are conducted on the artificially dataset to illustrate how the proposed works, and the results show the proposed method has effective learning ability for decision classification problems.",
"corpus_id": 9933281,
"score": 1
},
{
"doc_id": "8341217",
"title": "Multi-Space-Mapped SVMs for Multi-class Classification",
"abstract": "In SVMs-based multiple classification, it is not always possible to find an appropriate kernel function to map all the classes from different distribution functions into a feature space where they are linearly separable from each other. This is even worse if the number of classes is very large. As a result, the classification accuracy is not as good as expected. In order to improve the performance of SVMs-based multi-classifiers, this paper proposes a method, named multi-space-mapped SVMs, to map the classes into different feature spaces and then classify them. The proposed method reduces the requirements for the kernel function. Substantial experiments have been conducted on one-against-all, one-against-one, FSVM, DDAG algorithms and our algorithm using six UCI data sets. The statistical results show that the proposed method has a higher probability of finding appropriate kernel functions than traditional methods and outperforms others.",
"corpus_id": 8341217,
"score": 1
},
{
"doc_id": "44873280",
"title": "Twi-Map Support Vector Machine for Multi-classification Problems",
"abstract": "In this paper, a novel method called Twi-Map Support Vector Machines (TMSVM) for multi-classification problems is presented. Our ideas are as follows: Firstly, the training data set is mapped into a high-dimensional feature space. Secondly, we calculate the distances between the training data points and hyperplanes. Thirdly, we view the new vector consisting of the distances as new training data point. Finally, we map the new training data points into another high-dimensional feature space with the same kernel function and construct the optimal hyperplanes. In order to examine the training accuracy and the generalization performance of the proposed algorithm, One-against-One algorithm, Fuzzy Least Square Support Vector Machine (FLS-SVM) and the proposed algorithm are applied to five UCI data sets. Comparison results obtained by using three algorithms are given. The results show that the training accuracy and the testing one of the proposed algorithm are higher than those of One-against-One and FLS-SVM.",
"corpus_id": 44873280,
"score": 1
},
{
"doc_id": "18666865",
"title": "Multi-sphere Support Vector Data Description for Outliers Detection on Multi-distribution Data",
"abstract": "SVDD has been proved a powerful tool for outlier detection. However, in detecting outliers on multi-distribution data, namely there are distinctive distributions in the data, it is very challenging for SVDD to generate a hyper-sphere for distinguishing outliers from normal data. Even if such a hyper-sphere can be identified, its performance is usually not good enough. This paper proposes an multi-sphere SVDD approach, named MS-SVDD, for outlier detection on multi-distribution data. First, an adaptive sphere detection method is proposed to detect data distributions in the dataset. The data is partitioned in terms of the identified data distributions, and the corresponding SVDD classifiers are constructed separately. Substantial experiments on both artificial and real-world datasets have demonstrated that the proposed approach outperforms original SVDD.",
"corpus_id": 18666865,
"score": 1
},
{
"doc_id": "206597617",
"title": "Binary Tree Support Vector Machine Based on Kernel Fisher Discriminant for Multi-classification",
"abstract": "In order to improve the accuracy of the conventional algorithms for multi-classifications, we propose a binary tree support vector machine based on Kernel Fisher Discriminant in this paper. To examine the training accuracy and the generalization performance of the proposed algorithm, One-against-All, One-against-One and the proposed algorithms are applied to five UCI data sets. The experimental results show that in general, the training and the testing accuracy of the proposed algorithm is the best one, and there exist no unclassifiable regions in the proposed algorithm.",
"corpus_id": 206597617,
"score": 1
},
{
"doc_id": "207576208",
"title": "Clustering through decision tree construction",
"abstract": "Clustering aims to find the intrinsic structure of data by organizing data objects into similarity groups or clusters. It is often called unsupervised learning as no class labels denoting an a priori partition of the objects are given. This is in contrast with supervised learning (e.g., classification) for which the data objects are already labeled with known classes. Past research in clustering has produced many algorithms. However, these algorithms have some major shortcomings. In this paper, we propose a novel clustering technique, which is based on a supervised learning technique called decision tree construction. The new technique is able to overcome many of these shortcomings. The key idea is to use a decision tree to partition the data space into cluster and empty (sparse) regions at different levels of details. The technique is able to find \"natural\" clusters in large high dimensional spaces eff iciently. It is suitable for clustering in the full dimensional space as well as in subspaces. It also provides comprehensible descriptions of clusters. Experiment results on both synthetic data and real-li fe data show that the technique is effective and also scales well for large high dimensional datasets.",
"corpus_id": 207576208,
"score": 0
},
{
"doc_id": "29971030",
"title": "Measuring the meaning in time series clustering of text search queries",
"abstract": "We use a combination of proven methods from time series analysis and machine learning to explore the relationship between temporal and semantic similarity in web query logs; we discover that the combination of correlation and cycles is a good, but not perfect, sign of semantic relationship.",
"corpus_id": 29971030,
"score": 0
},
{
"doc_id": "11706749",
"title": "A Graph-Based Algorithm for Mining Maximal Frequent Itemsets",
"abstract": "Association rule mining is an important research branch of data mining, and computing frequent itemsets is the main problem. The paper is designed to find maximal frequent itemsets only. It presents an algorithm based on a frequent pattern graph, which can find maximal frequent itemsets quickly. A breadth-first-search and a depth-first-search techniques are used to produce all maximal frequent itemsets of a database. The paper also analyzes the complexity of the algorithm, and explains the computation procedure by examples. It has high time efficiency and less space complexity for computing maximal frequent itemsets.",
"corpus_id": 11706749,
"score": 0
},
{
"doc_id": "12104280",
"title": "Classification using support vector machines with graded resolution",
"abstract": "A method which we call support vector machine with graded resolution (SVM-GR) is proposed in this paper. During the training of the SVM-GR, we first form data granules to train the SVM-GR and remove those data granules that are not support vectors. We then use the remaining training samples to train the SVM-GR. Compared with the traditional SVM, our SVM-GR algorithm requires fewer training samples and support vectors, hence the computational time and memory requirements for the SVM-GR are much smaller than those of a conventional SVM that use the entire dataset. Experiments on benchmark data sets show that the generalization performance of the SVM-GR is comparable to the traditional SVM.",
"corpus_id": 12104280,
"score": 0
},
{
"doc_id": "8023891",
"title": "Using micro information units for internet search",
"abstract": "Internet search is one of the most important applications of the Web. A search engine takes the user's keywords to retrieve and to rank those pages that contain the keywords. One shortcoming of existing search techniques is that they do not give due consideration to the micro-structures of a Web page. A Web page is often populated with a number of small information units, which we call micro information units (MIU). Each unit focuses on a specific topic and occupies a specific area of the page. During the search, if all the keywords in the user query occur in a single MIU of a page, the top ranking results returned by a search engine are generally relevant and useful. However, if the query words scatter at different MIUs in a page, the pages returned can be quite irrelevant (which causes low precision). The reason for this is that although a page has information on individual MIUs, it may not have information on their intersections. In this paper, we propose a technique to solve this problem. At the off-line pre-processing stage, we segment each page to identify the MIUs in the page, and index the keywords of the page according to the MIUs in which they occur. In searching, our retrieval and ranking algorithm utilizes this additional information to return those most relevant pages. Experimental results show that this method is able to significantly improve the search precision.",
"corpus_id": 8023891,
"score": 0
}
] |
arnetminer | {
"doc_id": "16202549",
"title": "A memetic approach to the automatic design of high-performance analog integrated circuits",
"abstract": "This article introduces an evolution-based methodology, named memetic single-objective evolutionary algorithm (MSOEA), for automated sizing of high-performance analog integrated circuits. Memetic algorithms may achieve higher global and local search ability by properly combining operators from different standard evolutionary algorithms. By integrating operators from the differential evolution algorithm, from the real-coded genetic algorithm, operators inspired by the simulated annealing algorithm, and a set of constraint handling techniques, MSOEA specializes in handling analog circuit design problems with numerous and tight design constraints. The method has been tested through the sizing of several analog circuits. The results show that design specifications are met and objective functions are highly optimized. Comparisons with available methods like genetic algorithm and differential evolution in conjunction with static penalty functions, as well as with intelligent selection-based differential evolution, are also carried out, showing that the proposed algorithm has important advantages in terms of constraint handling ability and optimization quality.",
"corpus_id": 16202549
} | [
{
"doc_id": "32506945",
"title": "DE and NLP Based QPLS Algorithm",
"abstract": "As a novel evolutionary computing technique, Differential Evolution (DE) has been considered to be an effective optimization method for complex optimization problems, and achieved many successful applications in engineering. In this paper, a new algorithm of Quadratic Partial Least Squares (QPLS) based on Nonlinear Programming (NLP) is presented. And DE is used to solve the NLP so as to calculate the optimal input weights and the parameters of inner relationship. The simulation results based on the soft measurement of diesel oil solidifying point on a real crude distillation unit demonstrate that the superiority of the proposed algorithm to linear PLS and QPLS which is based on Sequential Quadratic Programming (SQP) in terms of fitting accuracy and computational costs.",
"corpus_id": 32506945,
"score": 1
},
{
"doc_id": "33407170",
"title": "Manufacturing Grid: Needs, Concept, and Architecture",
"abstract": "As a new approach, grid technology is rapidly used in scientific computing, large-scale data management, and collaborative work. But in the field of manufacturing, the application of grid is just at the beginning. The paper proposes the concept of manufacturing. The needs, definition and architecture of manufacturing gird are discussed, which explains why needs manufacturing grid, what is manufacturing grid and how to construct a manufacturing grid system.",
"corpus_id": 33407170,
"score": 1
},
{
"doc_id": "44715797",
"title": "Constrained Nonlinear State Estimation - A Differential Evolution Based Moving Horizon Approach",
"abstract": "A solution is proposed to estimate the states in the nonlinear discrete time system. Moving Horizon Estimation (MHE) is used to obtain the approximated states by minimizing a criterion that is the Euclidean form of the difference between the estimated outputs and the measured ones over a finite time horizon. The differential evolution (DE) algorithm is incorporated into the implementation of MHE in order to solve the optimization problem which is presented as a nonlinear programming problem due to the constraints. The effectiveness of the approach is illustrated in simulated systems that have appeared in the moving horizon estimation literature.",
"corpus_id": 44715797,
"score": 1
},
{
"doc_id": "21860578",
"title": "An Effective PSO-Based Memetic Algorithm for Flow Shop Scheduling",
"abstract": "This paper proposes an effective particle swarm optimization (PSO)-based memetic algorithm (MA) for the permutation flow shop scheduling problem (PFSSP) with the objective to minimize the maximum completion time, which is a typical non-deterministic polynomial-time (NP) hard combinatorial optimization problem. In the proposed PSO-based MA (PSOMA), both PSO-based searching operators and some special local searching operators are designed to balance the exploration and exploitation abilities. In particular, the PSOMA applies the evolutionary searching mechanism of PSO, which is characterized by individual improvement, population cooperation, and competition to effectively perform exploration. On the other hand, the PSOMA utilizes several adaptive local searches to perform exploitation. First, to make PSO suitable for solving PFSSP, a ranked-order value rule based on random key representation is presented to convert the continuous position values of particles to job permutations. Second, to generate an initial swarm with certain quality and diversity, the famous Nawaz-Enscore-Ham (NEH) heuristic is incorporated into the initialization of population. Third, to balance the exploration and exploitation abilities, after the standard PSO-based searching operation, a new local search technique named NEH_1 insertion is probabilistically applied to some good particles selected by using a roulette wheel mechanism with a specified probability. Fourth, to enrich the searching behaviors and to avoid premature convergence, a simulated annealing (SA)-based local search with multiple different neighborhoods is designed and incorporated into the PSOMA. Meanwhile, an effective adaptive meta-Lamarckian learning strategy is employed to decide which neighborhood to be used in SA-based local search. Finally, to further enhance the exploitation ability, a pairwise-based local search is applied after the SA-based search. Simulation results based on benchmarks demonstrate the effectiveness of the PSOMA. Additionally, the effects of some parameters on optimization performances are also discussed",
"corpus_id": 21860578,
"score": 1
},
{
"doc_id": "5989425",
"title": "A New Performance Evaluation Model and AHP-Based Analysis Method in Service-Oriented Workflow",
"abstract": "In service-oriented architecture, services and workflows are closely related so that the research on service-oriented workflow attracts the attention of academia. Because of the loosely-coupled, autonomic and dynamic nature of service, the operation and performance evaluation of workflow meet some challenges, such as how to judge the quality of service (QoS) and what is the relation between QoS and workflow performance. In this paper we are going to address these challenges. First the definition of service is proposed, and the characteristics and operation mechanism of service-oriented workflow are presented. Then a service-oriented workflow performance evaluation model is described which combines the performance of the business system and IT system. The key performance indicators (KPI) are also depicted with their formal representation. Finally the improved Analytic Hierarchy Process is brought forward to analyze the correlation between different KPIs and select services.",
"corpus_id": 5989425,
"score": 1
},
{
"doc_id": "159216",
"title": "A weight based compact genetic algorithm",
"abstract": "In order to improve the performance of the compact Genetic Algorithm (cGA) to solve difficult optimization problems, an improved cGA which named as the weight based compact Genetic Algorithm (wcGA) is proposed. In the wcGA, S individuals are generated from the probability vector in each generation, when the winner competing with the other S-1 individuals to update the probability vector, different weights are multiplied to each solution according to the sequence of the solution ranked in the S-1 individuals. Experimental results on three kinds of Benchmark functions show that the proposed algorithm has higher optimal precision than that of the standard cGA and the cGA simulating higher selection pressures.",
"corpus_id": 159216,
"score": 0
},
{
"doc_id": "6989307",
"title": "Guest editorial",
"abstract": null,
"corpus_id": 6989307,
"score": 0
},
{
"doc_id": "45821814",
"title": "Positive solutions of a nonlinear four-point boundary value problems",
"abstract": "In this paper, by using the Krasnoselskii's theorem in a cone, we study the existence of at least one or two positive solutions to the four-point boundary value problemy^''(t)+a(t)f(y(t))=0,00. As an application, we also give some examples to demonstrate our results.",
"corpus_id": 45821814,
"score": 0
},
{
"doc_id": "2834598",
"title": "Scheduling via reinforcement",
"abstract": "Abstract Scheduling of a job shop in the presence of limited resources is a challenging decision making process. It is a problem of allocation of different resources to meet various shop needs (or to satisfy the constraints). Past research has had only limited success in properly handling this resource problem. In this paper a detailed analysis of the scheduling process, within an AI framework, is proposed and this highlights the deficiencies in current techniques. It is suggested that the central problem is how to have a global perspective in the scheduling process. In order to deal with the underlying problem, the concept of ‘reinforcement planning’ is proposed. This method builds a reinforcement schedule and uses it to refine out a detailed schedule by inter-level analysis and communication. Based on this idea a scheduling system called RESS-I has been implemented to automatically construct the job shop schedule.",
"corpus_id": 2834598,
"score": 0
},
{
"doc_id": "17844333",
"title": "Hybrid Algorithm Combining Ant Colony Algorithm with Genetic Algorithm for Continuous Domain",
"abstract": "Ant colony algorithm is a kind of new heuristic biological modeling method which has the ability of parallel processing and global searching. By use of the properties of ant colony algorithm and genetic algorithm, the hybrid algorithm which adopts genetic algorithm to distribute the original pheromone is proposed to solve the continuous optimization problem. Several solutions are obtained using the ant colony algorithm through pheromone accumulation and renewal. Finally, by using crossover and mutation operation of genetic algorithm, some effective solutions are obtained. The results of experiments show better performances of the new algorithm based on six continuous test functions compared with the methods available in literature.",
"corpus_id": 17844333,
"score": 0
}
] |
arnetminer | {
"doc_id": "17229395",
"title": "Region-of-interest coding of 3D mesh based on wavelet transform",
"abstract": "A scheme for the region of interest (ROI) coding of 3D meshes is proposed for the first time. The ROI is encoded with higher fidelity than the rest region, and the \"priority\" of ROI relative to the rest region (background, BG) can be specified by encoder or decoder (user). Wavelet transform is used on 3D mesh and zerotrees are adopted to organize the coefficients. The wavelet coefficients of ROI are scaled up and encoded with a modified set partitioning in hierarchical trees (SPIHT) algorithm. In additional, a fast algorithm is proposed for creating the ROI mask. Once the quality of reconstructed ROI becomes high enough, the transmission can be intermitted and much transmission bandwidth and storage space will be saved consequently.",
"corpus_id": 17229395
} | [
{
"doc_id": "39145518",
"title": "Boundary Constrained Manifold Unfolding",
"abstract": null,
"corpus_id": 39145518,
"score": 1
},
{
"doc_id": "32625268",
"title": "An Energy-Minimizing Mesh Parameterization",
"abstract": "In this paper, we propose a new energy-minimizing mesh parameterization method, which linearly combines two new energies EQ and EM. It not only avoids triangles overlap in the parameter domain, but also is invariant under rotation, translation and scale transformations. We first parameterize the original 3D mesh to the parameter plane by using the energy-minimizing parameterization, and get the optimal effect by optimizing the weights wij gradually. Experimental results indicate that this optimized energy-minimizing method has low distortion and good stability.",
"corpus_id": 32625268,
"score": 1
},
{
"doc_id": "13551397",
"title": "A Blind Watermarking of 3D Triangular Meshes Using Geometry Image",
"abstract": null,
"corpus_id": 13551397,
"score": 1
},
{
"doc_id": "17077403",
"title": "Rate-Distortion Optimized Progressive Geometry Compression",
"abstract": "During progressive transmission of 3D geometry models, the transmission order of details at different region has great effects on the quality of reconstructed models at low bit-rate. This work presents a ratedistortion (R-D) optimized progressive geometry compression scheme to improve the quality of reconstructed models by adjusting the transmission order of details. In this scheme, the input mesh is partitioned into parts, then each part is encoded into bit-stream independently, and the encoded bit-streams are truncated into segments while getting the R-D characteristics of every segment, at last all segments are assembled into a codestream based on R-D optimization, which ensure the region with rich detail will be transmitted early and make the reconstructed mesh achieve better quality as soon as possible. Experimental results show that, as compared with the well-known PGC method, the proposed one provides better R-D performance. Moreover, it provides a novel way to realize the region of interest (ROI) coding of 3D meshes. Keywords--Rate-distortion optimization; Progressive compression; Mesh partition",
"corpus_id": 17077403,
"score": 1
},
{
"doc_id": "29104494",
"title": "Mesh Editing in ROI with Dual Laplacian",
"abstract": null,
"corpus_id": 29104494,
"score": 1
},
{
"doc_id": "18169252",
"title": "Fully automatic and segmentation-robust classification of breast tumors based on local texture analysis of ultrasound images",
"abstract": "Region of interest (ROI) is a region used to extract features. In breast ultrasound (BUS) image, the ROI is a breast tumor region. Because of poor image quality (low SNR (signal/noise ratio), low contrast, blurry boundaries, etc.), it is difficult to segment the BUS image accurately and produce a ROI which precisely covers the tumor region. Due to the requirement of accurate ROI for feature extraction, fully automatic classification of BUS images becomes a difficult task. In this paper, a novel fully automatic classification method for BUS images is proposed which can be divided into two steps: ''ROI generation step'' and ''ROI classification step''. The ROI generation step focuses on finding a credible ROI instead of finding the precise tumor location. The ROI classification step employs a novel feature extraction and classification strategy. First, some points in the ROI are selected as the ''classification checkpoints'' which are evenly distributed in the ROI, and the local texture features around each classification checkpoint are extracted. For each ROI, all the classification checkpoints are classified. Finally, the class of the BUS image is determined by analyzing every classification checkpoint in the corresponding ROI. Both steps were implemented by utilizing a supervised texture classification approach. The experiments demonstrate that the proposed method is very robust to the segmentation of BUS images, and very effective and useful for classifying breast tumors.",
"corpus_id": 18169252,
"score": 0
},
{
"doc_id": "15938540",
"title": "A small listener for heterogeneous mobile devices: a service enabler with a uniform Web object view",
"abstract": "We recently developed \"system on mobile devices\" (SyD) middleware for rapidly developing and deploying collaborative distributed applications over a collection of autonomous Web objects and data-stores, independent of the underlying device, data, or network. SyDListener is a key component of SyD middleware. SyDListener provides a set of interfaces and classes that allows distributed SyD-based application components to communicate seamlessly in mobile environments. SyDListener provides a uniform object view of the underlying server application and enables client applications to remotely invoke those methods using XML messages. SyDListener is implemented as a multi-threaded wrapper with simple persistence management and asynchronous invocation functionality for J2ME mobile information device profile (MIDP) on connected limited device configuration (CLDL) devices. We discuss the functionality, architecture, implementation, and performance of SyDListener. We believe it is the first comprehensive working prototype of its kind for Java-enabled handhelds with a small footprint of 10 KB.",
"corpus_id": 15938540,
"score": 0
},
{
"doc_id": "8786674",
"title": "Entity discovery and assignment for opinion mining applications",
"abstract": "Opinion mining became an important topic of study in recent years due to its wide range of applications. There are also many companies offering opinion mining services. One problem that has not been studied so far is the assignment of entities that have been talked about in each sentence. Let us use forum discussions about products as an example to make the problem concrete. In a typical discussion post, the author may give opinions on multiple products and also compare them. The issue is how to detect what products have been talked about in each sentence. If the sentence contains the product names, they need to be identified. We call this problem entity discovery. If the product names are not explicitly mentioned in the sentence but are implied due to the use of pronouns and language conventions, we need to infer the products. We call this problem entity assignment. These problems are important because without knowing what products each sentence talks about the opinion mined from the sentence is of little use. In this paper, we study these problems and propose two effective methods to solve the problems. Entity discovery is based on pattern discovery and entity assignment is based on mining of comparative sentences. Experimental results using a large number of forum posts demonstrate the effectiveness of the technique. Our system has also been successfully tested in a commercial setting.",
"corpus_id": 8786674,
"score": 0
},
{
"doc_id": "14013953",
"title": "Opinion Extraction and Summarization on the Web",
"abstract": "The Web has become an excellent source for gathering consumer opinions. There are now numerous Web sources containing such opinions, e.g., product reviews, forums, discussion groups. and blogs. Techniques are now being developed to exploit these sources to help organizations and individuals to gain such important information easily and quickly. In this paper, we first discuss several aspects of the problem in the AI context, and then present some results of our existing work published in KDD-04 and WWW-05.",
"corpus_id": 14013953,
"score": 0
},
{
"doc_id": "215471",
"title": "Nesting One-Against-One Algorithm Based on SVMs for Pattern Classification",
"abstract": "Support vector machines (SVMs), which were originally designed for binary classifications, are an excellent tool for machine learning. For the multiclass classifications, they are usually converted into binary ones before they can be used to classify the examples. In the one-against-one algorithm with SVMs, there exists an unclassifiable region where the data samples cannot be classified by its decision function. This paper extends the one-against-one algorithm to handle this problem. We also give the convergence and computational complexity analysis of the proposed method. Finally, one-against-one, fuzzy decision function (FDF), and decision-directed acyclic graph (DDAG) algorithms and our proposed method are compared using five University of California at Irvine (UCI) data sets. The results report that the proposed method can handle the unclassifiable region better than others.",
"corpus_id": 215471,
"score": 0
}
] |
arnetminer | {
"doc_id": "44358158",
"title": "Proceedings of the Seventh SIAM International Conference on Data Mining, April 26-28, 2007, Minneapolis, Minnesota, USA",
"abstract": null,
"corpus_id": 44358158
} | [
{
"doc_id": "61155076",
"title": "Opinion Mining",
"abstract": "Описанл процедуру формування інтегрального показника об’єкта згідно з відгуками користувачів під час застосування методів оцінювання опінії текстової інформації у web-документах. Запропоновано використання лінгвістичних змінних та застосування вагових коефіцієнтів для достовірнішого результату оцінювання емоційного забарвлення текстової інформації. Ключові слова: опінія, емоційне забарвлення, об’єкт, інтегральний показник, лінгвістична змінна, ваговий коефіцієнт.",
"corpus_id": 61155076,
"score": 1
},
{
"doc_id": "12241191",
"title": "Finding Actionable Knowledge via Automated Comparison",
"abstract": "The problem of finding interesting and actionable patterns is a major challenge in data mining. It has been studied by many data mining researchers. The issue is that data mining algorithms often generate too many patterns, which make it very hard for the user to find those truly useful ones. Over the years many techniques have been proposed. However, few have made it to real-life applications. At the end of 2005, we built a data mining system for Motorola (called Opportunity Map) to enable the user to explore the space of a large number of rules in order to find actionable knowledge. The approach is based on the concept of rule cubes and operations on rule cubes. A rule cube is similar to a data cube, but stores rules. Since its deployment, some issues have also been identified during the regular use of the system in Motorola. One of the key issues is that although the operations on rule cubes are flexible, each operation is primitive and has to be initiated by the user. Finding a piece of actionable knowledge typically involves many operations and intense visual inspections, which are labor-intensive and time-consuming. From interactions with our users, we identified a generic problem that is crucial for finding actionable knowledge. The problem involves extensive comparison of sub-populations and identification of the cause of their differences. This paper first defines the problem and then proposes an effective method to solve the problem automatically. To the best of our knowledge, there is no reported study of this problem. The new method has been added to the Opportunity Map system and is now in daily use in Motorola.",
"corpus_id": 12241191,
"score": 1
},
{
"doc_id": "6193650",
"title": "Dealing with different distributions in learning from",
"abstract": "In the problem of learning with positive and unlabeled examples, existing research all assumes that positive examples P and the hidden positive examples in the unlabeled set U are generated from the same distribution. This assumption may be violated in practice. In such cases, existing methods perform poorly. This paper proposes a novel technique A-EM to deal with the problem. Experimental results with product page classification demonstrate the effectiveness of the proposed technique.",
"corpus_id": 6193650,
"score": 1
},
{
"doc_id": "2182731",
"title": "Speed-up iterative frequent itemset mining with constraint changes",
"abstract": "Mining of frequent itemsets is a fundamental data mining task. Past research has proposed many efficient algorithms for this purpose. Recent work also highlighted the importance of using constraints to focus the mining process to mine only those relevant itemsets. In practice, data mining is often an interactive and iterative process. The user typically changes constraints and runs the mining algorithm many times before being satisfied with the final results. This interactive process is very time consuming. Existing mining algorithms are unable to take advantage of this iterative process to use previous mining results to speed up the current mining process. This results in an enormous waste of time and computation. In this paper, we propose an efficient technique to utilize previous mining results to improve the efficiency of current mining when constraints are changed. We first introduce the concept of tree boundary to summarize useful information available from previous mining. We then show that the tree boundary provides an effective and efficient framework for the new mining. The proposed technique has been implemented in the context of two existing frequent itemset mining algorithms, FP-tree and tree projection. Experiment results on both synthetic and real-life datasets show that the proposed approach achieves a dramatic saving of computation.",
"corpus_id": 2182731,
"score": 1
},
{
"doc_id": "8786674",
"title": "Entity discovery and assignment for opinion mining applications",
"abstract": "Opinion mining became an important topic of study in recent years due to its wide range of applications. There are also many companies offering opinion mining services. One problem that has not been studied so far is the assignment of entities that have been talked about in each sentence. Let us use forum discussions about products as an example to make the problem concrete. In a typical discussion post, the author may give opinions on multiple products and also compare them. The issue is how to detect what products have been talked about in each sentence. If the sentence contains the product names, they need to be identified. We call this problem entity discovery. If the product names are not explicitly mentioned in the sentence but are implied due to the use of pronouns and language conventions, we need to infer the products. We call this problem entity assignment. These problems are important because without knowing what products each sentence talks about the opinion mined from the sentence is of little use. In this paper, we study these problems and propose two effective methods to solve the problems. Entity discovery is based on pattern discovery and entity assignment is based on mining of comparative sentences. Experimental results using a large number of forum posts demonstrate the effectiveness of the technique. Our system has also been successfully tested in a commercial setting.",
"corpus_id": 8786674,
"score": 1
},
{
"doc_id": "8632899",
"title": "Accessible image search",
"abstract": "There are about 8% of men and 0.8% of women suffering from colorblindness. We show that the existing image search techniques cannot provide satisfactory results for these users, since many images will not be well perceived by them due to the loss of color information. In this paper, we introduce a scheme named Accessible Image Search (AIS) to accommodate these users. Different from the general image search scheme that aims at returning more relevant results, AIS further takes into account the colorblind accessibilities of the returned results, i.e., the image qualities in the eyes of colorblind users. The scheme includes two components: accessibility assessment and accessibility improvement. For accessibility assessment, we introduce an analysisbased method and a learning-based method. Based on the measured accessibility scores, different reranking methods can be performed to prioritize the images with high accessibilities. In accessibility improvement component, we propose an efficient recoloring algorithm to modify the colors of the images such that they can be better perceived by colorblind users. We also propose the Accessibility Average Precision (AAP) for AIS as a complementary performance evaluation measure to the conventional relevance-based evaluation methods. Experimental results with more than 60,000 images and 20 anonymous colorblind users demonstrate the effectiveness and usefulness of the proposed scheme.",
"corpus_id": 8632899,
"score": 0
},
{
"doc_id": "11706749",
"title": "A Graph-Based Algorithm for Mining Maximal Frequent Itemsets",
"abstract": "Association rule mining is an important research branch of data mining, and computing frequent itemsets is the main problem. The paper is designed to find maximal frequent itemsets only. It presents an algorithm based on a frequent pattern graph, which can find maximal frequent itemsets quickly. A breadth-first-search and a depth-first-search techniques are used to produce all maximal frequent itemsets of a database. The paper also analyzes the complexity of the algorithm, and explains the computation procedure by examples. It has high time efficiency and less space complexity for computing maximal frequent itemsets.",
"corpus_id": 11706749,
"score": 0
},
{
"doc_id": "12839401",
"title": "On the Spectral Properties and Stabilization of Acoustic Flow",
"abstract": "In this paper we use perturbation theory to study the spectral properties and energy decay of two-dimensional acoustic flow (cf. [J.T. Beale, Indiana Univ. Math. J., 25 (1976), pp.895--917], [P.M. Morse and K.U. Ingard, Theoretical Acoustics, McGraw-Hill, New York, 1968]):$\\phi_{tt}-c^2\\Delta \\phi=0$ in $\\Omega\\times(0,\\infty)$, $m\\delta_{tt}+d\\delta_t+k\\delta=-\\rho\\phi_t$ and $\\phi_x=\\delta_t$ on $\\Gamma_0\\times(0,\\infty)$, $\\frac{\\partial\\phi}{\\partial\\nu}=0$ on $\\Gamma_1\\times(0,\\infty)$ with initial data $\\phi(0)=\\phi_0,\\ \\phi_t(0)=\\phi_1$ in $\\Omega$ and $\\delta(0)=\\delta_0,\\ \\delta_t(0)=\\delta_1$ on $\\Gamma_0$, where $\\Omega=(0,1)\\times (0,1)$, $\\Gamma_0=\\{(1,y); \\0<y <1\\}$, $\\Gamma_1=\\partial\\Omega\\setminus\\Gamma_0$, and $\\nu$ is the external normal direction on the boundary. Locations of eigenvalues of the infinitesimal generator of semigroup associated with the above system are estimated. A certain \"Fourier\" expansion is obtained. That the energy decays to zero and like t-1 (even like $t^{-\\beta}...",
"corpus_id": 12839401,
"score": 0
},
{
"doc_id": "1930626",
"title": "Multi-agent System for Custom Relationship Management with SVMs Tool",
"abstract": "Distributed data mining in the CRM is to learn available knowledge from the customer relationship so as to instruct the strategic behavior. In order to resolve the CRM in distributed data mining, this paper proposes the architecture of distributed data mining for CRM, and then utilizes the support vector machine tool to separate the customs into several classes and manage them. In the end, the practical experiments about one Chinese company are conducted to show the good performance of the proposed approach.",
"corpus_id": 1930626,
"score": 0
},
{
"doc_id": "42464493",
"title": "Designing Neural Networks Using Hybrid Particle Swarm Optimization",
"abstract": "Evolving artificial neural network is an important issue in both evolutionary computation (EC) and neural networks (NN) fields. In this paper, a hybrid particle swarm optimization (PSO) is proposed by incorporating differential evolution (DE) and chaos into the classic PSO. By combining DE operation with PSO, the exploration and exploitation abilities can be well balanced, and the diversity of swarms can be reasonably maintained. Moreover, by hybridizing chaotic local search (CLS), DE operator and PSO operator, searching behavior can be enriched and the ability to avoid being trapped in local optima can be well enhanced. Then, the proposed hybrid PSO (named CPSODE) is applied to design multi-layer feed-forward neural network. Simulation results and comparisons demonstrate the effectiveness and efficiency of the proposed hybrid PSO.",
"corpus_id": 42464493,
"score": 0
}
] |
arnetminer | {
"doc_id": "206600908",
"title": "Improved Differential Evolution with Dynamic Population Size",
"abstract": "As a novel evolutionary computing technique, recently Differential Evolution (DE) has attracted much attention and wide applications due to its simple concept and easy implementation. However, all the control parameters of the classic DE (crossover rate, scaling factor, and population size) keep fixed during the searching process. To improve the performance of DE, an improved DE (IDE) with dynamic population size is proposed in this paper. Simulation results and comparisons based on some well-known benchmarks and an IIR design problem show the good efficiency of the proposed IDE.",
"corpus_id": 206600908
} | [
{
"doc_id": "44715797",
"title": "Constrained Nonlinear State Estimation - A Differential Evolution Based Moving Horizon Approach",
"abstract": "A solution is proposed to estimate the states in the nonlinear discrete time system. Moving Horizon Estimation (MHE) is used to obtain the approximated states by minimizing a criterion that is the Euclidean form of the difference between the estimated outputs and the measured ones over a finite time horizon. The differential evolution (DE) algorithm is incorporated into the implementation of MHE in order to solve the optimization problem which is presented as a nonlinear programming problem due to the constraints. The effectiveness of the approach is illustrated in simulated systems that have appeared in the moving horizon estimation literature.",
"corpus_id": 44715797,
"score": 1
},
{
"doc_id": "5989425",
"title": "A New Performance Evaluation Model and AHP-Based Analysis Method in Service-Oriented Workflow",
"abstract": "In service-oriented architecture, services and workflows are closely related so that the research on service-oriented workflow attracts the attention of academia. Because of the loosely-coupled, autonomic and dynamic nature of service, the operation and performance evaluation of workflow meet some challenges, such as how to judge the quality of service (QoS) and what is the relation between QoS and workflow performance. In this paper we are going to address these challenges. First the definition of service is proposed, and the characteristics and operation mechanism of service-oriented workflow are presented. Then a service-oriented workflow performance evaluation model is described which combines the performance of the business system and IT system. The key performance indicators (KPI) are also depicted with their formal representation. Finally the improved Analytic Hierarchy Process is brought forward to analyze the correlation between different KPIs and select services.",
"corpus_id": 5989425,
"score": 1
},
{
"doc_id": "32506945",
"title": "DE and NLP Based QPLS Algorithm",
"abstract": "As a novel evolutionary computing technique, Differential Evolution (DE) has been considered to be an effective optimization method for complex optimization problems, and achieved many successful applications in engineering. In this paper, a new algorithm of Quadratic Partial Least Squares (QPLS) based on Nonlinear Programming (NLP) is presented. And DE is used to solve the NLP so as to calculate the optimal input weights and the parameters of inner relationship. The simulation results based on the soft measurement of diesel oil solidifying point on a real crude distillation unit demonstrate that the superiority of the proposed algorithm to linear PLS and QPLS which is based on Sequential Quadratic Programming (SQP) in terms of fitting accuracy and computational costs.",
"corpus_id": 32506945,
"score": 1
},
{
"doc_id": "33407170",
"title": "Manufacturing Grid: Needs, Concept, and Architecture",
"abstract": "As a new approach, grid technology is rapidly used in scientific computing, large-scale data management, and collaborative work. But in the field of manufacturing, the application of grid is just at the beginning. The paper proposes the concept of manufacturing. The needs, definition and architecture of manufacturing gird are discussed, which explains why needs manufacturing grid, what is manufacturing grid and how to construct a manufacturing grid system.",
"corpus_id": 33407170,
"score": 1
},
{
"doc_id": "21860578",
"title": "An Effective PSO-Based Memetic Algorithm for Flow Shop Scheduling",
"abstract": "This paper proposes an effective particle swarm optimization (PSO)-based memetic algorithm (MA) for the permutation flow shop scheduling problem (PFSSP) with the objective to minimize the maximum completion time, which is a typical non-deterministic polynomial-time (NP) hard combinatorial optimization problem. In the proposed PSO-based MA (PSOMA), both PSO-based searching operators and some special local searching operators are designed to balance the exploration and exploitation abilities. In particular, the PSOMA applies the evolutionary searching mechanism of PSO, which is characterized by individual improvement, population cooperation, and competition to effectively perform exploration. On the other hand, the PSOMA utilizes several adaptive local searches to perform exploitation. First, to make PSO suitable for solving PFSSP, a ranked-order value rule based on random key representation is presented to convert the continuous position values of particles to job permutations. Second, to generate an initial swarm with certain quality and diversity, the famous Nawaz-Enscore-Ham (NEH) heuristic is incorporated into the initialization of population. Third, to balance the exploration and exploitation abilities, after the standard PSO-based searching operation, a new local search technique named NEH_1 insertion is probabilistically applied to some good particles selected by using a roulette wheel mechanism with a specified probability. Fourth, to enrich the searching behaviors and to avoid premature convergence, a simulated annealing (SA)-based local search with multiple different neighborhoods is designed and incorporated into the PSOMA. Meanwhile, an effective adaptive meta-Lamarckian learning strategy is employed to decide which neighborhood to be used in SA-based local search. Finally, to further enhance the exploitation ability, a pairwise-based local search is applied after the SA-based search. Simulation results based on benchmarks demonstrate the effectiveness of the PSOMA. Additionally, the effects of some parameters on optimization performances are also discussed",
"corpus_id": 21860578,
"score": 1
},
{
"doc_id": "11160367",
"title": "Mining Comparative Sentences and Relations",
"abstract": "This paper studies a text mining problem, comparative sentence mining (CSM). A comparative sentence expresses an ordering relation between two sets of entities with respect to some common features. For example, the comparative sentence \"Canon's optics are better than those of Sony and Nikon\" expresses the comparative relation: (better, {optics}, {Canon}, {Sony, Nikon}). Given a set of evaluative texts on the Web, e.g., reviews, forum postings, and news articles, the task of comparative sentence mining is (1) to identify comparative sentences from the texts and (2) to extract comparative relations from the identified comparative sentences. This problem has many applications. For example, a product manufacturer wants to know customer opinions of its products in comparison with those of its competitors. In this paper, we propose two novel techniques based on two new types of sequential rules to perform the tasks. Experimental evaluation has been conducted using different types of evaluative texts from the Web. Results show that our techniques are very promising.",
"corpus_id": 11160367,
"score": 0
},
{
"doc_id": "45164309",
"title": "Solvability of multi-point boundary value problem at resonance--Part IV",
"abstract": null,
"corpus_id": 45164309,
"score": 0
},
{
"doc_id": "16556351",
"title": "Post-Analysis of Learned Rules",
"abstract": "Rule induction research implicitly assumes that after producing the rules from a dataset, these rules will be used directly by an expert system or a human user. In real-life applications, the situation may not be as simple as that, particularly, when the user of the rules is a human being. The human user almost always has some previous concepts or knowledge about the domain represented by the dataset. Naturally, he/she wishes to know how the new rules compare with his/her existing knowledge. In dynamic domains where the rules may change over time, it is important to know what the changes are. These aspects of research have largely been ignored in the past. With the increasing use of machine leaming tcclmiques in practical applications such as data mining, this issue of post analysis of rules warrants greater emphasis and attention. In this paper, we propose a technique to deal with this problem. A system has been implemented to perform the post analysis of classification rules genemted by systems such as C4.5. The proposed technique is general and highly interactive. It will be particularly useful in data mining and data analysis.",
"corpus_id": 16556351,
"score": 0
},
{
"doc_id": "18163134",
"title": "Noise-Analysis Based Threshold-Choosing Algorithm in Motion Estimation",
"abstract": "A novel threshold choosing method for the threshold-based skip mechanism is presented, in which the threshold is obtained from the analysis of the video device induced noise variance. Simulation results show that the proposed method can remarkably reduce the computation time consumption with only marginal performance penalty.",
"corpus_id": 18163134,
"score": 0
},
{
"doc_id": "206597617",
"title": "Binary Tree Support Vector Machine Based on Kernel Fisher Discriminant for Multi-classification",
"abstract": "In order to improve the accuracy of the conventional algorithms for multi-classifications, we propose a binary tree support vector machine based on Kernel Fisher Discriminant in this paper. To examine the training accuracy and the generalization performance of the proposed algorithm, One-against-All, One-against-One and the proposed algorithms are applied to five UCI data sets. The experimental results show that in general, the training and the testing accuracy of the proposed algorithm is the best one, and there exist no unclassifiable regions in the proposed algorithm.",
"corpus_id": 206597617,
"score": 0
}
] |
arnetminer | {
"doc_id": "29560725",
"title": "Guest Editors' Introduction: Special Section on Mining and Searching the Web",
"abstract": "WITH the phenomenal growth of the Web, there is an ever-increasing volume of information being published on numerous Web sites. This vast amount of accessible information has raised many new opportunities and challenges for knowledge discovery and data engineering researchers. For programs that seek to analyze Web content, the heterogeneity in authorship and the consequent lack of structure are formidable hurdles. Discovering and extracting novel and useful knowledge from Web sources call for innovative approaches that draw from a wide range of fields spanning data mining, machine learning, statistics, databases, information retrieval, artificial intelligence, and natural language processing. In Web search, although general-purpose search engines are very useful, finding specific or targeted information can still be a frustrating experience. Highly effective, domainspecific, and personalized search techniques are not yet mainstream. In e-commerce, a whole range of online techniques are also needed to support such applications. For example, in online shopping, there are no human shop assistants to help customers. Instead, automated techniques are needed to learn from the behaviors of users in order to provide effective recommendations and assistance. Mining, extracting, and integrating Web information are challenging problems as well because there is still no mature technique to integrate information from structured (stored database), ad hoc structured (shopping sites), and unstructured (product reviews) sources. Clearly, format standards for semistructured data will not solve all of these problems. This special issue of IEEE Transactions on Knowledge and Data Engineering brings together some of the latest research results in the field. It presents seven papers which deal with a wide range of problems. All of the accepted papers propose some novel and/or principled techniques to solve these problems. Of the seven papers, three focus on domain specific and personalized Web search, one proposes a principled technique for collaborative filtering, one studies Web page cleaning for identifying informative structures and content blocks in Web pages, one studies classification of Web pages based on positive and unlabeled training examples, and one studies the clustering of XML data for efficient storage and querying of such data. The first paper by Michelangelo Diligenti, Marco Gori, and Marco Maggini studies Web page scoring for Web search and resource discovery. Current methods for the purpose are mainly based on the analysis of hyperlinks. The structure of the hyperlinks is the result of collaborative activities of the community of Web authors. Web authors usually like to link resources they consider authoritative, and authority emerges from the dynamics of popularity of the resources on the Web. This paper proposes a general probabilistic framework based on random walk of links for Web page scoring that incorporates and extends many existing models. Their results show that the proposed framework is effective and is particularly suited for focused or vertical search. The second paper by Satoshi Oyama, Takashi Kokubo, and Toru Ishida describes an interesting technique for domain specific Web search. The basic idea is to find a set of domain specific keywords (which the authors call keyword spices) that can be used as the context of the search queries in the domain. 
A nice algorithm based on text classification is given for identifying a reasonably complete set of such keyword spices. To perform text classification, it collects training pages from the Web through a search using an initial set of keywords of the domain. The main advantage of the proposed method is that it does not need to collect and index domain specific pages as most domain specific search engines do. The work is also related to research in query expansion and modification, but deals with a slightly different problem and offers different approaches. The third paper by Fang Liu, Clement Yu, and Weiyi Meng also studies Web search, more specifically, personalized Web search. Since general-purpose search engines do not consider a user’s interests, their search results may not be interesting to a specific user. Personalized search aims at carrying out search for each user incorporating his/her interests. In this paper, the authors propose to employ a user profile and a general profile to constrain the search. The user profile is learned from the user’s search history, which contains the user’s interested categories and weighted terms in the categories. The general profile is built using the categories from the Open Directory Project. The key advance of the technique is that it maps each user query to some categories. At search time, the system first uses the profiles to infer the categories of the search terms in question. Then, the search terms are augmented with each category as the context to perform search. The search results are then merged to produce a single result ranking. A comprehensive experimental evaluation is described in the paper. The fourth paper by Hung-Yu Kao, Shian-Hua Liu, Jan-Ming Ho, and Ming-Syan Chen focuses on the cleaning of Web pages.",
"corpus_id": 29560725
} | [
{
"doc_id": "6193650",
"title": "Dealing with different distributions in learning from",
"abstract": "In the problem of learning with positive and unlabeled examples, existing research all assumes that positive examples P and the hidden positive examples in the unlabeled set U are generated from the same distribution. This assumption may be violated in practice. In such cases, existing methods perform poorly. This paper proposes a novel technique A-EM to deal with the problem. Experimental results with product page classification demonstrate the effectiveness of the proposed technique.",
"corpus_id": 6193650,
"score": 1
},
{
"doc_id": "12241191",
"title": "Finding Actionable Knowledge via Automated Comparison",
"abstract": "The problem of finding interesting and actionable patterns is a major challenge in data mining. It has been studied by many data mining researchers. The issue is that data mining algorithms often generate too many patterns, which make it very hard for the user to find those truly useful ones. Over the years many techniques have been proposed. However, few have made it to real-life applications. At the end of 2005, we built a data mining system for Motorola (called Opportunity Map) to enable the user to explore the space of a large number of rules in order to find actionable knowledge. The approach is based on the concept of rule cubes and operations on rule cubes. A rule cube is similar to a data cube, but stores rules. Since its deployment, some issues have also been identified during the regular use of the system in Motorola. One of the key issues is that although the operations on rule cubes are flexible, each operation is primitive and has to be initiated by the user. Finding a piece of actionable knowledge typically involves many operations and intense visual inspections, which are labor-intensive and time-consuming. From interactions with our users, we identified a generic problem that is crucial for finding actionable knowledge. The problem involves extensive comparison of sub-populations and identification of the cause of their differences. This paper first defines the problem and then proposes an effective method to solve the problem automatically. To the best of our knowledge, there is no reported study of this problem. The new method has been added to the Opportunity Map system and is now in daily use in Motorola.",
"corpus_id": 12241191,
"score": 1
},
{
"doc_id": "8786674",
"title": "Entity discovery and assignment for opinion mining applications",
"abstract": "Opinion mining became an important topic of study in recent years due to its wide range of applications. There are also many companies offering opinion mining services. One problem that has not been studied so far is the assignment of entities that have been talked about in each sentence. Let us use forum discussions about products as an example to make the problem concrete. In a typical discussion post, the author may give opinions on multiple products and also compare them. The issue is how to detect what products have been talked about in each sentence. If the sentence contains the product names, they need to be identified. We call this problem entity discovery. If the product names are not explicitly mentioned in the sentence but are implied due to the use of pronouns and language conventions, we need to infer the products. We call this problem entity assignment. These problems are important because without knowing what products each sentence talks about the opinion mined from the sentence is of little use. In this paper, we study these problems and propose two effective methods to solve the problems. Entity discovery is based on pattern discovery and entity assignment is based on mining of comparative sentences. Experimental results using a large number of forum posts demonstrate the effectiveness of the technique. Our system has also been successfully tested in a commercial setting.",
"corpus_id": 8786674,
"score": 1
},
{
"doc_id": "61155076",
"title": "Opinion Mining",
"abstract": "Описанл процедуру формування інтегрального показника об’єкта згідно з відгуками користувачів під час застосування методів оцінювання опінії текстової інформації у web-документах. Запропоновано використання лінгвістичних змінних та застосування вагових коефіцієнтів для достовірнішого результату оцінювання емоційного забарвлення текстової інформації. Ключові слова: опінія, емоційне забарвлення, об’єкт, інтегральний показник, лінгвістична змінна, ваговий коефіцієнт.",
"corpus_id": 61155076,
"score": 1
},
{
"doc_id": "2182731",
"title": "Speed-up iterative frequent itemset mining with constraint changes",
"abstract": "Mining of frequent itemsets is a fundamental data mining task. Past research has proposed many efficient algorithms for this purpose. Recent work also highlighted the importance of using constraints to focus the mining process to mine only those relevant itemsets. In practice, data mining is often an interactive and iterative process. The user typically changes constraints and runs the mining algorithm many times before being satisfied with the final results. This interactive process is very time consuming. Existing mining algorithms are unable to take advantage of this iterative process to use previous mining results to speed up the current mining process. This results in an enormous waste of time and computation. In this paper, we propose an efficient technique to utilize previous mining results to improve the efficiency of current mining when constraints are changed. We first introduce the concept of tree boundary to summarize useful information available from previous mining. We then show that the tree boundary provides an effective and efficient framework for the new mining. The proposed technique has been implemented in the context of two existing frequent itemset mining algorithms, FP-tree and tree projection. Experiment results on both synthetic and real-life datasets show that the proposed approach achieves a dramatic saving of computation.",
"corpus_id": 2182731,
"score": 1
},
{
"doc_id": "410427",
"title": "Database and location management schemes for mobile communications",
"abstract": "Signaling traffic incurred in tracking mobile users and delivering enhanced services causes an additional load in the network. Efficient database and location management schemes are needed to meet the challenges from high density and mobility of users, and various service features. In this paper, the general location control and management function is treated as the combination of two parts, the global and local scope. New schemes and methods are proposed, and improvements achieved over established basic schemes are shown by using simulations.",
"corpus_id": 410427,
"score": 0
},
{
"doc_id": "26305172",
"title": "Hybrid Particle Swarm Optimization for Flow Shop Scheduling with Stochastic Processing Time",
"abstract": "The stochastic flow shop scheduling with uncertain processing time is a typical NP-hard combinatorial optimization problem and represents an important area in production scheduling, which is difficult because of inaccurate objective estimation, huge search space, and multiple local minima. As a novel evolutionary technique, particle swarm optimization (PSO) has gained much attention and wide applications for both function and combinatorial problems, but there is no research on PSO for stochastic scheduling cases. In this paper, a class of PSO approach with simulated annealing (SA) and hypothesis test (HT), namely PSOSAHT is proposed for stochastic flow shop scheduling with uncertain processing time with respect to the makespan criterion (i.e. minimizing the maximum completion time). Simulation results demonstrate the feasibility, effectiveness and robustness of the proposed hybrid algorithm. Meanwhile, the effects of noise magnitude and number of evaluation on searching performances are also investigated.",
"corpus_id": 26305172,
"score": 0
},
{
"doc_id": "10223976",
"title": "A combinational feature selection and ensemble neural network method for classification of gene expression data",
"abstract": "BackgroundMicroarray experiments are becoming a powerful tool for clinical diagnosis, as they have the potential to discover gene expression patterns that are characteristic for a particular disease. To date, this problem has received most attention in the context of cancer research, especially in tumor classification. Various feature selection methods and classifier design strategies also have been generally used and compared. However, most published articles on tumor classification have applied a certain technique to a certain dataset, and recently several researchers compared these techniques based on several public datasets. But, it has been verified that differently selected features reflect different aspects of the dataset and some selected features can obtain better solutions on some certain problems. At the same time, faced with a large amount of microarray data with little knowledge, it is difficult to find the intrinsic characteristics using traditional methods. In this paper, we attempt to introduce a combinational feature selection method in conjunction with ensemble neural networks to generally improve the accuracy and robustness of sample classification.ResultsWe validate our new method on several recent publicly available datasets both with predictive accuracy of testing samples and through cross validation. Compared with the best performance of other current methods, remarkably improved results can be obtained using our new strategy on a wide range of different datasets.ConclusionsThus, we conclude that our methods can obtain more information in microarray data to get more accurate classification and also can help to extract the latent marker genes of the diseases for better diagnosis and treatment.",
"corpus_id": 10223976,
"score": 0
},
{
"doc_id": "8632899",
"title": "Accessible image search",
"abstract": "There are about 8% of men and 0.8% of women suffering from colorblindness. We show that the existing image search techniques cannot provide satisfactory results for these users, since many images will not be well perceived by them due to the loss of color information. In this paper, we introduce a scheme named Accessible Image Search (AIS) to accommodate these users. Different from the general image search scheme that aims at returning more relevant results, AIS further takes into account the colorblind accessibilities of the returned results, i.e., the image qualities in the eyes of colorblind users. The scheme includes two components: accessibility assessment and accessibility improvement. For accessibility assessment, we introduce an analysisbased method and a learning-based method. Based on the measured accessibility scores, different reranking methods can be performed to prioritize the images with high accessibilities. In accessibility improvement component, we propose an efficient recoloring algorithm to modify the colors of the images such that they can be better perceived by colorblind users. We also propose the Accessibility Average Precision (AAP) for AIS as a complementary performance evaluation measure to the conventional relevance-based evaluation methods. Experimental results with more than 60,000 images and 20 anonymous colorblind users demonstrate the effectiveness and usefulness of the proposed scheme.",
"corpus_id": 8632899,
"score": 0
},
{
"doc_id": "17229395",
"title": "Region-of-interest coding of 3D mesh based on wavelet transform",
"abstract": "A scheme for the region of interest (ROI) coding of 3D meshes is proposed for the first time. The ROI is encoded with higher fidelity than the rest region, and the \"priority\" of ROI relative to the rest region (background, BG) can be specified by encoder or decoder (user). Wavelet transform is used on 3D mesh and zerotrees are adopted to organize the coefficients. The wavelet coefficients of ROI are scaled up and encoded with a modified set partitioning in hierarchical trees (SPIHT) algorithm. In additional, a fast algorithm is proposed for creating the ROI mask. Once the quality of reconstructed ROI becomes high enough, the transmission can be intermitted and much transmission bandwidth and storage space will be saved consequently.",
"corpus_id": 17229395,
"score": 0
}
] |
arnetminer | {
"doc_id": "6193650",
"title": "Dealing with different distributions in learning from",
"abstract": "In the problem of learning with positive and unlabeled examples, existing research all assumes that positive examples P and the hidden positive examples in the unlabeled set U are generated from the same distribution. This assumption may be violated in practice. In such cases, existing methods perform poorly. This paper proposes a novel technique A-EM to deal with the problem. Experimental results with product page classification demonstrate the effectiveness of the proposed technique.",
"corpus_id": 6193650
} | [
{
"doc_id": "61155076",
"title": "Opinion Mining",
"abstract": "Описанл процедуру формування інтегрального показника об’єкта згідно з відгуками користувачів під час застосування методів оцінювання опінії текстової інформації у web-документах. Запропоновано використання лінгвістичних змінних та застосування вагових коефіцієнтів для достовірнішого результату оцінювання емоційного забарвлення текстової інформації. Ключові слова: опінія, емоційне забарвлення, об’єкт, інтегральний показник, лінгвістична змінна, ваговий коефіцієнт.",
"corpus_id": 61155076,
"score": 1
},
{
"doc_id": "8786674",
"title": "Entity discovery and assignment for opinion mining applications",
"abstract": "Opinion mining became an important topic of study in recent years due to its wide range of applications. There are also many companies offering opinion mining services. One problem that has not been studied so far is the assignment of entities that have been talked about in each sentence. Let us use forum discussions about products as an example to make the problem concrete. In a typical discussion post, the author may give opinions on multiple products and also compare them. The issue is how to detect what products have been talked about in each sentence. If the sentence contains the product names, they need to be identified. We call this problem entity discovery. If the product names are not explicitly mentioned in the sentence but are implied due to the use of pronouns and language conventions, we need to infer the products. We call this problem entity assignment. These problems are important because without knowing what products each sentence talks about the opinion mined from the sentence is of little use. In this paper, we study these problems and propose two effective methods to solve the problems. Entity discovery is based on pattern discovery and entity assignment is based on mining of comparative sentences. Experimental results using a large number of forum posts demonstrate the effectiveness of the technique. Our system has also been successfully tested in a commercial setting.",
"corpus_id": 8786674,
"score": 1
},
{
"doc_id": "2182731",
"title": "Speed-up iterative frequent itemset mining with constraint changes",
"abstract": "Mining of frequent itemsets is a fundamental data mining task. Past research has proposed many efficient algorithms for this purpose. Recent work also highlighted the importance of using constraints to focus the mining process to mine only those relevant itemsets. In practice, data mining is often an interactive and iterative process. The user typically changes constraints and runs the mining algorithm many times before being satisfied with the final results. This interactive process is very time consuming. Existing mining algorithms are unable to take advantage of this iterative process to use previous mining results to speed up the current mining process. This results in an enormous waste of time and computation. In this paper, we propose an efficient technique to utilize previous mining results to improve the efficiency of current mining when constraints are changed. We first introduce the concept of tree boundary to summarize useful information available from previous mining. We then show that the tree boundary provides an effective and efficient framework for the new mining. The proposed technique has been implemented in the context of two existing frequent itemset mining algorithms, FP-tree and tree projection. Experiment results on both synthetic and real-life datasets show that the proposed approach achieves a dramatic saving of computation.",
"corpus_id": 2182731,
"score": 1
},
{
"doc_id": "15810340",
"title": "Discovering the set of fundamental rule changes",
"abstract": "The world around us changes constantly. Knowing what has changed is an important part of our lives. For businesses, recognizing changes is also crucial. It allows businesses to adapt themselves to the changing market needs. In this paper, we study changes of association rules from one time period to another. One approach is to compare the supports and/or confidences of each rule in the two time periods and report the differences. This technique, however, is too simplistic as it tends to report a huge number of rule changes, and many of them are, in fact, simply the snowball effect of a small subset of fundamental changes. Here, we present a technique to highlight the small subset of fundamental changes. A change is fundamental if it cannot be explained by some other changes. The proposed technique has been applied to a number of real-life datasets. Experiments results show that the number of rules whose changes are unexplainable is quite small (about 20% of the total number of changes discovered), and many of these unexplainable changes reflect some fundamental shifts in the application domain.",
"corpus_id": 15810340,
"score": 1
},
{
"doc_id": "12241191",
"title": "Finding Actionable Knowledge via Automated Comparison",
"abstract": "The problem of finding interesting and actionable patterns is a major challenge in data mining. It has been studied by many data mining researchers. The issue is that data mining algorithms often generate too many patterns, which make it very hard for the user to find those truly useful ones. Over the years many techniques have been proposed. However, few have made it to real-life applications. At the end of 2005, we built a data mining system for Motorola (called Opportunity Map) to enable the user to explore the space of a large number of rules in order to find actionable knowledge. The approach is based on the concept of rule cubes and operations on rule cubes. A rule cube is similar to a data cube, but stores rules. Since its deployment, some issues have also been identified during the regular use of the system in Motorola. One of the key issues is that although the operations on rule cubes are flexible, each operation is primitive and has to be initiated by the user. Finding a piece of actionable knowledge typically involves many operations and intense visual inspections, which are labor-intensive and time-consuming. From interactions with our users, we identified a generic problem that is crucial for finding actionable knowledge. The problem involves extensive comparison of sub-populations and identification of the cause of their differences. This paper first defines the problem and then proposes an effective method to solve the problem automatically. To the best of our knowledge, there is no reported study of this problem. The new method has been added to the Opportunity Map system and is now in daily use in Motorola.",
"corpus_id": 12241191,
"score": 1
},
{
"doc_id": "11561145",
"title": "Collective Behavior Analysis of a Class of Social Foraging Swarms",
"abstract": "This paper considers an anisotropic swarm model that consists of a group of mobile autonomous agents with an attraction-repulsion function that can guarantee collision avoidance between agents and a Gaussian-type attractant/repellent nutrient profile. The swarm behavior is a result of a balance between inter-individual interplays as well as the interplays of the swarm individuals (agents) with their environment. It is proved that the members of a reciprocal swarm will aggregate and eventually form a cohesive cluster of finite size. It is shown that the swarm system is completely stable, that is, every solution converges to the equilibrium point set of the system. Moreover, it is also shown that all the swarm individuals will converge to more favorable areas of the Gaussian profile under certain conditions. The results of this paper provide further insight into the effect of the interaction pattern on self-organized motion for a Gaussian-type attractant/repellent nutrient profile in a swarm system.",
"corpus_id": 11561145,
"score": 0
},
{
"doc_id": "17391590",
"title": "Complex Analysis of Anisotropic Swarms",
"abstract": "This paper considers a continuous time swarm model with individuals moving with a nutrient profile (or an attractant/repellent) in an n-dimensional space. The swarm behavior is a result of a balance between inter-individual interplays as well as the interplays of the swarm agents with their environment. It is proved that the swarm members aggregate and eventually form a cohesive cluster of finite size around the swarm weighted center in a finite time under certain conditions.",
"corpus_id": 17391590,
"score": 0
},
{
"doc_id": "33407170",
"title": "Manufacturing Grid: Needs, Concept, and Architecture",
"abstract": "As a new approach, grid technology is rapidly used in scientific computing, large-scale data management, and collaborative work. But in the field of manufacturing, the application of grid is just at the beginning. The paper proposes the concept of manufacturing. The needs, definition and architecture of manufacturing gird are discussed, which explains why needs manufacturing grid, what is manufacturing grid and how to construct a manufacturing grid system.",
"corpus_id": 33407170,
"score": 0
},
{
"doc_id": "9933281",
"title": "A Learning Process Using SVMs for Multi-agents Decision Classification",
"abstract": "In order to resolve decision classification problem in multiple agents system, this paper first introduces the architecture of multiple agents system. It then proposes a support vector machines based assessment approach, which has the ability to learn the rules form previous assessment results from domain experts. Finally, the experiment are conducted on the artificially dataset to illustrate how the proposed works, and the results show the proposed method has effective learning ability for decision classification problems.",
"corpus_id": 9933281,
"score": 0
},
{
"doc_id": "1710451",
"title": "Joint estimation of image and coil sensitivities in parallel MRI",
"abstract": "Parallel magnetic resonance imaging (MRI) using multichannel receiver coils has emerged as an effective tool to reduce imaging time in various dynamic imaging applications. However, there are still a number of image reconstruction issues that have not been fully addressed, thereby limiting the level of speed enhancement achievable with the technology. This paper considers the inaccuracy of coil sensitivities in conventional reconstruction methods such as SENSE, and reformulates the image reconstruction problem as a joint estimation of the coil sensitivities and the desired image, which is solved by an iterative algorithm. Experimental results demonstrate the effectiveness of the proposed method especially when large acceleration factors are used",
"corpus_id": 1710451,
"score": 0
}
] |
arnetminer | {
"doc_id": "1493739",
"title": "Expanding Domain Sentiment Lexicon through Double Propagation",
"abstract": "In most sentiment analysis applications, the sentiment lexicon plays a key role. However, it is hard, if not impossible, to collect and maintain a universal sentiment lexicon for all application domains because different words may be used in different domains. The main existing technique extracts such sentiment words from a large domain corpus based on different conjunctions and the idea of sentiment coherency in a sentence. In this paper, we propose a novel propagation approach that exploits the relations between sentiment words and topics or product features that the sentiment words modify, and also sentiment words and product features themselves to extract new sentiment words. As the method propagates information through both sentiment words and features, we call it double propagation. The extraction rules are designed based on relations described in dependency trees. A new method is also proposed to assign polarities to newly discovered sentiment words in a domain. Experimental results show that our approach is able to extract a large number of new sentiment words. The polarity assignment method is also effective.",
"corpus_id": 1493739
} | [
{
"doc_id": "61155076",
"title": "Opinion Mining",
"abstract": "Описанл процедуру формування інтегрального показника об’єкта згідно з відгуками користувачів під час застосування методів оцінювання опінії текстової інформації у web-документах. Запропоновано використання лінгвістичних змінних та застосування вагових коефіцієнтів для достовірнішого результату оцінювання емоційного забарвлення текстової інформації. Ключові слова: опінія, емоційне забарвлення, об’єкт, інтегральний показник, лінгвістична змінна, ваговий коефіцієнт.",
"corpus_id": 61155076,
"score": 1
},
{
"doc_id": "12241191",
"title": "Finding Actionable Knowledge via Automated Comparison",
"abstract": "The problem of finding interesting and actionable patterns is a major challenge in data mining. It has been studied by many data mining researchers. The issue is that data mining algorithms often generate too many patterns, which make it very hard for the user to find those truly useful ones. Over the years many techniques have been proposed. However, few have made it to real-life applications. At the end of 2005, we built a data mining system for Motorola (called Opportunity Map) to enable the user to explore the space of a large number of rules in order to find actionable knowledge. The approach is based on the concept of rule cubes and operations on rule cubes. A rule cube is similar to a data cube, but stores rules. Since its deployment, some issues have also been identified during the regular use of the system in Motorola. One of the key issues is that although the operations on rule cubes are flexible, each operation is primitive and has to be initiated by the user. Finding a piece of actionable knowledge typically involves many operations and intense visual inspections, which are labor-intensive and time-consuming. From interactions with our users, we identified a generic problem that is crucial for finding actionable knowledge. The problem involves extensive comparison of sub-populations and identification of the cause of their differences. This paper first defines the problem and then proposes an effective method to solve the problem automatically. To the best of our knowledge, there is no reported study of this problem. The new method has been added to the Opportunity Map system and is now in daily use in Motorola.",
"corpus_id": 12241191,
"score": 1
},
{
"doc_id": "2182731",
"title": "Speed-up iterative frequent itemset mining with constraint changes",
"abstract": "Mining of frequent itemsets is a fundamental data mining task. Past research has proposed many efficient algorithms for this purpose. Recent work also highlighted the importance of using constraints to focus the mining process to mine only those relevant itemsets. In practice, data mining is often an interactive and iterative process. The user typically changes constraints and runs the mining algorithm many times before being satisfied with the final results. This interactive process is very time consuming. Existing mining algorithms are unable to take advantage of this iterative process to use previous mining results to speed up the current mining process. This results in an enormous waste of time and computation. In this paper, we propose an efficient technique to utilize previous mining results to improve the efficiency of current mining when constraints are changed. We first introduce the concept of tree boundary to summarize useful information available from previous mining. We then show that the tree boundary provides an effective and efficient framework for the new mining. The proposed technique has been implemented in the context of two existing frequent itemset mining algorithms, FP-tree and tree projection. Experiment results on both synthetic and real-life datasets show that the proposed approach achieves a dramatic saving of computation.",
"corpus_id": 2182731,
"score": 1
},
{
"doc_id": "6193650",
"title": "Dealing with different distributions in learning from",
"abstract": "In the problem of learning with positive and unlabeled examples, existing research all assumes that positive examples P and the hidden positive examples in the unlabeled set U are generated from the same distribution. This assumption may be violated in practice. In such cases, existing methods perform poorly. This paper proposes a novel technique A-EM to deal with the problem. Experimental results with product page classification demonstrate the effectiveness of the proposed technique.",
"corpus_id": 6193650,
"score": 1
},
{
"doc_id": "8786674",
"title": "Entity discovery and assignment for opinion mining applications",
"abstract": "Opinion mining became an important topic of study in recent years due to its wide range of applications. There are also many companies offering opinion mining services. One problem that has not been studied so far is the assignment of entities that have been talked about in each sentence. Let us use forum discussions about products as an example to make the problem concrete. In a typical discussion post, the author may give opinions on multiple products and also compare them. The issue is how to detect what products have been talked about in each sentence. If the sentence contains the product names, they need to be identified. We call this problem entity discovery. If the product names are not explicitly mentioned in the sentence but are implied due to the use of pronouns and language conventions, we need to infer the products. We call this problem entity assignment. These problems are important because without knowing what products each sentence talks about the opinion mined from the sentence is of little use. In this paper, we study these problems and propose two effective methods to solve the problems. Entity discovery is based on pattern discovery and entity assignment is based on mining of comparative sentences. Experimental results using a large number of forum posts demonstrate the effectiveness of the technique. Our system has also been successfully tested in a commercial setting.",
"corpus_id": 8786674,
"score": 1
},
{
"doc_id": "10223976",
"title": "A combinational feature selection and ensemble neural network method for classification of gene expression data",
"abstract": "BackgroundMicroarray experiments are becoming a powerful tool for clinical diagnosis, as they have the potential to discover gene expression patterns that are characteristic for a particular disease. To date, this problem has received most attention in the context of cancer research, especially in tumor classification. Various feature selection methods and classifier design strategies also have been generally used and compared. However, most published articles on tumor classification have applied a certain technique to a certain dataset, and recently several researchers compared these techniques based on several public datasets. But, it has been verified that differently selected features reflect different aspects of the dataset and some selected features can obtain better solutions on some certain problems. At the same time, faced with a large amount of microarray data with little knowledge, it is difficult to find the intrinsic characteristics using traditional methods. In this paper, we attempt to introduce a combinational feature selection method in conjunction with ensemble neural networks to generally improve the accuracy and robustness of sample classification.ResultsWe validate our new method on several recent publicly available datasets both with predictive accuracy of testing samples and through cross validation. Compared with the best performance of other current methods, remarkably improved results can be obtained using our new strategy on a wide range of different datasets.ConclusionsThus, we conclude that our methods can obtain more information in microarray data to get more accurate classification and also can help to extract the latent marker genes of the diseases for better diagnosis and treatment.",
"corpus_id": 10223976,
"score": 0
},
{
"doc_id": "9847835",
"title": "A Grid-Based System for the Multi-reservoir Optimal Scheduling in Huaihe River Basin",
"abstract": "The up- and mid-stream of Huaihe River Basin is a complex system of reservoirs and river-ways. It is difficult for flood control and reservoir scheduling. It is ineffective to perform sequential computations for optimal scheduling of multi-reservoir due to the system complexity. In this paper, we implemented the multi-reservoir optimal scheduling algorithm in a Grid environment. Key components as multiple Protocols were developed within the layers of Grid architecture. The proposed Grid computing architecture provides an innovative design of multi-reservoir optimal scheduling system for increasing the accuracy of flood control and speedup of computing.",
"corpus_id": 9847835,
"score": 0
},
{
"doc_id": "3413319",
"title": "Filtering Spam in Social Tagging System with Dynamic Behavior Analysis",
"abstract": "Spam in social tagging systems introduced by some malicious participants has become a serious problem for its global popularizing. Some studies which can be deduced to static user data analysis have been presented to combat tag spam, but either they do not give an exact evaluation or the algorithms’ performances are not good enough. In this paper, we proposed a novel method based on analysis of dynamic user behavior data for the notion that users’ behaviors in social tagging system can reflect the quality of tags more accurately. Through modeling the different categories of participants’ behaviors, we extract tag-associated actions which can be used to estimate whether tag is spam, and then present our algorithm that can filter the tag spam in the results of social search. The experiment results show that our method indeed outperforms the existing methods based on static data and effectively defends against the tag spam in various spam attacks.",
"corpus_id": 3413319,
"score": 0
},
{
"doc_id": "14599521",
"title": "SyD: A Middleware Testbed for Collaborative Applications over Small Heterogeneous Devices and Data Stores",
"abstract": "Developing a collaborative application running on a collection of heterogeneous, possibly mobile, devices, each potentially hosting data stores, using existing middleware technologies such as JXTA, BREW, compact .NET and J2ME requires too many ad-hoc techniques as well as cumbersome and time-consuming programming. Our System on Mobile Devices (SyD) middleware, on the other hand, has a modular architecture that makes such application development very systematic and streamlined. The architecture supports transactions over mobile data stores, with a range of remote group invocation options and embedded interdependencies among such data store objects. The architecture further provides a persistent uniform object view, group transaction with Quality of Service (QoS) specifications, and XML vocabulary for inter-device communication. This paper presents the basic SyD concepts and introduces the architecture and the design of the SyD middleware and its components. We also provide guidelines for SyD application development and deployment process. We include the basic performance figures of SyD components and a few SyD applications on Personal Digital Assistant (PDA) platforms. We believe that SyD is the first comprehensive working prototype of its kind, with a small code footprint of 112 KB with 76 KB being device-resident, and has a good potential for incorporating many ideas for performance extensions, scalability, QoS, workflows and security.",
"corpus_id": 14599521,
"score": 0
},
{
"doc_id": "42790149",
"title": "Dynamic Complexities in a Lotka-volterra Predator-prey Model Concerning impulsive Control Strategy",
"abstract": "Based on the classical Lotka–Volterra predator–prey system, an impulsive differential equation to model the process of periodically releasing natural enemies and spraying pesticides at different fixed times for pest control is proposed and investigated. It is proved that there exists a globally asymptotically stable pest-eradication periodic solution when the impulsive period is less than some critical value. Otherwise, the system can be permanent. We observe that our impulsive control strategy is more effective than the classical one if we take chemical control efficiently. Numerical results show that the system we considered has more complex dynamics including period-doubling bifurcation, symmetry-breaking bifurcation, period-halving bifurcation, quasi-periodic oscillation, chaos and nonunique dynamics, meaning that several attractors coexist. Finally, a pest–predator stage-structured model for the pest concerning this kind of impulsive control strategy is proposed, and we also show that there exists a ...",
"corpus_id": 42790149,
"score": 0
}
] |
arnetminer | {
"doc_id": "3127397",
"title": "Visually Aided Exploration of Interesting Association Rules",
"abstract": "Association rules are a class of important regularities in databases. They are found to be very useful in practical applications. However, the number of association rules discovered in a database can be huge, thus making manual inspection and analysis of the rules difficult. In this paper, we propose a new framework to allow the user to explore the discovered rules to identify those interesting ones. This framework has two components, an interestingness analysis component, and a visualization component. The interestingness analysis component analyzes and organizes the discovered rules according to various interestingness criteria with respect to the user's existing knowledge. The visualization component enables the user to visually explore those potentially interesting rules. The key strength of the visualization component is that from a single screen, the user is able to obtain a global and yet detailed picture of various interesting aspects of the discovered rules. Enhanced with color effects, the user can easily and quickly focus his/her attention on the more interesting/useful rules.",
"corpus_id": 3127397
} | [
{
"doc_id": "6193650",
"title": "Dealing with different distributions in learning from",
"abstract": "In the problem of learning with positive and unlabeled examples, existing research all assumes that positive examples P and the hidden positive examples in the unlabeled set U are generated from the same distribution. This assumption may be violated in practice. In such cases, existing methods perform poorly. This paper proposes a novel technique A-EM to deal with the problem. Experimental results with product page classification demonstrate the effectiveness of the proposed technique.",
"corpus_id": 6193650,
"score": 1
},
{
"doc_id": "12241191",
"title": "Finding Actionable Knowledge via Automated Comparison",
"abstract": "The problem of finding interesting and actionable patterns is a major challenge in data mining. It has been studied by many data mining researchers. The issue is that data mining algorithms often generate too many patterns, which make it very hard for the user to find those truly useful ones. Over the years many techniques have been proposed. However, few have made it to real-life applications. At the end of 2005, we built a data mining system for Motorola (called Opportunity Map) to enable the user to explore the space of a large number of rules in order to find actionable knowledge. The approach is based on the concept of rule cubes and operations on rule cubes. A rule cube is similar to a data cube, but stores rules. Since its deployment, some issues have also been identified during the regular use of the system in Motorola. One of the key issues is that although the operations on rule cubes are flexible, each operation is primitive and has to be initiated by the user. Finding a piece of actionable knowledge typically involves many operations and intense visual inspections, which are labor-intensive and time-consuming. From interactions with our users, we identified a generic problem that is crucial for finding actionable knowledge. The problem involves extensive comparison of sub-populations and identification of the cause of their differences. This paper first defines the problem and then proposes an effective method to solve the problem automatically. To the best of our knowledge, there is no reported study of this problem. The new method has been added to the Opportunity Map system and is now in daily use in Motorola.",
"corpus_id": 12241191,
"score": 1
},
{
"doc_id": "2182731",
"title": "Speed-up iterative frequent itemset mining with constraint changes",
"abstract": "Mining of frequent itemsets is a fundamental data mining task. Past research has proposed many efficient algorithms for this purpose. Recent work also highlighted the importance of using constraints to focus the mining process to mine only those relevant itemsets. In practice, data mining is often an interactive and iterative process. The user typically changes constraints and runs the mining algorithm many times before being satisfied with the final results. This interactive process is very time consuming. Existing mining algorithms are unable to take advantage of this iterative process to use previous mining results to speed up the current mining process. This results in an enormous waste of time and computation. In this paper, we propose an efficient technique to utilize previous mining results to improve the efficiency of current mining when constraints are changed. We first introduce the concept of tree boundary to summarize useful information available from previous mining. We then show that the tree boundary provides an effective and efficient framework for the new mining. The proposed technique has been implemented in the context of two existing frequent itemset mining algorithms, FP-tree and tree projection. Experiment results on both synthetic and real-life datasets show that the proposed approach achieves a dramatic saving of computation.",
"corpus_id": 2182731,
"score": 1
},
{
"doc_id": "8786674",
"title": "Entity discovery and assignment for opinion mining applications",
"abstract": "Opinion mining became an important topic of study in recent years due to its wide range of applications. There are also many companies offering opinion mining services. One problem that has not been studied so far is the assignment of entities that have been talked about in each sentence. Let us use forum discussions about products as an example to make the problem concrete. In a typical discussion post, the author may give opinions on multiple products and also compare them. The issue is how to detect what products have been talked about in each sentence. If the sentence contains the product names, they need to be identified. We call this problem entity discovery. If the product names are not explicitly mentioned in the sentence but are implied due to the use of pronouns and language conventions, we need to infer the products. We call this problem entity assignment. These problems are important because without knowing what products each sentence talks about the opinion mined from the sentence is of little use. In this paper, we study these problems and propose two effective methods to solve the problems. Entity discovery is based on pattern discovery and entity assignment is based on mining of comparative sentences. Experimental results using a large number of forum posts demonstrate the effectiveness of the technique. Our system has also been successfully tested in a commercial setting.",
"corpus_id": 8786674,
"score": 1
},
{
"doc_id": "61155076",
"title": "Opinion Mining",
"abstract": "Описанл процедуру формування інтегрального показника об’єкта згідно з відгуками користувачів під час застосування методів оцінювання опінії текстової інформації у web-документах. Запропоновано використання лінгвістичних змінних та застосування вагових коефіцієнтів для достовірнішого результату оцінювання емоційного забарвлення текстової інформації. Ключові слова: опінія, емоційне забарвлення, об’єкт, інтегральний показник, лінгвістична змінна, ваговий коефіцієнт.",
"corpus_id": 61155076,
"score": 1
},
{
"doc_id": "410427",
"title": "Database and location management schemes for mobile communications",
"abstract": "Signaling traffic incurred in tracking mobile users and delivering enhanced services causes an additional load in the network. Efficient database and location management schemes are needed to meet the challenges from high density and mobility of users, and various service features. In this paper, the general location control and management function is treated as the combination of two parts, the global and local scope. New schemes and methods are proposed, and improvements achieved over established basic schemes are shown by using simulations.",
"corpus_id": 410427,
"score": 0
},
{
"doc_id": "44654033",
"title": "Existence and uniqueness of solutionsto first-order multipoint boundary value problems",
"abstract": "In this paper, we give existence and uniqueness results for solutions of multipoint boundary value problems of the form \nx'=f(t,x(t))+e(t),t∈(0,1),∑j=1mAjx(ηj)=0, \nwhere ƒ : [0,1] × Rn → Rn is a Caratheodory function, Ajs (j = 1, 2,, m) are constant square matrices of order n, 0 < η1 η2 << ηm−1, < ηm ⪯ 1, and e(t) ∈ L1([0,1], Rn). The existence of solutions is proven by the coincidence degree theory. As an application, we also give one example to demonstrate our results.",
"corpus_id": 44654033,
"score": 0
},
{
"doc_id": "7630190",
"title": "Theoretical study on the OH + CH3NHC(O)OCH3 reaction",
"abstract": "The multiple-channel reactions OH + CH3NHC(O)OCH3 --> products are investigated by direct dynamics method. The optimized geometries, frequencies, and minimum energy path are all obtained at the MP2/6-311+G(d,p) level, and energetic information is further refined by the BMC-CCSD (single-point) method. The rate constants for every reaction channels, R1, R2, R3, and R4, are calculated by canonical variational transition state theory with small-curvature tunneling correction over the temperature range 200-1000 K. The total rate constants are in good agreement with the available experimental data and the two-parameter expression k(T) = 3.95 x 10(-12) exp(15.41/T) cm3 molecule(-1) s(-1) over the temperature range 200-1000 K is given. Our calculations indicate that hydrogen abstraction channels R1 and R2 are the major channels due to the smaller barrier height among four channels considered, and the other two channels to yield CH3NC(O)OCH3 + H2O and CH3NHC(O)(OH)OCH3 + H2O are minor channels over the whole temperature range.",
"corpus_id": 7630190,
"score": 0
},
{
"doc_id": "3413319",
"title": "Filtering Spam in Social Tagging System with Dynamic Behavior Analysis",
"abstract": "Spam in social tagging systems introduced by some malicious participants has become a serious problem for its global popularizing. Some studies which can be deduced to static user data analysis have been presented to combat tag spam, but either they do not give an exact evaluation or the algorithms’ performances are not good enough. In this paper, we proposed a novel method based on analysis of dynamic user behavior data for the notion that users’ behaviors in social tagging system can reflect the quality of tags more accurately. Through modeling the different categories of participants’ behaviors, we extract tag-associated actions which can be used to estimate whether tag is spam, and then present our algorithm that can filter the tag spam in the results of social search. The experiment results show that our method indeed outperforms the existing methods based on static data and effectively defends against the tag spam in various spam attacks.",
"corpus_id": 3413319,
"score": 0
},
{
"doc_id": "14675051",
"title": "REGULARIZED SENSE RECONSTRUCTION USING ITERATIVELY REFINED TOTAL VARIATION METHOD",
"abstract": "SENSE has been widely accepted as one of the standard reconstruction algorithms for parallel MRI. When large acceleration factors are employed, the SENSE reconstruction becomes very ill-conditioned. For Cartesian SENSE, Tikhonov regularization has been commonly used. However, the Tikhonov regularized image usually tends to be overly smooth, and a high-quality regularization image is desirable to alleviate this problem but is not available. In this paper, we propose a new SENSE regularization technique that is based on total variation with iterated refinement using Bregman iteration. It penalizes highly oscillatory noise but allows sharp edges in reconstruction without the need for prior information. In addition, the Bregman iteration refines the image details iteratively. The method is shown to be able to significantly reduce the artifacts in SENSE reconstruction",
"corpus_id": 14675051,
"score": 0
}
] |
arnetminer | {
"doc_id": "2485430",
"title": "Targeting the right students using data mining",
"abstract": "education domain offers a fertile ground for many interesting and challenging data mining applications. These applications can help both educators and students, and improve the quality of education. In this paper, we present a real-life application for the Gifted Education Programme (GEP) of the Ministry of Education (MOE) in Singapore. The application involves many data mining tasks. This paper focuses only on one task, namely, selecting students for remedial classes. Traditionally, a cut-off mark for each subject is used to select the weak students. That is, those students whose scores in a subject fall below the cut-off mark for the subject are advised to take further classes in the subject. In this paper, we show that this traditional method requires too many students to take part in the remedial classes. This not only increases the teaching load of the teachers, but also gives unnecessary burdens to students, which is particularly undesirable in our case because the GEP students are generally taking more subjects than non-GEP students, and the GEP students are encouraged to have more time to explore advanced topics. With the help of data mining, we are able to select the targeted students much more precisely.",
"corpus_id": 2485430
} | [
{
"doc_id": "61155076",
"title": "Opinion Mining",
"abstract": "Описанл процедуру формування інтегрального показника об’єкта згідно з відгуками користувачів під час застосування методів оцінювання опінії текстової інформації у web-документах. Запропоновано використання лінгвістичних змінних та застосування вагових коефіцієнтів для достовірнішого результату оцінювання емоційного забарвлення текстової інформації. Ключові слова: опінія, емоційне забарвлення, об’єкт, інтегральний показник, лінгвістична змінна, ваговий коефіцієнт.",
"corpus_id": 61155076,
"score": 1
},
{
"doc_id": "2182731",
"title": "Speed-up iterative frequent itemset mining with constraint changes",
"abstract": "Mining of frequent itemsets is a fundamental data mining task. Past research has proposed many efficient algorithms for this purpose. Recent work also highlighted the importance of using constraints to focus the mining process to mine only those relevant itemsets. In practice, data mining is often an interactive and iterative process. The user typically changes constraints and runs the mining algorithm many times before being satisfied with the final results. This interactive process is very time consuming. Existing mining algorithms are unable to take advantage of this iterative process to use previous mining results to speed up the current mining process. This results in an enormous waste of time and computation. In this paper, we propose an efficient technique to utilize previous mining results to improve the efficiency of current mining when constraints are changed. We first introduce the concept of tree boundary to summarize useful information available from previous mining. We then show that the tree boundary provides an effective and efficient framework for the new mining. The proposed technique has been implemented in the context of two existing frequent itemset mining algorithms, FP-tree and tree projection. Experiment results on both synthetic and real-life datasets show that the proposed approach achieves a dramatic saving of computation.",
"corpus_id": 2182731,
"score": 1
},
{
"doc_id": "12241191",
"title": "Finding Actionable Knowledge via Automated Comparison",
"abstract": "The problem of finding interesting and actionable patterns is a major challenge in data mining. It has been studied by many data mining researchers. The issue is that data mining algorithms often generate too many patterns, which make it very hard for the user to find those truly useful ones. Over the years many techniques have been proposed. However, few have made it to real-life applications. At the end of 2005, we built a data mining system for Motorola (called Opportunity Map) to enable the user to explore the space of a large number of rules in order to find actionable knowledge. The approach is based on the concept of rule cubes and operations on rule cubes. A rule cube is similar to a data cube, but stores rules. Since its deployment, some issues have also been identified during the regular use of the system in Motorola. One of the key issues is that although the operations on rule cubes are flexible, each operation is primitive and has to be initiated by the user. Finding a piece of actionable knowledge typically involves many operations and intense visual inspections, which are labor-intensive and time-consuming. From interactions with our users, we identified a generic problem that is crucial for finding actionable knowledge. The problem involves extensive comparison of sub-populations and identification of the cause of their differences. This paper first defines the problem and then proposes an effective method to solve the problem automatically. To the best of our knowledge, there is no reported study of this problem. The new method has been added to the Opportunity Map system and is now in daily use in Motorola.",
"corpus_id": 12241191,
"score": 1
},
{
"doc_id": "8786674",
"title": "Entity discovery and assignment for opinion mining applications",
"abstract": "Opinion mining became an important topic of study in recent years due to its wide range of applications. There are also many companies offering opinion mining services. One problem that has not been studied so far is the assignment of entities that have been talked about in each sentence. Let us use forum discussions about products as an example to make the problem concrete. In a typical discussion post, the author may give opinions on multiple products and also compare them. The issue is how to detect what products have been talked about in each sentence. If the sentence contains the product names, they need to be identified. We call this problem entity discovery. If the product names are not explicitly mentioned in the sentence but are implied due to the use of pronouns and language conventions, we need to infer the products. We call this problem entity assignment. These problems are important because without knowing what products each sentence talks about the opinion mined from the sentence is of little use. In this paper, we study these problems and propose two effective methods to solve the problems. Entity discovery is based on pattern discovery and entity assignment is based on mining of comparative sentences. Experimental results using a large number of forum posts demonstrate the effectiveness of the technique. Our system has also been successfully tested in a commercial setting.",
"corpus_id": 8786674,
"score": 1
},
{
"doc_id": "6193650",
"title": "Dealing with different distributions in learning from",
"abstract": "In the problem of learning with positive and unlabeled examples, existing research all assumes that positive examples P and the hidden positive examples in the unlabeled set U are generated from the same distribution. This assumption may be violated in practice. In such cases, existing methods perform poorly. This paper proposes a novel technique A-EM to deal with the problem. Experimental results with product page classification demonstrate the effectiveness of the proposed technique.",
"corpus_id": 6193650,
"score": 1
},
{
"doc_id": "6447863",
"title": "Controllability of a Leader-Follower Dynamic Network with Interaction Time Delays",
"abstract": "This paper studies the controllability of a leader-follower network of dynamic agents in the presence of communication delays. The network dynamics is governed by nearest neighbor rules over a fixed communication topology. The leader is a particular agent acting as an external input to steer the other member agents. We derive sufficient conditions of the controllability of the dynamic network and give an example to illustrate the main results.",
"corpus_id": 6447863,
"score": 0
},
{
"doc_id": "14599521",
"title": "SyD: A Middleware Testbed for Collaborative Applications over Small Heterogeneous Devices and Data Stores",
"abstract": "Developing a collaborative application running on a collection of heterogeneous, possibly mobile, devices, each potentially hosting data stores, using existing middleware technologies such as JXTA, BREW, compact .NET and J2ME requires too many ad-hoc techniques as well as cumbersome and time-consuming programming. Our System on Mobile Devices (SyD) middleware, on the other hand, has a modular architecture that makes such application development very systematic and streamlined. The architecture supports transactions over mobile data stores, with a range of remote group invocation options and embedded interdependencies among such data store objects. The architecture further provides a persistent uniform object view, group transaction with Quality of Service (QoS) specifications, and XML vocabulary for inter-device communication. This paper presents the basic SyD concepts and introduces the architecture and the design of the SyD middleware and its components. We also provide guidelines for SyD application development and deployment process. We include the basic performance figures of SyD components and a few SyD applications on Personal Digital Assistant (PDA) platforms. We believe that SyD is the first comprehensive working prototype of its kind, with a small code footprint of 112 KB with 76 KB being device-resident, and has a good potential for incorporating many ideas for performance extensions, scalability, QoS, workflows and security.",
"corpus_id": 14599521,
"score": 0
},
{
"doc_id": "1930626",
"title": "Multi-agent System for Custom Relationship Management with SVMs Tool",
"abstract": "Distributed data mining in the CRM is to learn available knowledge from the customer relationship so as to instruct the strategic behavior. In order to resolve the CRM in distributed data mining, this paper proposes the architecture of distributed data mining for CRM, and then utilizes the support vector machine tool to separate the customs into several classes and manage them. In the end, the practical experiments about one Chinese company are conducted to show the good performance of the proposed approach.",
"corpus_id": 1930626,
"score": 0
},
{
"doc_id": "203663524",
"title": "Multi-Agent Based Network Management Task Decomposition and Scheduling",
"abstract": "The rapid development of Internet makes network management on large-scale network a critical issue. But with the management task of large-scale network becoming more complicated, neither centralized network management nor agent based network management can satisfy the increasing demands. This paper presents a network management framework to support dynamic scheduling decisions. In this framework, some algorithms are proposed to decompose the whole network management task into several groups of sub-tasks. During the course of decomposition, different priorities are assigned to sub-tasks. Then based on the priorities of these sub-tasks, the strategies of agent scheduling are established. Priority-ranked sub-tasks are grouped according to their inter-dependences. Sub-tasks with the same priority are put into the same group and they can be performed in parallel manner, while different groups of sub-tasks with different priorities must be implemented according to the order of their priorities. An experiment has been done with the algorithms, the results of which demonstrate the advantage of the algorithms.",
"corpus_id": 203663524,
"score": 0
},
{
"doc_id": "23481142",
"title": "Robust content-based image indexing using contextual clues and automatic pseudofeedback",
"abstract": "Abstract.In this paper we present a robust information integration approach to identifying images of persons in large collections such as the Web. The underlying system relies on combining content analysis, which involves face detection and recognition, with context analysis, which involves extraction of text or HTML features. Two aspects are explored to test the robustness of this approach: sensitivity of the retrieval performance to the context analysis parameters and automatic construction of a facial image database via automatic pseudofeedback. For the sensitivity testing, we reevaluate system performance while varying context analysis parameters. This is compared with a learning approach where association rules among textual feature values and image relevance are learned via the CN2 algorithm. A face database is constructed by clustering after an initial retrieval relying on face detection and context analysis alone. Experimental results indicate that the approach is robust for identifying and indexing person images.",
"corpus_id": 23481142,
"score": 0
}
] |
arnetminer | {
"doc_id": "195706110",
"title": "A Scalable Peer-to-Peer Overlay for Applications with Time Constraints",
"abstract": "With the development of Internet, p2p is increasingly receiving attention in research. Recently, a class of p2p applications with time constraints appear. These applications require a short time to locate the resource and(or) a low transit delay between the resource user and the resource holder, such as Skype, MSN. In this paper we propose a scalable p2p overlay for applications with time constraints. Our system provides supports for just two operations for uplayered p2p applications: (1) Given a resource key and the node's IP who holds the resource, it registers the resource information to the associated node in at most two overlay hops; and (2) Given a resource key and a time constraint(0 for no constraint), it returns if possible a path(one or two overlay hops) to the resource holder, and the transit delay of the path is lower than the time constraint. Results from theoretical analysis and simulations show that our system is viable and scalable.",
"corpus_id": 195706110
} | [
{
"doc_id": "18763296",
"title": "A Scalable Peer-to-Peer Overlay for Applications with Time Constraints",
"abstract": "With the development of Internet, p2p is increasingly receiving attention in research. Recently, a class of p2p applications with time constraints appear. These applications require a short time to locate the resource and(or) a low transit delay between the resource user and the resource holder, such as Skype, MSN. In this paper we propose a scalable p2p overlay for applications with time constraints. Our system provides supports for just two operations for uplayered p2p applications: (1) Given a resource key and the node's IP who holds the resource, it registers the resource information to the associated node in at most two overlay hops; and (2) Given a resource key and a time constraint(0 for no constraint), it returns if possible a path(one or two overlay hops) to the resource holder, and the transit delay of the path is lower than the time constraint. Results from theoretical analysis and simulations show that our system is viable and scalable.",
"corpus_id": 18763296,
"score": 1
},
{
"doc_id": "17599458",
"title": "Subsequence Similarity Search under Time Shifting",
"abstract": "Time series data naturally arise in many application domains, and the similarity search for time series under dynamic time shifting is prevailing. But most recent research focused on the full length similarity match of two time series. In this paper a basic subsequence similarity search algorithm based on dynamic programming is proposed. For a given query time series, the algorithm can find out the most similar subsequence in a long time series. Furthermore two improved algorithms are also given in this paper. They can reduce the computation amount of the distance matrix for subsequence similarity search. Experiments on real and synthetic data sets show that the improved algorithms can significantly reduce the computation amount and running time compared to the basic algorithm",
"corpus_id": 17599458,
"score": 0
},
{
"doc_id": "17435545",
"title": "Improved spiral sense reconstruction using a multiscale wavelet model",
"abstract": "SENSE has been widely accepted and extensively studied in the community of parallel MRI. Although many regularization approaches have been developed to address the ill-conditioning problem for Cartesian SENSE, fewer efforts have been made to address this problem when the sampling trajectory is non-Cartesian. For non-Cartesian SENSE using the iterative conjugate gradient method, ill- conditioning can degrade not only the signal-to-noise ratio, but also the convergence behavior. This paper proposes a regularization technique for non-Cartesian SENSE using a multiscale wavelet model. The technique models the desired image as a random field whose wavelet transform coefficients obey a generalized Gaussian distribution. The effectiveness of the proposed method has been validated by in vivo experiments.",
"corpus_id": 17435545,
"score": 0
},
{
"doc_id": "31431918",
"title": "Tight Bounds on the Estimation Distance Using Wavelet",
"abstract": "Time series similarity search is of growing importance in many applications. Wavelet transforms are used as a dimensionality reduction technique to permit efficient similarity search over high-dimensional time series data. This paper proposes the tight upper and lower bounds on the estimation distance using wavelet transform, and we show that the traditional distance estimation is only part of our lower bound. According to the lower bound, we can exclude more dissimilar time series than traditional method. And according to the upper bound, we can directly judge whether two time series are similar, and further reduce the number of time series to process in original time domain. The experiments have shown that using the upper and lower tight bounds can significantly improve filter efficiency and reduce running time than traditional method.",
"corpus_id": 31431918,
"score": 0
},
{
"doc_id": "61155076",
"title": "Opinion Mining",
"abstract": "Описанл процедуру формування інтегрального показника об’єкта згідно з відгуками користувачів під час застосування методів оцінювання опінії текстової інформації у web-документах. Запропоновано використання лінгвістичних змінних та застосування вагових коефіцієнтів для достовірнішого результату оцінювання емоційного забарвлення текстової інформації. Ключові слова: опінія, емоційне забарвлення, об’єкт, інтегральний показник, лінгвістична змінна, ваговий коефіцієнт.",
"corpus_id": 61155076,
"score": 0
},
{
"doc_id": "9847835",
"title": "A Grid-Based System for the Multi-reservoir Optimal Scheduling in Huaihe River Basin",
"abstract": "The up- and mid-stream of Huaihe River Basin is a complex system of reservoirs and river-ways. It is difficult for flood control and reservoir scheduling. It is ineffective to perform sequential computations for optimal scheduling of multi-reservoir due to the system complexity. In this paper, we implemented the multi-reservoir optimal scheduling algorithm in a Grid environment. Key components as multiple Protocols were developed within the layers of Grid architecture. The proposed Grid computing architecture provides an innovative design of multi-reservoir optimal scheduling system for increasing the accuracy of flood control and speedup of computing.",
"corpus_id": 9847835,
"score": 0
}
] |
arnetminer | {
"doc_id": "14296672",
"title": "Learning to Identify Unexpected Instances in the Test Set",
"abstract": "Traditional classification involves building a classifier using labeled training examples from a set of predefined classes and then applying the classifier to classify test instances into the same set of classes. In practice, this paradigm can be problematic because the test data may contain instances that do not belong to any of the previously defined classes. Detecting such unexpected instances in the test set is an important issue in practice. The problem can be formulated as learning from positive and unlabeled examples (PU learning). However, current PU learning algorithms require a large proportion of negative instances in the unlabeled set to be effective. This paper proposes a novel technique to solve this problem in the text classification domain. The technique first generates a single artificial negative document AN. The sets P and {AN} are then used to build a naive Bayesian classifier. Our experiment results show that this method is significantly better than existing techniques.",
"corpus_id": 14296672
} | [
{
"doc_id": "8786674",
"title": "Entity discovery and assignment for opinion mining applications",
"abstract": "Opinion mining became an important topic of study in recent years due to its wide range of applications. There are also many companies offering opinion mining services. One problem that has not been studied so far is the assignment of entities that have been talked about in each sentence. Let us use forum discussions about products as an example to make the problem concrete. In a typical discussion post, the author may give opinions on multiple products and also compare them. The issue is how to detect what products have been talked about in each sentence. If the sentence contains the product names, they need to be identified. We call this problem entity discovery. If the product names are not explicitly mentioned in the sentence but are implied due to the use of pronouns and language conventions, we need to infer the products. We call this problem entity assignment. These problems are important because without knowing what products each sentence talks about the opinion mined from the sentence is of little use. In this paper, we study these problems and propose two effective methods to solve the problems. Entity discovery is based on pattern discovery and entity assignment is based on mining of comparative sentences. Experimental results using a large number of forum posts demonstrate the effectiveness of the technique. Our system has also been successfully tested in a commercial setting.",
"corpus_id": 8786674,
"score": 1
},
{
"doc_id": "61155076",
"title": "Opinion Mining",
"abstract": "Описанл процедуру формування інтегрального показника об’єкта згідно з відгуками користувачів під час застосування методів оцінювання опінії текстової інформації у web-документах. Запропоновано використання лінгвістичних змінних та застосування вагових коефіцієнтів для достовірнішого результату оцінювання емоційного забарвлення текстової інформації. Ключові слова: опінія, емоційне забарвлення, об’єкт, інтегральний показник, лінгвістична змінна, ваговий коефіцієнт.",
"corpus_id": 61155076,
"score": 1
},
{
"doc_id": "12241191",
"title": "Finding Actionable Knowledge via Automated Comparison",
"abstract": "The problem of finding interesting and actionable patterns is a major challenge in data mining. It has been studied by many data mining researchers. The issue is that data mining algorithms often generate too many patterns, which make it very hard for the user to find those truly useful ones. Over the years many techniques have been proposed. However, few have made it to real-life applications. At the end of 2005, we built a data mining system for Motorola (called Opportunity Map) to enable the user to explore the space of a large number of rules in order to find actionable knowledge. The approach is based on the concept of rule cubes and operations on rule cubes. A rule cube is similar to a data cube, but stores rules. Since its deployment, some issues have also been identified during the regular use of the system in Motorola. One of the key issues is that although the operations on rule cubes are flexible, each operation is primitive and has to be initiated by the user. Finding a piece of actionable knowledge typically involves many operations and intense visual inspections, which are labor-intensive and time-consuming. From interactions with our users, we identified a generic problem that is crucial for finding actionable knowledge. The problem involves extensive comparison of sub-populations and identification of the cause of their differences. This paper first defines the problem and then proposes an effective method to solve the problem automatically. To the best of our knowledge, there is no reported study of this problem. The new method has been added to the Opportunity Map system and is now in daily use in Motorola.",
"corpus_id": 12241191,
"score": 1
},
{
"doc_id": "2182731",
"title": "Speed-up iterative frequent itemset mining with constraint changes",
"abstract": "Mining of frequent itemsets is a fundamental data mining task. Past research has proposed many efficient algorithms for this purpose. Recent work also highlighted the importance of using constraints to focus the mining process to mine only those relevant itemsets. In practice, data mining is often an interactive and iterative process. The user typically changes constraints and runs the mining algorithm many times before being satisfied with the final results. This interactive process is very time consuming. Existing mining algorithms are unable to take advantage of this iterative process to use previous mining results to speed up the current mining process. This results in an enormous waste of time and computation. In this paper, we propose an efficient technique to utilize previous mining results to improve the efficiency of current mining when constraints are changed. We first introduce the concept of tree boundary to summarize useful information available from previous mining. We then show that the tree boundary provides an effective and efficient framework for the new mining. The proposed technique has been implemented in the context of two existing frequent itemset mining algorithms, FP-tree and tree projection. Experiment results on both synthetic and real-life datasets show that the proposed approach achieves a dramatic saving of computation.",
"corpus_id": 2182731,
"score": 1
},
{
"doc_id": "6193650",
"title": "Dealing with different distributions in learning from",
"abstract": "In the problem of learning with positive and unlabeled examples, existing research all assumes that positive examples P and the hidden positive examples in the unlabeled set U are generated from the same distribution. This assumption may be violated in practice. In such cases, existing methods perform poorly. This paper proposes a novel technique A-EM to deal with the problem. Experimental results with product page classification demonstrate the effectiveness of the proposed technique.",
"corpus_id": 6193650,
"score": 1
},
{
"doc_id": "32186791",
"title": "A singular integral of the composite operator",
"abstract": "We establish the Poincare-type inequalities for the composition of the homotopy operator and the projection operator. We also obtain some estimates for the integral of the composite operator with a singular density.",
"corpus_id": 32186791,
"score": 0
},
{
"doc_id": "23656778",
"title": "Theoretical study on the Br + CH3SCH3 reaction",
"abstract": "The multiple-channel reactions Br + CH(3)SCH(3) --> products are investigated by direct dynamics method. The optimized geometries, frequencies, and minimum energy path are all obtained at the MP2/6-31+G(d,p) level, and energetic information is further refined by the G3(MP2) (single-point) theory. The rate constants for every reaction channels, Br + CH(3)SCH(3) --> CH(3)SCH(2) + HBr (R1), Br + CH(3)SCH(3) --> CH(3)SBr + CH(3) (R2), and Br + CH(3)SCH(3) -->CH(3)S + CH(3)Br (R3), are calculated by canonical variational transition state theory with small-curvature tunneling correction over the temperature range 200-3000 K. The total rate constants are in good agreement with the available experimental data, and the two-parameter expression k(T) = 2.68 x 10(-12) exp(-1235.24/T) cm(3)/(molecule s) over the temperature range 200-3000 K is given. Our calculations indicate that hydrogen abstraction channel is the major channel due to the smallest barrier height among three channels considered, and the other two channels to yield CH(3)SBr + CH(3) and CH(3)S + CH(3)Br are minor channels over the whole temperature range.",
"corpus_id": 23656778,
"score": 0
},
{
"doc_id": "9933281",
"title": "A Learning Process Using SVMs for Multi-agents Decision Classification",
"abstract": "In order to resolve decision classification problem in multiple agents system, this paper first introduces the architecture of multiple agents system. It then proposes a support vector machines based assessment approach, which has the ability to learn the rules form previous assessment results from domain experts. Finally, the experiment are conducted on the artificially dataset to illustrate how the proposed works, and the results show the proposed method has effective learning ability for decision classification problems.",
"corpus_id": 9933281,
"score": 0
},
{
"doc_id": "7650649",
"title": "Fully Automatic Text Categorization by Exploiting WordNet",
"abstract": "This paper proposes a Fully Automatic Categorization approach for Text (FACT) by exploiting the semantic features from WordNet and document clustering. In FACT, the training data is constructed automatically by using the knowledge of the category name. With the support of WordNet, it first uses the category name to generate a set of features for the corresponding category. Then, a set of documents is labeled according to such features. To reduce the possible bias originating from the category name and generated features, document clustering is used to refine the quality of initial labeling. The training data are subsequently constructed to train the discriminative classifier. The empirical experiments show that the best performance of FACT can achieve more than 90% of the baseline SVM classifiers in F1 measure, which demonstrates the effectiveness of the proposed approach.",
"corpus_id": 7650649,
"score": 0
},
{
"doc_id": "2887407",
"title": "Variable Weights Decision-Making and Its Fuzzy Inference Implementation",
"abstract": "This paper investigates multiple attribute decision-making (MADM) problems with preference information on alternatives. Principle of variable weights evaluation is introduced first. Then, a new algorithm, fuzzy inference variable weights method, is proposed. The principle of fuzzy logic control (FLC) and available techniques of variable weights evaluation are merged together, to formulate the new method to determine the adjusted weights of attributes for MADM problems with preferences. An example is provided to illustrate the utility and effectiveness of the proposed method.",
"corpus_id": 2887407,
"score": 0
}
] |
arnetminer | {
"doc_id": "33407170",
"title": "Manufacturing Grid: Needs, Concept, and Architecture",
"abstract": "As a new approach, grid technology is rapidly used in scientific computing, large-scale data management, and collaborative work. But in the field of manufacturing, the application of grid is just at the beginning. The paper proposes the concept of manufacturing. The needs, definition and architecture of manufacturing gird are discussed, which explains why needs manufacturing grid, what is manufacturing grid and how to construct a manufacturing grid system.",
"corpus_id": 33407170
} | [
{
"doc_id": "21860578",
"title": "An Effective PSO-Based Memetic Algorithm for Flow Shop Scheduling",
"abstract": "This paper proposes an effective particle swarm optimization (PSO)-based memetic algorithm (MA) for the permutation flow shop scheduling problem (PFSSP) with the objective to minimize the maximum completion time, which is a typical non-deterministic polynomial-time (NP) hard combinatorial optimization problem. In the proposed PSO-based MA (PSOMA), both PSO-based searching operators and some special local searching operators are designed to balance the exploration and exploitation abilities. In particular, the PSOMA applies the evolutionary searching mechanism of PSO, which is characterized by individual improvement, population cooperation, and competition to effectively perform exploration. On the other hand, the PSOMA utilizes several adaptive local searches to perform exploitation. First, to make PSO suitable for solving PFSSP, a ranked-order value rule based on random key representation is presented to convert the continuous position values of particles to job permutations. Second, to generate an initial swarm with certain quality and diversity, the famous Nawaz-Enscore-Ham (NEH) heuristic is incorporated into the initialization of population. Third, to balance the exploration and exploitation abilities, after the standard PSO-based searching operation, a new local search technique named NEH_1 insertion is probabilistically applied to some good particles selected by using a roulette wheel mechanism with a specified probability. Fourth, to enrich the searching behaviors and to avoid premature convergence, a simulated annealing (SA)-based local search with multiple different neighborhoods is designed and incorporated into the PSOMA. Meanwhile, an effective adaptive meta-Lamarckian learning strategy is employed to decide which neighborhood to be used in SA-based local search. Finally, to further enhance the exploitation ability, a pairwise-based local search is applied after the SA-based search. Simulation results based on benchmarks demonstrate the effectiveness of the PSOMA. Additionally, the effects of some parameters on optimization performances are also discussed",
"corpus_id": 21860578,
"score": 1
},
{
"doc_id": "14537451",
"title": "Research on Architecture and Key Technology for Service-Oriented Workflow Performance Analysis",
"abstract": "With the advent of SOA and Grid technology, the service has become the most important element of information systems. Because of the characteristic of service, the operation and performance management of workflow meet some new difficulties. Firstly a three-dimensional model of service is proposed. Then the characteristics of workflow in service-oriented environments are presented, based on which the workflow performance analysis architecture is described. As key technologies, workflow performance evaluation and analysis are discussed, including a multi-layer performance evaluation model and three kinds of performance analysis methods.",
"corpus_id": 14537451,
"score": 1
},
{
"doc_id": "32506945",
"title": "DE and NLP Based QPLS Algorithm",
"abstract": "As a novel evolutionary computing technique, Differential Evolution (DE) has been considered to be an effective optimization method for complex optimization problems, and achieved many successful applications in engineering. In this paper, a new algorithm of Quadratic Partial Least Squares (QPLS) based on Nonlinear Programming (NLP) is presented. And DE is used to solve the NLP so as to calculate the optimal input weights and the parameters of inner relationship. The simulation results based on the soft measurement of diesel oil solidifying point on a real crude distillation unit demonstrate that the superiority of the proposed algorithm to linear PLS and QPLS which is based on Sequential Quadratic Programming (SQP) in terms of fitting accuracy and computational costs.",
"corpus_id": 32506945,
"score": 1
},
{
"doc_id": "5989425",
"title": "A New Performance Evaluation Model and AHP-Based Analysis Method in Service-Oriented Workflow",
"abstract": "In service-oriented architecture, services and workflows are closely related so that the research on service-oriented workflow attracts the attention of academia. Because of the loosely-coupled, autonomic and dynamic nature of service, the operation and performance evaluation of workflow meet some challenges, such as how to judge the quality of service (QoS) and what is the relation between QoS and workflow performance. In this paper we are going to address these challenges. First the definition of service is proposed, and the characteristics and operation mechanism of service-oriented workflow are presented. Then a service-oriented workflow performance evaluation model is described which combines the performance of the business system and IT system. The key performance indicators (KPI) are also depicted with their formal representation. Finally the improved Analytic Hierarchy Process is brought forward to analyze the correlation between different KPIs and select services.",
"corpus_id": 5989425,
"score": 1
},
{
"doc_id": "44715797",
"title": "Constrained Nonlinear State Estimation - A Differential Evolution Based Moving Horizon Approach",
"abstract": "A solution is proposed to estimate the states in the nonlinear discrete time system. Moving Horizon Estimation (MHE) is used to obtain the approximated states by minimizing a criterion that is the Euclidean form of the difference between the estimated outputs and the measured ones over a finite time horizon. The differential evolution (DE) algorithm is incorporated into the implementation of MHE in order to solve the optimization problem which is presented as a nonlinear programming problem due to the constraints. The effectiveness of the approach is illustrated in simulated systems that have appeared in the moving horizon estimation literature.",
"corpus_id": 44715797,
"score": 1
},
{
"doc_id": "9933281",
"title": "A Learning Process Using SVMs for Multi-agents Decision Classification",
"abstract": "In order to resolve decision classification problem in multiple agents system, this paper first introduces the architecture of multiple agents system. It then proposes a support vector machines based assessment approach, which has the ability to learn the rules form previous assessment results from domain experts. Finally, the experiment are conducted on the artificially dataset to illustrate how the proposed works, and the results show the proposed method has effective learning ability for decision classification problems.",
"corpus_id": 9933281,
"score": 0
},
{
"doc_id": "5256099",
"title": "Intrusion detection system for high-speed network",
"abstract": "The increasing network throughput challenges the current Network Intrusion Detection Systems (NIDS) to have compatible high-performance data processing. In this paper, we describe an in-depth research on the related techniques of high-performance network intrusion detection and an implementation of a Rule-based High-performance Network Intrusion Detection System (RHPNIDS) for high-speed networks. By integrating several performance optimizing methods, the performance of RHPNIDS is very impressive compared with the popular open source NIDS Snort.",
"corpus_id": 5256099,
"score": 0
},
{
"doc_id": "23965389",
"title": "Design and Implementation of Control System on Embedded Downloading Server Based on C/S Architecture",
"abstract": "Based on Client-Server architecture, this paper proposes the design and implementation of system controlling embedded-Linux downloading server. The software architecture design of this system is composed of three parts: network layer, control layer, and application layer, which is modular and pluggable. The implementation also proved the feasibility, reliability and portability of this design.",
"corpus_id": 23965389,
"score": 0
},
{
"doc_id": "15365987",
"title": "Querying multiple sets of discovered rules",
"abstract": "Rule mining is an important data mining task that has been applied to numerous real-world applications. Often a rule mining system generates a large number of rules and only a small subset of them is really useful in applications. Although there exist some systems allowing the user to query the discovered rules, they are less suitable for complex ad hoc querying of multiple data mining rulebases to retrieve interesting rules. In this paper, we propose a new powerful rule query language Rule-QL for querying multiple rulebases that is modeled after SQL and has rigorous theoretical foundations of a rule-based calculus. In particular, we first propose a rule-based calculus RC based on the first-order logic, and then present the language Rule-QL that is at least as expressive as the safe fragment of RC. We also propose a number of efficient query evaluation techniques for Rule-QL and test them experimentally on some representative queries to demonstrate the feasibility of Rule-QL.",
"corpus_id": 15365987,
"score": 0
},
{
"doc_id": "205872509",
"title": "Conceptual design: issues and challenges",
"abstract": "Decisions made during conceptual design have signi®cant in ̄uence on the cost, performance, reliability, safety and environmental impact of a product. It has been estimated that design decisions account for more than 75% of ®nal product costs. It is, therefore, vital that designers have access to the right tools to support such design activities. In the early 1980s, researchers began to realize the impact of design decisions on downstream activities. As a result, different methodologies such as design for assembly, design for manufacturing and concurrent engineering, have been proposed. Software tools that implement these methodologies have also been developed. However, most of these tools are only applicable in the detailed design phase. Yet, even the highest standard of detailed design cannot compensate for a poor design concept formulated at the conceptual design phase. In spite of this, few CAD tools have been developed to support conceptual design activities. This is because knowledge of the design requirements and constraints during this early phase of a product's life cycle is usually imprecise and incomplete, making it dif®cult to utilize computer-based systems or prototypes. However, recent advances in ®elds such as fuzzy logic, computational geometry, constraints programming and so on have now made it possible for researchers to tackle some of the challenging issues in dealing with conceptual design activities. In this special issue, we have gathered together discussions on various aspects of conceptual design phase: from the capturing of the designer's intent, to modelling design constraints and solving them in an ef®cient manner, to verifying the correctness of the design. S.F. Qin et al. begin this issue with the article aFrom online sketching to 2D and 3D geometry: a fuzzy knowledge based systemo in which they look at the interesting research problem of capturing the user's sketching intentions and automatically generating the corresponding geometric primitives. The motivation of this work comes from the fact that most designers still prefer to express their creative design ideas through 2D sketches. It is therefore important for a computer-aided conceptual design system to allow sketched input. Qin et al. built a prototype system that allows 2D sketched input, interprets the input sketch into more geometrically exact 2D vision objects and, when needed, projects the 2D objects into 3D models. The system receives the sketched input data through a sequence of mouse button presses, mouse motions and mouse button release events. After that, the system attempts to identify each curve segments and generate a precise 2D primitive for the identi®ed segment. For each pair of 2D primitives identi®ed, a 2D relationship (connectivity, parallelism or perpendicularity) is inferred by the system. As the 2D geometry (primitives 1 relationships) slowly accumulates, the system continually checks to see whether it can recognize a 3D object or feature. Upon recognition, the 3D object/ feature will be placed in 3D space and new features can be built upon previous ones. Latif Al Hakim et al. approach the problem from the concurrent engineering perspective. Traditionally, design is a serial activity whereby reliability, manufacturability, maintainability, safety and other requirements are considered sequentially. In recent years, in an effort to increase competitiveness and reduce design life cycle, the concept of concurrent engineering is introduced. 
Instead of performing design-related activities in series, they are performed simultaneously. This approach greatly increases the complexity of the design process due to the highly interactive nature of the various design tasks. To support design activities adequately in such an environment, Latif Al Hakim et al. propose the incorporation of reliability with functional perspectives at the conceptual design stage. They use graph theory to represent a product and the relationships between its components. A product's components are represented as the vertices of a graph, while the edges of the graph represent the ̄ow of energy between components. With this representation, it is easy to visualize energy ̄ow between components and thus trace any loss of functionality. In addition, this representation allows one easily to take into consideration various constraints such as cost for further design re®nement. The third paper in this special issue points out the need for a suitable conceptual design representation scheme for smooth integration with downstream applications of the product development process. Brunnetti and Golob propose a feature-based representation scheme for capturing product semantics handled in the conceptual design phase. They believe that any computer-aided design system should accomplish two major goals. The ®rst is to support the ̄ow of information without loss along the product development process, and the second is to assist designers in Computer-Aided Design 32 (2000) 849±850 COMPUTER-AIDED DESIGN",
"corpus_id": 205872509,
"score": 0
}
] |
arnetminer | {
"doc_id": "13900102",
"title": "Web Page Cleaning for Web Mining through Feature Weighting",
"abstract": "Unlike conventional data or text, Web pages typically contain a large amount of information that is not part of the main contents of the pages, e.g., banner ads, navigation bars, and copyright notices. Such irrelevant information (which we call Web page noise) in Web pages can seriously harm Web mining, e.g., clustering and classification. In this paper, we propose a novel feature weighting technique to deal with Web page noise to enhance Web mining. This method first builds a compressed structure tree to capture the common structure and comparable blocks in a set of Web pages. It then uses an information based measure to evaluate the importance of each node in the compressed structure tree. Based on the tree and its node importance values, our method assigns a weight to each word feature in its content block. The resulting weights are used in Web mining. We evaluated the proposed technique with two Web mining tasks, Web page clustering and Web page classification. Experimental results show that our weighting method is able to dramatically improve the mining results.",
"corpus_id": 13900102
} | [
{
"doc_id": "61155076",
"title": "Opinion Mining",
"abstract": "Описанл процедуру формування інтегрального показника об’єкта згідно з відгуками користувачів під час застосування методів оцінювання опінії текстової інформації у web-документах. Запропоновано використання лінгвістичних змінних та застосування вагових коефіцієнтів для достовірнішого результату оцінювання емоційного забарвлення текстової інформації. Ключові слова: опінія, емоційне забарвлення, об’єкт, інтегральний показник, лінгвістична змінна, ваговий коефіцієнт.",
"corpus_id": 61155076,
"score": 1
},
{
"doc_id": "8786674",
"title": "Entity discovery and assignment for opinion mining applications",
"abstract": "Opinion mining became an important topic of study in recent years due to its wide range of applications. There are also many companies offering opinion mining services. One problem that has not been studied so far is the assignment of entities that have been talked about in each sentence. Let us use forum discussions about products as an example to make the problem concrete. In a typical discussion post, the author may give opinions on multiple products and also compare them. The issue is how to detect what products have been talked about in each sentence. If the sentence contains the product names, they need to be identified. We call this problem entity discovery. If the product names are not explicitly mentioned in the sentence but are implied due to the use of pronouns and language conventions, we need to infer the products. We call this problem entity assignment. These problems are important because without knowing what products each sentence talks about the opinion mined from the sentence is of little use. In this paper, we study these problems and propose two effective methods to solve the problems. Entity discovery is based on pattern discovery and entity assignment is based on mining of comparative sentences. Experimental results using a large number of forum posts demonstrate the effectiveness of the technique. Our system has also been successfully tested in a commercial setting.",
"corpus_id": 8786674,
"score": 1
},
{
"doc_id": "12241191",
"title": "Finding Actionable Knowledge via Automated Comparison",
"abstract": "The problem of finding interesting and actionable patterns is a major challenge in data mining. It has been studied by many data mining researchers. The issue is that data mining algorithms often generate too many patterns, which make it very hard for the user to find those truly useful ones. Over the years many techniques have been proposed. However, few have made it to real-life applications. At the end of 2005, we built a data mining system for Motorola (called Opportunity Map) to enable the user to explore the space of a large number of rules in order to find actionable knowledge. The approach is based on the concept of rule cubes and operations on rule cubes. A rule cube is similar to a data cube, but stores rules. Since its deployment, some issues have also been identified during the regular use of the system in Motorola. One of the key issues is that although the operations on rule cubes are flexible, each operation is primitive and has to be initiated by the user. Finding a piece of actionable knowledge typically involves many operations and intense visual inspections, which are labor-intensive and time-consuming. From interactions with our users, we identified a generic problem that is crucial for finding actionable knowledge. The problem involves extensive comparison of sub-populations and identification of the cause of their differences. This paper first defines the problem and then proposes an effective method to solve the problem automatically. To the best of our knowledge, there is no reported study of this problem. The new method has been added to the Opportunity Map system and is now in daily use in Motorola.",
"corpus_id": 12241191,
"score": 1
},
{
"doc_id": "6193650",
"title": "Dealing with different distributions in learning from",
"abstract": "In the problem of learning with positive and unlabeled examples, existing research all assumes that positive examples P and the hidden positive examples in the unlabeled set U are generated from the same distribution. This assumption may be violated in practice. In such cases, existing methods perform poorly. This paper proposes a novel technique A-EM to deal with the problem. Experimental results with product page classification demonstrate the effectiveness of the proposed technique.",
"corpus_id": 6193650,
"score": 1
},
{
"doc_id": "2182731",
"title": "Speed-up iterative frequent itemset mining with constraint changes",
"abstract": "Mining of frequent itemsets is a fundamental data mining task. Past research has proposed many efficient algorithms for this purpose. Recent work also highlighted the importance of using constraints to focus the mining process to mine only those relevant itemsets. In practice, data mining is often an interactive and iterative process. The user typically changes constraints and runs the mining algorithm many times before being satisfied with the final results. This interactive process is very time consuming. Existing mining algorithms are unable to take advantage of this iterative process to use previous mining results to speed up the current mining process. This results in an enormous waste of time and computation. In this paper, we propose an efficient technique to utilize previous mining results to improve the efficiency of current mining when constraints are changed. We first introduce the concept of tree boundary to summarize useful information available from previous mining. We then show that the tree boundary provides an effective and efficient framework for the new mining. The proposed technique has been implemented in the context of two existing frequent itemset mining algorithms, FP-tree and tree projection. Experiment results on both synthetic and real-life datasets show that the proposed approach achieves a dramatic saving of computation.",
"corpus_id": 2182731,
"score": 1
},
{
"doc_id": "10223976",
"title": "A combinational feature selection and ensemble neural network method for classification of gene expression data",
"abstract": "BackgroundMicroarray experiments are becoming a powerful tool for clinical diagnosis, as they have the potential to discover gene expression patterns that are characteristic for a particular disease. To date, this problem has received most attention in the context of cancer research, especially in tumor classification. Various feature selection methods and classifier design strategies also have been generally used and compared. However, most published articles on tumor classification have applied a certain technique to a certain dataset, and recently several researchers compared these techniques based on several public datasets. But, it has been verified that differently selected features reflect different aspects of the dataset and some selected features can obtain better solutions on some certain problems. At the same time, faced with a large amount of microarray data with little knowledge, it is difficult to find the intrinsic characteristics using traditional methods. In this paper, we attempt to introduce a combinational feature selection method in conjunction with ensemble neural networks to generally improve the accuracy and robustness of sample classification.ResultsWe validate our new method on several recent publicly available datasets both with predictive accuracy of testing samples and through cross validation. Compared with the best performance of other current methods, remarkably improved results can be obtained using our new strategy on a wide range of different datasets.ConclusionsThus, we conclude that our methods can obtain more information in microarray data to get more accurate classification and also can help to extract the latent marker genes of the diseases for better diagnosis and treatment.",
"corpus_id": 10223976,
"score": 0
},
{
"doc_id": "15968688",
"title": "Enhanced sensitivity refractive index sensor using tilted fiber Bragg grating with thinned cladding",
"abstract": "Short-period fiber Bragg gratings with gratings planes tilted at an angle 8° corresponding to the fiber axis show core mode and a large number of cladding-mode resonances in transmission. The differences between the cladding-mode resonance and the core-mode resonance are used to detect the variation of the surrounding refractive index; this refractive index sensor is immune to temperature effects by experimental demonstrations. After the cladding of the tilted fiber Bragg grating was etched by the hydrofluoric, TFBGs with different diameters and different-order cladding modes were investigated; the sensitivity of a TFBG to the external index can be significantly improved by reducing the cladding radius. Enhanced sensitivity and accuracy are achieved when the surrounding refractive index changes between 1.333 and 1.4532.",
"corpus_id": 15968688,
"score": 0
},
{
"doc_id": "32302868",
"title": "Non-linear Correlation Techniques in Educational Data Mining",
"abstract": "There is such an increasing interest in data mining and educational systems currently that has made educational data mining as a new growing research community. This paper explores how to develope new methods for discovering knowledge of data from educational context. The non-linear correlation technology was introduced and applied in the mining process in the whole knowledge achieved. Meanwhile, we have applied these methods in the real course management datasets and found correspondent results for the educators.",
"corpus_id": 32302868,
"score": 0
},
{
"doc_id": "8341217",
"title": "Multi-Space-Mapped SVMs for Multi-class Classification",
"abstract": "In SVMs-based multiple classification, it is not always possible to find an appropriate kernel function to map all the classes from different distribution functions into a feature space where they are linearly separable from each other. This is even worse if the number of classes is very large. As a result, the classification accuracy is not as good as expected. In order to improve the performance of SVMs-based multi-classifiers, this paper proposes a method, named multi-space-mapped SVMs, to map the classes into different feature spaces and then classify them. The proposed method reduces the requirements for the kernel function. Substantial experiments have been conducted on one-against-all, one-against-one, FSVM, DDAG algorithms and our algorithm using six UCI data sets. The statistical results show that the proposed method has a higher probability of finding appropriate kernel functions than traditional methods and outperforms others.",
"corpus_id": 8341217,
"score": 0
},
{
"doc_id": "206597617",
"title": "Binary Tree Support Vector Machine Based on Kernel Fisher Discriminant for Multi-classification",
"abstract": "In order to improve the accuracy of the conventional algorithms for multi-classifications, we propose a binary tree support vector machine based on Kernel Fisher Discriminant in this paper. To examine the training accuracy and the generalization performance of the proposed algorithm, One-against-All, One-against-One and the proposed algorithms are applied to five UCI data sets. The experimental results show that in general, the training and the testing accuracy of the proposed algorithm is the best one, and there exist no unclassifiable regions in the proposed algorithm.",
"corpus_id": 206597617,
"score": 0
}
] |
arnetminer | {
"doc_id": "3240731",
"title": "Mining Web pages for data records",
"abstract": "Data mining to extract information from Web pages can help provide value-added services. The MDR (mining data records) system exploits Web page structure and uses a string-matching algorithm to mine contiguous and noncontiguous data records.",
"corpus_id": 3240731
} | [
{
"doc_id": "61155076",
"title": "Opinion Mining",
"abstract": "Описанл процедуру формування інтегрального показника об’єкта згідно з відгуками користувачів під час застосування методів оцінювання опінії текстової інформації у web-документах. Запропоновано використання лінгвістичних змінних та застосування вагових коефіцієнтів для достовірнішого результату оцінювання емоційного забарвлення текстової інформації. Ключові слова: опінія, емоційне забарвлення, об’єкт, інтегральний показник, лінгвістична змінна, ваговий коефіцієнт.",
"corpus_id": 61155076,
"score": 1
},
{
"doc_id": "8786674",
"title": "Entity discovery and assignment for opinion mining applications",
"abstract": "Opinion mining became an important topic of study in recent years due to its wide range of applications. There are also many companies offering opinion mining services. One problem that has not been studied so far is the assignment of entities that have been talked about in each sentence. Let us use forum discussions about products as an example to make the problem concrete. In a typical discussion post, the author may give opinions on multiple products and also compare them. The issue is how to detect what products have been talked about in each sentence. If the sentence contains the product names, they need to be identified. We call this problem entity discovery. If the product names are not explicitly mentioned in the sentence but are implied due to the use of pronouns and language conventions, we need to infer the products. We call this problem entity assignment. These problems are important because without knowing what products each sentence talks about the opinion mined from the sentence is of little use. In this paper, we study these problems and propose two effective methods to solve the problems. Entity discovery is based on pattern discovery and entity assignment is based on mining of comparative sentences. Experimental results using a large number of forum posts demonstrate the effectiveness of the technique. Our system has also been successfully tested in a commercial setting.",
"corpus_id": 8786674,
"score": 1
},
{
"doc_id": "6193650",
"title": "Dealing with different distributions in learning from",
"abstract": "In the problem of learning with positive and unlabeled examples, existing research all assumes that positive examples P and the hidden positive examples in the unlabeled set U are generated from the same distribution. This assumption may be violated in practice. In such cases, existing methods perform poorly. This paper proposes a novel technique A-EM to deal with the problem. Experimental results with product page classification demonstrate the effectiveness of the proposed technique.",
"corpus_id": 6193650,
"score": 1
},
{
"doc_id": "2182731",
"title": "Speed-up iterative frequent itemset mining with constraint changes",
"abstract": "Mining of frequent itemsets is a fundamental data mining task. Past research has proposed many efficient algorithms for this purpose. Recent work also highlighted the importance of using constraints to focus the mining process to mine only those relevant itemsets. In practice, data mining is often an interactive and iterative process. The user typically changes constraints and runs the mining algorithm many times before being satisfied with the final results. This interactive process is very time consuming. Existing mining algorithms are unable to take advantage of this iterative process to use previous mining results to speed up the current mining process. This results in an enormous waste of time and computation. In this paper, we propose an efficient technique to utilize previous mining results to improve the efficiency of current mining when constraints are changed. We first introduce the concept of tree boundary to summarize useful information available from previous mining. We then show that the tree boundary provides an effective and efficient framework for the new mining. The proposed technique has been implemented in the context of two existing frequent itemset mining algorithms, FP-tree and tree projection. Experiment results on both synthetic and real-life datasets show that the proposed approach achieves a dramatic saving of computation.",
"corpus_id": 2182731,
"score": 1
},
{
"doc_id": "12241191",
"title": "Finding Actionable Knowledge via Automated Comparison",
"abstract": "The problem of finding interesting and actionable patterns is a major challenge in data mining. It has been studied by many data mining researchers. The issue is that data mining algorithms often generate too many patterns, which make it very hard for the user to find those truly useful ones. Over the years many techniques have been proposed. However, few have made it to real-life applications. At the end of 2005, we built a data mining system for Motorola (called Opportunity Map) to enable the user to explore the space of a large number of rules in order to find actionable knowledge. The approach is based on the concept of rule cubes and operations on rule cubes. A rule cube is similar to a data cube, but stores rules. Since its deployment, some issues have also been identified during the regular use of the system in Motorola. One of the key issues is that although the operations on rule cubes are flexible, each operation is primitive and has to be initiated by the user. Finding a piece of actionable knowledge typically involves many operations and intense visual inspections, which are labor-intensive and time-consuming. From interactions with our users, we identified a generic problem that is crucial for finding actionable knowledge. The problem involves extensive comparison of sub-populations and identification of the cause of their differences. This paper first defines the problem and then proposes an effective method to solve the problem automatically. To the best of our knowledge, there is no reported study of this problem. The new method has been added to the Opportunity Map system and is now in daily use in Motorola.",
"corpus_id": 12241191,
"score": 1
},
{
"doc_id": "19080227",
"title": "Positive solutions of a nonlinear three-point boundary value problem",
"abstract": "In this paper, by using Krasnoselskii's fixed point theorem in a cone, we study the existence of single and multiple positive solutions to the three-point boundary value problem (BVP)y''(t)+a(t)f(y(t))=0,0",
"corpus_id": 19080227,
"score": 0
},
{
"doc_id": "32302868",
"title": "Non-linear Correlation Techniques in Educational Data Mining",
"abstract": "There is such an increasing interest in data mining and educational systems currently that has made educational data mining as a new growing research community. This paper explores how to develope new methods for discovering knowledge of data from educational context. The non-linear correlation technology was introduced and applied in the mining process in the whole knowledge achieved. Meanwhile, we have applied these methods in the real course management datasets and found correspondent results for the educators.",
"corpus_id": 32302868,
"score": 0
},
{
"doc_id": "11561145",
"title": "Collective Behavior Analysis of a Class of Social Foraging Swarms",
"abstract": "This paper considers an anisotropic swarm model that consists of a group of mobile autonomous agents with an attraction-repulsion function that can guarantee collision avoidance between agents and a Gaussian-type attractant/repellent nutrient profile. The swarm behavior is a result of a balance between inter-individual interplays as well as the interplays of the swarm individuals (agents) with their environment. It is proved that the members of a reciprocal swarm will aggregate and eventually form a cohesive cluster of finite size. It is shown that the swarm system is completely stable, that is, every solution converges to the equilibrium point set of the system. Moreover, it is also shown that all the swarm individuals will converge to more favorable areas of the Gaussian profile under certain conditions. The results of this paper provide further insight into the effect of the interaction pattern on self-organized motion for a Gaussian-type attractant/repellent nutrient profile in a swarm system.",
"corpus_id": 11561145,
"score": 0
},
{
"doc_id": "1710451",
"title": "Joint estimation of image and coil sensitivities in parallel MRI",
"abstract": "Parallel magnetic resonance imaging (MRI) using multichannel receiver coils has emerged as an effective tool to reduce imaging time in various dynamic imaging applications. However, there are still a number of image reconstruction issues that have not been fully addressed, thereby limiting the level of speed enhancement achievable with the technology. This paper considers the inaccuracy of coil sensitivities in conventional reconstruction methods such as SENSE, and reformulates the image reconstruction problem as a joint estimation of the coil sensitivities and the desired image, which is solved by an iterative algorithm. Experimental results demonstrate the effectiveness of the proposed method especially when large acceleration factors are used",
"corpus_id": 1710451,
"score": 0
},
{
"doc_id": "1930626",
"title": "Multi-agent System for Custom Relationship Management with SVMs Tool",
"abstract": "Distributed data mining in the CRM is to learn available knowledge from the customer relationship so as to instruct the strategic behavior. In order to resolve the CRM in distributed data mining, this paper proposes the architecture of distributed data mining for CRM, and then utilizes the support vector machine tool to separate the customs into several classes and manage them. In the end, the practical experiments about one Chinese company are conducted to show the good performance of the proposed approach.",
"corpus_id": 1930626,
"score": 0
}
] |
arnetminer | {
"doc_id": "11674952",
"title": "The Design of the GPS-Based Surveying Robot Automatic Monitoring System for Underground Mining Safety",
"abstract": "Earth subsidence in underground mining is an unavoidable problem in mining production, and timely and scientific observation and early warning is one of the important factors in the security of mining production. Though the surveying robot (i.e. automatic electronic total station) can automatically (or semi-automatically) monitor ground deformation for underground mining, the stability of the station location (monitor base station) has great impact on the monitor precision and when the measurement vision is covered, the surveying robot fails to monitor the corresponding deformation point. In order to tackle the above problem, the author and the research team have integrated the technology of GPS (Global Positioning System) with surveying robot and developed the GPS-based surveying robot automatic monitoring system for underground mining safety, which completely solves the foresaid problem, simplifies the monitor program and reduces the fixed investment cost of monitor. The article introduces the structure and working principle of the GPS-based surveying robot automatic monitoring system for underground mining safety, presents examples of monitor.",
"corpus_id": 11674952
} | [
{
"doc_id": "19997506",
"title": "Prediction and Analysis of Landslide Based on Fuzzy Theory",
"abstract": "There are internal and external reasons for the occurrence of landslip. Through the analysis on a large number of investigation materials of landslip, the growth inducements of landslip are found out. By use of theories in blur maths, we conduct order arrangement of these inducements in accordance their significance, put forward blur judgment rule of landslip estimation and present applicable examples.",
"corpus_id": 19997506,
"score": 1
},
{
"doc_id": "2367747",
"title": "Top 10 algorithms in data mining",
"abstract": "This paper presents the top 10 data mining algorithms identified by the IEEE International Conference on Data Mining (ICDM) in December 2006: C4.5, k-Means, SVM, Apriori, EM, PageRank, AdaBoost, kNN, Naive Bayes, and CART. These top 10 algorithms are among the most influential data mining algorithms in the research community. With each algorithm, we provide a description of the algorithm, discuss the impact of the algorithm, and review current and further research on the algorithm. These 10 algorithms cover classification, clustering, statistical learning, association analysis, and link mining, which are all among the most important topics in data mining research and development.",
"corpus_id": 2367747,
"score": 0
},
{
"doc_id": "10102687",
"title": "Mining interesting knowledge using DM-II",
"abstract": "Data mining aims to develop a new generation of tools to intelligently assist humans in analyzing mountains of data. Over the past few years, great progress has been made in both research and applications of data mining. Data mining systems have helped many businesses by exposing previously unknown patterns in their databases, which were used to improve profits, enhance customer services, and ultimately achieve a competitive advantage. In this paper, we present our unique data mining system DM-II (Data Mining Integration and Interestingness). DM-II is a PC-based system working in Windows 95/98/NT environment. Apart from the normal components of a data mining system, DM-II has a number of unique and advanced sub-systems. These sub-systems have been applied in many real-life applications, including education applications, insurance application, accident application, disease application, drug screening application, image classification, etc. Here, we focus on discussing the following sub-systems: CBA: CBA originally stands for Classification-Based on Associations [lo]. It has now been extended with a number of other advanced features. In all, CBA has the following capabilities: Building classifiers using association rules: Traditionally, association rule mining and classifier building are regarded as two different data mining tasks. CBA unifies the two tasks. It is able to ,generate association rules and builds a classifier using a special subset of the association rules. What is significant is that CBA, in general, produces more accurate classifiers compared to the state-ofthe-art classification system C4.5 [ 161. It also helps to solve some outstanding problems with the existing classification systems. Pruning and summarizing the discovered associations: One major problem with association",
"corpus_id": 10102687,
"score": 0
},
{
"doc_id": "449953",
"title": "Learning to Classify Documents with Only a Small Positive Training Set",
"abstract": "Many real-world classification applications fall into the class of positive and unlabeled (PU) learning problems. In many such applications, not only could the negative training examples be missing, the number of positive examples available for learning may also be fairly limited due to the impracticality of hand-labeling a large number of training examples. Current PU learning techniques have focused mostly on identifying reliable negative instances from the unlabeled set U. In this paper, we address the oft-overlooked PU learning problem when the number of training examples in the positive set Pis small. We propose a novel technique LPLP (Learning from Probabilistically Labeled Positive examples) and apply the approach to classify product pages from commercial websites. The experimental results demonstrate that our approach outperforms existing methods significantly, even in the challenging cases where the positive examples in Pand the hidden positive examples in Uwere not drawn from the same distribution.",
"corpus_id": 449953,
"score": 0
},
{
"doc_id": "3958127",
"title": "Positive Unlabeled Learning for Data Stream Classification",
"abstract": "Learning from positive and unlabeled examples (PU learning) has been investigated in recent years as an alternative learning model for dealing with situations where negative training examples are not available. It has many real world applications, but it has yet to be applied in the data stream environment where it is highly possible that only a small set of positive data and no negative data is available. An important challenge is to address the issue of concept drift in the data stream environment, which is not easily handled by the traditional PU learning techniques. This paper studies how to devise PU learning techniques for the data stream environment. Unlike existing data stream classification methods that assume both positive and negative training data are available for learning, we propose a novel PU learning technique LELC (PU Learning by Extracting Likely positive and negative micro-Clusters) for document classification. LELC only requires a small set of positive examples and a set of unlabeled examples which is easily obtainable in the data stream environment to build accurate classifiers. Experimental results show that LELC is a PU learning method that can effectively address the issues in the data stream environment with significantly better speed and accuracy on capturing concept drift than the existing state-of-the-art PU learning techniques.",
"corpus_id": 3958127,
"score": 0
},
{
"doc_id": "16495280",
"title": "A Formal Specification for Web Services Composition and Verification",
"abstract": "Due to the promising features of Web services, their deployment and research are booming. Among them, various techniques for Web service composition have been developed. In this paper, we propose a new composition framework. We use automata to describe behaviors of Web services. Each of underlying Web services can interact with others through asynchronous messages passing according to its interaction role (client or server). All these messages are recorded by a virtual global observer and the observation result is just the composition conversation of Web services. We also develop a formal a top-down verification mechanism on this framework and provide some realizable conditions for a successful composition",
"corpus_id": 16495280,
"score": 0
}
] |
arnetminer | {
"doc_id": "17435545",
"title": "Improved spiral sense reconstruction using a multiscale wavelet model",
"abstract": "SENSE has been widely accepted and extensively studied in the community of parallel MRI. Although many regularization approaches have been developed to address the ill-conditioning problem for Cartesian SENSE, fewer efforts have been made to address this problem when the sampling trajectory is non-Cartesian. For non-Cartesian SENSE using the iterative conjugate gradient method, ill- conditioning can degrade not only the signal-to-noise ratio, but also the convergence behavior. This paper proposes a regularization technique for non-Cartesian SENSE using a multiscale wavelet model. The technique models the desired image as a random field whose wavelet transform coefficients obey a generalized Gaussian distribution. The effectiveness of the proposed method has been validated by in vivo experiments.",
"corpus_id": 17435545
} | [
{
"doc_id": "14675051",
"title": "REGULARIZED SENSE RECONSTRUCTION USING ITERATIVELY REFINED TOTAL VARIATION METHOD",
"abstract": "SENSE has been widely accepted as one of the standard reconstruction algorithms for parallel MRI. When large acceleration factors are employed, the SENSE reconstruction becomes very ill-conditioned. For Cartesian SENSE, Tikhonov regularization has been commonly used. However, the Tikhonov regularized image usually tends to be overly smooth, and a high-quality regularization image is desirable to alleviate this problem but is not available. In this paper, we propose a new SENSE regularization technique that is based on total variation with iterated refinement using Bregman iteration. It penalizes highly oscillatory noise but allows sharp edges in reconstruction without the need for prior information. In addition, the Bregman iteration refines the image details iteratively. The method is shown to be able to significantly reduce the artifacts in SENSE reconstruction",
"corpus_id": 14675051,
"score": 1
},
{
"doc_id": "1710451",
"title": "Joint estimation of image and coil sensitivities in parallel MRI",
"abstract": "Parallel magnetic resonance imaging (MRI) using multichannel receiver coils has emerged as an effective tool to reduce imaging time in various dynamic imaging applications. However, there are still a number of image reconstruction issues that have not been fully addressed, thereby limiting the level of speed enhancement achievable with the technology. This paper considers the inaccuracy of coil sensitivities in conventional reconstruction methods such as SENSE, and reformulates the image reconstruction problem as a joint estimation of the coil sensitivities and the desired image, which is solved by an iterative algorithm. Experimental results demonstrate the effectiveness of the proposed method especially when large acceleration factors are used",
"corpus_id": 1710451,
"score": 1
},
{
"doc_id": "12209134",
"title": "JOINT ESTIMATION OF IMAGE AND COIL SENSITIVITIES IN PARALLEL SPIRAL MRI",
"abstract": "Spiral MRI has received increasing attention due to its reduced T 2*-decay and robustness against bulk physiologic motion. In parallel imaging, spiral trajectories are especially of great interest due to their inherent self-calibration capabilities, which is especially useful for dynamic imaging applications such as fMRI and cardiac imaging. The existing self-calibration techniques for spiral use the k-space center data that are sampled densely in the accelerated acquisition for coil sensitivity estimation. There exists a trade-off in choosing the radius of the center data: it must be sufficiently large to contain all major spatial frequencies of coil sensitivity, but not too large to cause significant aliasing artifacts due to undersampling below Nyquist rate as the trajectory moves away from the center k-space. To address this tradeoff, we generalize the JSENSE approach, which has demonstrated success in Cartesian case, to spiral trajectory. Specifically, the method jointly estimates the coil sensitivities and reconstructs the desired image through cross validations so that the sensitivities are estimated from the full data recovered by SENSE instead of the center k-space data only, thereby increasing high frequency information without introducing aliasing artifacts. We use experimental results to show the proposed method improves sensitivities, which leads to a more accurate SENSE reconstruction",
"corpus_id": 12209134,
"score": 1
},
{
"doc_id": "23481142",
"title": "Robust content-based image indexing using contextual clues and automatic pseudofeedback",
"abstract": "Abstract.In this paper we present a robust information integration approach to identifying images of persons in large collections such as the Web. The underlying system relies on combining content analysis, which involves face detection and recognition, with context analysis, which involves extraction of text or HTML features. Two aspects are explored to test the robustness of this approach: sensitivity of the retrieval performance to the context analysis parameters and automatic construction of a facial image database via automatic pseudofeedback. For the sensitivity testing, we reevaluate system performance while varying context analysis parameters. This is compared with a learning approach where association rules among textual feature values and image relevance are learned via the CN2 algorithm. A face database is constructed by clustering after an initial retrieval relying on face detection and context analysis alone. Experimental results indicate that the approach is robust for identifying and indexing person images.",
"corpus_id": 23481142,
"score": 0
},
{
"doc_id": "39231279",
"title": "An Intelligent Differential Evolution Algorithm for Designing Trading-Ratio System of Water Market",
"abstract": "As a novel optimization technique, neural network based optimization has gained much attention and some applications during the past decade. To enhance the performance of Differential Evolution Algorithm (DEA), which is an evolutionary computation technique through individual improvement plus population cooperation and competition, an intelligent Differential Evolution Algorithm (IDEA) is proposed by incorporating neural network based search behaviors into classic DEA. Firstly, DEA operators are used for exploration by updating individuals so as to maintain the diversity of population and speedup the search process. Secondly, a multi-layer feed-forward neural network is employed for local exploitation to avoid being trapped in local optima and improve the convergence of the IDEA. Simulation results and comparisons based on well-known benchmarks and optimal designing of trading-ratio system for water market demonstrate that the IDEA can effectively enhance the searching efficiency and greatly improve the searching quality.",
"corpus_id": 39231279,
"score": 0
},
{
"doc_id": "17205706",
"title": "Rule interestingness analysis using OLAP operations",
"abstract": "The problem of interestingness of discovered rules has been investigated by many researchers. The issue is that data mining algorithms often generate too many rules, which make it very hard for the user to find the interesting ones. Over the years many techniques have been proposed. However, few have made it to real-life applications. Since August 2004, we have been working on a major application for Motorola. The objective is to find causes of cellular phone call failures from a large amount of usage log data. Class association rules have been shown to be suitable for this type of diagnostic data mining application. We were also able to put several existing interestingness methods to the test, which revealed some major shortcomings. One of the main problems is that most existing methods treat rules individually. However, we discovered that users seldom regard a single rule to be interesting by itself. A rule is only interesting in the context of some other rules. Furthermore, in many cases, each individual rule may not be interesting, but a group of them together can represent an important piece of knowledge. This led us to discover a deficiency of the current rule mining paradigm. Using non-zero minimum support and non-zero minimum confidence eliminates a large amount of context information, which makes rule analysis difficult. This paper proposes a novel approach to deal with all of these issues, which casts rule analysis as OLAP operations and general impression mining. This approach enables the user to explore the knowledge space to find useful knowledge easily and systematically. It also provides a natural framework for visualization. As an evidence of its effectiveness, our system, called Opportunity Map, based on these ideas has been deployed, and it is in daily use in Motorola for finding actionable knowledge from its engineering and other types of data sets.",
"corpus_id": 17205706,
"score": 0
},
{
"doc_id": "31431918",
"title": "Tight Bounds on the Estimation Distance Using Wavelet",
"abstract": "Time series similarity search is of growing importance in many applications. Wavelet transforms are used as a dimensionality reduction technique to permit efficient similarity search over high-dimensional time series data. This paper proposes the tight upper and lower bounds on the estimation distance using wavelet transform, and we show that the traditional distance estimation is only part of our lower bound. According to the lower bound, we can exclude more dissimilar time series than traditional method. And according to the upper bound, we can directly judge whether two time series are similar, and further reduce the number of time series to process in original time domain. The experiments have shown that using the upper and lower tight bounds can significantly improve filter efficiency and reduce running time than traditional method.",
"corpus_id": 31431918,
"score": 0
},
{
"doc_id": "6193650",
"title": "Dealing with different distributions in learning from",
"abstract": "In the problem of learning with positive and unlabeled examples, existing research all assumes that positive examples P and the hidden positive examples in the unlabeled set U are generated from the same distribution. This assumption may be violated in practice. In such cases, existing methods perform poorly. This paper proposes a novel technique A-EM to deal with the problem. Experimental results with product page classification demonstrate the effectiveness of the proposed technique.",
"corpus_id": 6193650,
"score": 0
}
] |
arnetminer | {
"doc_id": "159216",
"title": "A weight based compact genetic algorithm",
"abstract": "In order to improve the performance of the compact Genetic Algorithm (cGA) to solve difficult optimization problems, an improved cGA which named as the weight based compact Genetic Algorithm (wcGA) is proposed. In the wcGA, S individuals are generated from the probability vector in each generation, when the winner competing with the other S-1 individuals to update the probability vector, different weights are multiplied to each solution according to the sequence of the solution ranked in the S-1 individuals. Experimental results on three kinds of Benchmark functions show that the proposed algorithm has higher optimal precision than that of the standard cGA and the cGA simulating higher selection pressures.",
"corpus_id": 159216
} | [
{
"doc_id": "13160546",
"title": "A Population-Based Incremental Learning Algorithm with Elitist Strategy",
"abstract": "The population-based incremental learning (PBIL) is a novel evolutionary algorithm combined the mechanisms of the Genetic Algorithm with competitive learning. In this paper, the influence of the number of selected best solutions on the convergence speed of the PBIL is studied by experiment. Based on experimental results, a PBIL algorithm with elitist strategy, named Double Learning PBIL (DLPBIL), is proposed. The new algorithm learns both the selected best solutions in current population and the optimal solution found so far in the algorithm at same time. Experimental results show that the DLPBIL out-performs the standard PBIL. Both the convergence speed and the solution quality are improved.",
"corpus_id": 13160546,
"score": 1
},
{
"doc_id": "5978304",
"title": "Estimation of Distribution Algorithms for the Machine-Part Cell Formation",
"abstract": "The machine-part cell formation is a NP- complete combinational optimization in cellular manufacturing system. Previous researches have revealed that although the genetic algorithm (GA) can get high quality solutions, special selection strategy, crossover and mutation operators as well as the parameters must be defined previously to solve the problem efficiently and flexibly. The Estimation of Distribution Algorithms (EDAs) has recently been recognized as a new computing paradigm in evolutionary computation which can overcome some drawbacks of the traditional GA mentioned above. In this paper, two kinds of the EDAs, UMDA and EBNA BIC are applied to solve the machine-part cell formation problem. Simulation results on six well known problems show that the UMDA and EBNA BIC can attain satisfied solutions more simply and efficiently.",
"corpus_id": 5978304,
"score": 1
},
{
"doc_id": "13120258",
"title": "Hybrid Ant Colony Algorithm and Its Application on Function Optimization",
"abstract": "A new hybrid ant colony algorithm was proposed. Firstly, weight factor was introduced to the binary ant colony algorithm, and then we obtained a new probability by combining probability model of Population based incremental learning (PBIL) with transfer probability of ants pheromone . The new population are produced by probability model of PBIL, transfer probability of ants pheromone and the probability of proposed algorithm so that population polymorphism is ensured and the optimal convergence rate and the ability of breaking away from the local minima are improved. Optimization simulation results based on the benchmark test functions show that the hybrid algorithm has higher convergence rate and stability than binary ant colony algorithm (BACA) and Population based incremental learning (PBIL).",
"corpus_id": 13120258,
"score": 1
},
{
"doc_id": "17698779",
"title": "Neural Network Identification Method Applied to the Nonlinear System",
"abstract": "A kind of neural network solution method has been proposed in the paper aiming at a class of non-linear process control system with the characteristic of time delay. In this scheme, a new-type associative memory neural network is used to model the controlled system, and the fuzzy neural network with inverse identification structure is adopted to control the nonlinear process system. This fuzzy neural network control method adopts the structure of three layers combine neural network identifier with inverse structure. Computer simulation and lab application show that it is effective to adopt this scheme to control on-linear process system with time delay.",
"corpus_id": 17698779,
"score": 1
},
{
"doc_id": "16942668",
"title": "Learning from Positive and Unlabeled Examples with Different Data Distributions",
"abstract": "We study the problem of learning from positive and unlabeled examples. Although several techniques exist for dealing with this problem, they all assume that positive examples in the positive set P and the positive examples in the unlabeled set U are generated from the same distribution. This assumption may be violated in practice. For example, one wants to collect all printer pages from the Web. One can use the printer pages from one site as the set P of positive pages and use product pages from another site as U. One wants to classify the pages in U into printer pages and non-printer pages. Although printer pages from the two sites have many similarities, they can also be quite different because different sites often present similar products in different styles and have different focuses. In such cases, existing methods perform poorly. This paper proposes a novel technique A-EM to deal with the problem. Experiment results with product page classification demonstrate the effectiveness of the proposed technique.",
"corpus_id": 16942668,
"score": 0
},
{
"doc_id": "17844333",
"title": "Hybrid Algorithm Combining Ant Colony Algorithm with Genetic Algorithm for Continuous Domain",
"abstract": "Ant colony algorithm is a kind of new heuristic biological modeling method which has the ability of parallel processing and global searching. By use of the properties of ant colony algorithm and genetic algorithm, the hybrid algorithm which adopts genetic algorithm to distribute the original pheromone is proposed to solve the continuous optimization problem. Several solutions are obtained using the ant colony algorithm through pheromone accumulation and renewal. Finally, by using crossover and mutation operation of genetic algorithm, some effective solutions are obtained. The results of experiments show better performances of the new algorithm based on six continuous test functions compared with the methods available in literature.",
"corpus_id": 17844333,
"score": 0
},
{
"doc_id": "13900102",
"title": "Web Page Cleaning for Web Mining through Feature Weighting",
"abstract": "Unlike conventional data or text, Web pages typically contain a large amount of information that is not part of the main contents of the pages, e.g., banner ads, navigation bars, and copyright notices. Such irrelevant information (which we call Web page noise) in Web pages can seriously harm Web mining, e.g., clustering and classification. In this paper, we propose a novel feature weighting technique to deal with Web page noise to enhance Web mining. This method first builds a compressed structure tree to capture the common structure and comparable blocks in a set of Web pages. It then uses an information based measure to evaluate the importance of each node in the compressed structure tree. Based on the tree and its node importance values, our method assigns a weight to each word feature in its content block. The resulting weights are used in Web mining. We evaluated the proposed technique with two Web mining tasks, Web page clustering and Web page classification. Experimental results show that our weighting method is able to dramatically improve the mining results.",
"corpus_id": 13900102,
"score": 0
},
{
"doc_id": "206597617",
"title": "Binary Tree Support Vector Machine Based on Kernel Fisher Discriminant for Multi-classification",
"abstract": "In order to improve the accuracy of the conventional algorithms for multi-classifications, we propose a binary tree support vector machine based on Kernel Fisher Discriminant in this paper. To examine the training accuracy and the generalization performance of the proposed algorithm, One-against-All, One-against-One and the proposed algorithms are applied to five UCI data sets. The experimental results show that in general, the training and the testing accuracy of the proposed algorithm is the best one, and there exist no unclassifiable regions in the proposed algorithm.",
"corpus_id": 206597617,
"score": 0
},
{
"doc_id": "410427",
"title": "Database and location management schemes for mobile communications",
"abstract": "Signaling traffic incurred in tracking mobile users and delivering enhanced services causes an additional load in the network. Efficient database and location management schemes are needed to meet the challenges from high density and mobility of users, and various service features. In this paper, the general location control and management function is treated as the combination of two parts, the global and local scope. New schemes and methods are proposed, and improvements achieved over established basic schemes are shown by using simulations.",
"corpus_id": 410427,
"score": 0
}
] |
arnetminer | {
"doc_id": "9249843",
"title": "Mining Latent Associations of Objects Using a Typed Mixture Model--A Case Study on Expert/Expertise Mining",
"abstract": "This paper studies the problem of discovering latent associations among objects in text documents. Specifically, given two sets of objects and various types of co-occurrence data concerning the objects existing in texts, we aim to discover the hidden or latent associative relationships between the two sets of objects. Existing methods are not directly applicable as they are unable to consider all this information. For example, the probabilistic mixture model called Separable Mixture Model (SMM) proposed by Hofmann can use only one type of co-occurrences to mine latent associations. This paper proposes a more general probabilistic mixture model called the Typed Separable Mixture Model (TSMM), which is able to use all types of co-occurrences within a single framework. Experimental results based on the expert/expertise mining task show that TSMM outperforms SMM significantly.",
"corpus_id": 9249843
} | [
{
"doc_id": "8786674",
"title": "Entity discovery and assignment for opinion mining applications",
"abstract": "Opinion mining became an important topic of study in recent years due to its wide range of applications. There are also many companies offering opinion mining services. One problem that has not been studied so far is the assignment of entities that have been talked about in each sentence. Let us use forum discussions about products as an example to make the problem concrete. In a typical discussion post, the author may give opinions on multiple products and also compare them. The issue is how to detect what products have been talked about in each sentence. If the sentence contains the product names, they need to be identified. We call this problem entity discovery. If the product names are not explicitly mentioned in the sentence but are implied due to the use of pronouns and language conventions, we need to infer the products. We call this problem entity assignment. These problems are important because without knowing what products each sentence talks about the opinion mined from the sentence is of little use. In this paper, we study these problems and propose two effective methods to solve the problems. Entity discovery is based on pattern discovery and entity assignment is based on mining of comparative sentences. Experimental results using a large number of forum posts demonstrate the effectiveness of the technique. Our system has also been successfully tested in a commercial setting.",
"corpus_id": 8786674,
"score": 1
},
{
"doc_id": "61155076",
"title": "Opinion Mining",
"abstract": "Описанл процедуру формування інтегрального показника об’єкта згідно з відгуками користувачів під час застосування методів оцінювання опінії текстової інформації у web-документах. Запропоновано використання лінгвістичних змінних та застосування вагових коефіцієнтів для достовірнішого результату оцінювання емоційного забарвлення текстової інформації. Ключові слова: опінія, емоційне забарвлення, об’єкт, інтегральний показник, лінгвістична змінна, ваговий коефіцієнт.",
"corpus_id": 61155076,
"score": 1
},
{
"doc_id": "12241191",
"title": "Finding Actionable Knowledge via Automated Comparison",
"abstract": "The problem of finding interesting and actionable patterns is a major challenge in data mining. It has been studied by many data mining researchers. The issue is that data mining algorithms often generate too many patterns, which make it very hard for the user to find those truly useful ones. Over the years many techniques have been proposed. However, few have made it to real-life applications. At the end of 2005, we built a data mining system for Motorola (called Opportunity Map) to enable the user to explore the space of a large number of rules in order to find actionable knowledge. The approach is based on the concept of rule cubes and operations on rule cubes. A rule cube is similar to a data cube, but stores rules. Since its deployment, some issues have also been identified during the regular use of the system in Motorola. One of the key issues is that although the operations on rule cubes are flexible, each operation is primitive and has to be initiated by the user. Finding a piece of actionable knowledge typically involves many operations and intense visual inspections, which are labor-intensive and time-consuming. From interactions with our users, we identified a generic problem that is crucial for finding actionable knowledge. The problem involves extensive comparison of sub-populations and identification of the cause of their differences. This paper first defines the problem and then proposes an effective method to solve the problem automatically. To the best of our knowledge, there is no reported study of this problem. The new method has been added to the Opportunity Map system and is now in daily use in Motorola.",
"corpus_id": 12241191,
"score": 1
},
{
"doc_id": "2182731",
"title": "Speed-up iterative frequent itemset mining with constraint changes",
"abstract": "Mining of frequent itemsets is a fundamental data mining task. Past research has proposed many efficient algorithms for this purpose. Recent work also highlighted the importance of using constraints to focus the mining process to mine only those relevant itemsets. In practice, data mining is often an interactive and iterative process. The user typically changes constraints and runs the mining algorithm many times before being satisfied with the final results. This interactive process is very time consuming. Existing mining algorithms are unable to take advantage of this iterative process to use previous mining results to speed up the current mining process. This results in an enormous waste of time and computation. In this paper, we propose an efficient technique to utilize previous mining results to improve the efficiency of current mining when constraints are changed. We first introduce the concept of tree boundary to summarize useful information available from previous mining. We then show that the tree boundary provides an effective and efficient framework for the new mining. The proposed technique has been implemented in the context of two existing frequent itemset mining algorithms, FP-tree and tree projection. Experiment results on both synthetic and real-life datasets show that the proposed approach achieves a dramatic saving of computation.",
"corpus_id": 2182731,
"score": 1
},
{
"doc_id": "6193650",
"title": "Dealing with different distributions in learning from",
"abstract": "In the problem of learning with positive and unlabeled examples, existing research all assumes that positive examples P and the hidden positive examples in the unlabeled set U are generated from the same distribution. This assumption may be violated in practice. In such cases, existing methods perform poorly. This paper proposes a novel technique A-EM to deal with the problem. Experimental results with product page classification demonstrate the effectiveness of the proposed technique.",
"corpus_id": 6193650,
"score": 1
},
{
"doc_id": "33407170",
"title": "Manufacturing Grid: Needs, Concept, and Architecture",
"abstract": "As a new approach, grid technology is rapidly used in scientific computing, large-scale data management, and collaborative work. But in the field of manufacturing, the application of grid is just at the beginning. The paper proposes the concept of manufacturing. The needs, definition and architecture of manufacturing gird are discussed, which explains why needs manufacturing grid, what is manufacturing grid and how to construct a manufacturing grid system.",
"corpus_id": 33407170,
"score": 0
},
{
"doc_id": "26759633",
"title": "Adaptive video coding using mixed-domain filter banks having optimal-shaped subbands",
"abstract": "This contribution presents a mixed-domain filter bank. This filter bank can realize arbitrary (and thereby optimal) spectral decomposition and therefore is very suitable for the efficient subband coding of video signals. A coding scheme using the proposed filter bank is considered, which takes advantage of the spectral distribution of the signal and is especially selective to motion.<<ETX>>",
"corpus_id": 26759633,
"score": 0
},
{
"doc_id": "3874732",
"title": "A Neural Network Approach To Shape From Shading",
"abstract": "In this paper, we propose a method of recovering shape from shading that solves directly for the surface height using neural networks. The main motivation of this paper is to provide an answer to the open problem proposed by Zhou and Chellappa [11]. We first formulate the shape from shading problem by combining a triangular element surface model with a linearized reflectance map. Then, we use a linear feed-forward network architecture with six layers to compute the surface height with a singular value decomposition. The weights in the model initialized using eigenvectors and eigen-values of the stiffness matrix of objective functional. Experimental results show that our solution is very effective.",
"corpus_id": 3874732,
"score": 0
},
{
"doc_id": "3413319",
"title": "Filtering Spam in Social Tagging System with Dynamic Behavior Analysis",
"abstract": "Spam in social tagging systems introduced by some malicious participants has become a serious problem for its global popularizing. Some studies which can be deduced to static user data analysis have been presented to combat tag spam, but either they do not give an exact evaluation or the algorithms’ performances are not good enough. In this paper, we proposed a novel method based on analysis of dynamic user behavior data for the notion that users’ behaviors in social tagging system can reflect the quality of tags more accurately. Through modeling the different categories of participants’ behaviors, we extract tag-associated actions which can be used to estimate whether tag is spam, and then present our algorithm that can filter the tag spam in the results of social search. The experiment results show that our method indeed outperforms the existing methods based on static data and effectively defends against the tag spam in various spam attacks.",
"corpus_id": 3413319,
"score": 0
},
{
"doc_id": "39145518",
"title": "Boundary Constrained Manifold Unfolding",
"abstract": null,
"corpus_id": 39145518,
"score": 0
}
] |
arnetminer | {
"doc_id": "11383614",
"title": "Mining data records in Web pages",
"abstract": "A large amount of information on the Web is contained in regularly structured objects, which we call data records. Such data records are important because they often present the essential information of their host pages, e.g., lists of products or services. It is useful to mine such data records in order to extract information from them to provide value-added services. Existing automatic techniques are not satisfactory because of their poor accuracies. In this paper, we propose a more effective technique to perform the task. The technique is based on two observations about data records on the Web and a string matching algorithm. The proposed technique is able to mine both contiguous and non-contiguous data records. Our experimental results show that the proposed technique outperforms existing techniques substantially.",
"corpus_id": 11383614
} | [
{
"doc_id": "61155076",
"title": "Opinion Mining",
"abstract": "Описанл процедуру формування інтегрального показника об’єкта згідно з відгуками користувачів під час застосування методів оцінювання опінії текстової інформації у web-документах. Запропоновано використання лінгвістичних змінних та застосування вагових коефіцієнтів для достовірнішого результату оцінювання емоційного забарвлення текстової інформації. Ключові слова: опінія, емоційне забарвлення, об’єкт, інтегральний показник, лінгвістична змінна, ваговий коефіцієнт.",
"corpus_id": 61155076,
"score": 1
},
{
"doc_id": "6193650",
"title": "Dealing with different distributions in learning from",
"abstract": "In the problem of learning with positive and unlabeled examples, existing research all assumes that positive examples P and the hidden positive examples in the unlabeled set U are generated from the same distribution. This assumption may be violated in practice. In such cases, existing methods perform poorly. This paper proposes a novel technique A-EM to deal with the problem. Experimental results with product page classification demonstrate the effectiveness of the proposed technique.",
"corpus_id": 6193650,
"score": 1
},
{
"doc_id": "12241191",
"title": "Finding Actionable Knowledge via Automated Comparison",
"abstract": "The problem of finding interesting and actionable patterns is a major challenge in data mining. It has been studied by many data mining researchers. The issue is that data mining algorithms often generate too many patterns, which make it very hard for the user to find those truly useful ones. Over the years many techniques have been proposed. However, few have made it to real-life applications. At the end of 2005, we built a data mining system for Motorola (called Opportunity Map) to enable the user to explore the space of a large number of rules in order to find actionable knowledge. The approach is based on the concept of rule cubes and operations on rule cubes. A rule cube is similar to a data cube, but stores rules. Since its deployment, some issues have also been identified during the regular use of the system in Motorola. One of the key issues is that although the operations on rule cubes are flexible, each operation is primitive and has to be initiated by the user. Finding a piece of actionable knowledge typically involves many operations and intense visual inspections, which are labor-intensive and time-consuming. From interactions with our users, we identified a generic problem that is crucial for finding actionable knowledge. The problem involves extensive comparison of sub-populations and identification of the cause of their differences. This paper first defines the problem and then proposes an effective method to solve the problem automatically. To the best of our knowledge, there is no reported study of this problem. The new method has been added to the Opportunity Map system and is now in daily use in Motorola.",
"corpus_id": 12241191,
"score": 1
},
{
"doc_id": "8786674",
"title": "Entity discovery and assignment for opinion mining applications",
"abstract": "Opinion mining became an important topic of study in recent years due to its wide range of applications. There are also many companies offering opinion mining services. One problem that has not been studied so far is the assignment of entities that have been talked about in each sentence. Let us use forum discussions about products as an example to make the problem concrete. In a typical discussion post, the author may give opinions on multiple products and also compare them. The issue is how to detect what products have been talked about in each sentence. If the sentence contains the product names, they need to be identified. We call this problem entity discovery. If the product names are not explicitly mentioned in the sentence but are implied due to the use of pronouns and language conventions, we need to infer the products. We call this problem entity assignment. These problems are important because without knowing what products each sentence talks about the opinion mined from the sentence is of little use. In this paper, we study these problems and propose two effective methods to solve the problems. Entity discovery is based on pattern discovery and entity assignment is based on mining of comparative sentences. Experimental results using a large number of forum posts demonstrate the effectiveness of the technique. Our system has also been successfully tested in a commercial setting.",
"corpus_id": 8786674,
"score": 1
},
{
"doc_id": "2182731",
"title": "Speed-up iterative frequent itemset mining with constraint changes",
"abstract": "Mining of frequent itemsets is a fundamental data mining task. Past research has proposed many efficient algorithms for this purpose. Recent work also highlighted the importance of using constraints to focus the mining process to mine only those relevant itemsets. In practice, data mining is often an interactive and iterative process. The user typically changes constraints and runs the mining algorithm many times before being satisfied with the final results. This interactive process is very time consuming. Existing mining algorithms are unable to take advantage of this iterative process to use previous mining results to speed up the current mining process. This results in an enormous waste of time and computation. In this paper, we propose an efficient technique to utilize previous mining results to improve the efficiency of current mining when constraints are changed. We first introduce the concept of tree boundary to summarize useful information available from previous mining. We then show that the tree boundary provides an effective and efficient framework for the new mining. The proposed technique has been implemented in the context of two existing frequent itemset mining algorithms, FP-tree and tree projection. Experiment results on both synthetic and real-life datasets show that the proposed approach achieves a dramatic saving of computation.",
"corpus_id": 2182731,
"score": 1
},
{
"doc_id": "18666865",
"title": "Multi-sphere Support Vector Data Description for Outliers Detection on Multi-distribution Data",
"abstract": "SVDD has been proved a powerful tool for outlier detection. However, in detecting outliers on multi-distribution data, namely there are distinctive distributions in the data, it is very challenging for SVDD to generate a hyper-sphere for distinguishing outliers from normal data. Even if such a hyper-sphere can be identified, its performance is usually not good enough. This paper proposes an multi-sphere SVDD approach, named MS-SVDD, for outlier detection on multi-distribution data. First, an adaptive sphere detection method is proposed to detect data distributions in the dataset. The data is partitioned in terms of the identified data distributions, and the corresponding SVDD classifiers are constructed separately. Substantial experiments on both artificial and real-world datasets have demonstrated that the proposed approach outperforms original SVDD.",
"corpus_id": 18666865,
"score": 0
},
{
"doc_id": "3413319",
"title": "Filtering Spam in Social Tagging System with Dynamic Behavior Analysis",
"abstract": "Spam in social tagging systems introduced by some malicious participants has become a serious problem for its global popularizing. Some studies which can be deduced to static user data analysis have been presented to combat tag spam, but either they do not give an exact evaluation or the algorithms’ performances are not good enough. In this paper, we proposed a novel method based on analysis of dynamic user behavior data for the notion that users’ behaviors in social tagging system can reflect the quality of tags more accurately. Through modeling the different categories of participants’ behaviors, we extract tag-associated actions which can be used to estimate whether tag is spam, and then present our algorithm that can filter the tag spam in the results of social search. The experiment results show that our method indeed outperforms the existing methods based on static data and effectively defends against the tag spam in various spam attacks.",
"corpus_id": 3413319,
"score": 0
},
{
"doc_id": "33407170",
"title": "Manufacturing Grid: Needs, Concept, and Architecture",
"abstract": "As a new approach, grid technology is rapidly used in scientific computing, large-scale data management, and collaborative work. But in the field of manufacturing, the application of grid is just at the beginning. The paper proposes the concept of manufacturing. The needs, definition and architecture of manufacturing gird are discussed, which explains why needs manufacturing grid, what is manufacturing grid and how to construct a manufacturing grid system.",
"corpus_id": 33407170,
"score": 0
},
{
"doc_id": "19080227",
"title": "Positive solutions of a nonlinear three-point boundary value problem",
"abstract": "In this paper, by using Krasnoselskii's fixed point theorem in a cone, we study the existence of single and multiple positive solutions to the three-point boundary value problem (BVP)y''(t)+a(t)f(y(t))=0,0",
"corpus_id": 19080227,
"score": 0
},
{
"doc_id": "206597617",
"title": "Binary Tree Support Vector Machine Based on Kernel Fisher Discriminant for Multi-classification",
"abstract": "In order to improve the accuracy of the conventional algorithms for multi-classifications, we propose a binary tree support vector machine based on Kernel Fisher Discriminant in this paper. To examine the training accuracy and the generalization performance of the proposed algorithm, One-against-All, One-against-One and the proposed algorithms are applied to five UCI data sets. The experimental results show that in general, the training and the testing accuracy of the proposed algorithm is the best one, and there exist no unclassifiable regions in the proposed algorithm.",
"corpus_id": 206597617,
"score": 0
}
] |
arnetminer | {
"doc_id": "207774406",
"title": "Controlling FD and MVD Inferences in MLS",
"abstract": null,
"corpus_id": 207774406
} | [
{
"doc_id": "2182731",
"title": "Speed-up iterative frequent itemset mining with constraint changes",
"abstract": "Mining of frequent itemsets is a fundamental data mining task. Past research has proposed many efficient algorithms for this purpose. Recent work also highlighted the importance of using constraints to focus the mining process to mine only those relevant itemsets. In practice, data mining is often an interactive and iterative process. The user typically changes constraints and runs the mining algorithm many times before being satisfied with the final results. This interactive process is very time consuming. Existing mining algorithms are unable to take advantage of this iterative process to use previous mining results to speed up the current mining process. This results in an enormous waste of time and computation. In this paper, we propose an efficient technique to utilize previous mining results to improve the efficiency of current mining when constraints are changed. We first introduce the concept of tree boundary to summarize useful information available from previous mining. We then show that the tree boundary provides an effective and efficient framework for the new mining. The proposed technique has been implemented in the context of two existing frequent itemset mining algorithms, FP-tree and tree projection. Experiment results on both synthetic and real-life datasets show that the proposed approach achieves a dramatic saving of computation.",
"corpus_id": 2182731,
"score": 1
},
{
"doc_id": "6193650",
"title": "Dealing with different distributions in learning from",
"abstract": "In the problem of learning with positive and unlabeled examples, existing research all assumes that positive examples P and the hidden positive examples in the unlabeled set U are generated from the same distribution. This assumption may be violated in practice. In such cases, existing methods perform poorly. This paper proposes a novel technique A-EM to deal with the problem. Experimental results with product page classification demonstrate the effectiveness of the proposed technique.",
"corpus_id": 6193650,
"score": 1
},
{
"doc_id": "12241191",
"title": "Finding Actionable Knowledge via Automated Comparison",
"abstract": "The problem of finding interesting and actionable patterns is a major challenge in data mining. It has been studied by many data mining researchers. The issue is that data mining algorithms often generate too many patterns, which make it very hard for the user to find those truly useful ones. Over the years many techniques have been proposed. However, few have made it to real-life applications. At the end of 2005, we built a data mining system for Motorola (called Opportunity Map) to enable the user to explore the space of a large number of rules in order to find actionable knowledge. The approach is based on the concept of rule cubes and operations on rule cubes. A rule cube is similar to a data cube, but stores rules. Since its deployment, some issues have also been identified during the regular use of the system in Motorola. One of the key issues is that although the operations on rule cubes are flexible, each operation is primitive and has to be initiated by the user. Finding a piece of actionable knowledge typically involves many operations and intense visual inspections, which are labor-intensive and time-consuming. From interactions with our users, we identified a generic problem that is crucial for finding actionable knowledge. The problem involves extensive comparison of sub-populations and identification of the cause of their differences. This paper first defines the problem and then proposes an effective method to solve the problem automatically. To the best of our knowledge, there is no reported study of this problem. The new method has been added to the Opportunity Map system and is now in daily use in Motorola.",
"corpus_id": 12241191,
"score": 1
},
{
"doc_id": "61155076",
"title": "Opinion Mining",
"abstract": "Описанл процедуру формування інтегрального показника об’єкта згідно з відгуками користувачів під час застосування методів оцінювання опінії текстової інформації у web-документах. Запропоновано використання лінгвістичних змінних та застосування вагових коефіцієнтів для достовірнішого результату оцінювання емоційного забарвлення текстової інформації. Ключові слова: опінія, емоційне забарвлення, об’єкт, інтегральний показник, лінгвістична змінна, ваговий коефіцієнт.",
"corpus_id": 61155076,
"score": 1
},
{
"doc_id": "8786674",
"title": "Entity discovery and assignment for opinion mining applications",
"abstract": "Opinion mining became an important topic of study in recent years due to its wide range of applications. There are also many companies offering opinion mining services. One problem that has not been studied so far is the assignment of entities that have been talked about in each sentence. Let us use forum discussions about products as an example to make the problem concrete. In a typical discussion post, the author may give opinions on multiple products and also compare them. The issue is how to detect what products have been talked about in each sentence. If the sentence contains the product names, they need to be identified. We call this problem entity discovery. If the product names are not explicitly mentioned in the sentence but are implied due to the use of pronouns and language conventions, we need to infer the products. We call this problem entity assignment. These problems are important because without knowing what products each sentence talks about the opinion mined from the sentence is of little use. In this paper, we study these problems and propose two effective methods to solve the problems. Entity discovery is based on pattern discovery and entity assignment is based on mining of comparative sentences. Experimental results using a large number of forum posts demonstrate the effectiveness of the technique. Our system has also been successfully tested in a commercial setting.",
"corpus_id": 8786674,
"score": 1
},
{
"doc_id": "23965389",
"title": "Design and Implementation of Control System on Embedded Downloading Server Based on C/S Architecture",
"abstract": "Based on Client-Server architecture, this paper proposes the design and implementation of system controlling embedded-Linux downloading server. The software architecture design of this system is composed of three parts: network layer, control layer, and application layer, which is modular and pluggable. The implementation also proved the feasibility, reliability and portability of this design.",
"corpus_id": 23965389,
"score": 0
},
{
"doc_id": "14885080",
"title": "An Efficient and Accurate Method for 3D-Point Reconstruction from Multiple Views",
"abstract": "In this paper we consider the problem of finding the position of a point in space given its projections in multiple images taken by cameras with known calibration and pose. Ideally the 3D point can be obtained as the intersection of multiple known rays in space. However, with noise the rays do not meet at a single point generally. Therefore, it is necessary to find a best point of intersection. In this paper we propose a modification of the method (Ma et al., 2001. Journal of Communications in Information and Systems, (1):51–73) based on the multiple-view epipolar constraints. The solution is simple in concept and straightforward to implement. It includes generally two steps: first, image points are corrected through approximating the error model to the first order, and then the 3D point can be reconstructed from the corrected image points using any generic triangulation method. Experiments are conducted both on simulated data and on real data to test the proposed method against previous methods. It is shown that results obtained with the proposed method are consistently more accurate than those of other linear methods. When the measurement error of image points is relatively small, its results are comparable to those of maximum likelihood estimation using Newton-type optimizers; and when processing image-point correspondences cross a small number of views, the proposed method is by far more efficient than the Newton-type optimizers.",
"corpus_id": 14885080,
"score": 0
},
{
"doc_id": "30880806",
"title": "Particle swarm optimization for function optimization in noisy environment",
"abstract": "As a novel evolutionary searching technique, particle swarm optimization (PSO) has gained wide research and effective applications in the field of function optimization. However, to the best of our knowledge, most studies based on PSO are aimed at deterministic optimization problems. In this paper, the performance of PSO for function optimization in noisy environment is investigated, and an effective hybrid PSO approach named PSOOHT is proposed. In the PSOOHT, the population-based search mechanism of PSO is applied for well exploration and exploitation, and the optimal computing budget allocation (OCBA) technique is used to allocate limited sampling budgets to provide reliable evaluation and identification for good particles. Meanwhile, hypothesis test (HT) is also applied in the hybrid approach to reserve good particles and to maintain the diversity of the swarm as well. Numerical simulations based on several well-known function benchmarks with noise are carried out, and the effect of noise magnitude is also investigated as well. The results and comparisons demonstrate the superiority of PSOOHT in terms of searching quality and robustness.",
"corpus_id": 30880806,
"score": 0
},
{
"doc_id": "5571905",
"title": "Classification rule discovery with ant colony optimization",
"abstract": "Ant-based algorithms or ant colony optimization (ACO) algorithms have been applied successfully to combinatorial optimization problems. More recently, Parpinelli and colleagues applied ACO to data mining classification problems, where they introduced a classification algorithm called Ant/spl I.bar/Miner. In this paper, we present an improvement to Ant/spl I.bar/Miner (we call it Ant/spl I.bar/Miner3). The proposed version was tested on two standard problems and performed better than the original Ant/spl I.bar/Miner algorithm.",
"corpus_id": 5571905,
"score": 0
},
{
"doc_id": "45821814",
"title": "Positive solutions of a nonlinear four-point boundary value problems",
"abstract": "In this paper, by using the Krasnoselskii's theorem in a cone, we study the existence of at least one or two positive solutions to the four-point boundary value problemy^''(t)+a(t)f(y(t))=0,00. As an application, we also give some examples to demonstrate our results.",
"corpus_id": 45821814,
"score": 0
}
] |
arnetminer | {
"doc_id": "12209134",
"title": "JOINT ESTIMATION OF IMAGE AND COIL SENSITIVITIES IN PARALLEL SPIRAL MRI",
"abstract": "Spiral MRI has received increasing attention due to its reduced T 2*-decay and robustness against bulk physiologic motion. In parallel imaging, spiral trajectories are especially of great interest due to their inherent self-calibration capabilities, which is especially useful for dynamic imaging applications such as fMRI and cardiac imaging. The existing self-calibration techniques for spiral use the k-space center data that are sampled densely in the accelerated acquisition for coil sensitivity estimation. There exists a trade-off in choosing the radius of the center data: it must be sufficiently large to contain all major spatial frequencies of coil sensitivity, but not too large to cause significant aliasing artifacts due to undersampling below Nyquist rate as the trajectory moves away from the center k-space. To address this tradeoff, we generalize the JSENSE approach, which has demonstrated success in Cartesian case, to spiral trajectory. Specifically, the method jointly estimates the coil sensitivities and reconstructs the desired image through cross validations so that the sensitivities are estimated from the full data recovered by SENSE instead of the center k-space data only, thereby increasing high frequency information without introducing aliasing artifacts. We use experimental results to show the proposed method improves sensitivities, which leads to a more accurate SENSE reconstruction",
"corpus_id": 12209134
} | [
{
"doc_id": "14675051",
"title": "REGULARIZED SENSE RECONSTRUCTION USING ITERATIVELY REFINED TOTAL VARIATION METHOD",
"abstract": "SENSE has been widely accepted as one of the standard reconstruction algorithms for parallel MRI. When large acceleration factors are employed, the SENSE reconstruction becomes very ill-conditioned. For Cartesian SENSE, Tikhonov regularization has been commonly used. However, the Tikhonov regularized image usually tends to be overly smooth, and a high-quality regularization image is desirable to alleviate this problem but is not available. In this paper, we propose a new SENSE regularization technique that is based on total variation with iterated refinement using Bregman iteration. It penalizes highly oscillatory noise but allows sharp edges in reconstruction without the need for prior information. In addition, the Bregman iteration refines the image details iteratively. The method is shown to be able to significantly reduce the artifacts in SENSE reconstruction",
"corpus_id": 14675051,
"score": 1
},
{
"doc_id": "1710451",
"title": "Joint estimation of image and coil sensitivities in parallel MRI",
"abstract": "Parallel magnetic resonance imaging (MRI) using multichannel receiver coils has emerged as an effective tool to reduce imaging time in various dynamic imaging applications. However, there are still a number of image reconstruction issues that have not been fully addressed, thereby limiting the level of speed enhancement achievable with the technology. This paper considers the inaccuracy of coil sensitivities in conventional reconstruction methods such as SENSE, and reformulates the image reconstruction problem as a joint estimation of the coil sensitivities and the desired image, which is solved by an iterative algorithm. Experimental results demonstrate the effectiveness of the proposed method especially when large acceleration factors are used",
"corpus_id": 1710451,
"score": 1
},
{
"doc_id": "10223976",
"title": "A combinational feature selection and ensemble neural network method for classification of gene expression data",
"abstract": "BackgroundMicroarray experiments are becoming a powerful tool for clinical diagnosis, as they have the potential to discover gene expression patterns that are characteristic for a particular disease. To date, this problem has received most attention in the context of cancer research, especially in tumor classification. Various feature selection methods and classifier design strategies also have been generally used and compared. However, most published articles on tumor classification have applied a certain technique to a certain dataset, and recently several researchers compared these techniques based on several public datasets. But, it has been verified that differently selected features reflect different aspects of the dataset and some selected features can obtain better solutions on some certain problems. At the same time, faced with a large amount of microarray data with little knowledge, it is difficult to find the intrinsic characteristics using traditional methods. In this paper, we attempt to introduce a combinational feature selection method in conjunction with ensemble neural networks to generally improve the accuracy and robustness of sample classification.ResultsWe validate our new method on several recent publicly available datasets both with predictive accuracy of testing samples and through cross validation. Compared with the best performance of other current methods, remarkably improved results can be obtained using our new strategy on a wide range of different datasets.ConclusionsThus, we conclude that our methods can obtain more information in microarray data to get more accurate classification and also can help to extract the latent marker genes of the diseases for better diagnosis and treatment.",
"corpus_id": 10223976,
"score": 0
},
{
"doc_id": "12487339",
"title": "Mining community structure of named entities from free text",
"abstract": "Although community discovery has been studied extensively in the Web environment, limited research has been done in the case of free text. Co-occurrence of words and entities in sentences and documents usually implies connections among them. In this paper, we investigate the co-occurrences of named entities in text, and mine communities among these entities. We show that identifying communities from free text can be transformed into a graph clustering problem. A hierarchical clustering algorithm is then proposed. Our experiment shows that the algorithm is effective to discover named entity communities from text documents.",
"corpus_id": 12487339,
"score": 0
},
{
"doc_id": "11383614",
"title": "Mining data records in Web pages",
"abstract": "A large amount of information on the Web is contained in regularly structured objects, which we call data records. Such data records are important because they often present the essential information of their host pages, e.g., lists of products or services. It is useful to mine such data records in order to extract information from them to provide value-added services. Existing automatic techniques are not satisfactory because of their poor accuracies. In this paper, we propose a more effective technique to perform the task. The technique is based on two observations about data records on the Web and a string matching algorithm. The proposed technique is able to mine both contiguous and non-contiguous data records. Our experimental results show that the proposed technique outperforms existing techniques substantially.",
"corpus_id": 11383614,
"score": 0
},
{
"doc_id": "14202528",
"title": "Adding the temporal dimension to search - a case study in publication search",
"abstract": "The most well known search techniques are perhaps the PageRank and HITS algorithms. In this paper, we argue that these algorithms miss an important dimension, the temporal dimension. Quality pages in the past may not be quality pages now or in the future. These techniques favor older pages because these pages have many in-links accumulated over time. New pages, which may be of high quality, have few or no in-links and are left behind. Research publication search has the same problem. If we use the PageRank or HITS algorithm, those older or classic papers are ranked high due to the large number of citations that they received in the past. This paper studies the temporal dimension of search in the context of research publication. A number of methods are proposed to deal with the problem based on analyzing the behavior history and the source of each publication. These methods are evaluated empirically. Our results show that they are highly effective.",
"corpus_id": 14202528,
"score": 0
},
{
"doc_id": "31577892",
"title": "Demand flow network",
"abstract": "This paper had brought up the conceptual model of demand flow network (DFN) on the basis of supply chains theories. The DFN model, DFN value model and DFN core competency model had also been established and analyzed separately in accordance with related theory. The application of DFN theory also had been studied",
"corpus_id": 31577892,
"score": 0
}
] |
arnetminer | {
"doc_id": "32553213",
"title": "Integrating rules and constraints",
"abstract": "Constraint satisfaction problem (CSP) is a deductive problem of a special kind, while rule-based systems are the practical programs that have implemented many of the ideas and techniques of deductive systems. Incorporating CSP into a rule-based system allows the rule-based system to exploit the power of CSP techniques in handling this special class of problems. This paper shows how constraints and rules can be integrated. This integration also helps to deal with the problems of disjunctions, which are not handled satisfactorily in the current rule-based systems.<<ETX>>",
"corpus_id": 32553213
} | [
{
"doc_id": "6193650",
"title": "Dealing with different distributions in learning from",
"abstract": "In the problem of learning with positive and unlabeled examples, existing research all assumes that positive examples P and the hidden positive examples in the unlabeled set U are generated from the same distribution. This assumption may be violated in practice. In such cases, existing methods perform poorly. This paper proposes a novel technique A-EM to deal with the problem. Experimental results with product page classification demonstrate the effectiveness of the proposed technique.",
"corpus_id": 6193650,
"score": 1
},
{
"doc_id": "8786674",
"title": "Entity discovery and assignment for opinion mining applications",
"abstract": "Opinion mining became an important topic of study in recent years due to its wide range of applications. There are also many companies offering opinion mining services. One problem that has not been studied so far is the assignment of entities that have been talked about in each sentence. Let us use forum discussions about products as an example to make the problem concrete. In a typical discussion post, the author may give opinions on multiple products and also compare them. The issue is how to detect what products have been talked about in each sentence. If the sentence contains the product names, they need to be identified. We call this problem entity discovery. If the product names are not explicitly mentioned in the sentence but are implied due to the use of pronouns and language conventions, we need to infer the products. We call this problem entity assignment. These problems are important because without knowing what products each sentence talks about the opinion mined from the sentence is of little use. In this paper, we study these problems and propose two effective methods to solve the problems. Entity discovery is based on pattern discovery and entity assignment is based on mining of comparative sentences. Experimental results using a large number of forum posts demonstrate the effectiveness of the technique. Our system has also been successfully tested in a commercial setting.",
"corpus_id": 8786674,
"score": 1
},
{
"doc_id": "2182731",
"title": "Speed-up iterative frequent itemset mining with constraint changes",
"abstract": "Mining of frequent itemsets is a fundamental data mining task. Past research has proposed many efficient algorithms for this purpose. Recent work also highlighted the importance of using constraints to focus the mining process to mine only those relevant itemsets. In practice, data mining is often an interactive and iterative process. The user typically changes constraints and runs the mining algorithm many times before being satisfied with the final results. This interactive process is very time consuming. Existing mining algorithms are unable to take advantage of this iterative process to use previous mining results to speed up the current mining process. This results in an enormous waste of time and computation. In this paper, we propose an efficient technique to utilize previous mining results to improve the efficiency of current mining when constraints are changed. We first introduce the concept of tree boundary to summarize useful information available from previous mining. We then show that the tree boundary provides an effective and efficient framework for the new mining. The proposed technique has been implemented in the context of two existing frequent itemset mining algorithms, FP-tree and tree projection. Experiment results on both synthetic and real-life datasets show that the proposed approach achieves a dramatic saving of computation.",
"corpus_id": 2182731,
"score": 1
},
{
"doc_id": "61155076",
"title": "Opinion Mining",
"abstract": "Описанл процедуру формування інтегрального показника об’єкта згідно з відгуками користувачів під час застосування методів оцінювання опінії текстової інформації у web-документах. Запропоновано використання лінгвістичних змінних та застосування вагових коефіцієнтів для достовірнішого результату оцінювання емоційного забарвлення текстової інформації. Ключові слова: опінія, емоційне забарвлення, об’єкт, інтегральний показник, лінгвістична змінна, ваговий коефіцієнт.",
"corpus_id": 61155076,
"score": 1
},
{
"doc_id": "12241191",
"title": "Finding Actionable Knowledge via Automated Comparison",
"abstract": "The problem of finding interesting and actionable patterns is a major challenge in data mining. It has been studied by many data mining researchers. The issue is that data mining algorithms often generate too many patterns, which make it very hard for the user to find those truly useful ones. Over the years many techniques have been proposed. However, few have made it to real-life applications. At the end of 2005, we built a data mining system for Motorola (called Opportunity Map) to enable the user to explore the space of a large number of rules in order to find actionable knowledge. The approach is based on the concept of rule cubes and operations on rule cubes. A rule cube is similar to a data cube, but stores rules. Since its deployment, some issues have also been identified during the regular use of the system in Motorola. One of the key issues is that although the operations on rule cubes are flexible, each operation is primitive and has to be initiated by the user. Finding a piece of actionable knowledge typically involves many operations and intense visual inspections, which are labor-intensive and time-consuming. From interactions with our users, we identified a generic problem that is crucial for finding actionable knowledge. The problem involves extensive comparison of sub-populations and identification of the cause of their differences. This paper first defines the problem and then proposes an effective method to solve the problem automatically. To the best of our knowledge, there is no reported study of this problem. The new method has been added to the Opportunity Map system and is now in daily use in Motorola.",
"corpus_id": 12241191,
"score": 1
},
{
"doc_id": "18763296",
"title": "A Scalable Peer-to-Peer Overlay for Applications with Time Constraints",
"abstract": "With the development of Internet, p2p is increasingly receiving attention in research. Recently, a class of p2p applications with time constraints appear. These applications require a short time to locate the resource and(or) a low transit delay between the resource user and the resource holder, such as Skype, MSN. In this paper we propose a scalable p2p overlay for applications with time constraints. Our system provides supports for just two operations for uplayered p2p applications: (1) Given a resource key and the node's IP who holds the resource, it registers the resource information to the associated node in at most two overlay hops; and (2) Given a resource key and a time constraint(0 for no constraint), it returns if possible a path(one or two overlay hops) to the resource holder, and the transit delay of the path is lower than the time constraint. Results from theoretical analysis and simulations show that our system is viable and scalable.",
"corpus_id": 18763296,
"score": 0
},
{
"doc_id": "14885080",
"title": "An Efficient and Accurate Method for 3D-Point Reconstruction from Multiple Views",
"abstract": "In this paper we consider the problem of finding the position of a point in space given its projections in multiple images taken by cameras with known calibration and pose. Ideally the 3D point can be obtained as the intersection of multiple known rays in space. However, with noise the rays do not meet at a single point generally. Therefore, it is necessary to find a best point of intersection. In this paper we propose a modification of the method (Ma et al., 2001. Journal of Communications in Information and Systems, (1):51–73) based on the multiple-view epipolar constraints. The solution is simple in concept and straightforward to implement. It includes generally two steps: first, image points are corrected through approximating the error model to the first order, and then the 3D point can be reconstructed from the corrected image points using any generic triangulation method. Experiments are conducted both on simulated data and on real data to test the proposed method against previous methods. It is shown that results obtained with the proposed method are consistently more accurate than those of other linear methods. When the measurement error of image points is relatively small, its results are comparable to those of maximum likelihood estimation using Newton-type optimizers; and when processing image-point correspondences cross a small number of views, the proposed method is by far more efficient than the Newton-type optimizers.",
"corpus_id": 14885080,
"score": 0
},
{
"doc_id": "6366342",
"title": "ECO: an empirical-based compilation and optimization system",
"abstract": "In this paper, we describe a compilation system that automates much of the process of performance tuning that is currently done manually by application programmers interested in high performance. Due to the growing complexity of accurate performance prediction, our system incorporates empirical techniques to execute variants of code segments with representative data on the target architecture. In this paper, we discuss how empirical techniques and performance modeling can be effectively combined. We also discuss the role of historical information from prior runs, and programmer specifications supporting run-time adaptation. These techniques can be employed to alleviate some of the performance problems that lead to inefficiencies in key applications today: register pressure, cache conflict misses, and the trade-off between synchronization, parallelism and locality in SMPs.",
"corpus_id": 6366342,
"score": 0
},
{
"doc_id": "21860578",
"title": "An Effective PSO-Based Memetic Algorithm for Flow Shop Scheduling",
"abstract": "This paper proposes an effective particle swarm optimization (PSO)-based memetic algorithm (MA) for the permutation flow shop scheduling problem (PFSSP) with the objective to minimize the maximum completion time, which is a typical non-deterministic polynomial-time (NP) hard combinatorial optimization problem. In the proposed PSO-based MA (PSOMA), both PSO-based searching operators and some special local searching operators are designed to balance the exploration and exploitation abilities. In particular, the PSOMA applies the evolutionary searching mechanism of PSO, which is characterized by individual improvement, population cooperation, and competition to effectively perform exploration. On the other hand, the PSOMA utilizes several adaptive local searches to perform exploitation. First, to make PSO suitable for solving PFSSP, a ranked-order value rule based on random key representation is presented to convert the continuous position values of particles to job permutations. Second, to generate an initial swarm with certain quality and diversity, the famous Nawaz-Enscore-Ham (NEH) heuristic is incorporated into the initialization of population. Third, to balance the exploration and exploitation abilities, after the standard PSO-based searching operation, a new local search technique named NEH_1 insertion is probabilistically applied to some good particles selected by using a roulette wheel mechanism with a specified probability. Fourth, to enrich the searching behaviors and to avoid premature convergence, a simulated annealing (SA)-based local search with multiple different neighborhoods is designed and incorporated into the PSOMA. Meanwhile, an effective adaptive meta-Lamarckian learning strategy is employed to decide which neighborhood to be used in SA-based local search. Finally, to further enhance the exploitation ability, a pairwise-based local search is applied after the SA-based search. Simulation results based on benchmarks demonstrate the effectiveness of the PSOMA. Additionally, the effects of some parameters on optimization performances are also discussed",
"corpus_id": 21860578,
"score": 0
},
{
"doc_id": "5256099",
"title": "Intrusion detection system for high-speed network",
"abstract": "The increasing network throughput challenges the current Network Intrusion Detection Systems (NIDS) to have compatible high-performance data processing. In this paper, we describe an in-depth research on the related techniques of high-performance network intrusion detection and an implementation of a Rule-based High-performance Network Intrusion Detection System (RHPNIDS) for high-speed networks. By integrating several performance optimizing methods, the performance of RHPNIDS is very impressive compared with the popular open source NIDS Snort.",
"corpus_id": 5256099,
"score": 0
}
] |
arnetminer | {
"doc_id": "1520951",
"title": "Esub8: A novel tool to predict protein subcellular localizations in eukaryotic organisms",
"abstract": "BackgroundSubcellular localization of a new protein sequence is very important and fruitful for understanding its function. As the number of new genomes has dramatically increased over recent years, a reliable and efficient system to predict protein subcellular location is urgently needed.ResultsEsub8 was developed to predict protein subcellular localizations for eukaryotic proteins based on amino acid composition. In this research, the proteins are classified into the following eight groups: chloroplast, cytoplasm, extracellular, Golgi apparatus, lysosome, mitochondria, nucleus and peroxisome. We know subcellular localization is a typical classification problem; consequently, a one-against-one (1-v-1) multi-class support vector machine was introduced to construct the classifier. Unlike previous methods, ours considers the order information of protein sequences by a different method. Our method is tested in three subcellular localization predictions for prokaryotic proteins and four subcellular localization predictions for eukaryotic proteins on Reinhardt's dataset. The results are then compared to several other methods. The total prediction accuracies of two tests are both 100% by a self-consistency test, and are 92.9% and 84.14% by the jackknife test, respectively. Esub8 also provides excellent results: the total prediction accuracies are 100% by a self-consistency test and 87% by the jackknife test.ConclusionsOur method represents a different approach for predicting protein subcellular localization and achieved a satisfactory result; furthermore, we believe Esub8 will be a useful tool for predicting protein subcellular localizations in eukaryotic organisms.",
"corpus_id": 1520951
} | [
{
"doc_id": "15435032",
"title": "Characterizing the dynamic connectivity between genes by variable parameter regression and Kalman filtering based on temporal gene expression data",
"abstract": "MOTIVATION\nOne popular method for analyzing functional connectivity between genes is to cluster genes with similar expression profiles. The most popular metrics measuring the similarity (or dissimilarity) among genes include Pearson's correlation, linear regression coefficient and Euclidean distance. As these metrics only give some constant values, they can only depict a stationary connectivity between genes. However, the functional connectivity between genes usually changes with time. Here, we introduce a novel insight for characterizing the relationship between genes and find out a proper mathematical model, variable parameter regression and Kalman filtering to model it.\n\n\nRESULTS\nWe applied our algorithm to some simulated data and two pairs of real gene expression data. The changes of connectivity in simulated data are closely identical with the truth and the results of two pairs of gene expression data show that our method has successfully demonstrated the dynamic connectivity between genes.\n\n\nCONTACT\njiangtz@nlpr.ia.ac.cn.",
"corpus_id": 15435032,
"score": 1
},
{
"doc_id": "10223976",
"title": "A combinational feature selection and ensemble neural network method for classification of gene expression data",
"abstract": "BackgroundMicroarray experiments are becoming a powerful tool for clinical diagnosis, as they have the potential to discover gene expression patterns that are characteristic for a particular disease. To date, this problem has received most attention in the context of cancer research, especially in tumor classification. Various feature selection methods and classifier design strategies also have been generally used and compared. However, most published articles on tumor classification have applied a certain technique to a certain dataset, and recently several researchers compared these techniques based on several public datasets. But, it has been verified that differently selected features reflect different aspects of the dataset and some selected features can obtain better solutions on some certain problems. At the same time, faced with a large amount of microarray data with little knowledge, it is difficult to find the intrinsic characteristics using traditional methods. In this paper, we attempt to introduce a combinational feature selection method in conjunction with ensemble neural networks to generally improve the accuracy and robustness of sample classification.ResultsWe validate our new method on several recent publicly available datasets both with predictive accuracy of testing samples and through cross validation. Compared with the best performance of other current methods, remarkably improved results can be obtained using our new strategy on a wide range of different datasets.ConclusionsThus, we conclude that our methods can obtain more information in microarray data to get more accurate classification and also can help to extract the latent marker genes of the diseases for better diagnosis and treatment.",
"corpus_id": 10223976,
"score": 1
},
{
"doc_id": "16882534",
"title": "Systematic benchmarking of microarray data feature extraction and classification",
"abstract": "A combination of microarrays with classification methods is a promising approach to supporting clinical management decisions in oncology. The aim of this paper is to systematically benchmark the role of classification models. Each classification model is a combination of one feature extraction method and one classification method. We consider four feature extraction methods and five classification methods, from which 20 classification models can be derived. The feature extraction methods are t-statistics, non-parametric Wilcoxon statistics, ad hoc signal-to-noise statistics, and principal component analysis (PCA), and the classification methods are Fisher linear discriminant analysis (FLDA), the support vector machine (SVM), the k nearest-neighbour classifier (kNN), diagonal linear discriminant analysis (DLDA), and diagonal quadratic discriminant analysis (DQDA). Twenty randomizations of each of three binary cancer classification problems derived from publicly available datasets are examined. PCA plus FLDA is found to be the optimal classification model.",
"corpus_id": 16882534,
"score": 1
},
{
"doc_id": "3874732",
"title": "A Neural Network Approach To Shape From Shading",
"abstract": "In this paper, we propose a method of recovering shape from shading that solves directly for the surface height using neural networks. The main motivation of this paper is to provide an answer to the open problem proposed by Zhou and Chellappa [11]. We first formulate the shape from shading problem by combining a triangular element surface model with a linearized reflectance map. Then, we use a linear feed-forward network architecture with six layers to compute the surface height with a singular value decomposition. The weights in the model initialized using eigenvectors and eigen-values of the stiffness matrix of objective functional. Experimental results show that our solution is very effective.",
"corpus_id": 3874732,
"score": 1
},
{
"doc_id": "10102687",
"title": "Mining interesting knowledge using DM-II",
"abstract": "Data mining aims to develop a new generation of tools to intelligently assist humans in analyzing mountains of data. Over the past few years, great progress has been made in both research and applications of data mining. Data mining systems have helped many businesses by exposing previously unknown patterns in their databases, which were used to improve profits, enhance customer services, and ultimately achieve a competitive advantage. In this paper, we present our unique data mining system DM-II (Data Mining Integration and Interestingness). DM-II is a PC-based system working in Windows 95/98/NT environment. Apart from the normal components of a data mining system, DM-II has a number of unique and advanced sub-systems. These sub-systems have been applied in many real-life applications, including education applications, insurance application, accident application, disease application, drug screening application, image classification, etc. Here, we focus on discussing the following sub-systems: CBA: CBA originally stands for Classification-Based on Associations [lo]. It has now been extended with a number of other advanced features. In all, CBA has the following capabilities: Building classifiers using association rules: Traditionally, association rule mining and classifier building are regarded as two different data mining tasks. CBA unifies the two tasks. It is able to ,generate association rules and builds a classifier using a special subset of the association rules. What is significant is that CBA, in general, produces more accurate classifiers compared to the state-ofthe-art classification system C4.5 [ 161. It also helps to solve some outstanding problems with the existing classification systems. Pruning and summarizing the discovered associations: One major problem with association",
"corpus_id": 10102687,
"score": 0
},
{
"doc_id": "1433082",
"title": "The design of nonuniform-band maximally decimated filter banks",
"abstract": "A design method for nonuniform-band maximally decimated filter banks is presented. It is based on the quadrature mirror filter (QMF) design method and allows the direct frequency domain design of two-band filter banks having arbitrary rational decimation ratios. The numerical design of nonuniform-band filter banks is achieved using a simple structure in which elementary modulators are used in the highpass channel to obtain almost-perfect reconstruction.<<ETX>>",
"corpus_id": 1433082,
"score": 0
},
{
"doc_id": "15896869",
"title": "Business applications of data mining",
"abstract": "They help identify and predict individual, as well as aggregate, behavior, as illustrated by four application domains: direct mail, retail, automobile insurance, and health care.",
"corpus_id": 15896869,
"score": 0
},
{
"doc_id": "215471",
"title": "Nesting One-Against-One Algorithm Based on SVMs for Pattern Classification",
"abstract": "Support vector machines (SVMs), which were originally designed for binary classifications, are an excellent tool for machine learning. For the multiclass classifications, they are usually converted into binary ones before they can be used to classify the examples. In the one-against-one algorithm with SVMs, there exists an unclassifiable region where the data samples cannot be classified by its decision function. This paper extends the one-against-one algorithm to handle this problem. We also give the convergence and computational complexity analysis of the proposed method. Finally, one-against-one, fuzzy decision function (FDF), and decision-directed acyclic graph (DDAG) algorithms and our proposed method are compared using five University of California at Irvine (UCI) data sets. The results report that the proposed method can handle the unclassifiable region better than others.",
"corpus_id": 215471,
"score": 0
},
{
"doc_id": "207412473",
"title": "A Chinese question classification using one-vs-one method as a learning tool",
"abstract": "Question classification plays an important role in the question answering system and the errors of question classification will probably result in the failure of question answering. Thus, how to enhance the accuracy is an open question. In order to enhance the accuracies of the Chinese question classification, this paper extends one-against-one method based on SVMs to resolve the problems. The results show the good performance of the algorithm for Chinese question classification problems.",
"corpus_id": 207412473,
"score": 0
}
] |
arnetminer | {
"doc_id": "12487339",
"title": "Mining community structure of named entities from free text",
"abstract": "Although community discovery has been studied extensively in the Web environment, limited research has been done in the case of free text. Co-occurrence of words and entities in sentences and documents usually implies connections among them. In this paper, we investigate the co-occurrences of named entities in text, and mine communities among these entities. We show that identifying communities from free text can be transformed into a graph clustering problem. A hierarchical clustering algorithm is then proposed. Our experiment shows that the algorithm is effective to discover named entity communities from text documents.",
"corpus_id": 12487339
} | [
{
"doc_id": "6193650",
"title": "Dealing with different distributions in learning from",
"abstract": "In the problem of learning with positive and unlabeled examples, existing research all assumes that positive examples P and the hidden positive examples in the unlabeled set U are generated from the same distribution. This assumption may be violated in practice. In such cases, existing methods perform poorly. This paper proposes a novel technique A-EM to deal with the problem. Experimental results with product page classification demonstrate the effectiveness of the proposed technique.",
"corpus_id": 6193650,
"score": 1
},
{
"doc_id": "12241191",
"title": "Finding Actionable Knowledge via Automated Comparison",
"abstract": "The problem of finding interesting and actionable patterns is a major challenge in data mining. It has been studied by many data mining researchers. The issue is that data mining algorithms often generate too many patterns, which make it very hard for the user to find those truly useful ones. Over the years many techniques have been proposed. However, few have made it to real-life applications. At the end of 2005, we built a data mining system for Motorola (called Opportunity Map) to enable the user to explore the space of a large number of rules in order to find actionable knowledge. The approach is based on the concept of rule cubes and operations on rule cubes. A rule cube is similar to a data cube, but stores rules. Since its deployment, some issues have also been identified during the regular use of the system in Motorola. One of the key issues is that although the operations on rule cubes are flexible, each operation is primitive and has to be initiated by the user. Finding a piece of actionable knowledge typically involves many operations and intense visual inspections, which are labor-intensive and time-consuming. From interactions with our users, we identified a generic problem that is crucial for finding actionable knowledge. The problem involves extensive comparison of sub-populations and identification of the cause of their differences. This paper first defines the problem and then proposes an effective method to solve the problem automatically. To the best of our knowledge, there is no reported study of this problem. The new method has been added to the Opportunity Map system and is now in daily use in Motorola.",
"corpus_id": 12241191,
"score": 1
},
{
"doc_id": "61155076",
"title": "Opinion Mining",
"abstract": "Описанл процедуру формування інтегрального показника об’єкта згідно з відгуками користувачів під час застосування методів оцінювання опінії текстової інформації у web-документах. Запропоновано використання лінгвістичних змінних та застосування вагових коефіцієнтів для достовірнішого результату оцінювання емоційного забарвлення текстової інформації. Ключові слова: опінія, емоційне забарвлення, об’єкт, інтегральний показник, лінгвістична змінна, ваговий коефіцієнт.",
"corpus_id": 61155076,
"score": 1
},
{
"doc_id": "2182731",
"title": "Speed-up iterative frequent itemset mining with constraint changes",
"abstract": "Mining of frequent itemsets is a fundamental data mining task. Past research has proposed many efficient algorithms for this purpose. Recent work also highlighted the importance of using constraints to focus the mining process to mine only those relevant itemsets. In practice, data mining is often an interactive and iterative process. The user typically changes constraints and runs the mining algorithm many times before being satisfied with the final results. This interactive process is very time consuming. Existing mining algorithms are unable to take advantage of this iterative process to use previous mining results to speed up the current mining process. This results in an enormous waste of time and computation. In this paper, we propose an efficient technique to utilize previous mining results to improve the efficiency of current mining when constraints are changed. We first introduce the concept of tree boundary to summarize useful information available from previous mining. We then show that the tree boundary provides an effective and efficient framework for the new mining. The proposed technique has been implemented in the context of two existing frequent itemset mining algorithms, FP-tree and tree projection. Experiment results on both synthetic and real-life datasets show that the proposed approach achieves a dramatic saving of computation.",
"corpus_id": 2182731,
"score": 1
},
{
"doc_id": "8786674",
"title": "Entity discovery and assignment for opinion mining applications",
"abstract": "Opinion mining became an important topic of study in recent years due to its wide range of applications. There are also many companies offering opinion mining services. One problem that has not been studied so far is the assignment of entities that have been talked about in each sentence. Let us use forum discussions about products as an example to make the problem concrete. In a typical discussion post, the author may give opinions on multiple products and also compare them. The issue is how to detect what products have been talked about in each sentence. If the sentence contains the product names, they need to be identified. We call this problem entity discovery. If the product names are not explicitly mentioned in the sentence but are implied due to the use of pronouns and language conventions, we need to infer the products. We call this problem entity assignment. These problems are important because without knowing what products each sentence talks about the opinion mined from the sentence is of little use. In this paper, we study these problems and propose two effective methods to solve the problems. Entity discovery is based on pattern discovery and entity assignment is based on mining of comparative sentences. Experimental results using a large number of forum posts demonstrate the effectiveness of the technique. Our system has also been successfully tested in a commercial setting.",
"corpus_id": 8786674,
"score": 1
},
{
"doc_id": "29971030",
"title": "Measuring the meaning in time series clustering of text search queries",
"abstract": "We use a combination of proven methods from time series analysis and machine learning to explore the relationship between temporal and semantic similarity in web query logs; we discover that the combination of correlation and cycles is a good, but not perfect, sign of semantic relationship.",
"corpus_id": 29971030,
"score": 0
},
{
"doc_id": "32625268",
"title": "An Energy-Minimizing Mesh Parameterization",
"abstract": "In this paper, we propose a new energy-minimizing mesh parameterization method, which linearly combines two new energies EQ and EM. It not only avoids triangles overlap in the parameter domain, but also is invariant under rotation, translation and scale transformations. We first parameterize the original 3D mesh to the parameter plane by using the energy-minimizing parameterization, and get the optimal effect by optimizing the weights wij gradually. Experimental results indicate that this optimized energy-minimizing method has low distortion and good stability.",
"corpus_id": 32625268,
"score": 0
},
{
"doc_id": "13160546",
"title": "A Population-Based Incremental Learning Algorithm with Elitist Strategy",
"abstract": "The population-based incremental learning (PBIL) is a novel evolutionary algorithm combined the mechanisms of the Genetic Algorithm with competitive learning. In this paper, the influence of the number of selected best solutions on the convergence speed of the PBIL is studied by experiment. Based on experimental results, a PBIL algorithm with elitist strategy, named Double Learning PBIL (DLPBIL), is proposed. The new algorithm learns both the selected best solutions in current population and the optimal solution found so far in the algorithm at same time. Experimental results show that the DLPBIL out-performs the standard PBIL. Both the convergence speed and the solution quality are improved.",
"corpus_id": 13160546,
"score": 0
},
{
"doc_id": "205926081",
"title": "Solvability of multi-point boundary value problem at resonance (III)",
"abstract": "In this paper, we consider the following second order ordinary differential equation(1.1)x^'^'=f(t,x(t),x^'(t))+e(t),t@?(0,1),subject to one of the following boundary value conditions: (1.2)x(0)=@?\"i\"=\"1^m^-^2@a\"ix(@x\"i),x(1)=@?\"j\"=\"1^n^-^2@b\"jx(@h\"j),(1.3)x(0)=@?\"i\"=\"1^m^-^2@a\"ix(@x\"i),x^'(1)=@?\"j\"=\"1^n^-^2@b\"jx^'(@h\"j),(1.4)x^'(0)=@?\"i\"=\"1^m^-^2@a\"ix^'(@x\"i),x(1)=@?\"j\"=\"1^n^-^2@b\"jx(@h\"j),where @a\"i(1=",
"corpus_id": 205926081,
"score": 0
},
{
"doc_id": "45268591",
"title": "WBEM Based Distributed Network Monitoring",
"abstract": "In the paper we identify the needs in efficient management of distributed networks and some problematic areas in the field. Then we introduce Web Based Enterprise Management (WBEM) to address the problem of providing a unified way to model all kinds of managed elements in a single information model in heterogeneous network environments. The advantages brought about by the use of WBEM in network management solve some critical problems existing in current network management. Based on the brief description of components of WBEM, we discuss in depth the basic WBEM instrumentation and multi-tiered WBEM enabled management infrastructure. We also apply this multi-tiered management infrastructure to a network management application scenario to monitor network activities for unexpected behaviors.",
"corpus_id": 45268591,
"score": 0
}
] |
arnetminer | {
"doc_id": "7650649",
"title": "Fully Automatic Text Categorization by Exploiting WordNet",
"abstract": "This paper proposes a Fully Automatic Categorization approach for Text (FACT) by exploiting the semantic features from WordNet and document clustering. In FACT, the training data is constructed automatically by using the knowledge of the category name. With the support of WordNet, it first uses the category name to generate a set of features for the corresponding category. Then, a set of documents is labeled according to such features. To reduce the possible bias originating from the category name and generated features, document clustering is used to refine the quality of initial labeling. The training data are subsequently constructed to train the discriminative classifier. The empirical experiments show that the best performance of FACT can achieve more than 90% of the baseline SVM classifiers in F1 measure, which demonstrates the effectiveness of the proposed approach.",
"corpus_id": 7650649
} | [
{
"doc_id": "35469905",
"title": "Repairing Inconsistency and Uncertainty in DL-based Ontologies",
"abstract": "A method of detecting malignant or pre-malignant conditions of the cervix, and test kits therefor, involves selecting a fraction of a cervical cell sample consisting predominantly of epithelial cells and determining characteristics indicative of the malignant or pre-malignant conditions therein.",
"corpus_id": 35469905,
"score": 1
},
{
"doc_id": "7901828",
"title": "Text Classification by Labeling Words",
"abstract": "Traditionally, text classifiers are built from labeled training examples. Labeling is usually done manually by human experts (or the users), which is a labor intensive and time consuming process. In the past few years, researchers investigated various forms of semi-supervised learning to reduce the burden of manual labeling. In this paper, we propose a different approach. Instead of labeling a set of documents, the proposed method labels a set of representative words for each class. It then uses these words to extract a set of documents for each class from a set of unlabeled documents to form the initial training set. The EM algorithm is then applied to build the classifier. The key issue of the approach is how to obtain a set of representative words for each class. One way is to ask the user to provide them, which is difficult because the user usually can only give a few words (which are insufficient for accurate learning). We propose a method to solve the problem. It combines clustering and feature selection. The technique can effectively rank the words in the unlabeled set according to their importance. The user then selects/labels some words from the ranked list for each class. This process requires less effort than providing words with no help or manual labelillg of documents. Our results show that the new method is highly effective and promising.",
"corpus_id": 7901828,
"score": 0
},
{
"doc_id": "10143256",
"title": "An EM based training algorithm for cross-language text categorization",
"abstract": "Due to the globalization on the Web, many companies and institutions need to efficiently organize and search repositories containing multilingual documents. The management of these heterogeneous text collections increases the costs significantly because experts of different languages are required to organize these collections. Cross-language text categorization can provide techniques to extend existing automatic classification systems in one language to new languages without requiring additional intervention of human experts. In this paper, we propose a learning algorithm based on the EM scheme which can be used to train text classifiers in a multilingual environment. In particular, in the proposed approach, we assume that a predefined category set and a collection of labeled training data is available for a given language L/sub 1/. A classifier for a different language L/sub 2/ is trained by translating the available labeled training set for L/sub 1/ to L/sub 2/ and by using an additional set of unlabeled documents from L/sub 2/. This technique allows us to extract correct statistical properties of the language L/sub 2/ which are not completely available in automatically translated examples, because of the different characteristics of language L/sub 1/ and of the approximation of the translation process. Our experimental results show that the performance of the proposed method is very promising when applied on a test document set extracted from newsgroups in English and Italian.",
"corpus_id": 10143256,
"score": 0
},
{
"doc_id": "12487339",
"title": "Mining community structure of named entities from free text",
"abstract": "Although community discovery has been studied extensively in the Web environment, limited research has been done in the case of free text. Co-occurrence of words and entities in sentences and documents usually implies connections among them. In this paper, we investigate the co-occurrences of named entities in text, and mine communities among these entities. We show that identifying communities from free text can be transformed into a graph clustering problem. A hierarchical clustering algorithm is then proposed. Our experiment shows that the algorithm is effective to discover named entity communities from text documents.",
"corpus_id": 12487339,
"score": 0
},
{
"doc_id": "11669811",
"title": "Multi-level organization and summarization of the discovered rules",
"abstract": "Many existing data mining techniques often p roduce a large number of rules, which make it very difficult for manual inspection o f the rules to identify those interesting ones. This problem represents a major gap between the results of data mining and the understanding and use of the mining results. In this paper, we a rgue that t he key problem is not with the large number of rules because if there are indeed many rules that exist in data, they should b e discovered. The main p roblem is with ou r inability to organize, summarize and present the rules in such a way that they can b e ea sily analyzed b y the user. In this paper, we propose a technique to intuitively organize a nd summarize the discovered rules. With this organization, the discovered rules can b e presented to the user in the way as we think and talk about knowledge in ou r daily lives. This organization also allows the user to view the discovered rules at different levels of details, and to focus his/her attention on those interesting aspects. This paper presents this technique a nd u ses it t o o rganize, summarize a nd present t he knowledge e mbedded in a decision tree, and a set of association rules. Experiment results and p ractical applications show that the technique is both intuitive and effective.",
"corpus_id": 11669811,
"score": 0
},
{
"doc_id": "10102687",
"title": "Mining interesting knowledge using DM-II",
"abstract": "Data mining aims to develop a new generation of tools to intelligently assist humans in analyzing mountains of data. Over the past few years, great progress has been made in both research and applications of data mining. Data mining systems have helped many businesses by exposing previously unknown patterns in their databases, which were used to improve profits, enhance customer services, and ultimately achieve a competitive advantage. In this paper, we present our unique data mining system DM-II (Data Mining Integration and Interestingness). DM-II is a PC-based system working in Windows 95/98/NT environment. Apart from the normal components of a data mining system, DM-II has a number of unique and advanced sub-systems. These sub-systems have been applied in many real-life applications, including education applications, insurance application, accident application, disease application, drug screening application, image classification, etc. Here, we focus on discussing the following sub-systems: CBA: CBA originally stands for Classification-Based on Associations [lo]. It has now been extended with a number of other advanced features. In all, CBA has the following capabilities: Building classifiers using association rules: Traditionally, association rule mining and classifier building are regarded as two different data mining tasks. CBA unifies the two tasks. It is able to ,generate association rules and builds a classifier using a special subset of the association rules. What is significant is that CBA, in general, produces more accurate classifiers compared to the state-ofthe-art classification system C4.5 [ 161. It also helps to solve some outstanding problems with the existing classification systems. Pruning and summarizing the discovered associations: One major problem with association",
"corpus_id": 10102687,
"score": 0
}
] |
arnetminer | {
"doc_id": "6387426",
"title": "Identifying comparative sentences in text documents",
"abstract": "This paper studies the problem of identifying comparative sentences in text documents. The problem is related to but quite different from sentiment/opinion sentence identification or classification. Sentiment classification studies the problem of classifying a document or a sentence based on the subjective opinion of the author. An important application area of sentiment/opinion identification is business intelligence as a product manufacturer always wants to know consumers' opinions on its products. Comparisons on the other hand can be subjective or objective. Furthermore, a comparison is not concerned with an object in isolation. Instead, it compares the object with others. An example opinion sentence is \"the sound quality of CD player X is poor\". An example comparative sentence is \"the sound quality of CD player X is not as good as that of CD player Y\". Clearly, these two sentences give different information. Their language constructs are quite different too. Identifying comparative sentences is also useful in practice because direct comparisons are perhaps one of the most convincing ways of evaluation, which may even be more important than opinions on each individual object. This paper proposes to study the comparative sentence identification problem. It first categorizes comparative sentences into different types, and then presents a novel integrated pattern discovery and supervised learning approach to identifying comparative sentences from text documents. Experiment results using three types of documents, news articles, consumer reviews of products, and Internet forum postings, show a precision of 79% and recall of 81%. More detailed results are given in the paper.",
"corpus_id": 6387426
} | [
{
"doc_id": "12241191",
"title": "Finding Actionable Knowledge via Automated Comparison",
"abstract": "The problem of finding interesting and actionable patterns is a major challenge in data mining. It has been studied by many data mining researchers. The issue is that data mining algorithms often generate too many patterns, which make it very hard for the user to find those truly useful ones. Over the years many techniques have been proposed. However, few have made it to real-life applications. At the end of 2005, we built a data mining system for Motorola (called Opportunity Map) to enable the user to explore the space of a large number of rules in order to find actionable knowledge. The approach is based on the concept of rule cubes and operations on rule cubes. A rule cube is similar to a data cube, but stores rules. Since its deployment, some issues have also been identified during the regular use of the system in Motorola. One of the key issues is that although the operations on rule cubes are flexible, each operation is primitive and has to be initiated by the user. Finding a piece of actionable knowledge typically involves many operations and intense visual inspections, which are labor-intensive and time-consuming. From interactions with our users, we identified a generic problem that is crucial for finding actionable knowledge. The problem involves extensive comparison of sub-populations and identification of the cause of their differences. This paper first defines the problem and then proposes an effective method to solve the problem automatically. To the best of our knowledge, there is no reported study of this problem. The new method has been added to the Opportunity Map system and is now in daily use in Motorola.",
"corpus_id": 12241191,
"score": 1
},
{
"doc_id": "61155076",
"title": "Opinion Mining",
"abstract": "Описанл процедуру формування інтегрального показника об’єкта згідно з відгуками користувачів під час застосування методів оцінювання опінії текстової інформації у web-документах. Запропоновано використання лінгвістичних змінних та застосування вагових коефіцієнтів для достовірнішого результату оцінювання емоційного забарвлення текстової інформації. Ключові слова: опінія, емоційне забарвлення, об’єкт, інтегральний показник, лінгвістична змінна, ваговий коефіцієнт.",
"corpus_id": 61155076,
"score": 1
},
{
"doc_id": "6193650",
"title": "Dealing with different distributions in learning from",
"abstract": "In the problem of learning with positive and unlabeled examples, existing research all assumes that positive examples P and the hidden positive examples in the unlabeled set U are generated from the same distribution. This assumption may be violated in practice. In such cases, existing methods perform poorly. This paper proposes a novel technique A-EM to deal with the problem. Experimental results with product page classification demonstrate the effectiveness of the proposed technique.",
"corpus_id": 6193650,
"score": 1
},
{
"doc_id": "2182731",
"title": "Speed-up iterative frequent itemset mining with constraint changes",
"abstract": "Mining of frequent itemsets is a fundamental data mining task. Past research has proposed many efficient algorithms for this purpose. Recent work also highlighted the importance of using constraints to focus the mining process to mine only those relevant itemsets. In practice, data mining is often an interactive and iterative process. The user typically changes constraints and runs the mining algorithm many times before being satisfied with the final results. This interactive process is very time consuming. Existing mining algorithms are unable to take advantage of this iterative process to use previous mining results to speed up the current mining process. This results in an enormous waste of time and computation. In this paper, we propose an efficient technique to utilize previous mining results to improve the efficiency of current mining when constraints are changed. We first introduce the concept of tree boundary to summarize useful information available from previous mining. We then show that the tree boundary provides an effective and efficient framework for the new mining. The proposed technique has been implemented in the context of two existing frequent itemset mining algorithms, FP-tree and tree projection. Experiment results on both synthetic and real-life datasets show that the proposed approach achieves a dramatic saving of computation.",
"corpus_id": 2182731,
"score": 1
},
{
"doc_id": "8786674",
"title": "Entity discovery and assignment for opinion mining applications",
"abstract": "Opinion mining became an important topic of study in recent years due to its wide range of applications. There are also many companies offering opinion mining services. One problem that has not been studied so far is the assignment of entities that have been talked about in each sentence. Let us use forum discussions about products as an example to make the problem concrete. In a typical discussion post, the author may give opinions on multiple products and also compare them. The issue is how to detect what products have been talked about in each sentence. If the sentence contains the product names, they need to be identified. We call this problem entity discovery. If the product names are not explicitly mentioned in the sentence but are implied due to the use of pronouns and language conventions, we need to infer the products. We call this problem entity assignment. These problems are important because without knowing what products each sentence talks about the opinion mined from the sentence is of little use. In this paper, we study these problems and propose two effective methods to solve the problems. Entity discovery is based on pattern discovery and entity assignment is based on mining of comparative sentences. Experimental results using a large number of forum posts demonstrate the effectiveness of the technique. Our system has also been successfully tested in a commercial setting.",
"corpus_id": 8786674,
"score": 1
},
{
"doc_id": "9933281",
"title": "A Learning Process Using SVMs for Multi-agents Decision Classification",
"abstract": "In order to resolve decision classification problem in multiple agents system, this paper first introduces the architecture of multiple agents system. It then proposes a support vector machines based assessment approach, which has the ability to learn the rules form previous assessment results from domain experts. Finally, the experiment are conducted on the artificially dataset to illustrate how the proposed works, and the results show the proposed method has effective learning ability for decision classification problems.",
"corpus_id": 9933281,
"score": 0
},
{
"doc_id": "8341217",
"title": "Multi-Space-Mapped SVMs for Multi-class Classification",
"abstract": "In SVMs-based multiple classification, it is not always possible to find an appropriate kernel function to map all the classes from different distribution functions into a feature space where they are linearly separable from each other. This is even worse if the number of classes is very large. As a result, the classification accuracy is not as good as expected. In order to improve the performance of SVMs-based multi-classifiers, this paper proposes a method, named multi-space-mapped SVMs, to map the classes into different feature spaces and then classify them. The proposed method reduces the requirements for the kernel function. Substantial experiments have been conducted on one-against-all, one-against-one, FSVM, DDAG algorithms and our algorithm using six UCI data sets. The statistical results show that the proposed method has a higher probability of finding appropriate kernel functions than traditional methods and outperforms others.",
"corpus_id": 8341217,
"score": 0
},
{
"doc_id": "1710451",
"title": "Joint estimation of image and coil sensitivities in parallel MRI",
"abstract": "Parallel magnetic resonance imaging (MRI) using multichannel receiver coils has emerged as an effective tool to reduce imaging time in various dynamic imaging applications. However, there are still a number of image reconstruction issues that have not been fully addressed, thereby limiting the level of speed enhancement achievable with the technology. This paper considers the inaccuracy of coil sensitivities in conventional reconstruction methods such as SENSE, and reformulates the image reconstruction problem as a joint estimation of the coil sensitivities and the desired image, which is solved by an iterative algorithm. Experimental results demonstrate the effectiveness of the proposed method especially when large acceleration factors are used",
"corpus_id": 1710451,
"score": 0
},
{
"doc_id": "507578",
"title": "Numerical Experiments with Monte Carlo Methods and SPAI Preconditioner for Solving System of Linear Equations",
"abstract": "In this paper we present the results of experiments comparing the performance of the mixed Monte Carlo algorithms and SPAI preconditener with BICGSTAB. The experiments are carried out on a Silicon Graphics ONYX2 machine. Based on our experiments, we conclude that these techniques are comparable from the point of view of robustness and rates of convergence, with the Monte Carlo approach performing better for some general cases and SPAI approach performing better in case of very sparse matrices.",
"corpus_id": 507578,
"score": 0
},
{
"doc_id": "23965389",
"title": "Design and Implementation of Control System on Embedded Downloading Server Based on C/S Architecture",
"abstract": "Based on Client-Server architecture, this paper proposes the design and implementation of system controlling embedded-Linux downloading server. The software architecture design of this system is composed of three parts: network layer, control layer, and application layer, which is modular and pluggable. The implementation also proved the feasibility, reliability and portability of this design.",
"corpus_id": 23965389,
"score": 0
}
] |
arnetminer | {
"doc_id": "5361328",
"title": "Using Decision Tree Induction for Discovering Holes in Data",
"abstract": "Existing research in machine learning and data mining has been focused on finding rules or regularities among the data cases. Recently, it was shown that those associations that are missing in data may also be interesting. These missing associations are the holes or empty regions. The existing algorithm for discovering holes has a number of shortcomings. It requires each hole to contain no data point, which is too restrictive for many real-life applications. It also has a very high complexity, and produces a huge number of holes. Additionally, the algorithm only works in a continuous space, and does not allow any discrete/nominal attribute. These drawbacks limit its applications. In this paper, we propose a novel approach to overcome these shortcomings. This approach transforms the holes-discovery problem into a supervised learning task, and then uses the decision tree induction technique for discovering holes in data.",
"corpus_id": 5361328
} | [
{
"doc_id": "6193650",
"title": "Dealing with different distributions in learning from",
"abstract": "In the problem of learning with positive and unlabeled examples, existing research all assumes that positive examples P and the hidden positive examples in the unlabeled set U are generated from the same distribution. This assumption may be violated in practice. In such cases, existing methods perform poorly. This paper proposes a novel technique A-EM to deal with the problem. Experimental results with product page classification demonstrate the effectiveness of the proposed technique.",
"corpus_id": 6193650,
"score": 1
},
{
"doc_id": "8786674",
"title": "Entity discovery and assignment for opinion mining applications",
"abstract": "Opinion mining became an important topic of study in recent years due to its wide range of applications. There are also many companies offering opinion mining services. One problem that has not been studied so far is the assignment of entities that have been talked about in each sentence. Let us use forum discussions about products as an example to make the problem concrete. In a typical discussion post, the author may give opinions on multiple products and also compare them. The issue is how to detect what products have been talked about in each sentence. If the sentence contains the product names, they need to be identified. We call this problem entity discovery. If the product names are not explicitly mentioned in the sentence but are implied due to the use of pronouns and language conventions, we need to infer the products. We call this problem entity assignment. These problems are important because without knowing what products each sentence talks about the opinion mined from the sentence is of little use. In this paper, we study these problems and propose two effective methods to solve the problems. Entity discovery is based on pattern discovery and entity assignment is based on mining of comparative sentences. Experimental results using a large number of forum posts demonstrate the effectiveness of the technique. Our system has also been successfully tested in a commercial setting.",
"corpus_id": 8786674,
"score": 1
},
{
"doc_id": "12241191",
"title": "Finding Actionable Knowledge via Automated Comparison",
"abstract": "The problem of finding interesting and actionable patterns is a major challenge in data mining. It has been studied by many data mining researchers. The issue is that data mining algorithms often generate too many patterns, which make it very hard for the user to find those truly useful ones. Over the years many techniques have been proposed. However, few have made it to real-life applications. At the end of 2005, we built a data mining system for Motorola (called Opportunity Map) to enable the user to explore the space of a large number of rules in order to find actionable knowledge. The approach is based on the concept of rule cubes and operations on rule cubes. A rule cube is similar to a data cube, but stores rules. Since its deployment, some issues have also been identified during the regular use of the system in Motorola. One of the key issues is that although the operations on rule cubes are flexible, each operation is primitive and has to be initiated by the user. Finding a piece of actionable knowledge typically involves many operations and intense visual inspections, which are labor-intensive and time-consuming. From interactions with our users, we identified a generic problem that is crucial for finding actionable knowledge. The problem involves extensive comparison of sub-populations and identification of the cause of their differences. This paper first defines the problem and then proposes an effective method to solve the problem automatically. To the best of our knowledge, there is no reported study of this problem. The new method has been added to the Opportunity Map system and is now in daily use in Motorola.",
"corpus_id": 12241191,
"score": 1
},
{
"doc_id": "2182731",
"title": "Speed-up iterative frequent itemset mining with constraint changes",
"abstract": "Mining of frequent itemsets is a fundamental data mining task. Past research has proposed many efficient algorithms for this purpose. Recent work also highlighted the importance of using constraints to focus the mining process to mine only those relevant itemsets. In practice, data mining is often an interactive and iterative process. The user typically changes constraints and runs the mining algorithm many times before being satisfied with the final results. This interactive process is very time consuming. Existing mining algorithms are unable to take advantage of this iterative process to use previous mining results to speed up the current mining process. This results in an enormous waste of time and computation. In this paper, we propose an efficient technique to utilize previous mining results to improve the efficiency of current mining when constraints are changed. We first introduce the concept of tree boundary to summarize useful information available from previous mining. We then show that the tree boundary provides an effective and efficient framework for the new mining. The proposed technique has been implemented in the context of two existing frequent itemset mining algorithms, FP-tree and tree projection. Experiment results on both synthetic and real-life datasets show that the proposed approach achieves a dramatic saving of computation.",
"corpus_id": 2182731,
"score": 1
},
{
"doc_id": "61155076",
"title": "Opinion Mining",
"abstract": "Описанл процедуру формування інтегрального показника об’єкта згідно з відгуками користувачів під час застосування методів оцінювання опінії текстової інформації у web-документах. Запропоновано використання лінгвістичних змінних та застосування вагових коефіцієнтів для достовірнішого результату оцінювання емоційного забарвлення текстової інформації. Ключові слова: опінія, емоційне забарвлення, об’єкт, інтегральний показник, лінгвістична змінна, ваговий коефіцієнт.",
"corpus_id": 61155076,
"score": 1
},
{
"doc_id": "29675805",
"title": "Stability Analysis of Swarm with Interaction Time Delays Using Nearest Neighbors Information",
"abstract": "This paper investigates the collective behavior of a leader-follower system with communication time lags based on nearest neighbors information. The leaders proposed in the swarm can obtain the information from the environment, but the followers can not. Under the common assumptions, we prove that the individuals of the swarm with the local information will aggregate and form a cohesive cluster of finite size. Moreover, we can clearly see the effects of the time delays on the dynamic of the swarm from the simulations.",
"corpus_id": 29675805,
"score": 0
},
{
"doc_id": "28158263",
"title": "A Novel RBF Neural Network with Fast Training and Accurate Generalization",
"abstract": "For the reason of all the centers and radii needed to be adjusted iteratively, the learning speed of radial basis function (RBF) neural networks is always far slower than required, which obviously forms a bottleneck in many applications. To overcome such problem, we propose a fast and accurate RBF neural network in this paper. First we prove the universal approximation theorem for RBF neural networks with arbitrary centers and radii. Based on this theory, we propose a new learning algorithm called fast and accurate RBF neural network with random kernels (RBF-RK). With the arbitrary centers and radii, our RBF-RK algorithm only needs to adjust the output weights. The experimental results, on function approximation and classification problems, show that the new algorithm not only runs much faster than traditional learning algorithms, but also produces better or at least comparable generalization performance.",
"corpus_id": 28158263,
"score": 0
},
{
"doc_id": "22060041",
"title": "A Novel Fuzzy Neural Network with Fast Training and Accurate Generalization",
"abstract": "For the reason of all parameters in a conventional fuzzy neural network (FNN) needed to be adjusted iteratively, learning can be very slow and may suffer from local minima. To overcome these problems, we propose a novel FNN in this paper, which shows a fast speed and accurate generalization. First we state the universal approximation theorem for an FNN with random membership function parameters (FNN-RM). Since all the membership function parameters are arbitrarily chosen, the proposed FNN-RM algorithm needs to adjust only the output weights of FNNs. Experimental results on function approximation and classification problems show that the new algorithm not only provides thousands of times of speed-up over traditional learning algorithms, but also produces better generalization performance in comparison to other FNNs.",
"corpus_id": 22060041,
"score": 0
},
{
"doc_id": "8341217",
"title": "Multi-Space-Mapped SVMs for Multi-class Classification",
"abstract": "In SVMs-based multiple classification, it is not always possible to find an appropriate kernel function to map all the classes from different distribution functions into a feature space where they are linearly separable from each other. This is even worse if the number of classes is very large. As a result, the classification accuracy is not as good as expected. In order to improve the performance of SVMs-based multi-classifiers, this paper proposes a method, named multi-space-mapped SVMs, to map the classes into different feature spaces and then classify them. The proposed method reduces the requirements for the kernel function. Substantial experiments have been conducted on one-against-all, one-against-one, FSVM, DDAG algorithms and our algorithm using six UCI data sets. The statistical results show that the proposed method has a higher probability of finding appropriate kernel functions than traditional methods and outperforms others.",
"corpus_id": 8341217,
"score": 0
},
{
"doc_id": "10286680",
"title": "A Study on Information Extraction from PDF Files",
"abstract": "Portable Document Format (PDF) is increasingly being recognized as a common format of electronic documents. The prerequisite to management and indexing of PDF files is to extract information from them. This paper describes an approach for extracting information from PDF files. The key idea is to transform the text information parsed from PDF files into semi-structured information by injecting additional uniform tags. An extensible rule set is built on tags and other knowledge. Guided by the rules, one pattern matching algorithm based on a tree model is applied to obtain the necessary information. A further experiment proved that this method was effective.",
"corpus_id": 10286680,
"score": 0
}
] |
arnetminer | {
"doc_id": "42555464",
"title": "Nesting Algorithm for Multi-Classification Problems",
"abstract": "Support vector machines (SVMs) are originally designed for binary classifications. As for multi-classifications, they are usually converted into binary ones. In the conventional multi-classifiable algorithms, One-against-One algorithm is a very power method. However, there exists a middle unclassifiable region. In order to overcome this drawback, a novel method called Nesting Algorithm is presented in this paper. Our ideas are as follows: firstly, construct the optimal hyperplanes based on One-against-One approach. Secondly, if there exist data points in the middle unclassifiable region, select them to construct the optimal hyperplanes with the same hyperparameters. Thirdly, repeat the second step until there are no data points in the unclassifiable region or the region is disappeared. In this paper, we also prove the validity of the proposed algorithm for unclassifiable region and give the computational complexity analysis of the method. In order to examine the training accuracy and the generalization performance of the proposed algorithm, One-against-One algorithm, fuzzy least square support vector machine (FLS-SVM) and the proposed algorithm are applied to five UCI datasets. The results show that the training accuracy of the proposed algorithm is higher than the others, and its generalization performance is also comparable with them.",
"corpus_id": 42555464
} | [
{
"doc_id": "215471",
"title": "Nesting One-Against-One Algorithm Based on SVMs for Pattern Classification",
"abstract": "Support vector machines (SVMs), which were originally designed for binary classifications, are an excellent tool for machine learning. For the multiclass classifications, they are usually converted into binary ones before they can be used to classify the examples. In the one-against-one algorithm with SVMs, there exists an unclassifiable region where the data samples cannot be classified by its decision function. This paper extends the one-against-one algorithm to handle this problem. We also give the convergence and computational complexity analysis of the proposed method. Finally, one-against-one, fuzzy decision function (FDF), and decision-directed acyclic graph (DDAG) algorithms and our proposed method are compared using five University of California at Irvine (UCI) data sets. The results report that the proposed method can handle the unclassifiable region better than others.",
"corpus_id": 215471,
"score": 1
},
{
"doc_id": "9933281",
"title": "A Learning Process Using SVMs for Multi-agents Decision Classification",
"abstract": "In order to resolve decision classification problem in multiple agents system, this paper first introduces the architecture of multiple agents system. It then proposes a support vector machines based assessment approach, which has the ability to learn the rules form previous assessment results from domain experts. Finally, the experiment are conducted on the artificially dataset to illustrate how the proposed works, and the results show the proposed method has effective learning ability for decision classification problems.",
"corpus_id": 9933281,
"score": 1
},
{
"doc_id": "44873280",
"title": "Twi-Map Support Vector Machine for Multi-classification Problems",
"abstract": "In this paper, a novel method called Twi-Map Support Vector Machines (TMSVM) for multi-classification problems is presented. Our ideas are as follows: Firstly, the training data set is mapped into a high-dimensional feature space. Secondly, we calculate the distances between the training data points and hyperplanes. Thirdly, we view the new vector consisting of the distances as new training data point. Finally, we map the new training data points into another high-dimensional feature space with the same kernel function and construct the optimal hyperplanes. In order to examine the training accuracy and the generalization performance of the proposed algorithm, One-against-One algorithm, Fuzzy Least Square Support Vector Machine (FLS-SVM) and the proposed algorithm are applied to five UCI data sets. Comparison results obtained by using three algorithms are given. The results show that the training accuracy and the testing one of the proposed algorithm are higher than those of One-against-One and FLS-SVM.",
"corpus_id": 44873280,
"score": 1
},
{
"doc_id": "8341217",
"title": "Multi-Space-Mapped SVMs for Multi-class Classification",
"abstract": "In SVMs-based multiple classification, it is not always possible to find an appropriate kernel function to map all the classes from different distribution functions into a feature space where they are linearly separable from each other. This is even worse if the number of classes is very large. As a result, the classification accuracy is not as good as expected. In order to improve the performance of SVMs-based multi-classifiers, this paper proposes a method, named multi-space-mapped SVMs, to map the classes into different feature spaces and then classify them. The proposed method reduces the requirements for the kernel function. Substantial experiments have been conducted on one-against-all, one-against-one, FSVM, DDAG algorithms and our algorithm using six UCI data sets. The statistical results show that the proposed method has a higher probability of finding appropriate kernel functions than traditional methods and outperforms others.",
"corpus_id": 8341217,
"score": 1
},
{
"doc_id": "206597617",
"title": "Binary Tree Support Vector Machine Based on Kernel Fisher Discriminant for Multi-classification",
"abstract": "In order to improve the accuracy of the conventional algorithms for multi-classifications, we propose a binary tree support vector machine based on Kernel Fisher Discriminant in this paper. To examine the training accuracy and the generalization performance of the proposed algorithm, One-against-All, One-against-One and the proposed algorithms are applied to five UCI data sets. The experimental results show that in general, the training and the testing accuracy of the proposed algorithm is the best one, and there exist no unclassifiable regions in the proposed algorithm.",
"corpus_id": 206597617,
"score": 1
},
{
"doc_id": "6365125",
"title": "Complex Analysis of Anisotropic Swarms with Gaussian Profiles",
"abstract": "This paper considers an M-member \"individual-based\" continuous time swarm model with individuals moving with a nutrient profile (or an attractant/repellent) in an n- dimensional space. It is proved that the swarm members aggregate and eventually form a cohesive cluster of finite size for Gaussian Profiles. Moreover, all the swarm members will converge to more favorable areas of the Gaussian profile under certain conditions.",
"corpus_id": 6365125,
"score": 0
},
{
"doc_id": "22060041",
"title": "A Novel Fuzzy Neural Network with Fast Training and Accurate Generalization",
"abstract": "For the reason of all parameters in a conventional fuzzy neural network (FNN) needed to be adjusted iteratively, learning can be very slow and may suffer from local minima. To overcome these problems, we propose a novel FNN in this paper, which shows a fast speed and accurate generalization. First we state the universal approximation theorem for an FNN with random membership function parameters (FNN-RM). Since all the membership function parameters are arbitrarily chosen, the proposed FNN-RM algorithm needs to adjust only the output weights of FNNs. Experimental results on function approximation and classification problems show that the new algorithm not only provides thousands of times of speed-up over traditional learning algorithms, but also produces better generalization performance in comparison to other FNNs.",
"corpus_id": 22060041,
"score": 0
},
{
"doc_id": "11674952",
"title": "The Design of the GPS-Based Surveying Robot Automatic Monitoring System for Underground Mining Safety",
"abstract": "Earth subsidence in underground mining is an unavoidable problem in mining production, and timely and scientific observation and early warning is one of the important factors in the security of mining production. Though the surveying robot (i.e. automatic electronic total station) can automatically (or semi-automatically) monitor ground deformation for underground mining, the stability of the station location (monitor base station) has great impact on the monitor precision and when the measurement vision is covered, the surveying robot fails to monitor the corresponding deformation point. In order to tackle the above problem, the author and the research team have integrated the technology of GPS (Global Positioning System) with surveying robot and developed the GPS-based surveying robot automatic monitoring system for underground mining safety, which completely solves the foresaid problem, simplifies the monitor program and reduces the fixed investment cost of monitor. The article introduces the structure and working principle of the GPS-based surveying robot automatic monitoring system for underground mining safety, presents examples of monitor.",
"corpus_id": 11674952,
"score": 0
},
{
"doc_id": "17229395",
"title": "Region-of-interest coding of 3D mesh based on wavelet transform",
"abstract": "A scheme for the region of interest (ROI) coding of 3D meshes is proposed for the first time. The ROI is encoded with higher fidelity than the rest region, and the \"priority\" of ROI relative to the rest region (background, BG) can be specified by encoder or decoder (user). Wavelet transform is used on 3D mesh and zerotrees are adopted to organize the coefficients. The wavelet coefficients of ROI are scaled up and encoded with a modified set partitioning in hierarchical trees (SPIHT) algorithm. In additional, a fast algorithm is proposed for creating the ROI mask. Once the quality of reconstructed ROI becomes high enough, the transmission can be intermitted and much transmission bandwidth and storage space will be saved consequently.",
"corpus_id": 17229395,
"score": 0
},
{
"doc_id": "12104280",
"title": "Classification using support vector machines with graded resolution",
"abstract": "A method which we call support vector machine with graded resolution (SVM-GR) is proposed in this paper. During the training of the SVM-GR, we first form data granules to train the SVM-GR and remove those data granules that are not support vectors. We then use the remaining training samples to train the SVM-GR. Compared with the traditional SVM, our SVM-GR algorithm requires fewer training samples and support vectors, hence the computational time and memory requirements for the SVM-GR are much smaller than those of a conventional SVM that use the entire dataset. Experiments on benchmark data sets show that the generalization performance of the SVM-GR is comparable to the traditional SVM.",
"corpus_id": 12104280,
"score": 0
}
] |
arnetminer | {
"doc_id": "5978304",
"title": "Estimation of Distribution Algorithms for the Machine-Part Cell Formation",
"abstract": "The machine-part cell formation is a NP- complete combinational optimization in cellular manufacturing system. Previous researches have revealed that although the genetic algorithm (GA) can get high quality solutions, special selection strategy, crossover and mutation operators as well as the parameters must be defined previously to solve the problem efficiently and flexibly. The Estimation of Distribution Algorithms (EDAs) has recently been recognized as a new computing paradigm in evolutionary computation which can overcome some drawbacks of the traditional GA mentioned above. In this paper, two kinds of the EDAs, UMDA and EBNA BIC are applied to solve the machine-part cell formation problem. Simulation results on six well known problems show that the UMDA and EBNA BIC can attain satisfied solutions more simply and efficiently.",
"corpus_id": 5978304
} | [
{
"doc_id": "17698779",
"title": "Neural Network Identification Method Applied to the Nonlinear System",
"abstract": "A kind of neural network solution method has been proposed in the paper aiming at a class of non-linear process control system with the characteristic of time delay. In this scheme, a new-type associative memory neural network is used to model the controlled system, and the fuzzy neural network with inverse identification structure is adopted to control the nonlinear process system. This fuzzy neural network control method adopts the structure of three layers combine neural network identifier with inverse structure. Computer simulation and lab application show that it is effective to adopt this scheme to control on-linear process system with time delay.",
"corpus_id": 17698779,
"score": 1
},
{
"doc_id": "13120258",
"title": "Hybrid Ant Colony Algorithm and Its Application on Function Optimization",
"abstract": "A new hybrid ant colony algorithm was proposed. Firstly, weight factor was introduced to the binary ant colony algorithm, and then we obtained a new probability by combining probability model of Population based incremental learning (PBIL) with transfer probability of ants pheromone . The new population are produced by probability model of PBIL, transfer probability of ants pheromone and the probability of proposed algorithm so that population polymorphism is ensured and the optimal convergence rate and the ability of breaking away from the local minima are improved. Optimization simulation results based on the benchmark test functions show that the hybrid algorithm has higher convergence rate and stability than binary ant colony algorithm (BACA) and Population based incremental learning (PBIL).",
"corpus_id": 13120258,
"score": 1
},
{
"doc_id": "13160546",
"title": "A Population-Based Incremental Learning Algorithm with Elitist Strategy",
"abstract": "The population-based incremental learning (PBIL) is a novel evolutionary algorithm combined the mechanisms of the Genetic Algorithm with competitive learning. In this paper, the influence of the number of selected best solutions on the convergence speed of the PBIL is studied by experiment. Based on experimental results, a PBIL algorithm with elitist strategy, named Double Learning PBIL (DLPBIL), is proposed. The new algorithm learns both the selected best solutions in current population and the optimal solution found so far in the algorithm at same time. Experimental results show that the DLPBIL out-performs the standard PBIL. Both the convergence speed and the solution quality are improved.",
"corpus_id": 13160546,
"score": 1
},
{
"doc_id": "14202528",
"title": "Adding the temporal dimension to search - a case study in publication search",
"abstract": "The most well known search techniques are perhaps the PageRank and HITS algorithms. In this paper, we argue that these algorithms miss an important dimension, the temporal dimension. Quality pages in the past may not be quality pages now or in the future. These techniques favor older pages because these pages have many in-links accumulated over time. New pages, which may be of high quality, have few or no in-links and are left behind. Research publication search has the same problem. If we use the PageRank or HITS algorithm, those older or classic papers are ranked high due to the large number of citations that they received in the past. This paper studies the temporal dimension of search in the context of research publication. A number of methods are proposed to deal with the problem based on analyzing the behavior history and the source of each publication. These methods are evaluated empirically. Our results show that they are highly effective.",
"corpus_id": 14202528,
"score": 0
},
{
"doc_id": "17844333",
"title": "Hybrid Algorithm Combining Ant Colony Algorithm with Genetic Algorithm for Continuous Domain",
"abstract": "Ant colony algorithm is a kind of new heuristic biological modeling method which has the ability of parallel processing and global searching. By use of the properties of ant colony algorithm and genetic algorithm, the hybrid algorithm which adopts genetic algorithm to distribute the original pheromone is proposed to solve the continuous optimization problem. Several solutions are obtained using the ant colony algorithm through pheromone accumulation and renewal. Finally, by using crossover and mutation operation of genetic algorithm, some effective solutions are obtained. The results of experiments show better performances of the new algorithm based on six continuous test functions compared with the methods available in literature.",
"corpus_id": 17844333,
"score": 0
},
{
"doc_id": "14322823",
"title": "Learning with Positive and Unlabeled Examples Using Weighted Logistic Regression",
"abstract": "The problem of learning with positive and unlabeled examples arises frequently in retrieval applications. We transform the problem into a problem of learning with noise by labeling all unlabeled examples as negative and use a linear function to learn from the noisy examples. To learn a linear function with noise, we perform logistic regression after weighting the examples to handle noise rates of greater than a half. With appropriate regularization, the cost function of the logistic regression problem is convex, allowing the problem to be solved efficiently. We also propose a performance measure that can be estimated from positive and unlabeled examples for evaluating retrieval performance. The measure, which is proportional to the product of precision and recall, can be used with a validation set to select regularization parameters for logistic regression. Experiments on a text classification corpus show that the methods proposed are effective.",
"corpus_id": 14322823,
"score": 0
},
{
"doc_id": "1737886",
"title": "An Effective Approach for Hiding Sensitive Knowledge in Data Publishing",
"abstract": "Recent efforts have been made to address the problem of privacy preservation in data publishing. However, they mainly focus on preserving data privacy. In this paper, we address another aspect of privacy preservation in data publishing, where some of the knowledge implied by a dataset are regarded as private or sensitive information. In particular, we consider that the data are stored in a transaction database, and the knowledge is represented in the form of patterns. We present a data sanitization algorithm, called SanDB, for effectively protecting a set of sensitive patterns, meanwhile attempting to minimize the impact of data sanitization on the non-sensitive patterns. The experimental results show that SanDB can achieve significant improvement over the best approach presented in the literature.",
"corpus_id": 1737886,
"score": 0
},
{
"doc_id": "12209134",
"title": "JOINT ESTIMATION OF IMAGE AND COIL SENSITIVITIES IN PARALLEL SPIRAL MRI",
"abstract": "Spiral MRI has received increasing attention due to its reduced T 2*-decay and robustness against bulk physiologic motion. In parallel imaging, spiral trajectories are especially of great interest due to their inherent self-calibration capabilities, which is especially useful for dynamic imaging applications such as fMRI and cardiac imaging. The existing self-calibration techniques for spiral use the k-space center data that are sampled densely in the accelerated acquisition for coil sensitivity estimation. There exists a trade-off in choosing the radius of the center data: it must be sufficiently large to contain all major spatial frequencies of coil sensitivity, but not too large to cause significant aliasing artifacts due to undersampling below Nyquist rate as the trajectory moves away from the center k-space. To address this tradeoff, we generalize the JSENSE approach, which has demonstrated success in Cartesian case, to spiral trajectory. Specifically, the method jointly estimates the coil sensitivities and reconstructs the desired image through cross validations so that the sensitivities are estimated from the full data recovered by SENSE instead of the center k-space data only, thereby increasing high frequency information without introducing aliasing artifacts. We use experimental results to show the proposed method improves sensitivities, which leads to a more accurate SENSE reconstruction",
"corpus_id": 12209134,
"score": 0
}
] |
arnetminer | {
"doc_id": "17465692",
"title": "Research activities in database management and information retrieval at University of Illinois at Chicago",
"abstract": "Today, millions of people employ powerful search engines such as Google to retrieve information from the Web on a daily basis. In spite of the success, there are problems associated with such powerful search engines. First, the number of pages which are captured by a single search engine is a few billion, while it has been reported that the entire Web has about 500 billion pages and is rapidly growing. Thus, the coverage of the Web by a single search engine is rather small. Second, an index database has to be built to contain the key information of the captured Web pages. This database is huge and takes substantial amount of time to refresh its contents. Thus, it is not surprising that substantial amount of information in the indexed database can be weeks out-of-date. Third, in order to retrieve information from the large database when there are a large number of queries, enormous hardware resources are needed. It has been reported that Google is utilizing many thousands of computers.",
"corpus_id": 17465692
} | [
{
"doc_id": "2182731",
"title": "Speed-up iterative frequent itemset mining with constraint changes",
"abstract": "Mining of frequent itemsets is a fundamental data mining task. Past research has proposed many efficient algorithms for this purpose. Recent work also highlighted the importance of using constraints to focus the mining process to mine only those relevant itemsets. In practice, data mining is often an interactive and iterative process. The user typically changes constraints and runs the mining algorithm many times before being satisfied with the final results. This interactive process is very time consuming. Existing mining algorithms are unable to take advantage of this iterative process to use previous mining results to speed up the current mining process. This results in an enormous waste of time and computation. In this paper, we propose an efficient technique to utilize previous mining results to improve the efficiency of current mining when constraints are changed. We first introduce the concept of tree boundary to summarize useful information available from previous mining. We then show that the tree boundary provides an effective and efficient framework for the new mining. The proposed technique has been implemented in the context of two existing frequent itemset mining algorithms, FP-tree and tree projection. Experiment results on both synthetic and real-life datasets show that the proposed approach achieves a dramatic saving of computation.",
"corpus_id": 2182731,
"score": 1
},
{
"doc_id": "6193650",
"title": "Dealing with different distributions in learning from",
"abstract": "In the problem of learning with positive and unlabeled examples, existing research all assumes that positive examples P and the hidden positive examples in the unlabeled set U are generated from the same distribution. This assumption may be violated in practice. In such cases, existing methods perform poorly. This paper proposes a novel technique A-EM to deal with the problem. Experimental results with product page classification demonstrate the effectiveness of the proposed technique.",
"corpus_id": 6193650,
"score": 1
},
{
"doc_id": "61155076",
"title": "Opinion Mining",
"abstract": "Описанл процедуру формування інтегрального показника об’єкта згідно з відгуками користувачів під час застосування методів оцінювання опінії текстової інформації у web-документах. Запропоновано використання лінгвістичних змінних та застосування вагових коефіцієнтів для достовірнішого результату оцінювання емоційного забарвлення текстової інформації. Ключові слова: опінія, емоційне забарвлення, об’єкт, інтегральний показник, лінгвістична змінна, ваговий коефіцієнт.",
"corpus_id": 61155076,
"score": 1
},
{
"doc_id": "8786674",
"title": "Entity discovery and assignment for opinion mining applications",
"abstract": "Opinion mining became an important topic of study in recent years due to its wide range of applications. There are also many companies offering opinion mining services. One problem that has not been studied so far is the assignment of entities that have been talked about in each sentence. Let us use forum discussions about products as an example to make the problem concrete. In a typical discussion post, the author may give opinions on multiple products and also compare them. The issue is how to detect what products have been talked about in each sentence. If the sentence contains the product names, they need to be identified. We call this problem entity discovery. If the product names are not explicitly mentioned in the sentence but are implied due to the use of pronouns and language conventions, we need to infer the products. We call this problem entity assignment. These problems are important because without knowing what products each sentence talks about the opinion mined from the sentence is of little use. In this paper, we study these problems and propose two effective methods to solve the problems. Entity discovery is based on pattern discovery and entity assignment is based on mining of comparative sentences. Experimental results using a large number of forum posts demonstrate the effectiveness of the technique. Our system has also been successfully tested in a commercial setting.",
"corpus_id": 8786674,
"score": 1
},
{
"doc_id": "12241191",
"title": "Finding Actionable Knowledge via Automated Comparison",
"abstract": "The problem of finding interesting and actionable patterns is a major challenge in data mining. It has been studied by many data mining researchers. The issue is that data mining algorithms often generate too many patterns, which make it very hard for the user to find those truly useful ones. Over the years many techniques have been proposed. However, few have made it to real-life applications. At the end of 2005, we built a data mining system for Motorola (called Opportunity Map) to enable the user to explore the space of a large number of rules in order to find actionable knowledge. The approach is based on the concept of rule cubes and operations on rule cubes. A rule cube is similar to a data cube, but stores rules. Since its deployment, some issues have also been identified during the regular use of the system in Motorola. One of the key issues is that although the operations on rule cubes are flexible, each operation is primitive and has to be initiated by the user. Finding a piece of actionable knowledge typically involves many operations and intense visual inspections, which are labor-intensive and time-consuming. From interactions with our users, we identified a generic problem that is crucial for finding actionable knowledge. The problem involves extensive comparison of sub-populations and identification of the cause of their differences. This paper first defines the problem and then proposes an effective method to solve the problem automatically. To the best of our knowledge, there is no reported study of this problem. The new method has been added to the Opportunity Map system and is now in daily use in Motorola.",
"corpus_id": 12241191,
"score": 1
},
{
"doc_id": "45268591",
"title": "WBEM Based Distributed Network Monitoring",
"abstract": "In the paper we identify the needs in efficient management of distributed networks and some problematic areas in the field. Then we introduce Web Based Enterprise Management (WBEM) to address the problem of providing a unified way to model all kinds of managed elements in a single information model in heterogeneous network environments. The advantages brought about by the use of WBEM in network management solve some critical problems existing in current network management. Based on the brief description of components of WBEM, we discuss in depth the basic WBEM instrumentation and multi-tiered WBEM enabled management infrastructure. We also apply this multi-tiered management infrastructure to a network management application scenario to monitor network activities for unexpected behaviors.",
"corpus_id": 45268591,
"score": 0
},
{
"doc_id": "32186791",
"title": "A singular integral of the composite operator",
"abstract": "We establish the Poincare-type inequalities for the composition of the homotopy operator and the projection operator. We also obtain some estimates for the integral of the composite operator with a singular density.",
"corpus_id": 32186791,
"score": 0
},
{
"doc_id": "29576879",
"title": "An Ontology-Based Approach to Knowledge Management",
"abstract": "Combining with the respective advantages of ontology and Web Service, this paper puts forward a knowledge management approach to overcome the problem of semantic heterogeneity in distributed environments. The emphasis is placed on establishing an XML-oriented semantic data model and the mapping between XML data based on a global semantic view, which enhances knowledge process efficiency, accuracy and the semantic interoperability as well.",
"corpus_id": 29576879,
"score": 0
},
{
"doc_id": "10286680",
"title": "A Study on Information Extraction from PDF Files",
"abstract": "Portable Document Format (PDF) is increasingly being recognized as a common format of electronic documents. The prerequisite to management and indexing of PDF files is to extract information from them. This paper describes an approach for extracting information from PDF files. The key idea is to transform the text information parsed from PDF files into semi-structured information by injecting additional uniform tags. An extensible rule set is built on tags and other knowledge. Guided by the rules, one pattern matching algorithm based on a tree model is applied to obtain the necessary information. A further experiment proved that this method was effective.",
"corpus_id": 10286680,
"score": 0
},
{
"doc_id": "14443186",
"title": "Design and Analysis of Test Signals for System Identification",
"abstract": "The toolbox supports virtually all polynomial (transfer function) and state-space model representations and model identification by nonparametric correlation and spectral analysis. Toolbox functions can identify continuousor discrete-time models with an arbitrary number of input and output channels. You can import and preprocess measured data, generate parametric and nonparametric models, and validate estimated models against measured data.",
"corpus_id": 14443186,
"score": 0
}
] |
arnetminer | {
"doc_id": "2646885",
"title": "NET - A System for Extracting Web Data from Flat and Nested Data Records",
"abstract": "This paper studies automatic extraction of structured data from Web pages. Each of such pages may contain several groups of structured data records. Existing automatic methods still have several limitations. In this paper, we propose a more effective method for the task. Given a page, our method first builds a tag tree based on visual information. It then performs a post-order traversal of the tree and matches subtrees in the process using a tree edit distance method and visual cues. After the process ends, data records are found and data items in them are aligned and extracted. The method can extract data from both flat and nested data records. Experimental evaluation shows that the method performs the extraction task accurately.",
"corpus_id": 2646885
} | [
{
"doc_id": "6193650",
"title": "Dealing with different distributions in learning from",
"abstract": "In the problem of learning with positive and unlabeled examples, existing research all assumes that positive examples P and the hidden positive examples in the unlabeled set U are generated from the same distribution. This assumption may be violated in practice. In such cases, existing methods perform poorly. This paper proposes a novel technique A-EM to deal with the problem. Experimental results with product page classification demonstrate the effectiveness of the proposed technique.",
"corpus_id": 6193650,
"score": 1
},
{
"doc_id": "61155076",
"title": "Opinion Mining",
"abstract": "Описанл процедуру формування інтегрального показника об’єкта згідно з відгуками користувачів під час застосування методів оцінювання опінії текстової інформації у web-документах. Запропоновано використання лінгвістичних змінних та застосування вагових коефіцієнтів для достовірнішого результату оцінювання емоційного забарвлення текстової інформації. Ключові слова: опінія, емоційне забарвлення, об’єкт, інтегральний показник, лінгвістична змінна, ваговий коефіцієнт.",
"corpus_id": 61155076,
"score": 1
},
{
"doc_id": "8786674",
"title": "Entity discovery and assignment for opinion mining applications",
"abstract": "Opinion mining became an important topic of study in recent years due to its wide range of applications. There are also many companies offering opinion mining services. One problem that has not been studied so far is the assignment of entities that have been talked about in each sentence. Let us use forum discussions about products as an example to make the problem concrete. In a typical discussion post, the author may give opinions on multiple products and also compare them. The issue is how to detect what products have been talked about in each sentence. If the sentence contains the product names, they need to be identified. We call this problem entity discovery. If the product names are not explicitly mentioned in the sentence but are implied due to the use of pronouns and language conventions, we need to infer the products. We call this problem entity assignment. These problems are important because without knowing what products each sentence talks about the opinion mined from the sentence is of little use. In this paper, we study these problems and propose two effective methods to solve the problems. Entity discovery is based on pattern discovery and entity assignment is based on mining of comparative sentences. Experimental results using a large number of forum posts demonstrate the effectiveness of the technique. Our system has also been successfully tested in a commercial setting.",
"corpus_id": 8786674,
"score": 1
},
{
"doc_id": "2182731",
"title": "Speed-up iterative frequent itemset mining with constraint changes",
"abstract": "Mining of frequent itemsets is a fundamental data mining task. Past research has proposed many efficient algorithms for this purpose. Recent work also highlighted the importance of using constraints to focus the mining process to mine only those relevant itemsets. In practice, data mining is often an interactive and iterative process. The user typically changes constraints and runs the mining algorithm many times before being satisfied with the final results. This interactive process is very time consuming. Existing mining algorithms are unable to take advantage of this iterative process to use previous mining results to speed up the current mining process. This results in an enormous waste of time and computation. In this paper, we propose an efficient technique to utilize previous mining results to improve the efficiency of current mining when constraints are changed. We first introduce the concept of tree boundary to summarize useful information available from previous mining. We then show that the tree boundary provides an effective and efficient framework for the new mining. The proposed technique has been implemented in the context of two existing frequent itemset mining algorithms, FP-tree and tree projection. Experiment results on both synthetic and real-life datasets show that the proposed approach achieves a dramatic saving of computation.",
"corpus_id": 2182731,
"score": 1
},
{
"doc_id": "12241191",
"title": "Finding Actionable Knowledge via Automated Comparison",
"abstract": "The problem of finding interesting and actionable patterns is a major challenge in data mining. It has been studied by many data mining researchers. The issue is that data mining algorithms often generate too many patterns, which make it very hard for the user to find those truly useful ones. Over the years many techniques have been proposed. However, few have made it to real-life applications. At the end of 2005, we built a data mining system for Motorola (called Opportunity Map) to enable the user to explore the space of a large number of rules in order to find actionable knowledge. The approach is based on the concept of rule cubes and operations on rule cubes. A rule cube is similar to a data cube, but stores rules. Since its deployment, some issues have also been identified during the regular use of the system in Motorola. One of the key issues is that although the operations on rule cubes are flexible, each operation is primitive and has to be initiated by the user. Finding a piece of actionable knowledge typically involves many operations and intense visual inspections, which are labor-intensive and time-consuming. From interactions with our users, we identified a generic problem that is crucial for finding actionable knowledge. The problem involves extensive comparison of sub-populations and identification of the cause of their differences. This paper first defines the problem and then proposes an effective method to solve the problem automatically. To the best of our knowledge, there is no reported study of this problem. The new method has been added to the Opportunity Map system and is now in daily use in Motorola.",
"corpus_id": 12241191,
"score": 1
},
{
"doc_id": "15435032",
"title": "Characterizing the dynamic connectivity between genes by variable parameter regression and Kalman filtering based on temporal gene expression data",
"abstract": "MOTIVATION\nOne popular method for analyzing functional connectivity between genes is to cluster genes with similar expression profiles. The most popular metrics measuring the similarity (or dissimilarity) among genes include Pearson's correlation, linear regression coefficient and Euclidean distance. As these metrics only give some constant values, they can only depict a stationary connectivity between genes. However, the functional connectivity between genes usually changes with time. Here, we introduce a novel insight for characterizing the relationship between genes and find out a proper mathematical model, variable parameter regression and Kalman filtering to model it.\n\n\nRESULTS\nWe applied our algorithm to some simulated data and two pairs of real gene expression data. The changes of connectivity in simulated data are closely identical with the truth and the results of two pairs of gene expression data show that our method has successfully demonstrated the dynamic connectivity between genes.\n\n\nCONTACT\njiangtz@nlpr.ia.ac.cn.",
"corpus_id": 15435032,
"score": 0
},
{
"doc_id": "32302868",
"title": "Non-linear Correlation Techniques in Educational Data Mining",
"abstract": "There is such an increasing interest in data mining and educational systems currently that has made educational data mining as a new growing research community. This paper explores how to develope new methods for discovering knowledge of data from educational context. The non-linear correlation technology was introduced and applied in the mining process in the whole knowledge achieved. Meanwhile, we have applied these methods in the real course management datasets and found correspondent results for the educators.",
"corpus_id": 32302868,
"score": 0
},
{
"doc_id": "14570653",
"title": "A New Parallel Segmentation Model Based on Dictionary and Mutual Information",
"abstract": "It is difficult to compute the word frequency for mutual information segmentation. Statistic of word frequency of parallel mutual information is integrated with dictionary segmentation to improve efficiency in this paper. The parallel model and dispatching policy are presented, the paper also gives the speed up ratio of parallel model at the same time, periods pattern string and non periods pattern string are optimized in parallel model. Experiment show that the algorithm is available. The parallel model also can use for other segmentation algorithms that base on statistic of word frequency.",
"corpus_id": 14570653,
"score": 0
},
{
"doc_id": "14885080",
"title": "An Efficient and Accurate Method for 3D-Point Reconstruction from Multiple Views",
"abstract": "In this paper we consider the problem of finding the position of a point in space given its projections in multiple images taken by cameras with known calibration and pose. Ideally the 3D point can be obtained as the intersection of multiple known rays in space. However, with noise the rays do not meet at a single point generally. Therefore, it is necessary to find a best point of intersection. In this paper we propose a modification of the method (Ma et al., 2001. Journal of Communications in Information and Systems, (1):51–73) based on the multiple-view epipolar constraints. The solution is simple in concept and straightforward to implement. It includes generally two steps: first, image points are corrected through approximating the error model to the first order, and then the 3D point can be reconstructed from the corrected image points using any generic triangulation method. Experiments are conducted both on simulated data and on real data to test the proposed method against previous methods. It is shown that results obtained with the proposed method are consistently more accurate than those of other linear methods. When the measurement error of image points is relatively small, its results are comparable to those of maximum likelihood estimation using Newton-type optimizers; and when processing image-point correspondences cross a small number of views, the proposed method is by far more efficient than the Newton-type optimizers.",
"corpus_id": 14885080,
"score": 0
},
{
"doc_id": "17077403",
"title": "Rate-Distortion Optimized Progressive Geometry Compression",
"abstract": "During progressive transmission of 3D geometry models, the transmission order of details at different region has great effects on the quality of reconstructed models at low bit-rate. This work presents a ratedistortion (R-D) optimized progressive geometry compression scheme to improve the quality of reconstructed models by adjusting the transmission order of details. In this scheme, the input mesh is partitioned into parts, then each part is encoded into bit-stream independently, and the encoded bit-streams are truncated into segments while getting the R-D characteristics of every segment, at last all segments are assembled into a codestream based on R-D optimization, which ensure the region with rich detail will be transmitted early and make the reconstructed mesh achieve better quality as soon as possible. Experimental results show that, as compared with the well-known PGC method, the proposed one provides better R-D performance. Moreover, it provides a novel way to realize the region of interest (ROI) coding of 3D meshes. Keywords--Rate-distortion optimization; Progressive compression; Mesh partition",
"corpus_id": 17077403,
"score": 0
}
] |
arnetminer | {
"doc_id": "9466712",
"title": "Dynamics of a two-species Lotka-Volterra competition system in a polluted environment with pulse toxicant input",
"abstract": "In most models of population dynamics in a polluted environment, the emission of toxicant is generally considered to be continuous, but it is often the case that toxicant is emitted in regular pulses. This paper deals with the effects of pulse toxicant input with constant rate on two-species Lotka-Volterra competition system in a polluted environment. The thresholds between persistence and extinction of each population are obtained. Moreover, our results indicate that the release amount of toxicant and the pulse period will affect the fate of each population. Finally, the results are verified through computer simulations.",
"corpus_id": 9466712
} | [
{
"doc_id": "6193650",
"title": "Dealing with different distributions in learning from",
"abstract": "In the problem of learning with positive and unlabeled examples, existing research all assumes that positive examples P and the hidden positive examples in the unlabeled set U are generated from the same distribution. This assumption may be violated in practice. In such cases, existing methods perform poorly. This paper proposes a novel technique A-EM to deal with the problem. Experimental results with product page classification demonstrate the effectiveness of the proposed technique.",
"corpus_id": 6193650,
"score": 1
},
{
"doc_id": "8786674",
"title": "Entity discovery and assignment for opinion mining applications",
"abstract": "Opinion mining became an important topic of study in recent years due to its wide range of applications. There are also many companies offering opinion mining services. One problem that has not been studied so far is the assignment of entities that have been talked about in each sentence. Let us use forum discussions about products as an example to make the problem concrete. In a typical discussion post, the author may give opinions on multiple products and also compare them. The issue is how to detect what products have been talked about in each sentence. If the sentence contains the product names, they need to be identified. We call this problem entity discovery. If the product names are not explicitly mentioned in the sentence but are implied due to the use of pronouns and language conventions, we need to infer the products. We call this problem entity assignment. These problems are important because without knowing what products each sentence talks about the opinion mined from the sentence is of little use. In this paper, we study these problems and propose two effective methods to solve the problems. Entity discovery is based on pattern discovery and entity assignment is based on mining of comparative sentences. Experimental results using a large number of forum posts demonstrate the effectiveness of the technique. Our system has also been successfully tested in a commercial setting.",
"corpus_id": 8786674,
"score": 1
},
{
"doc_id": "61155076",
"title": "Opinion Mining",
"abstract": "Описанл процедуру формування інтегрального показника об’єкта згідно з відгуками користувачів під час застосування методів оцінювання опінії текстової інформації у web-документах. Запропоновано використання лінгвістичних змінних та застосування вагових коефіцієнтів для достовірнішого результату оцінювання емоційного забарвлення текстової інформації. Ключові слова: опінія, емоційне забарвлення, об’єкт, інтегральний показник, лінгвістична змінна, ваговий коефіцієнт.",
"corpus_id": 61155076,
"score": 1
},
{
"doc_id": "2182731",
"title": "Speed-up iterative frequent itemset mining with constraint changes",
"abstract": "Mining of frequent itemsets is a fundamental data mining task. Past research has proposed many efficient algorithms for this purpose. Recent work also highlighted the importance of using constraints to focus the mining process to mine only those relevant itemsets. In practice, data mining is often an interactive and iterative process. The user typically changes constraints and runs the mining algorithm many times before being satisfied with the final results. This interactive process is very time consuming. Existing mining algorithms are unable to take advantage of this iterative process to use previous mining results to speed up the current mining process. This results in an enormous waste of time and computation. In this paper, we propose an efficient technique to utilize previous mining results to improve the efficiency of current mining when constraints are changed. We first introduce the concept of tree boundary to summarize useful information available from previous mining. We then show that the tree boundary provides an effective and efficient framework for the new mining. The proposed technique has been implemented in the context of two existing frequent itemset mining algorithms, FP-tree and tree projection. Experiment results on both synthetic and real-life datasets show that the proposed approach achieves a dramatic saving of computation.",
"corpus_id": 2182731,
"score": 1
},
{
"doc_id": "12241191",
"title": "Finding Actionable Knowledge via Automated Comparison",
"abstract": "The problem of finding interesting and actionable patterns is a major challenge in data mining. It has been studied by many data mining researchers. The issue is that data mining algorithms often generate too many patterns, which make it very hard for the user to find those truly useful ones. Over the years many techniques have been proposed. However, few have made it to real-life applications. At the end of 2005, we built a data mining system for Motorola (called Opportunity Map) to enable the user to explore the space of a large number of rules in order to find actionable knowledge. The approach is based on the concept of rule cubes and operations on rule cubes. A rule cube is similar to a data cube, but stores rules. Since its deployment, some issues have also been identified during the regular use of the system in Motorola. One of the key issues is that although the operations on rule cubes are flexible, each operation is primitive and has to be initiated by the user. Finding a piece of actionable knowledge typically involves many operations and intense visual inspections, which are labor-intensive and time-consuming. From interactions with our users, we identified a generic problem that is crucial for finding actionable knowledge. The problem involves extensive comparison of sub-populations and identification of the cause of their differences. This paper first defines the problem and then proposes an effective method to solve the problem automatically. To the best of our knowledge, there is no reported study of this problem. The new method has been added to the Opportunity Map system and is now in daily use in Motorola.",
"corpus_id": 12241191,
"score": 1
},
{
"doc_id": "1542038",
"title": "Analog circuit optimization system based on hybrid evolutionary algorithms",
"abstract": "This paper investigates a hybrid evolutionary-based design system for automated sizing of analog integrated circuits (ICs). A new algorithm, called competitive co-evolutionary differential evolution (CODE), is proposed to design analog ICs with practical user-defined specifications. On the basis of the combination of HSPICE and MATLAB, the system links circuit performances, evaluated through electrical simulation, to the optimization system in the MATLAB environment, once a circuit topology is selected. The system has been tested by typical and hard-to-design cases, such as complex analog blocks with stringent design requirements. The results show that the design specifications are closely met, even in highly-constrained situations. Comparisons with available methods like genetic algorithms and differential evolution, which use static penalty functions to handle design constraints, have also been carried out, showing that the proposed algorithm offers important advantages in terms of optimization quality and robustness. Moreover, the algorithm is shown to be efficient.",
"corpus_id": 1542038,
"score": 0
},
{
"doc_id": "29675805",
"title": "Stability Analysis of Swarm with Interaction Time Delays Using Nearest Neighbors Information",
"abstract": "This paper investigates the collective behavior of a leader-follower system with communication time lags based on nearest neighbors information. The leaders proposed in the swarm can obtain the information from the environment, but the followers can not. Under the common assumptions, we prove that the individuals of the swarm with the local information will aggregate and form a cohesive cluster of finite size. Moreover, we can clearly see the effects of the time delays on the dynamic of the swarm from the simulations.",
"corpus_id": 29675805,
"score": 0
},
{
"doc_id": "32625268",
"title": "An Energy-Minimizing Mesh Parameterization",
"abstract": "In this paper, we propose a new energy-minimizing mesh parameterization method, which linearly combines two new energies EQ and EM. It not only avoids triangles overlap in the parameter domain, but also is invariant under rotation, translation and scale transformations. We first parameterize the original 3D mesh to the parameter plane by using the energy-minimizing parameterization, and get the optimal effect by optimizing the weights wij gradually. Experimental results indicate that this optimized energy-minimizing method has low distortion and good stability.",
"corpus_id": 32625268,
"score": 0
},
{
"doc_id": "42790149",
"title": "Dynamic Complexities in a Lotka-volterra Predator-prey Model Concerning impulsive Control Strategy",
"abstract": "Based on the classical Lotka–Volterra predator–prey system, an impulsive differential equation to model the process of periodically releasing natural enemies and spraying pesticides at different fixed times for pest control is proposed and investigated. It is proved that there exists a globally asymptotically stable pest-eradication periodic solution when the impulsive period is less than some critical value. Otherwise, the system can be permanent. We observe that our impulsive control strategy is more effective than the classical one if we take chemical control efficiently. Numerical results show that the system we considered has more complex dynamics including period-doubling bifurcation, symmetry-breaking bifurcation, period-halving bifurcation, quasi-periodic oscillation, chaos and nonunique dynamics, meaning that several attractors coexist. Finally, a pest–predator stage-structured model for the pest concerning this kind of impulsive control strategy is proposed, and we also show that there exists a ...",
"corpus_id": 42790149,
"score": 0
},
{
"doc_id": "6106184",
"title": "Substitution effect on the geometry and electronic structure of the ferrocene",
"abstract": "The substitution effects on the geometry and the electronic structure of the ferrocene are systematically and comparatively studied using the density functional theory. It is found that -NH(2) and -OH substituents exert different influence on the geometry from -CH(3), -SiH(3), -PH(2), and -SH substituents. The topological analysis shows that all the C-C bonds in a-g are typical opened-shell interactions while the Fe-C bonds are typical closed-shell interactions. NBO analysis indicates that the cooperated interaction of d --> pi* and feedback pi --> d + 4s enhances the Fe-ligand interaction. The energy partitioning analysis demonstrates that the substituents with the second row elements lead to stronger iron-ligand interactions than those with the third row elements. The molecular electrostatic potential predicts that the electrophiles are expected to attack preferably the N, O, P, or S atoms in Fer-NH(2), Fer-OH, Fer-PH(2), and Fer-SH, and attack the ring C atoms in Fer-SiH(3) and Fer-CH(3). In turn, the nucleophiles are supposed to interact predominantly by attacking the hydrogen atoms. The simulated theoretical excitation spectra show that the maximum absorption peaks are red-shifted when the substituents going from second row elements to the third row elements.",
"corpus_id": 6106184,
"score": 0
}
] |
arnetminer | {
"doc_id": "19647536",
"title": "A memetic co-evolutionary differential evolution algorithm for constrained optimization",
"abstract": "In this paper, a memetic co-evolutionary differential evolution algorithm (MCODE) for constrained optimization is proposed. Two cooperative populations are constructed and evolved by independent differential evolution (DE) algorithm. The purpose of the first population is to minimize the objective function regardless of constraints, and that of the second population is to minimize the violation of constraints regardless of the objective function. Interaction and migration happens between the two populations when separate evolutions go on for several iterations, by migrating feasible solutions into the first group, and infeasible ones into the second group. Then, a Gaussian mutation is applied to the individuals when the best solution keep unchanged for several generations. The algorithm is tested by five famous benchmark problems, and is compared with methods based on penalty functions, co-evolutionary genetic algorithm (COGA), and co-evolutionary differential evolution algorithm (CODE). The results proved the proposed cooperative MCODE is very effective and efficient.",
"corpus_id": 19647536
} | [
{
"doc_id": "33407170",
"title": "Manufacturing Grid: Needs, Concept, and Architecture",
"abstract": "As a new approach, grid technology is rapidly used in scientific computing, large-scale data management, and collaborative work. But in the field of manufacturing, the application of grid is just at the beginning. The paper proposes the concept of manufacturing. The needs, definition and architecture of manufacturing gird are discussed, which explains why needs manufacturing grid, what is manufacturing grid and how to construct a manufacturing grid system.",
"corpus_id": 33407170,
"score": 1
},
{
"doc_id": "44715797",
"title": "Constrained Nonlinear State Estimation - A Differential Evolution Based Moving Horizon Approach",
"abstract": "A solution is proposed to estimate the states in the nonlinear discrete time system. Moving Horizon Estimation (MHE) is used to obtain the approximated states by minimizing a criterion that is the Euclidean form of the difference between the estimated outputs and the measured ones over a finite time horizon. The differential evolution (DE) algorithm is incorporated into the implementation of MHE in order to solve the optimization problem which is presented as a nonlinear programming problem due to the constraints. The effectiveness of the approach is illustrated in simulated systems that have appeared in the moving horizon estimation literature.",
"corpus_id": 44715797,
"score": 1
},
{
"doc_id": "5989425",
"title": "A New Performance Evaluation Model and AHP-Based Analysis Method in Service-Oriented Workflow",
"abstract": "In service-oriented architecture, services and workflows are closely related so that the research on service-oriented workflow attracts the attention of academia. Because of the loosely-coupled, autonomic and dynamic nature of service, the operation and performance evaluation of workflow meet some challenges, such as how to judge the quality of service (QoS) and what is the relation between QoS and workflow performance. In this paper we are going to address these challenges. First the definition of service is proposed, and the characteristics and operation mechanism of service-oriented workflow are presented. Then a service-oriented workflow performance evaluation model is described which combines the performance of the business system and IT system. The key performance indicators (KPI) are also depicted with their formal representation. Finally the improved Analytic Hierarchy Process is brought forward to analyze the correlation between different KPIs and select services.",
"corpus_id": 5989425,
"score": 1
},
{
"doc_id": "21860578",
"title": "An Effective PSO-Based Memetic Algorithm for Flow Shop Scheduling",
"abstract": "This paper proposes an effective particle swarm optimization (PSO)-based memetic algorithm (MA) for the permutation flow shop scheduling problem (PFSSP) with the objective to minimize the maximum completion time, which is a typical non-deterministic polynomial-time (NP) hard combinatorial optimization problem. In the proposed PSO-based MA (PSOMA), both PSO-based searching operators and some special local searching operators are designed to balance the exploration and exploitation abilities. In particular, the PSOMA applies the evolutionary searching mechanism of PSO, which is characterized by individual improvement, population cooperation, and competition to effectively perform exploration. On the other hand, the PSOMA utilizes several adaptive local searches to perform exploitation. First, to make PSO suitable for solving PFSSP, a ranked-order value rule based on random key representation is presented to convert the continuous position values of particles to job permutations. Second, to generate an initial swarm with certain quality and diversity, the famous Nawaz-Enscore-Ham (NEH) heuristic is incorporated into the initialization of population. Third, to balance the exploration and exploitation abilities, after the standard PSO-based searching operation, a new local search technique named NEH_1 insertion is probabilistically applied to some good particles selected by using a roulette wheel mechanism with a specified probability. Fourth, to enrich the searching behaviors and to avoid premature convergence, a simulated annealing (SA)-based local search with multiple different neighborhoods is designed and incorporated into the PSOMA. Meanwhile, an effective adaptive meta-Lamarckian learning strategy is employed to decide which neighborhood to be used in SA-based local search. Finally, to further enhance the exploitation ability, a pairwise-based local search is applied after the SA-based search. Simulation results based on benchmarks demonstrate the effectiveness of the PSOMA. Additionally, the effects of some parameters on optimization performances are also discussed",
"corpus_id": 21860578,
"score": 1
},
{
"doc_id": "32506945",
"title": "DE and NLP Based QPLS Algorithm",
"abstract": "As a novel evolutionary computing technique, Differential Evolution (DE) has been considered to be an effective optimization method for complex optimization problems, and achieved many successful applications in engineering. In this paper, a new algorithm of Quadratic Partial Least Squares (QPLS) based on Nonlinear Programming (NLP) is presented. And DE is used to solve the NLP so as to calculate the optimal input weights and the parameters of inner relationship. The simulation results based on the soft measurement of diesel oil solidifying point on a real crude distillation unit demonstrate that the superiority of the proposed algorithm to linear PLS and QPLS which is based on Sequential Quadratic Programming (SQP) in terms of fitting accuracy and computational costs.",
"corpus_id": 32506945,
"score": 1
},
{
"doc_id": "17844333",
"title": "Hybrid Algorithm Combining Ant Colony Algorithm with Genetic Algorithm for Continuous Domain",
"abstract": "Ant colony algorithm is a kind of new heuristic biological modeling method which has the ability of parallel processing and global searching. By use of the properties of ant colony algorithm and genetic algorithm, the hybrid algorithm which adopts genetic algorithm to distribute the original pheromone is proposed to solve the continuous optimization problem. Several solutions are obtained using the ant colony algorithm through pheromone accumulation and renewal. Finally, by using crossover and mutation operation of genetic algorithm, some effective solutions are obtained. The results of experiments show better performances of the new algorithm based on six continuous test functions compared with the methods available in literature.",
"corpus_id": 17844333,
"score": 0
},
{
"doc_id": "44850710",
"title": "Classification by Association Rule Analysis",
"abstract": "An interface between a dual mode mobile station and a piece of data terminal equipment supports both a logical digital interface and a logical analog interface, with the analog interface passing through a modem. At set-up of either a mobile originated or mobile terminated data/fax call, a supporting dual mode communications network determines whether a digital traffic channel is available on an air interface. If so, the channel is allocated to the mobile station, and data/fax communication with the data terminal equipment occurs using the logical digital interface. Otherwise, an analog voice channel is allocated, and the data/fax communication with the data terminal equipment occurs using the logical analog interface and modem. Within the supporting network, data/fax communications via the digital traffic channel are routed through an interworking functionality that terminates a radio link protocol utilized for communicating over the digital air interface. Analog voice channel data/fax communications are instead routed through a modem pool that terminates a selected modem protocol utilized by the modem for communicating over the analog air interface.",
"corpus_id": 44850710,
"score": 0
},
{
"doc_id": "29971030",
"title": "Measuring the meaning in time series clustering of text search queries",
"abstract": "We use a combination of proven methods from time series analysis and machine learning to explore the relationship between temporal and semantic similarity in web query logs; we discover that the combination of correlation and cycles is a good, but not perfect, sign of semantic relationship.",
"corpus_id": 29971030,
"score": 0
},
{
"doc_id": "159216",
"title": "A weight based compact genetic algorithm",
"abstract": "In order to improve the performance of the compact Genetic Algorithm (cGA) to solve difficult optimization problems, an improved cGA which named as the weight based compact Genetic Algorithm (wcGA) is proposed. In the wcGA, S individuals are generated from the probability vector in each generation, when the winner competing with the other S-1 individuals to update the probability vector, different weights are multiplied to each solution according to the sequence of the solution ranked in the S-1 individuals. Experimental results on three kinds of Benchmark functions show that the proposed algorithm has higher optimal precision than that of the standard cGA and the cGA simulating higher selection pressures.",
"corpus_id": 159216,
"score": 0
},
{
"doc_id": "5516091",
"title": "An New Global Dynamic Scheduling Algorithm with Multi-Hop Path Splitting and Multi-Pathing Using GridFTP",
"abstract": null,
"corpus_id": 5516091,
"score": 0
}
] |