Column summary (dataset viewer metadata):
Query Text: string, lengths 9 to 8.71k
Ranking 1: string, lengths 14 to 5.31k
Ranking 2: string, lengths 11 to 5.31k
Ranking 3: string, lengths 11 to 8.42k
Ranking 4: string, lengths 17 to 8.71k
Ranking 5: string, lengths 14 to 4.95k
Ranking 6: string, lengths 14 to 8.42k
Ranking 7: string, lengths 17 to 8.42k
Ranking 8: string, lengths 10 to 5.31k
Ranking 9: string, lengths 9 to 8.42k
Ranking 10: string, lengths 9 to 8.42k
Ranking 11: string, lengths 10 to 4.11k
Ranking 12: string, lengths 14 to 8.33k
Ranking 13: string, lengths 17 to 3.82k
score_0: float64, 1 to 1.25
score_1: float64, 0 to 0.25
score_2: float64, 0 to 0.25
score_3: float64, 0 to 0.24
score_4: float64, 0 to 0.24
score_5: float64, 0 to 0.24
score_6: float64, 0 to 0.21
score_7: float64, 0 to 0.1
score_8: float64, 0 to 0.02
score_9 to score_13: float64, 0 to 0
Conceptual Graphs and First-Order Logic. Conceptual Structures (CS) Theory is a logic-based knowledge representation formalism. To show that conceptual graphs have the power of first-order logic, it is necessary to have a mapping between both formalisms. A proof system, i.e. axioms and inference rules, for conceptual graphs is also useful. It must be sound (no false statement is derived from a true one) and complete (all possible tautologies can be derived from the axioms). This paper shows that Sowa's original definition of...
A situated classification solution of a resource allocation task represented in a visual language The Sisyphus room allocation problem solving example has been solved using a situated classification approach. A solution was developed from the protocol provided in terms of three heuristic classification systems, one classifying people, another rooms, and another tasks on an agenda of recommended room allocations. The domain ontology, problem data, problem-solving method, and domain-specific classification rules, have each been represented in a visual language. These knowledge structures compile to statements in a term subsumption knowledge representation language, and are loaded and run in a knowledge representation server to solve the problem. The user interface has been designed to provide support for human intervention in under-determined and over-determined situations, allowing advantage to be taken of the additional choices available in the first case, and a compromise solution to be developed in the second.
Viewpoint Consistency in Z and LOTOS: A Case Study. Specification by viewpoints is advocated as a suitable method of specifying complex systems. Each viewpoint describes the envisaged system from a particular perspective, using concepts and specification languages best suited for that perspective. Inherent in any viewpoint approach is the need to check or manage the consistency of viewpoints and to show that the different viewpoints do not impose contradictory requirements. In previous work we have described a range of techniques for...
Ontology, Metadata, and Semiotics The Internet is a giant semiotic system. It is a massive collection of Peirce's three kinds of signs: icons, which show the form of something; indices, which point to something; and symbols, which represent something according to some convention. But current proposals for ontologies and metadata have overlooked some of the most important features of signs. A sign has three aspects: it is (1) an entity that represents (2) another entity to (3) an agent. By looking only at the signs themselves, some metadata proposals have lost sight of the entities they represent and the agents (human, animal, or robot) which interpret them. With its three branches of syntax, semantics, and pragmatics, semiotics provides guidelines for organizing and using signs to represent something to someone for some purpose. Besides representation, semiotics also supports methods for translating patterns of signs intended for one purpose to other patterns intended for different but related purposes. This article shows how the fundamental semiotic primitives are represented in semantically equivalent notations for logic, including controlled natural languages and various computer languages.
Formal methods: state of the art and future directions. E.M. Clarke and J.M. Wing.
Integrating multiple paradigms within the blackboard framework While early knowledge-based systems suffered the frequent criticism of having little relevance to the real world, an increasing number of current applications deal with complex, real-world problems. Due to the complexity of real-world situations, no single general software technique can produce adequate results across different problem domains, and artificial intelligence usually needs to be integrated with conventional paradigms for efficient solutions. The complexity and diversity of real-world applications have also forced researchers in the AI field to focus more on integrating diverse knowledge representation and reasoning techniques for solving challenging, real-world problems. Our development environment, BEST (Blackboard-based Expert Systems Toolkit), aims to provide the ability to produce large-scale, evolvable, heterogeneous intelligent systems. BEST incorporates the best of multiple programming paradigms in order to avoid restricting users to a single way of expressing either knowledge or data. It combines rule-based programming, object-oriented programming, logic programming, procedural programming and blackboard modelling in a single architecture for knowledge engineering, so that the user can tailor a style of programming to the application, using any combination of methods to provide a complete solution. The deep integration of all these techniques yields a toolkit that is more effective, even for a single specific application, than any technique in isolation or a less fully integrated collection of techniques. Within the basic, knowledge-based programming paradigm, BEST offers a multiparadigm language for representing complex knowledge, including incomplete and uncertain knowledge.
Its problem solving facilities include truth maintenance, inheritance over arbitrary relations, temporal and hypothetical reasoning, opportunistic control, automatic partitioning and scheduling, and both blackboard and distributed problem-solving paradigms.
Logical foundations of object-oriented and frame-based languages We propose a novel formalism, called Frame Logic (abbr., F-logic), that accounts in a clean and declarative fashion for most of the structural aspects of object-oriented and frame-based languages. These features include object identity, complex objects, inheritance, polymorphic types, query methods, encapsulation, and others. In a sense, F-logic stands in the same relationship to the object-oriented paradigm as classical predicate calculus stands to relational programming. F-logic has a model-theoretic semantics and a sound and complete resolution-based proof theory. A small number of fundamental concepts that come from object-oriented programming have direct representation in F-logic; other, secondary aspects of this paradigm are easily modeled as well. The paper also discusses semantic issues pertaining to programming with a deductive object-oriented language based on a subset of F-logic.
Systems analysis: a systemic analysis of a conceptual model Adopting an appropriate model for systems analysis, by avoiding a narrow focus on the requirements specification and increasing the use of the systems analyst's knowledge base, may lead to better software development and improved system life-cycle management.
A knowledge engineering approach to knowledge management Knowledge management facilitates the capture, storage, and dissemination of knowledge using information technology. Methods for managing knowledge have become an important issue in the past few decades, and the KM community has developed a wide range of technologies and applications for both academic research and practical applications. In this paper, we propose a knowledge engineering approach (KMKE) to knowledge management. First, a knowledge modeling approach is used to organize and express various types of knowledge in a unified knowledge representation. Second, a verification mechanism is used to verify knowledge models based on the formal semantics of the knowledge representation. Third, knowledge models are classified and stored in a hierarchical ontology system. Fourth, a knowledge query language is designed to enhance the dissemination of knowledge. Finally, a knowledge update process is applied to modify the knowledge storage with respect to users' needs. A knowledge management system for computer repair is used as an illustrative example.
Goal-Based Requirements Analysis Goals are a logical mechanism for identifying, organizing and justifying software requirements. Strategies are needed for the initial identification and construction of goals. In this paper we discuss goals from the perspective of two themes: goal analysis and goal evolution. We begin with an overview of the goal-based method we have developed and summarize our experiences in applying our method to a relatively large example. We illustrate some of the issues that practitioners face when using a goal-based approach to specify the requirements for a system and close the paper with a discussion of needed future research on goal-based requirements analysis and evolution. Keywords: goal identification, goal elaboration, goal refinement, scenario analysis, requirements engineering, requirements methods
A New Implementation Technique for Applicative Languages
Software Engineering Environments
Unifying wp and wlp Boolean-valued predicates over a state space are isomorphic to its characteristic functions into {0,1}. Enlarging that range to {-1,0,1} allows the definition of extended predicates whose associated transformers generalise the conventional wp and wlp. The correspondingly extended healthiness conditions include the new 'sub-additivity', an arithmetic inequality over predicates. Keywords: Formal semantics, program correctness, weakest precondition, weakest liberal precondition, Egli-Milner order.
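The unification this abstract describes can be sketched as follows. The pairing identity below is a standard fact relating wp and wlp; the three-valued reading is an illustrative gloss, not necessarily the paper's exact definitions.

```latex
% Conventional predicates are characteristic functions p : S -> {0,1}.
% For a command c and postcondition q:
%   wp(c,q)(s)  = 1  iff every run of c from s terminates in a q-state,
%   wlp(c,q)(s) = 1  iff every terminating run of c from s ends in a q-state.
% The standard pairing law relating the two transformers:
\[
  wp(c,q) \;=\; wlp(c,q) \,\wedge\, wp(c,\mathsf{true})
\]
% Enlarging the range of predicates to {-1,0,1} lets a single extended
% transformer carry both pieces of information at once, with healthiness
% stated as arithmetic inequalities such as the paper's 'sub-additivity'.
```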
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
score_0 to score_13: 1.115014, 0.12, 0.06, 0.036676, 0.015, 0.005029, 0.002282, 0.000503, 0.000001, 0, 0, 0, 0, 0
Automated Requirements Elicitation: Combining A Model-Driven Approach With Concept Reuse Extracting pertinent and useful information from customers has long plagued the process of requirements elicitation. This paper presents a new approach to support the elicitation process. This approach combines various techniques for requirements elicitation, including model-based concept acquisition, goal-driven structured interviews, and concept reuse. Compared with the available approaches for requirements elicitation, the most significant feature of our approach is that it supports both automated interaction with customers, using domain terminology rather than software terminology, and the automated construction of application requirements models using model-based concept elicitation and concept reuse. The power of this approach comes from its rich knowledge, which is clustered into several abstraction levels.
A Framework For Integrating Multiple Perspectives In System-Development - Viewpoints This paper outlines a framework which supports the use of multiple perspectives in system development, and provides a means for developing and applying systems design methods. The framework uses "viewpoints" to partition the system specification, the development method, and the formal representations used to express the system specifications. This VOSE (viewpoint-oriented systems engineering) framework can be used to support the design of heterogeneous and composite systems. We illustrate the use of the framework with a small example drawn from composite system development and give an account of prototype automated tools based on the framework.
Elements underlying the specification of requirements As more and more complex computer‐based systems are built, it becomes increasingly more difficult to specify or visualize the system prior to its construction. One way of simplifying these tasks is to view the requirements from multiple viewpoints. However, if these viewpoints examine the requirements using different notations, how can we know if they are consistent? This paper describes the elemental concepts that underlie all requirements. By reducing each view of requirements to networks of these elemental concepts, it becomes possible to better understand the relationships among the views.
Four dark corners of requirements engineering Research in requirements engineering has produced an extensive body of knowledge, but there are four areas in which the foundation of the discipline seems weak or obscure. This article shines some light in the "four dark corners," exposing problems and proposing solutions. We show that all descriptions involved in requirements engineering should be descriptions of the environment. We show that certain control information is necessary for sound requirements engineering, and we explain the close association between domain knowledge and refinement of requirements. Together these conclusions explain the precise nature of requirements, specifications, and domain knowledge, as well as the precise nature of the relationships among them. They establish minimum standards for what information should be represented in a requirements language. They also make it possible to determine exactly what it means for requirements engineering to be successfully completed. Categories and Subject Descriptors: D.2.1 (Software Engineering): Requirements/Specifications - methodologies
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990s technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
A semantics of multiple inheritance The aim of this paper is to present a clean semantics of multiple inheritance and to show that, in the context of strongly-typed, statically-scoped languages, a sound typechecking algorithm exists. Multiple inheritance is also interpreted in a broad sense: instead of being limited to objects, it is extended in a natural way to union types and to higher-order functional types. This constitutes a semantic basis for the unification of functional and object-oriented programming.
The Manchester prototype dataflow computer The Manchester project has developed a powerful dataflow processor based on dynamic tagging. This processor is large enough to tackle realistic applications and exhibits impressive speedup for programs with sufficient parallelism.
A lazy evaluator A different way to execute pure LISP programs is presented. It delays the evaluation of parameters and list structures without ever having to perform more evaluation steps than the usual method. Although the central idea can be found in earlier work this paper is of interest since it treats a rather well-known language and works out an algorithm which avoids full substitution. A partial correctness proof using Scott-Strachey semantics is sketched in a later section.
A study of cross-validation and bootstrap for accuracy estimation and model selection We review accuracy estimation methods and compare the two most common methods: cross-validation and bootstrap. Recent experimental results on artificial data and theoretical results in restricted settings have shown that, for selecting a good classifier from a set of classifiers (model selection), ten-fold cross-validation may be better than the more expensive leave-one-out cross-validation. We report on a large-scale experiment (over half a million runs of C4.5 and a Naive-Bayes algorithm) to estimate the effects of different parameters on these algorithms on real-world datasets. For cross-validation we vary the number of folds and whether the folds are stratified or not; for bootstrap, we vary the number of bootstrap samples. Our results indicate that for real-world datasets similar to ours, the best method to use for model selection is ten-fold stratified cross-validation, even if computation power allows using more folds.
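The ten-fold stratified cross-validation this abstract recommends can be sketched as follows. This is a minimal illustrative implementation, not the paper's code; the function names and the round-robin stratification strategy are assumptions.

```python
from collections import defaultdict

def stratified_folds(labels, k=10):
    """Assign each sample index to one of k folds, keeping class ratios."""
    by_class = defaultdict(list)
    for i, y in enumerate(labels):
        by_class[y].append(i)
    folds = [[] for _ in range(k)]
    for idxs in by_class.values():
        for j, i in enumerate(idxs):
            folds[j % k].append(i)   # round-robin within each class
    return folds

def cross_val_accuracy(train_fn, data, labels, k=10):
    """Average held-out accuracy over k stratified folds.

    train_fn takes (train_data, train_labels) and returns a callable
    model mapping a sample to a predicted label.
    """
    folds = stratified_folds(labels, k)
    accuracies = []
    for fold in folds:
        test = set(fold)
        train = [i for i in range(len(data)) if i not in test]
        model = train_fn([data[i] for i in train], [labels[i] for i in train])
        correct = sum(model(data[i]) == labels[i] for i in fold)
        accuracies.append(correct / len(fold))
    return sum(accuracies) / len(accuracies)
```

Stratification matters because a plain random split can leave a fold with no samples of a rare class, which skews the per-fold accuracy estimates the paper compares.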
A Theory of Prioritizing Composition An operator for the composition of two processes, where one process has priority over the other process, is studied. Processes are described by action systems, and data refinement is used for transforming processes. The operator is shown to be compositional, i.e. monotonic with respect to refinement. It is argued that this operator is adequate for modelling priorities as found in programming languages and operating systems. Rules for introducing priorities and for raising and lowering priorities of processes are given. Dynamic priorities are modelled with special priority variables which can be freely mixed with other variables and the prioritising operator in program development. A number of applications show the use of prioritising composition for modelling and specification in general.
Repository support for multi-perspective requirements engineering Relationships among different modeling perspectives have been systematically investigated focusing either on given notations (e.g. UML) or on domain reference models (e.g. ARIS/SAP). In contrast, many successful informal methods for business analysis and requirements engineering (e.g. JAD) emphasize team negotiation, goal orientation and flexibility of modeling notations. This paper addresses the question how much formal and computerized support can be provided in such settings without destroying their creative tenor. Our solution is based on a novel modeling language, M-Telos, that integrates the adaptability and analysis advantages of the logic-based meta modeling language Telos with a module concept covering the structuring mechanisms of scalable software architectures. It comprises four components: (1) A modular conceptual modeling formalism organizes individual perspectives and their interrelationships. (2) Perspective schemata are linked to a conceptual meta meta model of shared domain terms, thus giving the architecture a semantic meaning and enabling adaptability and extensibility of the network of perspectives. (3) Inconsistency management across perspectives is handled in a goal-oriented manner, by formalizing analysis goals as meta rules which are automatically customized to perspective schemata. (4) Continuous incremental maintenance of inconsistency information is provided by exploiting recent view maintenance techniques from deductive databases. The approach has been implemented as an extension to the ConceptBase meta database management system (available at http://www-i5.Informatik.RWTH-Aachen.de/Cbdor/index.html) and has been applied in a number of real-world requirements engineering projects.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power-aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time-wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In refining an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore power-related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
score_0 to score_13: 1.2, 0.013333, 0.011111, 0.006061, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Mobile UNITY schemas for agent coordination Mobile UNITY refers to a notation system and proof logic initially designed to accommodate the special needs of the emerging field of mobile computing. The model allows one to define units of computation and mobility and the formal rules for coordination among them in a highly decoupled manner. In this paper, we reexamine the expressive power of the Mobile UNITY coordination constructs from a new perspective rooted in the notion that disciplined usage of a powerful formal model must rely on formally defined schemas. Several coordination schemas are introduced and formalized. They examine the relationship between Mobile UNITY and other computing models and illustrate the mechanics of employing Mobile UNITY as the basis for a formal semantic characterization of coordination models.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990s technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and 2EXPTIME-complete for specifications in CTL*. This bad news also carries over when we consider the program complexity of module checking. As good news, we show that for the commonly used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviourin terms of MACHINES whose state changes under OPERATIONS.The process algebra CSP is an event-based formalism that enablesdescriptions of patterns of system behaviour. This paper is concerned withthe combination of these complementary views, in which CSP is used to describethe control executive for a B Abstract System. We discuss consistencybetween the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system, resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Procedures and atomicity refinement The introduction of an early return from a (remote) procedure call can increase the degree of parallelism in a parallel or distributed algorithm modeled by an action system. We define a return statement for procedures in an action systems framework and show that it corresponds to carrying out an atomicity refinement.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
A minimum entropy based switched adaptive predictor for lossless compression of images The gradient adjusted predictor (GAP) uses seven fixed slope quantization bins and a predictor is associated with each bin, for prediction of pixels. The slope bin boundary in the same appears to be fixed without employing a criterion function. This paper presents a technique for slope classification that results in slope bins which are optimum for a given set of images. It also presents two techniques that find predictors which are statistically optimal for each of the slope bins. Slope classification and the predictors associated with the slope bins are obtained off-line. To find a representative predictor for a bin, a set of least-squares (LS) based predictors are obtained for all the pixels belonging to that bin. A predictor, from the set of predictors, that results in the minimum prediction error energy is chosen to represent the bin. Alternatively, the predictor is chosen, from the same set, based on minimum entropy as the criterion. Simulation results of the proposed method have shown a significant improvement in the compression performance as compared to the GAP. Computational complexity of the proposed method, excluding the training process, is of the same order as that of GAP.
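The switching idea — classify the local slope into a bin, then apply that bin's predictor — can be illustrated minimally. The bin boundary and the per-bin predictors below are arbitrary placeholders, not the statistically optimized ones the paper derives off-line.

```python
def switched_predict(img):
    """Switch between simple predictors according to a crude local
    slope estimate; returns predictions and prediction residuals."""
    h, w = len(img), len(img[0])
    pred = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            W = img[y][x - 1] if x > 0 else 0    # west neighbour
            N = img[y - 1][x] if y > 0 else 0    # north neighbour
            d = abs(W - N)                       # slope estimate -> bin
            if d < 8:                            # "flat" bin: average predictor
                pred[y][x] = (W + N) // 2
            elif W > N:                          # horizontal edge bin
                pred[y][x] = N
            else:                                # vertical edge bin
                pred[y][x] = W
    res = [[img[y][x] - pred[y][x] for x in range(w)] for y in range(h)]
    return pred, res
```

A lossless coder would entropy-code the residuals; the paper's contribution is choosing the bin boundaries and per-bin predictors to minimize exactly that entropy.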
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
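The fetch-and-add primitive central to the design can be emulated in software to show its typical use: handing out unique indices to concurrent workers without a serial bottleneck. This is a sketch with a lock standing in for the atomic hardware operation; in the Ultracomputer such operations are combined inside the switching network.

```python
import threading

class FetchAndAdd:
    """Software emulation of atomic fetch-and-add: return the old
    value and add the increment in one indivisible step (simulated
    here with a lock)."""
    def __init__(self, value=0):
        self._value = value
        self._lock = threading.Lock()

    def fetch_and_add(self, inc):
        with self._lock:
            old = self._value
            self._value += inc
            return old

# Classic use: handing out unique slots to concurrent workers.
counter = FetchAndAdd()
slots = []

def worker():
    for _ in range(1000):
        slots.append(counter.fetch_and_add(1))

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Each of the 4000 calls receives a distinct old value, so the workers partition the index range with no further coordination.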
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
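With predicates as state sets and programs as weakest-precondition transformers, refinement has a direct finite-state reading: the concrete program refines the abstract one when, for every postcondition, the abstract precondition is contained in the concrete one. The toy states and programs below (including the demonic choice) are invented for illustration, not taken from the paper.

```python
from itertools import chain, combinations

STATES = {0, 1, 2, 3}

def wp_zero(post):            # wp(x := 0, post)
    return set(STATES) if 0 in post else set()

def wp_two(post):             # wp(x := 2, post)
    return set(STATES) if 2 in post else set()

def wp_choice(post):          # wp(x := 0 [] x := 2, post), demonic choice
    return wp_zero(post) & wp_two(post)

def refines(concrete, abstract):
    """concrete refines abstract iff wp_a(Q) is a subset of wp_c(Q)
    for every predicate Q over the finite state space."""
    xs = list(STATES)
    preds = chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))
    return all(abstract(set(q)) <= concrete(set(q)) for q in preds)
```

Resolving the demonic choice to `x := 0` is a refinement, while the converse fails — reducing nondeterminism can only enlarge the set of states from which the program is guaranteed to succeed.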
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the timewise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In refining an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Rapid prototyping of control systems using high level Petri nets This paper presents a rapid prototyping methodology for the carrying out of control systems in which high level Petri nets provide the common framework to integrate the main phases of software development: specification, validation, performance evaluation, implementation.Petri nets are shown to be translatable into Ada program structures concerning processes and their synchronizations.
Extending the Entity-Relationship Approach for Dynamic Modeling Purposes
The Software Development System This paper presents a discussion of the Software Development System (SDS), a methodology addressing the problems involved in the development of software for Ballistic Missile Defense systems. These are large, real-time, automated systems with a requirement for high reliability. The SDS is a broad approach attacking problems arising in requirements generation, software design, coding, and testing. The approach is highly requirements oriented and has resulted in the formulation of structuring concepts, a requirements statement language, process design language, and support software to be used throughout the development cycle. This methodology represents a significant advance in software technology for the development of software for a class of systems such as BMD. The support software has been implemented and is undergoing evaluation.
Applications and Extensions of SADT First Page of the Article
Petri net-based object-oriented modelling of distributed systems This paper presents an object-oriented approach for building distributed systems. An example taken from the field of computer integrated manufacturing systems is taken as a guideline. According to this approach a system is built up through three steps: control and synchronization aspects for each class of objects are treated first using PROT nets, which are a high-level extension to Petri nets; then data are introduced specifying the internal states of the objects as well as the messages they send each other; finally the connections between the objects are introduced by means of a data flow diagram between classes. The implementation uses ADA as the target language, exploiting its tasking and structuring mechanisms. The flexibility of the approach and the possibility of using a knowledge-based user interface promote rapid prototyping and reusability.
Software Performance Engineering Performance is critical to the success of today's software systems. However, many software products fail to meet their performance objectives when they are initially constructed. Fixing these problems is costly and causes schedule delays, cost overruns, lost productivity, damaged customer relations, missed market windows, lost revenues, and a host of other difficulties. This chapter presents software performance engineering (SPE), a systematic, quantitative approach to constructing software systems that meet performance objectives. SPE begins early in the software development process to model the performance of the proposed architecture and high-level design. The models help to identify potential performance problems when they can be fixed quickly and economically.
Modeling of Distributed Real-Time Systems in DisCo In this paper we describe the addition of metric real time to joint actions, and to the DisCo specification language and tool that are based on them. No new concepts or constructs are needed: time is represented by variables in objects, and action durations are given by action parameters. Thus, we can reason about real-time properties in the same way as about other properties. The scheduling model is unrestricted in the sense that every logically possible computation gets some scheduling. This is more general than maximal parallelism, and the properties proved under it are less sensitive to small changes in timing. Since real time is handled by existing DisCo constructs, the tool with its execution capabilities can be used to simulate and animate also real-time properties of specifications.
An experiment in technology transfer: PAISLey specification of requirements for an undersea lightwave cable system From May to October 1985 members of the Undersea Systems Laboratory and the Computer Technology Research Laboratory of AT&T Bell Laboratories worked together to apply the executable specification language PAISLey to requirements for the “SL” communications system. This paper describes our experiences and answers three questions based on the results of the experiment: Can SL requirements be specified formally in PAISLey? Can members of the SL project learn to read and write specifications in PAISLey? How would the use of PAISLey affect the productivity of the software-development team and the quality of the resulting software?
Petri nets in software engineering The central issue of this contribution is a methodology for the use of nets in practical systems design. We show how nets of channels and agencies allow for a continuous and systematic transition from informal and imprecise to precise and formal specifications. This development methodology leads to the representation of dynamic systems behaviour (using Pr/T-Nets) which is apt to rapid prototyping and formal correctness proofs.
Are knowledge representations the answer to requirement analysis? A clear distinction between a requirement and a specification is crucial to an understanding of how and why knowledge representation techniques can be useful for the requirement stage. A useful distinction is to divide the requirement analysis phase into problem specification and system specification phases. It is argued that it is necessary first to understand what kind of knowledge is in the requirement analysis process before worrying about representational schemes
Real-time constraints in a rapid prototyping language This paper presents real-time constraints of a prototyping language and some mechanisms for handling these constraints in rapidly prototyping embedded systems. Rapid prototyping of embedded systems can be accomplished using a Computer Aided Prototyping System (CAPS) and its associated Prototyping Language (PSDL) to aid the designer in handling hard real-time constraints. The language models time critical operations with maximum execution times, maximum response times and minimum periods. The mechanisms for expressing timing constraints in PSDL are described along with their meanings relative to a series of hardware models which include multi-processor configurations. We also describe a language construct for specifying the policies governing real-time behavior under overload conditions.
On Diagram Tokens and Types Rejecting the temptation to make up a list of necessary and sufficient conditions for diagrammatic and sentential systems, we present an important distinction which arises from sentential and diagrammatic features of systems. Importantly, the distinction we will explore in the paper lies at a meta-level. That is, we argue for a major difference in meta-theory between diagrammatic and sentential systems, by showing the necessity of a more fine-grained syntax for a diagrammatic system than for a sentential system. Unlike with sentential systems, a diagrammatic system requires two levels of syntax--token and type. Token-syntax is about particular diagrams instantiated on some physical medium, and type-syntax provides a formal definition with which a concrete representation of a diagram must comply. While these two levels of syntax are closely related, the domains of type-syntax and token-syntax are distinct from each other. Euler diagrams are chosen as a case study to illustrate the following major points of the paper: (i) What kinds of diagrammatic features (as opposed to sentential features) require two different levels of syntax? (ii) What is the relation between these two levels of syntax? (iii) What is the advantage of having a two-tiered syntax?
Verification conditions are code This paper presents a new theoretical result concerning Hoare Logic. It is shown here that the verification conditions that support a Hoare Logic program derivation are themselves sufficient to construct a correct implementation of the given pre-, and post-condition specification. This property is mainly of theoretical interest, though it is possible that it may have some practical use, for example if predicative programming methodology is adopted. The result is shown to hold for both the original, partial correctness, Hoare logic, and also a variant for total correctness derivations.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.202145
0.034225
0.006135
0.004433
0.000837
0.000398
0.000119
0.000043
0.000007
0.000001
0
0
0
0
MoMut::UML Model-Based Mutation Testing for UML
Model-based, mutation-driven test case generation via heuristic-guided branching search. This work introduces a heuristic-guided branching search algorithm for model-based, mutation-driven test case generation. The algorithm is designed towards the efficient and computationally tractable exploration of discrete, non-deterministic models with huge state spaces. Asynchronous parallel processing is a key feature of the algorithm. The algorithm is inspired by the successful path planning algorithm Rapidly exploring Random Trees (RRT). We adapt RRT in several aspects towards test case generation. Most notably, we introduce parametrized heuristics for start and successor state selection, as well as a mechanism to construct test cases from the data produced during search. We implemented our algorithm in the existing test case generation framework MoMuT. We present an extensive evaluation of our heuristics and parameters based on a diverse set of demanding models obtained in an industrial context. In total we continuously utilized 128 CPU cores on three servers for two weeks to gather the experimental data presented. Using statistical methods we determine which heuristics are performing well on all models. With our new algorithm, we are now able to process models consisting of over 2300 concurrent objects. To our knowledge there is no other mutation driven test case generation tool that is able to process models of this magnitude.
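The RRT adaptation described above — draw a random target, extend the tree node nearest to it, and keep parent links so a trace can be reconstructed — can be sketched for a discrete state space. This is illustrative only; the paper's algorithm adds parametrized start/successor heuristics and asynchronous parallelism, and the grid example in the usage below is an invented stand-in for a real model.

```python
import random

def rrt_explore(start, successors, distance, sample, iters=500, seed=1):
    """RRT-style exploration of a discrete state space: repeatedly draw
    a random target and extend the tree node nearest to it."""
    rng = random.Random(seed)
    tree = {start: None}                 # state -> parent, for trace reconstruction
    for _ in range(iters):
        target = sample(rng)
        nearest = min(tree, key=lambda s: distance(s, target))
        fresh = [s for s in successors(nearest) if s not in tree]
        if fresh:
            child = min(fresh, key=lambda s: distance(s, target))
            tree[child] = nearest
    return tree

def trace(tree, goal):
    """Reconstruct the path from the start state to `goal` -- the
    skeleton from which a test case would be built."""
    path = []
    while goal is not None:
        path.append(goal)
        goal = tree[goal]
    return path[::-1]
```

The random targets bias growth toward unexplored regions of the space, which is what makes the approach tractable on models too large to enumerate.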
Model-Based Mutation Testing of an Industrial Measurement Device.
Killing strategies for model-based mutation testing. This article presents the techniques and results of a novel model-based test case generation approach that automatically derives test cases from UML state machines. The main contribution of this article is the fully automated fault-based test case generation technique together with two empirical case studies derived from industrial use cases. Also, an in-depth evaluation of different fault-based test case generation strategies on each of the case studies is given and a comparison with plain random testing is conducted. The test case generation methodology supports a wide range of UML constructs and is grounded on the formal semantics of Back's action systems and the well-known input-output conformance relation. Mutation operators are employed on the level of the specification to insert faults and generate test cases that will reveal the faults inserted. The effectiveness of this approach is shown and it is discussed how to gain a more expressive test suite by combining cheap but undirected random test case generation with the more expensive but directed mutation-based technique. Finally, an extensive and critical discussion of the lessons learnt is given as well as a future outlook on the general usefulness and practicability of mutation-based test case generation. Copyright © 2014 John Wiley & Sons, Ltd.
Automated Conformance Verification of Hybrid Systems Due to the combination of discrete events and continuous behavior the validation of hybrid systems is a challenging task. Nevertheless, as for other systems the correctness of such hybrid systems is a major concern. In this paper we present a new approach for verifying the input-output conformance of two hybrid systems. This approach can be used to generate mutation-based test cases. We specify a hybrid system within the framework of Qualitative Action Systems. Here, besides conventional discrete actions, the continuous dynamics of hybrid systems is described with so called qualitative actions. This paper then shows how labeled transition systems can be used to describe the trace semantics of Qualitative Action Systems. The labeled transition systems are used to verify the conformance between two Qualitative Action Systems. Finally, we present first experimental results on a water tank system.
Towards Symbolic Model-Based Mutation Testing: Pitfalls in Expressing Semantics as Constraints Model-based mutation testing uses altered models to generate test cases that are able to detect whether a certain fault has been implemented in the system under test. For this purpose, we need to check for conformance between the original and the mutated model. We have developed an approach for conformance checking of action systems using constraints. Action systems are well-suited to specify reactive systems and may involve non-determinism. Expressing their semantics as constraints for the purpose of conformance checking is not totally straight forward. This paper presents some pitfalls that hinder the way to a sound encoding of semantics into constraint satisfaction problems and gives solutions for each problem.
Action Systems with Synchronous Communication This paper shows that a simple extension of the action systems framework, adding procedure declarations to action systems, will give us a very general mechanism for synchronized communication between action systems. Both actions and procedure bodies are guarded commands. When an action in one action system calls a procedure in another action system, the effect is that of a remote procedure call. The calling action and the procedure body involved in the call are executed as a single atomic...
Object-oriented modeling and design
Hierarchical correctness proofs for distributed algorithms This thesis introduces a new model for distributed computation in asynchronous networks, the input-output automaton. This simple, powerful model captures in a novel way the game-theoretical interaction between a system and its environment, and allows fundamental properties of distributed computation such as fair computation to be naturally expressed. Furthermore, this model can be used to construct modular, hierarchical correctness proofs of distributed algorithms. This thesis defines the input-output automaton model, and presents an interesting example of how this model can be used to construct such proofs.
Multistage negotiation for distributed constraint satisfaction A cooperation paradigm and coordination protocol for a distributed planning system consisting of a network of semi-autonomous agents with limited internode communication and no centralized control is presented. A multistage negotiation paradigm for solving distributed constraint satisfaction problems in this kind of system has been developed. The strategies presented enable an agent in a distributed planning system to become aware of the extent to which its own local decisions may have adverse nonlocal impact in planning. An example problem is presented in the context of transmission path restoration for dedicated circuits in a communications network. Multistage negotiation provides an agent with sufficient information about the impact of local decisions on a nonlocal state so that the agent may make local decisions that are correct from a global perspective, without attempting to provide a complete global state to all agents. Through multistage negotiation, an agent is able to recognize when a set of global goals cannot be satisfied, and is able to solve a related problem by finding a way of satisfying a reduced set of goals
On the Lattice of Specifications: Applications to a Specification Methodology In this paper we investigate the lattice properties of the natural ordering between specifications, which expresses that a specification expresses a stronger requirement than another specification. The lattice-like structure that we uncover is used as a basis for a specification methodology.
Visual Formalisms Revisited The development of an interactive application is a complex task that has to consider data, behavior, intercommunication, architecture and distribution aspects of the modeled system. In particular, it presupposes the successful communication between the customer and the software expert. To enhance this communication most modern software engineering methods recommend to specify the different aspects of a system by visual formalisms. In essence, visual specifications are directed graphs that are interpreted in a particular way for each aspect of the system. They are also intended to be compositional. This means that each node can itself be a graph with a separate meaning. However, the lack of a denotational model for hierarchical graphs often leads to the loss of compositionality. This has severe negative consequences in the development of realistic applications. In this paper we present a simple denotational model (which is by definition compositional) for the architecture and behavior aspects of a system. This model is then used to give a semantics to almost all the concepts occurring in ROOM. Our model also provides a compositional semantics for or-states in statecharts.
Miro: Visual Specification of Security Miro is a set of languages and tools that support the visual specification of file system security. Two visual languages are presented: the instance language, which allows specification of file system access, and the constraint language, which allows specification of security policies. Miro visual languages and tools are used to specify security configurations. A visual language is one whose entities are graphical, such as boxes and arrows, specifying means stating independently of any implementation the desired properties of a system. Security means file system protection: ensuring that files are protected from unauthorized access and granting privileges to some users, but not others. Tools implemented and examples of how these languages can be applied to real security specification problems are described.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
1.071111
0.066667
0.044089
0.043556
0.005429
0.001067
0.000026
0
0
0
0
0
0
0
Acquiring Temporal Knowledge from Schedules This paper presents an economical algorithm for generating conceptual graphs from schedules and timing diagrams. The graphs generated are based on activity concepts associated with intervals of the schedules. The temporal conceptual relations selected here are drawn from both interval and endpoint temporal logics in order to minimize the complexity of the generated graphs to no more than k(n–1) temporal relations in a schedule of n intervals over k timelines (resources). Temporal reasoning and consistency checking in terms of the selected temporal relations are briefly reviewed.
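The economy argument — at most k(n−1) relations, because relations are emitted only between consecutive intervals on each timeline — can be sketched with a coarse endpoint classifier. The three-way relation below is a simplification of the interval/endpoint logics the paper draws on, not the full Allen relation set.

```python
def interval_relation(a, b):
    """Coarse endpoint-based relation between intervals (start, end)."""
    if a[1] < b[0]:
        return "before"
    if a[1] == b[0]:
        return "meets"
    return "overlaps"        # catch-all for this sketch

def schedule_relations(timelines):
    """Emit relations only between consecutive intervals on each of the
    k timelines, giving at most k(n-1) relations overall."""
    rels = []
    for line in timelines:
        for a, b in zip(line, line[1:]):
            rels.append((a, interval_relation(a, b), b))
    return rels
```

Relating only adjacent intervals keeps the generated graph linear in the schedule size, instead of the quadratic blow-up of relating every pair.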
Conceptual Structures: Current Practices, Second International Conference on Conceptual Structures, ICCS '94, College Park, Maryland, USA, August 16-20, 1994, Proceedings
Toward synthesis from English descriptions This paper reports on a research project to design a system for automatically interpreting English specifications of digital systems in terms of design representation formalisms currently employed in CAD systems. The necessary processes involve the machine analysis of English and the synthesis of models from the specifications. The approach being investigated is interactive and consists of syntactic scanning, semantic analysis, interpretation generation, and model integration.
Polynomial Algorithms for Projection and Matching The main purpose of this paper is to develop polynomial algorithms for the projection and matching operations on conceptual graphs. Since all interesting problems related to these operations are at least NP-complete — we will consider here the exhibition of a solution and counting the solutions — we propose to explore polynomial cases by restricting the form of the graphs or relaxing constraints on the operations. We examine the particular conceptual graphs whose underlying structure is a tree. Besides general or injective projections, we define intermediary kinds of projections. We then show how these notions can be extended to matchings.
Implementing a semantic interpreter using conceptual graphs
Conceptual Structures: Standards and Practices, 7th International Conference on Conceptual Structures, ICCS '99, Blacksburg, Virginia, USA, July 12-15, 1999, Proceedings
Relating Diagrams to Logic Although logic is general enough to describe anything that can be implemented on a digital computer, the unreadability of predicate calculus makes it unpopular as a design language. Instead, many graphic notations have been developed, each for a narrow range of purposes. Conceptual graphs are a graphic system of logic that is as general as predicate calculus, but they are as readable as the special-purpose diagrams. In fact, many popular diagrams can be viewed as special cases of conceptual graphs: type hierarchies, entity-relationship diagrams, parse trees, dataflow diagrams, flow charts, state-transition diagrams, and Petri nets. This paper shows how such diagrams can be translated to conceptual graphs and thence into other systems of logic, such as the Knowledge Interchange Format (KIF).
Guest Editor's Introduction: Knowledge-Management Systems-Converting and Connecting
Verifying task-based specifications in conceptual graphs A conceptual model is a model of real world concepts and application domains as perceived by users and developers. It helps developers investigate and represent the semantics of the problem domain, as well as communicate among themselves and with users. In this paper, we propose the use of task-based specifications in conceptual graphs (TBCG) to construct and verify a conceptual model. Task-based specification methodology is used to serve as the mechanism to structure the knowledge captured in the conceptual model; whereas conceptual graphs are adopted as the formalism to express task-based specifications and to provide a reasoning capability for the purpose of verification. Verifying a conceptual model is performed on model specifications of a task through constraints satisfaction and relaxation techniques, and on process specifications of the task based on operators and rules of inference inherited in conceptual graphs.
Aspects of applicative programming for file systems (Preliminary Version) This paper develops the implications of recent results in semantics for applicative programming. Applying suspended evaluation (call-by-need) to the arguments of file construction functions results in an implicit synchronization of computation and output. The programmer need not participate in the determination of the pace and the extent of the evaluation of his program. Problems concerning multiple input and multiple output files are considered: typical behavior is illustrated with an example of a rudimentary text editor written applicatively. As shown in the trace of this program, the driver of the program is the sequential output device(s). Implications of applicative languages for I/O bound operating systems are briefly considered.
Towards a Deeper Understanding of Quality in Requirements Engineering The notion of quality in requirements specifications is poorly understood, and in most literature only bread and butter lists of useful properties have been provided. However, the recent frameworks of Lindland et al. and Pohl have tried to take a more systematic approach. In this paper, these two frameworks are reviewed and compared. Although they have different outlook, their deeper structures are not contradictory.
Checking Java Programs via Guarded Commands This paper defines a simple guarded-command-like language and its semantics. The language is used as an intermediate language in generating verification conditions for Java. The paper discusses why it is a good idea to generate verification conditions via an intermediate language, rather than directly. Publication history. This paper appears in Formal Techniques for Java Programs, workshop proceedings. Bart Jacobs, Gary T. Leavens, Peter Muller, and Arnd Poetzsch-Heffter, editors. Technical ...
Using a structured design approach to reduce risks in end user spreadsheet development Computations performed using end-user developed spreadsheets have resulted in serious errors and represent a major control risk to organizations. Literature suggests that factors contributing to spreadsheet errors include developer inexperience, poor design approaches, application types, problem complexity, time pressure, and presence or absence of review procedures. We explore the impact of using a structured design approach for spreadsheet development. We used two field experiments and found that subjects using the design approach showed a significant reduction in the number of ‘linking errors,’ i.e., mistakes in creating links between values that must connect one area of the spreadsheet to another or from one worksheet to another in a common workbook. Our results provide evidence that design approaches that explicitly identify potential error factors may improve end-user application reliability. We also observed that factors such as gender, application expertise, and workgroup configuration also influenced spreadsheet error rates.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.071329
0.03532
0.027022
0.018566
0.008602
0.002565
0.000453
0.000007
0.000001
0
0
0
0
0
Arithmetic coding using hierarchical dependency context model for H.264/AVC video coding In this paper, a hierarchical dependency context model (HDCM) is firstly proposed to exploit the statistical correlations of DCT (Discrete Cosine Transform) coefficients in H.264/AVC video coding standard, in which the number of non-zero coefficients in a DCT block and the scanned position are used to capture the magnitude varying tendency of DCT coefficients. Then a new binary arithmetic coding using hierarchical dependency context model (HDCMBAC) is proposed. HDCMBAC associates HDCM with binary arithmetic coding to code the syntax elements for a DCT block, which consist of the number of non-zero coefficients, significant flag and level information. Experimental results demonstrate that HDCMBAC can achieve similar coding performance as CABAC at low and high QPs (quantization parameter). Meanwhile the context modeling and the arithmetic decoding in HDCMBAC can be carried out in parallel, since the context dependency only exists among different parts of basic syntax elements in HDCM.
On Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the timewise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore power-related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Modelling Hybrid Train Speed Controller using Proof and Refinement The modern radio-based railway signalling systems aim to increase a network's capacity by enabling trains to run closer to each other. At the core of such systems is the train's on-board computer (discrete), responsible for computing and controlling the speed (continuous) of the train. Such systems are best captured by hybrid models, which capture the discrete and continuous aspects of a system. Hybrid models are notoriously difficult to model and verify; in our research we address this problem by applying hybrid systems' modelling patterns and stepwise refinement for developing a hybrid train speed controller model.
On Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the timewise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore power-related issues, are satisfied.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Parameter-Induced Aliasing and Related Problems can be Avoided Aliasing is an old but yet unsolved problem, being disadvantageous for most aspects of programming languages. We suggest a new model for variables which avoids aliasing by maintaining the property of always having exactly one access path to a variable. In particular, variables have no address. Based on this model, we develop language rules which can be checked in local context and we suggest programming guidelines to prevent alias effects in Ada 95 programs.
On Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
On Diagram Tokens and Types Rejecting the temptation to make up a list of necessary and sufficient conditions for diagrammatic and sentential systems, we present an important distinction which arises from sentential and diagrammatic features of systems. Importantly, the distinction we will explore in the paper lies at a meta-level. That is, we argue for a major difference in meta-theory between diagrammatic and sentential systems, by showing the necessity of a more fine-grained syntax for a diagrammatic system than for a sentential system. Unlike with sentential systems, a diagrammatic system requires two levels of syntax--token and type. Token-syntax is about particular diagrams instantiated on some physical medium, and type-syntax provides a formal definition with which a concrete representation of a diagram must comply. While these two levels of syntax are closely related, the domains of type-syntax and token-syntax are distinct from each other. Euler diagrams are chosen as a case study to illustrate the following major points of the paper: (i) What kinds of diagrammatic features (as opposed to sentential features) require two different levels of syntax? (ii) What is the relation between these two levels of syntax? (iii) What is the advantage of having a two-tiered syntax?
A visual framework for modelling with heterogeneous notations This paper presents a visual framework for organizing models of systems which allows a mixture of notations, diagrammatic or text-based, to be used. The framework is based on the use of templates which can be nested and sometimes flattened. It is modular and can be used to structure the constraint space of the system, making it scalable with appropriate tool support. It is also flexible and extensible: users can choose which notations to use, mix them and add new notations or templates. The goal of this work is to provide more intuitive and expressive languages and frameworks to support the construction and presentation of rich and precise models.
Nesting in Euler Diagrams: syntax, semantics and construction This paper considers the notion of nesting in Euler diagrams, and how nesting affects the interpretation and construction of such diagrams. After setting up the necessary definitions for concrete Euler diagrams (drawn in the plane) and abstract diagrams (having just formal structure), the notion of nestedness is defined at both concrete and abstract levels. The concept of a dual graph is used to give an alternative condition for a drawable abstract Euler diagram to be nested. The natural progression to the diagram semantics is explored and we present a “nested form” for diagram semantics. We describe how this work supports tool-building for diagrams, and how effective we might expect this support to be in terms of the proportion of nested diagrams.
Towards a Formalization of Constraint Diagrams Geared to complement UML and to the specification of large software systems by non-mathematicians, constraint diagrams are a visual language that generalizes the popular and intuitive Venn diagrams and Euler circles, and adds facilities for quantifying over elements and navigating relations. The language design emphasizes scalability and expressiveness while retaining intuitiveness. Spider diagrams form a subset of the notation, leaving out universal quantification and the ability to navigate relations. Spider diagrams have been given a formal definition. This paper extends that definition to encompass the constraint diagram notation. The formalization of constraint diagrams is nontrivial: it exposes subtleties concerned with the implicit ordering of symbols in the visual language, which were not evident before a formal definition of the language was attempted. This has led to an improved design of the language
Drawing graphs nicely using simulated annealing The paradigm of simulated annealing is applied to the problem of drawing graphs “nicely.” Our algorithm deals with general undirected graphs with straight-line edges, and employs several simple criteria for the aesthetic quality of the result. The algorithm is flexible, in that the relative weights of the criteria can be changed. For graphs of modest size it produces good results, competitive with those produced by other methods, notably, the “spring method” and its variants.
Randomized graph drawing with heavy-duty preprocessing We present a graph drawing system for general undirected graphs with straight-line edges. It carries out a rather complex set of preprocessing steps, designed to produce a topologically good, but not necessarily nice-looking layout, which is then subjected to Davidson and Harel's simulated annealing beautification algorithm. The intermediate layout is planar for planar graphs and attempts to come close to planar for nonplanar graphs. The system's results are significantly better, and much faster, than what the annealing approach is able to achieve on its own.
Statecharts: A visual formalism for complex systems Abstract. We present a broad extension of the conventional formalism of state machines and state diagrams, that is relevant to the specification and design of complex discrete-event systems, such as multi-computer real-time systems, communication protocols and digital control units. Our diagrams, which we call statecharts, extend conventional state-transition diagrams with essentially three elements, dealing, respectively, with the notions of hierarchy, concurrency and communication. These transform the language of state diagrams into a highly structured and economical description language. Statecharts are thus compact and expressive--small diagrams can express complex behavior--as well as compositional and modular. When coupled with the capabilities of computerized graphics, statecharts enable viewing the description at different levels of detail, and make even very large specifications manageable and comprehensible. In fact, we intend to demonstrate here that statecharts counter many of the objections raised against conventional state diagrams, and thus appear to render specification by diagrams an attractive and plausible approach. Statecharts can be used either as a stand-alone behavioral description or as part of a more general design methodology that deals also with the system's other aspects, such as functional decomposition and data-flow specification. We also discuss some practical experience that was gained over the last three years in applying the statechart formalism to the specification of a particularly complex system.
Object Interaction in Object-Oriented Deductive Conceptual Models We present the main components of an object-oriented deductive approach to conceptual modelling of information systems. This approach does not model object interaction explicitly. However interaction among objects can be derived by means of a formal procedure that we outline.
The Software Development System This paper presents a discussion of the Software Development System (SDS), a methodology addressing the problems involved in the development of software for Ballistic Missile Defense systems. These are large, real-time, automated systems with a requirement for high reliability. The SDS is a broad approach attacking problems arising in requirements generation, software design, coding, and testing. The approach is highly requirements oriented and has resulted in the formulation of structuring concepts, a requirements statement language, process design language, and support software to be used throughout the development cycle. This methodology represents a significant advance in software technology for the development of software for a class of systems such as BMD. The support software has been implemented and is undergoing evaluation.
The Draco Approach to Constructing Software from Reusable Components This paper discusses an approach called Draco to the construction of software systems from reusable software parts. In particular we are concerned with the reuse of analysis and design information in addition to programming language code. The goal of the work on Draco has been to increase the productivity of software specialists in the construction of similar systems. The particular approach we have taken is to organize reusable software components by problem area or domain. Statements of programs in these specialized domains are then optimized by source-to-source program transformations and refined into other domains. The problems of maintaining the representational consistency of the developing program and producing efficient practical programs are discussed. Some examples from a prototype system are also given.
Compound brushing explained This paper proposes a conceptual model called compound brushing for modeling the brushing techniques used in dynamic data visualization. In this approach, brushing techniques are modeled as higraphs with five types of basic entities: data, selection, device, renderer, and transformation. Using this model, a flexible visual programming tool is designed not only to configure and control various common types of brushing techniques currently used in dynamic data visualization but also to investigate new brushing techniques.
Fuzzy logic as a basis for reusing task‐based specifications
LANSF: a protocol modelling environment and its implementation SUMMARY LANSF is a software package that was originally designed as a tool to investigate the behaviour of medium access control (MAC) level protocols. These protocols form an interesting class of distributed computations: timing of events is the key factor in them. The protocol definition language of LANSF is based on C, and protocols are specified (programmed) as collections of communicating, interrupt-driven processes. These specifications are executable: an event-driven emulator of MAC-level communication phenomena forms the foundation of the implementation. Some tools for debugging, testing, and validation of protocol specifications are provided. We present key features of LANSF at the syntactic level, comment informally on the semantics of these features, and highlight some implementation issues. A complete example of a LANSF application is discussed in the Appendix.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.109726
0.109319
0.02384
0.021868
0.002827
0.000763
0.000048
0.000001
0
0
0
0
0
0
A general evaluation measure for document organization tasks A number of key Information Access tasks -- Document Retrieval, Clustering, Filtering, and their combinations -- can be seen as instances of a generic document organization problem that establishes priority and relatedness relationships between documents (in other words, a problem of forming and ranking clusters). As far as we know, no analysis has been made yet on the evaluation of these tasks from a global perspective. In this paper we propose two complementary evaluation measures -- Reliability and Sensitivity -- for the generic Document Organization task which are derived from a proposed set of formal constraints (properties that any suitable measure must satisfy). In addition to being the first measures that can be applied to any mixture of ranking, clustering and filtering tasks, Reliability and Sensitivity satisfy more formal constraints than previously existing evaluation metrics for each of the subsumed tasks. Besides their formal properties, their most salient feature from an empirical point of view is their strictness: a high score according to the harmonic mean of Reliability and Sensitivity ensures a high score with any of the most popular evaluation metrics in all the Document Retrieval, Clustering and Filtering datasets used in our experiments.
A Formal Approach to Effectiveness Metrics for Information Access: Retrieval, Filtering, and Clustering. In this tutorial we present a formal account of evaluation metrics for three of the most salient information related tasks: Retrieval, Clustering, and Filtering. We focus on the most popular metrics and, by exploiting measurement theory, we show some constraints for suitable metrics in each of the three tasks. We also systematically compare metrics according to how they satisfy such constraints, we provide criteria to select the most adequate metric for each specific information access task, and we discuss how to combine and weight metrics.
Axiometrics: Axioms of Information Retrieval Effectiveness Metrics. There are literally dozens (most likely more than one hundred) information retrieval effectiveness metrics, and counting, but a common, general, and formal understanding of their properties is still missing. In this paper we aim at improving and extending the recently published work by Busin and Mizzaro [6]. That paper proposes an axiomatic approach to Information Retrieval (IR) effectiveness metrics, and more in detail: (i) it defines a framework based on the notions of measure, measurement, and similarity; (ii) it provides a general definition of IR effectiveness metric; and (iii) it proposes a set of axioms that every effectiveness metric should satisfy. Here we build on their work and more specifically: we design a different and improved set of axioms, we provide a definition of some common metrics, and we derive some theorems from the axioms.
Sentiment Analysis and Topic Detection of Spanish Tweets: A Comparative Study of of NLP Techniques.
A comparison of extrinsic clustering evaluation metrics based on formal constraints There is a wide set of evaluation metrics available to compare the quality of text clustering algorithms. In this article, we define a few intuitive formal constraints on such metrics which shed light on which aspects of the quality of a clustering are captured by different metric families. These formal constraints are validated in an experiment involving human assessments, and compared with other constraints proposed in the literature. Our analysis of a wide range of metrics shows that only BCubed satisfies all formal constraints. We also extend the analysis to the problem of overlapping clustering, where items can simultaneously belong to more than one cluster. As Bcubed cannot be directly applied to this task, we propose a modified version of Bcubed that avoids the problems found with other metrics.
Overview of RepLab 2014: Author Profiling and Reputation Dimensions for Online Reputation Management.
Axiomatic Thinking for Information Retrieval: And Related Tasks This is the first workshop on the emerging interdisciplinary research area of applying axiomatic thinking to information retrieval (IR) and related tasks. The workshop aims to help foster collaboration of researchers working on different perspectives of axiomatic thinking and encourage discussion and research on general methodological issues related to applying axiomatic thinking to IR and related tasks.
Distributed Representations of Words and Phrases and their Compositionality. The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several extensions that improve both the quality of the vectors and the training speed. By subsampling of the frequent words we obtain significant speedup and also learn more regular word representations. We also describe a simple alternative to the hierarchical softmax called negative sampling. An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of "Canada" and "Air" cannot be easily combined to obtain "Air Canada". Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible.
Feedback stabilization of some event graph models The authors introduce several notions of stability for event graph models, timed or not. The stability is similar to the boundedness notion for Petri nets. The event graph models can be controlled by an output feedback which takes information from some observable transitions and can disable some controllable transitions. The controller itself is composed of an event graph. In this framework the authors solve the corresponding stabilization problems, i.e., they wonder if such a controller may prevent the explosion of the number of tokens
An Effective Implementation for the Generalized Input-Output Construct of CSP
The transformation schema: An extension of the data flow diagram to represent control and timing The data flow diagram has been extensively used to model the data transformation aspects of proposed systems. However, previous definitions of the data flow diagram have not provided a comprehensive way to represent the interaction between the timing and control aspects of a system and its data transformation behavior. This paper describes an extension of the data flow diagram called the transformation schema. The transformation schema provides a notation and formation rules for building a comprehensive system model, and a set of execution rules to allow prediction of the behavior over time of a system modeled in this way. The notation and formation rules allow depiction of a system as a network of potentially concurrent “centers of activity” (transformations), and of data repositories (stores), linked by communication paths (flows). The execution rules provide a qualitative prediction rather than a quantitative one, describing the acceptance of inputs and the production of outputs by the transformations but not input and output values. The transformation schema permits the creation and evaluation of two different types of system models. In the essential (requirements) model, the schema is used to represent a virtual machine with infinite resources. The elements of the schema depict idealized processing and memory components. In the implementation model, the schema is used to represent a real machine with limited resources, and the results of the execution predict the behavior of an implementation of requirements. The transformations of the schema can depict software running on digital processors, hard-wired digital or analog circuits, and so on, and the stores of the schema can depict disk files, tables in memory, and so on.
A Graphical Query Language Based on an Extended E-R Model
The requirements apprentice: an initial scenario The implementation of the Requirements Apprentice has reached the point where it is possible to exhibit a concrete scenario showing the intended basic capabilities of the system. The Requirements Apprentice accepts ambiguous, incomplete, and inconsistent input from a requirements analyst and assists the analyst in creating and validating a coherent requirements description. This processing is supported by a general-purpose reasoning system and a library of requirements cliches that contains reusable descriptions of standard concepts used in requirements.
Use of symmetry in prediction-error field for lossless compression of 3D MRI images Abstract Three-dimensional MRI images, which are powerful tools for the diagnosis of many diseases, require large storage space. A number of lossless compression schemes exist for this purpose. In this paper we propose a new approach for lossless compression of these images which exploits the inherent symmetry that exists in 3D MRI images. First, an efficient pixel prediction scheme is used to remove correlation between pixel values in an MRI image. Then a block matching routine is employed to take advantage of the symmetry within the prediction error image. Inter-slice correlations are eliminated using another block matching. Results of the proposed approach are compared with the existing standard compression techniques.
1.00434
0.005391
0.004913
0.004865
0.004568
0.004451
0.003774
0.000997
0
0
0
0
0
0
Reasoning in Higraphs with Loose Edges Harel introduces the notion of zooming out as a useful operation in working with higraphs. Zooming out allows us to consider less detailed versions of a higraph by dropping some detail from the description in a structured manner. Although this is a very useful operation it seems it can be misleading in some circumstances by allowing the user of the zoomed out higraph to make false inferences given the usual transition system semantics for higraphs. We consider one approach to rectifying this situation by following through Harel's suggestion that, in some circumstances, it may be useful to consider higraphs with edges that have no specific origin or destination. We call these higraphs loose higraphs and show that an appropriate definition of zooming on loose higraphs avoids some of the difficulties arising from the use of zooming. We also consider a logic for connectivity in loose higraphs.
Visual Formalisms Revisited The development of an interactive application is a complex task that has to consider data, behavior, inter-communication, architecture and distribution aspects of the modeled system. In particular, it presupposes the successful communication between the customer and the software expert. To enhance this communication most modern software engineering methods recommend to specify the different aspects of a system by visual formalisms. In essence, visual specifications are directed graphs that are interpreted in a particular way for each aspect of the system. They are also intended to be compositional. This means that each node can itself be a graph with a separate meaning. However, the lack of a denotational model for hierarchical graphs often leads to the loss of compositionality. This has severe negative consequences in the development of realistic applications. In this paper we present a simple denotational model (which is by definition compositional) for the architecture and behavior aspects of a system. This model is then used to give a semantics to almost all the concepts occurring in ROOM. Our model also provides a compositional semantics for or-states in statecharts.
Zooming-out on Higraph-based diagrams - Syntactic and Semantic Issues Computing system representations based on Harel's notion of hierarchical graph, or higraph, have become popular since the invention of Statecharts. Such hierarchical representations support a useful filtering operation, called “zooming-out”, which is used to manage the level of detail presented to the user designing or reasoning about a large and complex system. In the framework of (lightweight) category theory, we develop the mathematics of zooming out for higraphs with loose edges, formalise the transition semantics of such higraphs and conduct an analysis of the effect the operation of zooming out has on the semantic interpretations, as required for the soundness of reasoning arguments depending on zoom-out steps.
Towards a Formalization of Constraint Diagrams Geared to complement UML and to the specification of large software systems by non-mathematicians, constraint diagrams are a visual language that generalizes the popular and intuitive Venn diagrams and Euler circles, and adds facilities for quantifying over elements and navigating relations. The language design emphasizes scalability and expressiveness while retaining intuitiveness. Spider diagrams form a subset of the notation, leaving out universal quantification and the ability to navigate relations. Spider diagrams have been given a formal definition. This paper extends that definition to encompass the constraint diagram notation. The formalization of constraint diagrams is nontrivial: it exposes subtleties concerned with the implicit ordering of symbols in the visual language, which were not evident before a formal definition of the language was attempted. This has led to an improved design of the language
Formalizing spider diagrams Geared to complement UML and to the specification of large software systems by non-mathematicians, spider diagrams are a visual language that generalizes the popular and intuitive Venn diagrams and Euler circles. The language design emphasized scalability and expressiveness while retaining intuitiveness. In this extended abstract we describe spider diagrams from a mathematical standpoint and show how their formal semantics in terms of logical expressions can be made. We also claim that all spider diagrams are self-consistent.
A Data Type Approach to the Entity-Relationship Approach
Expanding the utility of semantic networks through partitioning An augmentation of semantic networks is presented in which the various nodes and arcs are partitioned into "net spaces." These net spaces delimit the scopes of quantified variables, distinguish hypothetical and imaginary situations from reality, encode alternative worlds considered in planning, and focus attention at particular levels of detail.
Feedback stabilization of some event graph models The authors introduce several notions of stability for event graph models, timed or not. The stability is similar to the boundedness notion for Petri nets. The event graph models can be controlled by an output feedback which takes information from some observable transitions and can disable some controllable transitions. The controller itself is composed of an event graph. In this framework the authors solve the corresponding stabilization problems, i.e., they wonder if such a controller may prevent the explosion of the number of tokens
Unscented filtering and nonlinear estimation The extended Kalman filter (EKF) is probably the most widely used estimation algorithm for nonlinear systems. However, more than 35 years of experience in the estimation community has shown that is difficult to implement, difficult to tune, and only reliable for systems that are almost linear on the time scale of the updates. Many of these difficulties arise from its use of linearization. To overcome this limitation, the unscented transformation (UT) was developed as a method to propagate mean and covariance information through nonlinear transformations. It is more accurate, easier to implement, and uses the same order of calculations as linearization. This paper reviews the motivation, development, use, and implications of the UT.
Multistage negotiation for distributed constraint satisfaction A cooperation paradigm and coordination protocol for a distributed planning system consisting of a network of semi-autonomous agents with limited internode communication and no centralized control is presented. A multistage negotiation paradigm for solving distributed constraint satisfaction problems in this kind of system has been developed. The strategies presented enable an agent in a distributed planning system to become aware of the extent to which its own local decisions may have adverse nonlocal impact in planning. An example problem is presented in the context of transmission path restoration for dedicated circuits in a communications network. Multistage negotiation provides an agent with sufficient information about the impact of local decisions on a nonlocal state so that the agent may make local decisions that are correct from a global perspective, without attempting to provide a complete global state to all agents. Through multistage negotiation, an agent is able to recognize when a set of global goals cannot be satisfied, and is able to solve a related problem by finding a way of satisfying a reduced set of goals
Reasoning and Refinement in Object-Oriented Specification Languages This paper describes a formal object-oriented specification language, Z++, and identifies proof rules and associated specification structuring and development styles for the facilitation of validation and verification of implementations against specifications in this language. We give inference rules for showing that certain forms of inheritance lead to refinement, and for showing that refinements are preserved by constructs such as promotion of an operation from a supplier class to a client class. Extension of these rules to other languages is also discussed.
Software engineering for parallel systems Current approaches to software engineering practice for parallel systems are reviewed. The parallel software designer has not only to address the issues involved in the characterization of the application domain and the underlying hardware platform, but, in many instances, the production of portable, scalable software is desirable. In order to accommodate these requirements, a number of specific techniques and tools have been proposed, and these are discussed in this review in the framework of the parallel software life-cycle. The paper outlines the role of formal methods in the practical production of parallel software, but its main focus is the emergence of development methodologies and environments. These include CASE tools and run-time support systems, as well as the use of methods taken from experience of conventional software development. Because of the particular emphasis on performance of parallel systems, work on performance evaluation and monitoring systems is considered.
Developing Mode-Rich Satellite Software by Refinement in Event B To ensure dependability of on-board satellite systems, the designers should, in particular, guarantee correct implementation of the mode transition scheme, i.e., ensure that the states of the system components are consistent with the global system mode. However, there is still a lack of scalable approaches to formal verification of correctness of complex mode transitions. In this paper we present a formal development of an Attitude and Orbit Control System (AOCS) undertaken within the ICT DEPLOY project. AOCS is a complex mode-rich system, which has an intricate mode-transition scheme. We show that refinement in Event B provides the engineers with a scalable formal technique that enables both development of mode-rich systems and proof-based verification of their mode consistency.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
Fully Distributed Nonlinear State Estimation Using Sensor Networks This paper studies the problem of fully distributed state estimation using networked local sensors. Specifically, our previously proposed algorithm, namely, the Distributed Hybrid Information Fusion algorithm is extended to the scenario with nonlinearities involved in both the process model and the local sensing models. The unscented transformation approach is adopted for such an extension so that no computation of Jacobian matrix is needed. Moreover, the extended algorithm requires only one communication iteration between every two consecutive time instants. It is also analytically shown that for the case with linear sensing models, the local estimate errors are bounded in the mean square sense. A simulation example is used to illustrate the effectiveness of the extended algorithm.
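The unscented transformation the abstract relies on can be sketched compactly. The following minimal Python example (the function name, signature, and default scaling parameters are illustrative assumptions, not taken from the paper) propagates a Gaussian through a nonlinear map via sigma points, so no Jacobian is ever computed:

```python
import numpy as np

def unscented_transform(f, mean, cov, alpha=0.1, beta=2.0, kappa=0.0):
    """Propagate a Gaussian (mean, cov) through a nonlinear function f
    using sigma points, avoiding any Jacobian computation."""
    n = mean.size
    lam = alpha**2 * (n + kappa) - n
    # Sigma points: the mean plus/minus the columns of a scaled
    # matrix square root of the covariance.
    sqrt_cov = np.linalg.cholesky((n + lam) * cov)
    sigma = np.vstack([mean, mean + sqrt_cov.T, mean - sqrt_cov.T])  # (2n+1, n)
    # Standard unscented-transform weights for mean and covariance.
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1 - alpha**2 + beta)
    # Push each sigma point through f, then re-estimate the Gaussian.
    ys = np.array([f(s) for s in sigma])
    y_mean = wm @ ys
    diff = ys - y_mean
    y_cov = (wc[:, None] * diff).T @ diff
    return y_mean, y_cov
```

For a linear map the transform is exact, which is a handy sanity check; the extended algorithm in the paper embeds this construction in both the process and the local sensing models.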
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
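The enabled-set idea can be illustrated with a toy sketch (the class, names, and queueing policy below are hypothetical, not the Rosette implementation): an object computes, from its local state alone, the set of message names it currently accepts, and queues the rest until a state change enables them.

```python
class BoundedBuffer:
    """A toy illustration of synchronization via an explicit 'enabled set':
    the set of message names the object will currently accept.  Messages
    whose name is not enabled are queued and retried after each state change."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []
        self.pending = []  # queued (message name, args) pairs

    def enabled_set(self):
        # Computed from local state only, which supports local reasoning.
        enabled = set()
        if len(self.items) < self.capacity:
            enabled.add("put")
        if self.items:
            enabled.add("get")
        return enabled

    def send(self, name, *args):
        """Queue a message, then process every pending message that is enabled."""
        self.pending.append((name, args))
        results, progress = [], True
        while progress:
            progress = False
            for i, (msg, margs) in enumerate(self.pending):
                if msg in self.enabled_set():
                    del self.pending[i]
                    results.append(getattr(self, "_" + msg)(*margs))
                    progress = True
                    break
        return results

    def _put(self, x):
        self.items.append(x)

    def _get(self):
        return self.items.pop(0)
```

A `get` sent to an empty buffer simply waits in the queue; a later `put` re-enables it. A subclass could override `enabled_set` to tighten or relax the synchronization constraint, which is the composability and inheritability the abstract argues for.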
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
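The core tabu-search loop described above can be sketched in a few lines. The example below (function name, penalty weight, and tenure are illustrative assumptions; the paper's choice rules, aspiration criteria, and learning strategies are far richer) flips one variable per step on a multiconstraint knapsack, forbids re-flipping it for a fixed tenure, and lets an infeasibility-penalized objective guide the search:

```python
import random

def tabu_knapsack(values, weights, capacities, iters=200, tenure=5, seed=0):
    """Tiny tabu-search sketch for the 0/1 multiconstraint knapsack.
    weights is a list of rows, one per constraint; capacities the right-hand sides."""
    rng = random.Random(seed)
    n = len(values)
    x = [0] * n
    tabu = {}  # variable index -> iteration until which flipping it is forbidden
    best_x, best_val = None, float("-inf")

    def score(sol):
        value = sum(v for v, b in zip(values, sol) if b)
        # Penalize constraint violations so the search may cross infeasible regions.
        excess = 0.0
        for w_row, cap in zip(weights, capacities):
            used = sum(w for w, b in zip(w_row, sol) if b)
            excess += max(0.0, used - cap)
        return value - 10.0 * excess, excess

    for it in range(iters):
        candidates = []
        for j in range(n):
            y = x[:]
            y[j] ^= 1  # flip one variable
            s, excess = score(y)
            # Aspiration: a tabu move is allowed if it beats the best feasible value.
            if tabu.get(j, -1) < it or (excess == 0 and s > best_val):
                candidates.append((s, rng.random(), j, y, excess))
        if not candidates:
            continue
        s, _, j, x, excess = max(candidates)  # best admissible move, random tie-break
        tabu[j] = it + tenure
        if excess == 0 and s > best_val:
            best_val, best_x = s, x[:]
    return best_x, best_val
```

On a three-item instance with one constraint the search finds the optimum within a couple of moves; the paper's advanced strategies (target analysis, learning) sit on top of exactly this kind of first-level mechanism.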
Power Aware System Refinement We propose a formal, power-aware refinement of systems. The proposed approach builds on the traditional refinement calculus of Action Systems and its direct extension, the time-wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In refining an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
Secure Tropos: A Security-Oriented Extension Of The Tropos Methodology Although security plays an important role in the development of multiagent systems, a careful analysis of software development processes shows that the definition of security requirements is usually considered after the design of the system. One of the reasons is the fact that agent oriented software engineering methodologies have not integrated security concerns throughout their development stages. The integration of security concerns during the whole range of the development stages can help in the development of more secure multiagent systems. In this paper we introduce extensions to the Tropos methodology to enable it to model security concerns throughout the whole development process. A description of the new concepts and modelling activities is given together with a discussion on how these concepts and modelling activities are integrated to the current stages of Tropos. A real life case study from the health and social care sector is used to illustrate the approach.
Syntax highlighting in business process models Sense-making of process models is an important task in various phases of business process management initiatives. Despite this, there is currently hardly any support in business process modeling tools to adequately support model comprehension. In this paper we adapt the concept of syntax highlighting to workflow nets, a modeling technique that is frequently used for business process modeling. Our contribution is three-fold. First, we establish a theoretical argument to what extent highlighting could improve comprehension. Second, we formalize a concept for syntax highlighting in workflow nets and present a prototypical implementation with the WoPeD modeling tool. Third, we report on the results of an experiment that tests the hypothetical benefits of highlighting for comprehension. Our work can easily be transferred to other process modeling tools and other process modeling techniques.
A comparison of security requirements engineering methods This paper presents a conceptual framework for security engineering, with a strong focus on security requirements elicitation and analysis. This conceptual framework establishes a clear-cut vocabulary and makes explicit the interrelations between the different concepts and notions used in security engineering. Further, we apply our conceptual framework to compare and evaluate current security requirements engineering approaches, such as the Common Criteria, Secure Tropos, SREP, MSRA, as well as methods based on UML and problem frames. We review these methods and assess them according to different criteria, such as the general approach and scope of the method, its validation, and quality assurance capabilities. Finally, we discuss how these methods are related to the conceptual framework and to one another.
Software engineering for security: a roadmap Is there such a thing anymore as a software system that doesn't need to be secure? Almost every software-controlled system faces threats from potential adversaries, from Internet-aware client applications running on PCs, to complex telecommunications and power systems accessible over the Internet, to commodity software with copy protection mechanisms. Software engineers must be cognizant of these threats and engineer systems with credible defenses, while still delivering value to customers. In this paper, we present our perspectives on the research issues that arise in the interactions between software engineering and security.
Security and Privacy Requirements Analysis within a Social Setting Security issues for software systems ultimately concern relationships among social actors - stakeholders, system users, potential attackers - and the software acting on their behalf. This paper proposes a methodological framework for dealing with security and privacy requirements based on i*, an agent-oriented requirements modeling language. The framework supports a set of analysis techniques. In particular, attacker analysis helps identify potential system abusers and their malicious intents. Dependency vulnerability analysis helps detect vulnerabilities in terms of organizational relationships among stakeholders. Countermeasure analysis supports the dynamic decision-making process of defensive system players in addressing vulnerabilities and threats. Finally, access control analysis bridges the gap between security requirement models and security implementation models. The framework is illustrated with an example involving security and privacy concerns in the design of agent-based health information systems. In addition, we discuss model evaluation techniques, including qualitative goal model analysis and property verification techniques based on model checking.
Towards Regulatory Compliance: Extracting Rights and Obligations to Align Requirements with Regulations In the United States, federal and state regulations prescribe stakeholder rights and obligations that must be satisfied by the requirements for software systems. These regulations are typically wrought with ambiguities, making the process of deriving system requirements ad hoc and error prone. In highly regulated domains such as healthcare, there is a need for more comprehensive standards that can be used to assure that system requirements conform to regulations. To address this need, we expound upon a process called Semantic Parameterization previously used to derive rights and obligations from privacy goals. In this work, we apply the process to the Privacy Rule from the U.S. Health Insurance Portability and Accountability Act (HIPAA). We present our methodology for extracting and prioritizing rights and obligations from regulations and show how semantic models can be used to clarify ambiguities through focused elicitation and to balance rights with obligations. The results of our analysis can aid requirements engineers, standards organizations, compliance officers, and stakeholders in assuring systems conform to policy and satisfy requirements.
Goal-Oriented Requirements Engineering: A Guided Tour Abstract: Goals capture, at different levels of abstraction, the various objectives the system under consideration should achieve. Goal-oriented requirements engineering is concerned with the use of goals for eliciting, elaborating, structuring, specifying, analyzing, negotiating, documenting, and modifying requirements. This area has received increasing attention over the past few years. The paper reviews various research efforts undertaken along this line of research. The arguments in favor of goal orientation are first briefly discussed. The paper then compares the main approaches to goal modeling, goal specification and goal-based reasoning in the many activities of the requirements engineering process. To make the discussion more concrete, a real case study is used to suggest what a goal-oriented requirements engineering method may look like. Experience with such approaches and tool support are briefly discussed as well.
Requirements Engineering in the Year 00: A research perspective Requirements engineering (RE) is concerned with the identification of the goals to be achieved by the envisioned system, the operationalization of such goals into services and constraints, and the assignment of responsibilities for the resulting requirements to agents such as humans, devices, and software. The processes involved in RE include domain analysis, elicitation, specification, assessment, negotiation, documentation, and evolution. Getting high-quality requirements is difficult and critical. Recent surveys have confirmed the growing recognition of RE as an area of utmost importance in software engineering research and practice. The paper presents a brief history of the main concepts and techniques developed to date to support the RE task, with a special focus on modeling as a common denominator to all RE processes. The initial description of a complex safety-critical system is used to illustrate a number of current research trends in RE-specific areas such as goal-oriented requirements elaboration, conflict management, and the handling of abnormal agent behaviors. Opportunities for goal-based architecture derivation are also discussed together with research directions to let the field move towards more disciplined habits.
Specification-based test oracles for reactive systems
No Silver Bullet: Essence and Accidents of Software Engineering
UniProt Knowledgebase: a hub of integrated protein data. The UniProt Knowledgebase (UniProtKB) acts as a central hub of protein knowledge by providing a unified view of protein sequence and functional information. Manual and automatic annotation procedures are used to add data directly to the database while extensive cross-referencing to more than 120 external databases provides access to additional relevant information in more specialized data collections. UniProtKB also integrates a range of data from other resources. All information is attributed to its original source, allowing users to trace the provenance of all data. The UniProt Consortium is committed to using and promoting common data exchange formats and technologies, and UniProtKB data is made freely available in a range of formats to facilitate integration with other databases.
Visualizing Argument Structure Constructing arguments and understanding them is not easy. Visualization of argument structure has been shown to help understanding and improve critical thinking. We describe a visualization tool for understanding arguments. It utilizes a novel hi-tree based representation of the argument’s structure and provides focus based interaction techniques for visualization. We give efficient algorithms for computing these layouts.
Analogical retrieval in reuse-oriented requirements engineering Computational mechanisms are presented for analogical retrieval of domain knowledge as a basis for intelligent tool-based assistance for requirements engineers. A first mechanism, called the domain matcher, retrieves object system models which describe key features for new problems. A second mechanism, called the problem classifier, reasons with analogical mappings inferred by the domain matcher to detect potential incompleteness, overspecification and inconsistencies in entered facts and requirements. Both mechanisms are embedded in AIR, a toolkit that provides co-operative reuse-oriented assistance for requirements engineers.
Disentangling virtual machine architecture Virtual machine (VM) implementations are made of intricately intertwined subsystems, interacting largely through implicit dependencies. As the degree of crosscutting present in VMs is very high, VM implementations exhibit significant internal complexity. This study proposes an architecture approach for VMs that regards a VM as a composite of service modules coordinated through explicit bidirectional interfaces. Aspect-oriented programming techniques are used to establish these interfaces, to coordinate module interaction, and to declaratively express concrete VM architectures. A VM architecture description language is presented in a case study, illustrating the application of the proposed architectural principles.
Object and reference immutability using Java generics A compiler-checked immutability guarantee provides useful documentation, facilitates reasoning, and enables optimizations. This paper presents Immutability Generic Java (IGJ), a novel language extension that expresses immutability without changing Java's syntax by building upon Java's generics and annotation mechanisms. In IGJ, each class has one additional type parameter that is Immutable, Mutable, or ReadOnly. IGJ guarantees both reference immutability (only mutable references can mutate an object) and object immutability (an immutable reference points to an immutable object). IGJ is the first proposal for enforcing object immutability within Java's syntax and type system, and its reference immutability is more expressive than previous work. IGJ also permits covariant changes of type parameters in a type-safe manner, e.g., a readonly list of integers is a subtype of a readonly list of numbers. IGJ extends Java's type system with a few simple rules. We formalize this type system and prove it sound. Our IGJ compiler works by type-erasure and generates byte-code that can be executed on any JVM without runtime penalty.
It's alive! continuous feedback in UI programming Live programming allows programmers to edit the code of a running program and immediately see the effect of the code changes. This tightening of the traditional edit-compile-run cycle reduces the cognitive gap between program code and execution, improving the learning experience of beginning programmers while boosting the productivity of seasoned ones. Unfortunately, live programming is difficult to realize in practice as imperative languages lack well-defined abstraction boundaries that make live programming responsive or its feedback comprehensible. This paper enables live programming for user interface programming by cleanly separating the rendering and non-rendering aspects of a UI program, allowing the display to be refreshed on a code change without restarting the program. A type and effect system formalizes this separation and provides an evaluation model that incorporates the code update step. By putting live programming on a more formal footing, we hope to enable critical and technical discussion of live programming systems.
The disappearing boundary between development-time and run-time Modern software systems are increasingly embedded in an open world that is constantly evolving, because of changes in the requirements, in the surrounding environment, and in the way people interact with them. The platform itself on which software runs may change over time, as we move towards cloud computing. These changes are difficult to predict and anticipate, and their occurrence is out of control of the application developers. Because of these changes, the applications themselves need to change. Often, changes in the applications cannot be handled off-line, but require the software to self-react by adapting its behavior dynamically, to continue to ensure the desired quality of service. The big challenge in front of us is how to achieve the necessary degrees of flexibility and dynamism required by software without compromising the necessary dependability. This paper advocates that future software engineering research should focus on providing intelligent support to software at run-time, breaking today's rigid boundary between development-time and run-time. Models need to continue to live at run-time and evolve as changes occur while the software is running. To ensure dependability, analysis that the updated system models continue to satisfy the goals must be performed by continuous verification. If verification fails, suitable adjustment policies, supported by model-driven re-derivation of parts of the system, must be activated to keep the system aligned with its expected requirements. The paper presents the background that motivates this research focus, the main existing research directions, and an agenda for future work.
Delegation proxies: the power of propagation Scoping behavioral variations to dynamic extents is useful to support non-functional requirements that otherwise result in cross-cutting code. Unfortunately, such variations are difficult to achieve with traditional reflection or aspects. We show that with a modification of dynamic proxies, called delegation proxies, it becomes possible to reflectively implement variations that propagate to all objects accessed in the dynamic extent of a message send. We demonstrate our approach with examples of variations scoped to dynamic extents that help simplify code related to safety, reliability, and monitoring.
Maxine: An approachable virtual machine for, and in, java A highly productive platform accelerates the production of research results. The design of a Virtual Machine (VM) written in the Java™ programming language can be simplified through exploitation of interfaces, type and memory safety, automated memory management (garbage collection), exception handling, and reflection. Moreover, modern Java IDEs offer time-saving features such as refactoring, auto-completion, and code navigation. Finally, Java annotations enable compiler extensions for low-level “systems programming” while retaining IDE compatibility. These techniques collectively make complex system software more “approachable” than has been typical in the past. The Maxine VM, a metacircular Java VM implementation, has aggressively used these features since its inception. A co-designed companion tool, the Maxine Inspector, offers integrated debugging and visualization of all aspects of the VM's runtime state. The Inspector's implementation exploits advanced Java language features, embodies intimate knowledge of the VM's design, and even reuses a significant amount of VM code directly. These characteristics make Maxine a highly approachable VM research platform and a productive basis for research and teaching.
O-O Requirements Analysis: an Agent Perspective In this paper, we present a formal object-oriented specification language designed for capturing requirements expressed on composite real-time systems. The specification describes the system as a society of 'agents', each of them being characterised (i) by its responsibility with respect to actions happening in the system and (ii) by its time-varying perception of the behaviour of the other agents. On top of the language, we also suggest some methodological guidance by considering a general strategy based on a progressive assignment of responsibilities to agents.
Constructing specifications by combining parallel elaborations An incremental approach to construction is proposed, with the virtue of offering considerable opportunity for mechanized support. Following this approach one builds a specification through a series of elaborations that incrementally adjust a simple initial specification. Elaborations perform both refinements, adding further detail, and adaptations, retracting oversimplifications and tailoring approximations to the specifics of the task. It is anticipated that the vast majority of elaborations can be concisely described to a mechanism that will then perform them automatically. When elaborations are independent, they can be applied in parallel, leading to diverging specifications that must later be recombined. The approach is intended to facilitate comprehension and maintenance of specifications, as well as their initial construction.
A distributed alternative to finite-state-machine specifications A specification technique, formally equivalent to finite-state machines, is offered as an alternative because it is inherently distributed and more comprehensible. When applied to modules whose complexity is dominated by control, the technique guides the analyst to an effective decomposition of complexity, encourages well-structured error handling, and offers an opportunity for parallel computation. When applied to distributed protocols, the technique provides a unique perspective and facilitates automatic detection of some classes of error. These applications are illustrated by a controller for a distributed telephone system and the full-duplex alternating-bit protocol for data communication. Several schemes are presented for executing the resulting specifications.
A singleton failures semantics for Communicating Sequential Processes This paper defines a new denotational semantics for the language of Communicating Sequential Processes (CSP). The semantics lies between the existing traces and failures models of CSP, providing a treatment of non-determinism in terms of singleton failures. Although the semantics does not represent a congruence upon the full language, it is adequate for sequential tests of non-deterministic processes. This semantics corresponds exactly to a commonly used notion of data refinement in Z and Object-Z: an abstract data type is refined when the corresponding process is refined in terms of singleton failures. The semantics is used to explore the relationship between data refinement and process refinement, and to derive a rule for data refinement that is both sound and complete.
An Integrated Semantics for UML Class, Object and State Diagrams Based on Graph Transformation This paper studies the semantics of a central part of the Unified Modeling Language UML. It discusses UML class, object and state diagrams and presents a new integrated semantics for both on the basis of graph transformation. Graph transformation is a formal technique having some common ideas with the UML. Graph transformation rules are associated with the operations in class diagrams and with the transitions in state diagrams. The resulting graph transformations are combined into one system in order to obtain a single coherent semantic description.
Specification Diagrams for Actor Systems Specification diagrams (SD's) are a novel form of graphical notation for specifying open distributed object systems. The design goal is to define notation for specifying message-passing behavior that is expressive, intuitively understandable, and that has formal semantic underpinnings. The notation generalizes informal notations such as UML's Sequence Diagrams and broadens their applicability to later in the design cycle. Specification diagrams differ from existing actor and process algebra presentations in that they are not executable per se; instead, like logics, they are inherently more biased toward specification. In this paper we rigorously define the language syntax and semantics and give examples that show the expressiveness of the language, how properties of specifications may be asserted diagrammatically, and how it is possible to reason rigorously and modularly about specification diagrams.
Miro: Visual Specification of Security Miro is a set of languages and tools that support the visual specification of file system security. Two visual languages are presented: the instance language, which allows specification of file system access, and the constraint language, which allows specification of security policies. Miro visual languages and tools are used to specify security configurations. A visual language is one whose entities are graphical, such as boxes and arrows; specifying means stating, independently of any implementation, the desired properties of a system. Security means file system protection: ensuring that files are protected from unauthorized access and granting privileges to some users, but not others. The tools implemented and examples of how these languages can be applied to real security specification problems are described.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
Scores (score_0..score_13): 1.121627, 0.069535, 0.069535, 0.069535, 0.036101, 0.024775, 0.000007, 0, 0, 0, 0, 0, 0, 0
Improving the System/Software Engineering Interface for Complex System Development At the 2004 Engineering of Computer Based Systems (ECBS) Technical Committee meeting, the ECBS Executive Committee agreed that a guideline on Integrated System and Software Engineering would be beneficial to engineers working at the interface, and they agreed to work on such a guideline. This paper is written in the hope that it will serve as a basis for a discussion group, initiating such a guideline. This paper seeks to improve the integrated system/software engineering process during the phases when system and software developers work most closely together. These phases are system problem definition, software requirements analysis and specification, system solution analysis, and process planning. Lessons learned during twenty years, while working with system and software engineers to define requirements and solutions on aerospace projects are summarized. During this time, problems were noted and their cause determined. As a result, advice is provided.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
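The semantics of the fetch-and-add primitive mentioned above (atomically return a cell's old value while adding an increment to it) can be imitated in ordinary Python with a lock. This is only an illustrative sketch of the primitive's behaviour, not of the Ultracomputer's combining-network hardware; the class and names are mine.

```python
import threading

class FetchAndAddCell:
    """Software imitation of an atomic fetch-and-add memory cell."""

    def __init__(self, value=0):
        self._value = value
        self._lock = threading.Lock()

    def fetch_and_add(self, increment):
        # Atomically return the old value and add the increment.
        with self._lock:
            old = self._value
            self._value += increment
            return old

# Classic use: handing out unique work indices to many threads.
counter = FetchAndAddCell()
claimed = []

def worker():
    claimed.append(counter.fetch_and_add(1))

threads = [threading.Thread(target=worker) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Every thread received a distinct index 0..7.
assert sorted(claimed) == list(range(8))
```

Because every caller sees a distinct old value, fetch-and-add lets many processors claim queue slots or loop indices without serialising on a critical section, which is why the paper treats it as the key synchronization primitive.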
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL* (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
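The definition this abstract alludes to can be sketched in predicate-transformer notation. This is the standard formulation rather than necessarily the paper's exact wording; in my notation, programs act as weakest-precondition transformers $wp$, and $\rho$ is the predicate transformer generalising the abstraction function, taking abstract predicates to concrete ones.

```latex
% Abstract program A is data-refined by concrete program C through the
% representation transformer \rho when, for every abstract postcondition \phi,
A \sqsubseteq_{\rho} C
  \quad\text{iff}\quad
  \rho\bigl(wp(A,\phi)\bigr) \;\Rightarrow\; wp\bigl(C,\rho(\phi)\bigr).
```

Reading $\rho$ as "translate an abstract predicate into its concrete counterpart", the condition says: whatever $A$ can guarantee abstractly, $C$ guarantees for the translated predicates, which is why proofs about data refinement simplify to ordinary predicate-transformer reasoning.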
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
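The flavour of such a method can be caricatured in a few lines. This is not the authors' algorithm: the instance, the flip-one-item neighbourhood, the tabu tenure, and the aspiration rule below are all invented for illustration of a basic tabu search on a small multiconstraint knapsack.

```python
import random

# Toy multiconstraint knapsack: maximise value subject to two resource limits.
values  = [10, 7, 4, 3, 8]
weights = [[5, 4, 2, 1, 6],   # resource 1 usage per item
           [4, 3, 3, 2, 5]]   # resource 2 usage per item
limits  = [10, 9]

def feasible(x):
    return all(sum(w[i] * x[i] for i in range(len(x))) <= b
               for w, b in zip(weights, limits))

def value(x):
    return sum(v * xi for v, xi in zip(values, x))

def tabu_search(iterations=200, tenure=3, seed=0):
    rng = random.Random(seed)
    x = [0] * len(values)            # start from the empty knapsack
    best, best_val = list(x), value(x)
    tabu = {}                        # item index -> iteration until which it is tabu
    for it in range(iterations):
        candidates = []
        for i in range(len(x)):      # neighbourhood: flip one item in or out
            y = list(x)
            y[i] ^= 1
            if not feasible(y):
                continue
            # Aspiration: a tabu move is allowed only if it beats the best so far.
            if tabu.get(i, -1) >= it and value(y) <= best_val:
                continue
            candidates.append((value(y), i, y))
        if not candidates:
            break
        _, i, x = max(candidates, key=lambda c: (c[0], rng.random()))
        tabu[i] = it + tenure        # recently flipped items become tabu
        if value(x) > best_val:
            best, best_val = list(x), value(x)
    return best, best_val

solution, total = tabu_search()
assert feasible(solution)
```

On this instance the search reaches the optimum {items 0, 1, 3} with value 20 in a few moves; the tabu list is what lets it accept value-decreasing moves without cycling straight back.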
Procedures and atomicity refinement The introduction of an early return from a (remote) procedure call can increase the degree of parallelism in a parallel or distributed algorithm modeled by an action system. We define a return statement for procedures in an action systems framework and show that it corresponds to carrying out an atomicity refinement.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
Scores (score_0..score_13): 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Entity Linking meets Word Sense Disambiguation: a Unified Approach.
Entity Disambiguation by Knowledge and Text Jointly Embedding.
Learning Sentiment-Specific Word Embedding For Twitter Sentiment Classification We present a method that learns word embedding for Twitter sentiment classification in this paper. Most existing algorithms for learning continuous word representations typically only model the syntactic context of words but ignore the sentiment of text. This is problematic for sentiment analysis as they usually map words with similar syntactic context but opposite sentiment polarity, such as good and bad, to neighboring word vectors. We address this issue by learning sentiment-specific word embedding (SSWE), which encodes sentiment information in the continuous representation of words. Specifically, we develop three neural networks to effectively incorporate the supervision from sentiment polarity of text (e.g. sentences or tweets) in their loss functions. To obtain large scale training corpora, we learn the sentiment-specific word embedding from massive distant-supervised tweets collected by positive and negative emoticons. Experiments on applying SSWE to a benchmark Twitter sentiment classification dataset in SemEval 2013 show that (1) the SSWE feature performs comparably with hand-crafted features in the top-performed system; (2) the performance is further improved by concatenating SSWE with existing feature set.
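The core idea of incorporating sentiment supervision into the loss (penalising a model that ranks a negative text above a positive one) can be caricatured in a few lines. This is a one-dimensional stand-in for the paper's neural networks: word "embeddings" are single numbers, and the texts and margin are invented.

```python
# Tiny caricature of a sentiment-ranking hinge loss: a text's score is the
# sum of its word scores, and training nudges scores so that
# distant-supervised positive texts outrank negative ones by a margin.
embedding = {"good": 0.0, "bad": 0.0, "movie": 0.0, "plot": 0.0}

def score(text):
    return sum(embedding.get(w, 0.0) for w in text.split())

def hinge_update(pos_text, neg_text, margin=1.0, lr=0.1):
    loss = max(0.0, margin - score(pos_text) + score(neg_text))
    if loss > 0.0:
        for w in pos_text.split():
            embedding[w] = embedding.get(w, 0.0) + lr   # push positive words up
        for w in neg_text.split():
            embedding[w] = embedding.get(w, 0.0) - lr   # push negative words down
    return loss

# Distant supervision: emoticon-labelled "tweets".
for _ in range(20):
    hinge_update("good movie", "bad plot")

assert score("good movie") > score("bad plot")
```

After training, sentiment is encoded directly in the representation: "good" and "bad" end up far apart even though a purely syntactic objective would treat them as interchangeable context words.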
Monitoring Reputation in the Wild Online West.
Sentiment strength detection for the social web Sentiment analysis is concerned with the automatic extraction of sentiment-related information from text. Although most sentiment analysis addresses commercial tasks, such as extracting opinions from product reviews, there is increasing interest in the affective dimension of the social web, and Twitter in particular. Most sentiment analysis algorithms are not ideally suited to this task because they exploit indirect indicators of sentiment that can reflect genre or topic instead. Hence, such algorithms used to process social web texts can identify spurious sentiment patterns caused by topics rather than affective phenomena. This article assesses an improved version of the algorithm SentiStrength for sentiment strength detection across the social web that primarily uses direct indications of sentiment. The results from six diverse social web data sets (MySpace, Twitter, YouTube, Digg, RunnersWorld, BBCForums) indicate that SentiStrength 2 is successful in the sense of performing better than a baseline approach for all data sets in both supervised and unsupervised cases. SentiStrength is not always better than machine-learning approaches that exploit indirect indicators of sentiment, however, and is particularly weaker for positive sentiment in news-related discussions. Overall, the results suggest that, even unsupervised, SentiStrength is robust enough to be applied to a wide variety of different social web contexts.
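A lexicon-driven strength detector in the spirit of (but far simpler than) SentiStrength might look like the sketch below. Only the idea of direct lexical indicators and the dual positive/negative strength output are taken from the abstract; the word list, strengths, and scoring rule are invented.

```python
# Minimal lexicon-based sentiment strength detector: report the strongest
# positive (1..5) and strongest negative (-1..-5) term strength in a text.
LEXICON = {            # invented entries; real lexicons are far larger
    "love": 4, "great": 3, "nice": 2,
    "hate": -4, "awful": -4, "bad": -3,
}

def sentiment_strength(text):
    pos, neg = 1, -1   # neutral baselines on the dual scale
    for word in text.lower().split():
        s = LEXICON.get(word.strip(".,!?"), 0)
        if s > 0:
            pos = max(pos, s)
        elif s < 0:
            neg = min(neg, s)
    return pos, neg

assert sentiment_strength("I love it, but the ending was bad") == (4, -3)
```

Because the score comes only from direct sentiment indicators, a topic word that merely co-occurs with emotional texts contributes nothing, which is the robustness argument the abstract makes against indirect, genre-driven features.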
TwitterEcho: a distributed focused crawler to support open research with twitter data Modern social network analysis relies on vast quantities of data to infer new knowledge about human relations and communication. In this paper we describe TwitterEcho, an open source Twitter crawler for supporting this kind of research, which is characterized by a modular distributed architecture. Our crawler enables researchers to continuously collect data from particular user communities, while respecting Twitter's imposed limits. We present the core modules of the crawling server, some of which were specifically designed to focus the crawl on the Portuguese Twittosphere. Additional modules can be easily implemented, thus changing the focus to a different community. Our evaluation of the system shows high crawling performance and coverage.
POPSTAR at RepLab 2013: Polarity for Reputation Classification.
Simulating simple user behavior for system effectiveness evaluation Information retrieval effectiveness evaluation typically takes one of two forms: batch experiments based on static test collections, or lab studies measuring actual users interacting with a system. Test collection experiments are sometimes viewed as introducing too many simplifying assumptions to accurately predict the usefulness of a system to its users. As a result, there is great interest in creating test collections and measures that better model user behavior. One line of research involves developing measures that include a parameterized user model; choosing a parameter value simulates a particular type of user. We propose that these measures offer an opportunity to more accurately simulate the variance due to user behavior, and thus to analyze system effectiveness to a simulated user population. We introduce a Bayesian procedure for producing sampling distributions from click data, and show how to use statistical tools to quantify the effects of variance due to parameter selection.
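One concrete way to read the "sampling distributions from click data" step, under assumptions of mine (a user-model parameter such as persistence modelled as a Bernoulli continuation probability with a Beta(1, 1) prior, so the posterior is conjugate Beta), is:

```python
import random

def posterior_persistence_samples(continued, stopped, n=1000, seed=0):
    """Sample a user-model persistence parameter p from its Beta posterior.

    Each observed 'continue to the next result' is a Bernoulli success; with
    a Beta(1, 1) prior the posterior is Beta(1 + continued, 1 + stopped).
    """
    rng = random.Random(seed)
    return [rng.betavariate(1 + continued, 1 + stopped) for _ in range(n)]

# Click logs where users continued past a result 80 times and stopped 20 times.
samples = posterior_persistence_samples(continued=80, stopped=20)
mean_p = sum(samples) / len(samples)
assert all(0.0 < p < 1.0 for p in samples)
```

Evaluating an effectiveness measure once per sampled parameter value, rather than at a single point estimate, is what turns the parameterized user model into a simulated user population with quantifiable variance.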
A field study of the software design process for large systems The problems of designing large software systems were studied through interviewing personnel from 17 large projects. A layered behavioral model is used to analyze how three of these problems—the thin spread of application domain knowledge, fluctuating and conflicting requirements, and communication bottlenecks and breakdowns—affected software productivity and quality through their impact on cognitive, social, and organizational processes.
Queue-based multi-processing LISP As the need for high-speed computers increases, the need for multi-processors will become more apparent. One of the major stumbling blocks to the development of useful multi-processors has been the lack of a good multi-processing language—one which is both powerful and understandable to programmers. Among the most compute-intensive programs are artificial intelligence (AI) programs, and researchers hope that the potential degree of parallelism in AI programs is higher than in many other applications. In this paper we propose multi-processing extensions to Lisp. Unlike other proposed multi-processing Lisps, this one provides only a few very powerful and intuitive primitives rather than a number of parallel variants of familiar constructs.
A superimposition control construct for distributed systems A control structure called a superimposition is proposed. The structure contains schematic abstractions of processes called roletypes in its declaration. Each roletype may be bound to processes from a basic distributed algorithm, and the operations of the roletype will then execute interleaved with those of the basic processes, over the same state space. This structure captures a kind of modularity natural for distributed programming, which previously has been treated using a macro-like implantation of code. The elements of a superimposition are identified, a syntax is suggested, correctness criteria are defined, and examples are presented.
Randomized graph drawing with heavy-duty preprocessing We present a graph drawing system for general undirected graphs with straight-line edges. It carries out a rather complex set of preprocessing steps, designed to produce a topologically good, but not necessarily nice-looking layout, which is then subjected to Davidson and Harel's simulated annealing beautification algorithm. The intermediate layout is planar for planar graphs and attempts to come close to planar for nonplanar graphs. The system's results are significantly better, and much faster, than what the annealing approach is able to achieve on its own.
A proof-based approach to verifying reachability properties This paper presents a formal approach to proving temporal reachability properties, expressed in CTL, on B systems. We are particularly interested in demonstrating that a system can reach a given state by executing a sequence of actions (or operation calls) called a path. Starting with a path, the proposed approach consists in calculating the proof obligations to discharge in order to prove that the path allows the system to evolve in order to verify the desired property. Since these proof obligations are expressed as first-order logic formulas without any temporal operator, they can be discharged using the prover of AtelierB. Our proposal is illustrated through a case study.
Analysis and Design of Secure Massive MIMO Systems in the Presence of Hardware Impairments. To keep the hardware costs of future communications systems manageable, the use of low-cost hardware components is desirable. This is particularly true for the emerging massive multiple-input multiple-output (MIMO) systems which equip base stations (BSs) with a large number of antenna elements. However, low-cost transceiver designs will further accentuate the hardware impairments, which are presen...
Scores (score_0..score_13): 1.122, 0.122, 0.12, 0.12, 0.072, 0.04, 0.006, 0.000226, 0, 0, 0, 0, 0, 0