Dataset schema. Each record holds one Query Text, thirteen ranked candidate documents (Ranking 1–13), and fourteen score columns (score_0–score_13). Observed value ranges per column:

Query Text        string   length 9–8.71k
Ranking 1         string   length 14–5.31k
Ranking 2         string   length 11–5.31k
Ranking 3         string   length 11–8.42k
Ranking 4         string   length 17–8.71k
Ranking 5         string   length 14–4.95k
Ranking 6         string   length 14–8.42k
Ranking 7         string   length 17–8.42k
Ranking 8         string   length 10–5.31k
Ranking 9         string   length 9–8.42k
Ranking 10        string   length 9–8.42k
Ranking 11        string   length 10–4.11k
Ranking 12        string   length 14–8.33k
Ranking 13        string   length 17–3.82k
score_0           float64  range 1–1.25
score_1           float64  range 0–0.25
score_2           float64  range 0–0.25
score_3           float64  range 0–0.24
score_4           float64  range 0–0.24
score_5           float64  range 0–0.24
score_6           float64  range 0–0.21
score_7           float64  range 0–0.1
score_8           float64  range 0–0.02
score_9–score_13  float64  range 0–0
Conceptual Graphs and First-Order Logic. Conceptual Structures (CS) Theory is a logic-based knowledge representation formalism. To show that conceptual graphs have the power of first-order logic, it is necessary to have a mapping between both formalisms. A proof system, i.e. axioms and inference rules, for conceptual graphs is also useful. It must be sound (no false statement is derived from a true one) and complete (all possible tautologies can be derived from the axioms). This paper shows that Sowa's original definition of...
A situated classification solution of a resource allocation task represented in a visual language The Sisyphus room allocation problem solving example has been solved using a situated classification approach. A solution was developed from the protocol provided in terms of three heuristic classification systems, one classifying people, another rooms, and another tasks on an agenda of recommended room allocations. The domain ontology, problem data, problem-solving method, and domain-specific classification rules have each been represented in a visual language. These knowledge structures compile to statements in a term subsumption knowledge representation language, and are loaded and run in a knowledge representation server to solve the problem. The user interface has been designed to provide support for human intervention in under-determined and over-determined situations, allowing advantage to be taken of the additional choices available in the first case, and a compromise solution to be developed in the second.
Viewpoint Consistency in Z and LOTOS: A Case Study. Specification by viewpoints is advocated as a suitable method of specifying complex systems. Each viewpoint describes the envisaged system from a particular perspective, using concepts and specification languages best suited for that perspective. Inherent in any viewpoint approach is the need to check or manage the consistency of viewpoints and to show that the different viewpoints do not impose contradictory requirements. In previous work we have described a range of techniques for...
Ontology, Metadata, and Semiotics The Internet is a giant semiotic system. It is a massive collection of Peirce's three kinds of signs: icons, which show the form of something; indices, which point to something; and symbols, which represent something according to some convention. But current proposals for ontologies and metadata have overlooked some of the most important features of signs. A sign has three aspects: it is (1) an entity that represents (2) another entity to (3) an agent. By looking only at the signs themselves, some metadata proposals have lost sight of the entities they represent and the agents (human, animal, or robot) which interpret them. With its three branches of syntax, semantics, and pragmatics, semiotics provides guidelines for organizing and using signs to represent something to someone for some purpose. Besides representation, semiotics also supports methods for translating patterns of signs intended for one purpose to other patterns intended for different but related purposes. This article shows how the fundamental semiotic primitives are represented in semantically equivalent notations for logic, including controlled natural languages and various computer languages.
Formal methods: state of the art and future directions (E.M. Clarke and J.M. Wing)
Integrating multiple paradigms within the blackboard framework While early knowledge-based systems suffered the frequent criticism of having little relevance to the real world, an increasing number of current applications deal with complex, real-world problems. Due to the complexity of real-world situations, no one general software technique can produce adequate results in different problem domains, and artificial intelligence usually needs to be integrated with conventional paradigms for efficient solutions. The complexity and diversity of real-world applications have also forced the researchers in the AI field to focus more on the integration of diverse knowledge representation and reasoning techniques for solving challenging, real-world problems. Our development environment, BEST (Blackboard-based Expert Systems Toolkit), aims to provide the ability to produce large-scale, evolvable, heterogeneous intelligent systems. BEST incorporates the best of multiple programming paradigms in order to avoid restricting users to a single way of expressing either knowledge or data. It combines rule-based programming, object-oriented programming, logic programming, procedural programming and blackboard modelling in a single architecture for knowledge engineering, so that the user can tailor a style of programming to his application, using any single method or arbitrary combinations of methods to provide a complete solution. The deep integration of all these techniques yields a toolkit more effective even for a specific single application than any technique in isolation or a less fully integrated collection of techniques. Within the basic, knowledge-based programming paradigm, BEST offers a multiparadigm language for representing complex knowledge, including incomplete and uncertain knowledge. Its problem solving facilities include truth maintenance, inheritance over arbitrary relations, temporal and hypothetical reasoning, opportunistic control, automatic partitioning and scheduling, and both blackboard and distributed problem-solving paradigms.
Logical foundations of object-oriented and frame-based languages We propose a novel formalism, called Frame Logic (abbr., F-logic), that accounts in a clean and declarative fashion for most of the structural aspects of object-oriented and frame-based languages. These features include object identity, complex objects, inheritance, polymorphic types, query methods, encapsulation, and others. In a sense, F-logic stands in the same relationship to the object-oriented paradigm as classical predicate calculus stands to relational programming. F-logic has a model-theoretic semantics and a sound and complete resolution-based proof theory. A small number of fundamental concepts that come from object-oriented programming have direct representation in F-logic; other, secondary aspects of this paradigm are easily modeled as well. The paper also discusses semantic issues pertaining to programming with a deductive object-oriented language based on a subset of F-logic.
Systems analysis: a systemic analysis of a conceptual model Adopting an appropriate model for systems analysis, by avoiding a narrow focus on the requirements specification and increasing the use of the systems analyst's knowledge base, may lead to better software development and improved system life-cycle management.
A knowledge engineering approach to knowledge management Knowledge management facilitates the capture, storage, and dissemination of knowledge using information technology. Methods for managing knowledge have become an important issue in the past few decades, and the KM community has developed a wide range of technologies and applications for both academic research and practical applications. In this paper, we propose a knowledge engineering approach (KMKE) to knowledge management. First, a knowledge modeling approach is used to organize and express various types of knowledge in a unified knowledge representation. Second, a verification mechanism is used to verify knowledge models based on the formal semantics of the knowledge representation. Third, knowledge models are classified and stored in a hierarchical ontology system. Fourth, a knowledge query language is designed to enhance the dissemination of knowledge. Finally, a knowledge update process is applied to modify the knowledge storage with respect to users' needs. A knowledge management system for computer repair is used as an illustrative example.
Goal-Based Requirements Analysis Goals are a logical mechanism for identifying, organizing and justifying software requirements. Strategies are needed for the initial identification and construction of goals. In this paper we discuss goals from the perspective of two themes: goal analysis and goal evolution. We begin with an overview of the goal-based method we have developed and summarize our experiences in applying our method to a relatively large example. We illustrate some of the issues that practitioners face when using a goal-based approach to specify the requirements for a system and close the paper with a discussion of needed future research on goal-based requirements analysis and evolution. Keywords: goal identification, goal elaboration, goal refinement, scenario analysis, requirements engineering, requirements methods
A New Implementation Technique for Applicative Languages
Software Engineering Environments
Unifying wp and wlp Boolean-valued predicates over a state space are isomorphic to its characteristic functions into {0,1}. Enlarging that range to {-1,0,1} allows the definition of extended predicates whose associated transformers generalise the conventional wp and wlp. The correspondingly extended healthiness conditions include the new 'sub-additivity', an arithmetic inequality over predicates. Keywords: Formal semantics, program correctness, weakest precondition, weakest liberal precondition, Egli-Milner order.
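For context on the abstract above: the two conventional transformers it generalises are usually related by a standard pairing law (a textbook formulation, not necessarily the paper's notation; the three-valued reading below is our gloss):

```latex
% For a command S and postcondition Q:
%   wlp.S.Q -- no execution of S terminates in a state violating Q (partial correctness)
%   wp.S.Q  -- every execution of S terminates in a state satisfying Q (total correctness)
% The standard law connecting them:
\[
  wp.S.Q \;\equiv\; wlp.S.Q \,\wedge\, wp.S.\mathit{true}
\]
% Enlarging the predicate range from {0,1} to {-1,0,1} lets one extended
% predicate carry both the "must terminate" and the "must satisfy Q"
% information, which is how a single transformer can generalise wp and wlp.
```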
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
Scores (score_0–score_13): 1.115014, 0.12, 0.06, 0.036676, 0.015, 0.005029, 0.002282, 0.000503, 0.000001, 0, 0, 0, 0, 0
Automated Requirements Elicitation: Combining A Model-Driven Approach With Concept Reuse The difficulty of extracting pertinent and useful information from customers has long plagued the process of requirements elicitation. This paper presents a new approach to support the elicitation process. This approach combines various techniques for requirements elicitation, including model-based concept acquisition, goal-driven structured interview and concept reuse. Compared with the available approaches for requirements elicitation, the most significant feature of our approach is that it supports both the automation of interaction with customers, by using domain terminology rather than software terminology, and the automated construction of application requirements models using model-based concept elicitation and concept reuse. The capacity of this approach comes from its rich knowledge, which is clustered into several abstraction levels.
A Framework For Integrating Multiple Perspectives In System-Development - Viewpoints This paper outlines a framework which supports the use of multiple perspectives in system development, and provides a means for developing and applying systems design methods. The framework uses "viewpoints" to partition the system specification, the development method and the formal representations used to express the system specifications. This VOSE (viewpoint-oriented systems engineering) framework can be used to support the design of heterogeneous and composite systems. We illustrate the use of the framework with a small example drawn from composite system development and give an account of prototype automated tools based on the framework.
Elements underlying the specification of requirements As more and more complex computer‐based systems are built, it becomes increasingly more difficult to specify or visualize the system prior to its construction. One way of simplifying these tasks is to view the requirements from multiple viewpoints. However, if these viewpoints examine the requirements using different notations, how can we know if they are consistent? This paper describes the elemental concepts that underlie all requirements. By reducing each view of requirements to networks of these elemental concepts, it becomes possible to better understand the relationships among the views.
Four dark corners of requirements engineering Research in requirements engineering has produced an extensive body of knowledge, but there are four areas in which the foundation of the discipline seems weak or obscure. This article shines some light in the "four dark corners," exposing problems and proposing solutions. We show that all descriptions involved in requirements engineering should be descriptions of the environment. We show that certain control information is necessary for sound requirements engineering, and we explain the close association between domain knowledge and refinement of requirements. Together these conclusions explain the precise nature of requirements, specifications, and domain knowledge, as well as the precise nature of the relationships among them. They establish minimum standards for what information should be represented in a requirements language. They also make it possible to determine exactly what it means for requirements engineering to be successfully completed. Categories and Subject Descriptors: D.2.1 (Software Engineering): Requirements/Specifications—methodologies
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
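The fetch-and-add primitive highlighted in the abstract above atomically returns a counter's old value while incrementing it. A minimal illustration of its semantics (a Python sketch in which a lock stands in for the hardware atomicity; the class and names are ours):

```python
import threading

class FetchAndAdd:
    """Software stand-in for the hardware fetch-and-add primitive:
    atomically return the old value and add the increment."""
    def __init__(self, value=0):
        self._value = value
        self._lock = threading.Lock()

    def fetch_add(self, inc=1):
        with self._lock:
            old = self._value
            self._value += inc
            return old

# Classic use: handing out unique ticket numbers to concurrent workers.
counter = FetchAndAdd()
tickets = []

def worker():
    tickets.append(counter.fetch_add(1))

threads = [threading.Thread(target=worker) for _ in range(8)]
for t in threads: t.start()
for t in threads: t.join()
print(sorted(tickets))  # [0, 1, ..., 7] -- every worker got a distinct value
```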
A semantics of multiple inheritance The aim of this paper is to present a clean semantics of multiple inheritance and to show that, in the context of strongly-typed, statically-scoped languages, a sound typechecking algorithm exists. Multiple inheritance is also interpreted in a broad sense: instead of being limited to objects, it is extended in a natural way to union types and to higher-order functional types. This constitutes a semantic basis for the unification of functional and object-oriented programming.
The Manchester prototype dataflow computer The Manchester project has developed a powerful dataflow processor based on dynamic tagging. This processor is large enough to tackle realistic applications and exhibits impressive speedup for programs with sufficient parallelism.
A lazy evaluator A different way to execute pure LISP programs is presented. It delays the evaluation of parameters and list structures without ever having to perform more evaluation steps than the usual method. Although the central idea can be found in earlier work, this paper is of interest since it treats a rather well-known language and works out an algorithm which avoids full substitution. A partial correctness proof using Scott-Strachey semantics is sketched in a later section.
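The delayed-evaluation idea described above can be sketched as follows (Python rather than the paper's pure LISP; the Thunk/cons names are ours, and the memoisation is what supplies the "never more evaluation steps than the usual method" property):

```python
class Thunk:
    """Delay a computation until its value is first needed, then cache it
    so no evaluation step is ever repeated (call-by-need)."""
    def __init__(self, compute):
        self._compute = compute
        self._done = False
        self._value = None

    def force(self):
        if not self._done:
            self._value = self._compute()
            self._compute = None  # drop the closure once evaluated
            self._done = True
        return self._value

def cons(head, tail_thunk):
    """A lazy list cell: the tail is a Thunk, built only on demand."""
    return (head, tail_thunk)

def naturals(n=0):
    return cons(n, Thunk(lambda: naturals(n + 1)))

# Taking the first 5 naturals forces only 5 tail evaluations.
xs, out = naturals(), []
for _ in range(5):
    head, tail = xs
    out.append(head)
    xs = tail.force()
print(out)  # [0, 1, 2, 3, 4]
```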
A study of cross-validation and bootstrap for accuracy estimation and model selection We review accuracy estimation methods and compare the two most common methods: cross-validation and bootstrap. Recent experimental results on artificial data and theoretical results in restricted settings have shown that for selecting a good classifier from a set of classifiers (model selection), ten-fold cross-validation may be better than the more expensive leave-one-out cross-validation. We report on a large-scale experiment--over half a million runs of C4.5 and a Naive-Bayes algorithm--to estimate the effects of different parameters on these algorithms on real-world datasets. For cross-validation we vary the number of folds and whether the folds are stratified or not; for bootstrap, we vary the number of bootstrap samples. Our results indicate that for real-world datasets similar to ours, the best method to use for model selection is ten-fold stratified cross-validation, even if computation power allows using more folds.
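The abstract's recommendation, ten-fold stratified cross-validation, looks roughly like this in practice (a sketch using scikit-learn, our choice of library; Gaussian Naive Bayes stands in for the paper's Naive-Bayes runs):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import StratifiedKFold
from sklearn.naive_bayes import GaussianNB

X, y = load_iris(return_X_y=True)
skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)

scores = []
for train_idx, test_idx in skf.split(X, y):
    # Stratification keeps each fold's class proportions close to the full
    # dataset's, which is what reduces the accuracy estimator's variance.
    model = GaussianNB().fit(X[train_idx], y[train_idx])
    scores.append(model.score(X[test_idx], y[test_idx]))

print(f"10-fold stratified accuracy: {np.mean(scores):.3f} +/- {np.std(scores):.3f}")
```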
A Theory of Prioritizing Composition An operator for the composition of two processes, where one process has priority over the other process, is studied. Processes are described by action systems, and data refinement is used for transforming processes. The operator is shown to be compositional, i.e. monotonic with respect to refinement. It is argued that this operator is adequate for modelling priorities as found in programming languages and operating systems. Rules for introducing priorities and for raising and lowering priorities of processes are given. Dynamic priorities are modelled with special priority variables which can be freely mixed with other variables and the prioritising operator in program development. A number of applications show the use of prioritising composition for modelling and specification in general.
Repository support for multi-perspective requirements engineering Relationships among different modeling perspectives have been systematically investigated focusing either on given notations (e.g. UML) or on domain reference models (e.g. ARIS/SAP). In contrast, many successful informal methods for business analysis and requirements engineering (e.g. JAD) emphasize team negotiation, goal orientation and flexibility of modeling notations. This paper addresses the question how much formal and computerized support can be provided in such settings without destroying their creative tenor. Our solution is based on a novel modeling language, M-Telos, that integrates the adaptability and analysis advantages of the logic-based meta modeling language Telos with a module concept covering the structuring mechanisms of scalable software architectures. It comprises four components: (1) A modular conceptual modeling formalism organizes individual perspectives and their interrelationships. (2) Perspective schemata are linked to a conceptual meta meta model of shared domain terms, thus giving the architecture a semantic meaning and enabling adaptability and extensibility of the network of perspectives. (3) Inconsistency management across perspectives is handled in a goal-oriented manner, by formalizing analysis goals as meta rules which are automatically customized to perspective schemata. (4) Continuous incremental maintenance of inconsistency information is provided by exploiting recent view maintenance techniques from deductive databases. The approach has been implemented as an extension to the ConceptBase meta database management system (available through the web site http://www-i5.Informatik.RWTH-Aachen.de/Cbdor/index.html) and has been applied in a number of real-world requirements engineering projects.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
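A compact sketch of the tabu-search ingredients the abstract above names for 0/1 problems: flip moves scored by objective value and an infeasibility measure, a tabu tenure on reversed moves, and an aspiration criterion that overrides tabu status when a move beats the best known solution. The penalty weight, tenure, and toy instance are illustrative assumptions, not the paper's:

```python
import random

def tabu_knapsack(values, weights, capacities, iters=500, tenure=3, seed=0):
    """Tabu search over 0/1 vectors for a multiconstraint knapsack."""
    rng = random.Random(seed)
    n = len(values)

    def overflow(x):  # total constraint violation (infeasibility measure)
        return sum(max(0.0, sum(w[i] * x[i] for i in range(n)) - c)
                   for w, c in zip(weights, capacities))

    def score(x):  # penalised objective: value minus weighted overflow
        return sum(v * xi for v, xi in zip(values, x)) - 10.0 * overflow(x)

    x = [0] * n
    best, best_score = x[:], score(x)
    tabu_until = [0] * n  # iteration until which flipping item i is tabu

    for it in range(1, iters + 1):
        candidates = []
        for i in range(n):
            y = x[:]
            y[i] ^= 1
            s = score(y)
            # Aspiration: a tabu move is allowed if it beats the best found.
            if it >= tabu_until[i] or s > best_score:
                candidates.append((s, i, y))
        if not candidates:  # all moves tabu, none aspirates: random move
            i = rng.randrange(n)
            y = x[:]
            y[i] ^= 1
            candidates = [(score(y), i, y)]
        s, i, x = max(candidates)
        tabu_until[i] = it + tenure  # forbid reversing this flip for a while
        if overflow(x) == 0 and s > best_score:
            best, best_score = x[:], s
    return best, best_score

# Toy instance: 6 items, 2 knapsack constraints.
values = [10, 13, 7, 8, 4, 9]
weights = [[2, 3, 1, 4, 2, 3], [3, 1, 2, 2, 4, 1]]
capacities = [7, 6]
print(tabu_knapsack(values, weights, capacities))
```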
Power Aware System Refinement We propose a formal, power-aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the timewise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
Scores (score_0–score_13): 1.2, 0.013333, 0.011111, 0.006061, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Mobile UNITY schemas for agent coordination Mobile UNITY refers to a notation system and proof logic initially designed to accommodate the special needs of the emerging field of mobile computing. The model allows one to define units of computation and mobility and the formal rules for coordination among them in a highly decoupled manner. In this paper, we reexamine the expressive power of the Mobile UNITY coordination constructs from a new perspective rooted in the notion that disciplined usage of a powerful formal model must rely on formally defined schemas. Several coordination schemas are introduced and formalized. They examine the relationship between Mobile UNITY and other computing models and illustrate the mechanics of employing Mobile UNITY as the basis for a formal semantic characterization of coordination models.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
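The central definition in the abstract above can be phrased as a single transformer inequation (a standard formulation in the predicate-transformer tradition, hedged; the notation is ours and may differ from the paper's):

```latex
% Let A (abstract) and C (concrete) be commands viewed as predicate
% transformers, and let rep be the transformer taking abstract predicates
% to concrete ones (the generalisation of the abstraction function).
% Then C data-refines A via rep when, for every abstract postcondition Q,
\[
  rep\,(wp.A.Q) \;\Rightarrow\; wp.C.(rep\,Q)
\]
% Read: whenever the abstract program is guaranteed to establish Q, the
% concrete program, started from any state representing the same
% information, is guaranteed to establish Q's concrete representation.
```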
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system, resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Procedures and atomicity refinement The introduction of an early return from a (remote) procedure call can increase the degree of parallelism in a parallel or distributed algorithm modeled by an action system. We define a return statement for procedures in an action systems framework and show that it corresponds to carrying out an atomicity refinement.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
Scores (score_0–score_13): 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
A minimum entropy based switched adaptive predictor for lossless compression of images The gradient adjusted predictor (GAP) uses seven fixed slope quantization bins and a predictor is associated with each bin, for prediction of pixels. The slope bin boundary in the same appears to be fixed without employing a criterion function. This paper presents a technique for slope classification that results in slope bins which are optimum for a given set of images. It also presents two techniques that find predictors which are statistically optimal for each of the slope bins. Slope classification and the predictors associated with the slope bins are obtained off-line. To find a representative predictor for a bin, a set of least-squares (LS) based predictors are obtained for all the pixels belonging to that bin. A predictor, from the set of predictors, that results in the minimum prediction error energy is chosen to represent the bin. Alternatively, the predictor is chosen, from the same set, based on minimum entropy as the criterion. Simulation results of the proposed method have shown a significant improvement in the compression performance as compared to the GAP. Computational complexity of the proposed method, excluding the training process, is of the same order as that of GAP.
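The two-stage scheme the abstract describes (classify each pixel's causal context into a slope bin, then fit one linear predictor per bin by least squares) can be sketched as below; the bin edges, neighbourhood, and function names are illustrative assumptions, not the paper's exact construction:

```python
import numpy as np

def causal_neighbors(img, r, c):
    # West, north, north-west, north-east neighbours (a causal context).
    return np.array([img[r, c-1], img[r-1, c], img[r-1, c-1], img[r-1, c+1]],
                    dtype=float)

def local_gradient(img, r, c):
    # Horizontal-minus-vertical activity, in the spirit of GAP's d_h - d_v.
    dh = abs(float(img[r, c-1]) - img[r, c-2]) + abs(float(img[r-1, c]) - img[r-1, c-1])
    dv = abs(float(img[r, c-1]) - img[r-1, c-1]) + abs(float(img[r-1, c]) - img[r-2, c])
    return dh - dv

def train_bin_predictors(img, bin_edges):
    """Collect (context, pixel) pairs per slope bin, then solve one
    least-squares predictor per bin (the off-line training stage)."""
    n_bins = len(bin_edges) + 1
    ctx = [[] for _ in range(n_bins)]
    tgt = [[] for _ in range(n_bins)]
    H, W = img.shape
    for r in range(2, H):
        for c in range(2, W - 1):
            b = int(np.searchsorted(bin_edges, local_gradient(img, r, c)))
            ctx[b].append(causal_neighbors(img, r, c))
            tgt[b].append(float(img[r, c]))
    coefs = []
    for b in range(n_bins):
        if ctx[b]:
            w, *_ = np.linalg.lstsq(np.array(ctx[b]), np.array(tgt[b]), rcond=None)
        else:
            w = np.array([1.0, 0.0, 0.0, 0.0])  # fall back to "predict west"
        coefs.append(w)
    return coefs

# Illustrative use on a random 8-bit image with three slope bins.
img = np.random.default_rng(0).integers(0, 256, size=(64, 64))
coefs = train_bin_predictors(img, bin_edges=[-8.0, 8.0])
```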
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system, resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power-aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the timewise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
Scores (score_0–score_13): 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Rapid prototyping of control systems using high level Petri nets This paper presents a rapid prototyping methodology for the carrying out of control systems in which high level Petri nets provide the common framework to integrate the main phases of software development: specification, validation, performance evaluation, implementation. Petri nets are shown to be translatable into Ada program structures concerning processes and their synchronizations.
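Underlying any such Petri-net methodology is the token game; a minimal sketch of place/transition firing (ordinary nets, not the paper's high-level variant; the mutual-exclusion example is ours):

```python
def enabled(marking, pre):
    """A transition is enabled when every input place holds enough tokens."""
    return all(marking.get(p, 0) >= n for p, n in pre.items())

def fire(marking, pre, post):
    """Fire: consume tokens from input places, produce on output places."""
    assert enabled(marking, pre)
    m = dict(marking)
    for p, n in pre.items():
        m[p] -= n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return m

# Two processes competing for one 'lock' token (mutual exclusion).
marking = {"idle1": 1, "idle2": 1, "lock": 1}
enter1 = ({"idle1": 1, "lock": 1}, {"crit1": 1})
print(enabled(marking, enter1[0]))  # True
print(fire(marking, *enter1))       # lock consumed, process 1 in its critical section
```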
Extending the Entity-Relationship Approach for Dynamic Modeling Purposes
The Software Development System This paper presents a discussion of the Software Development System (SDS), a methodology addressing the problems involved in the development of software for Ballistic Missile Defense systems. These are large, real-time, automated systems with a requirement for high reliability. The SDS is a broad approach attacking problems arising in requirements generation, software design, coding, and testing. The approach is highly requirements oriented and has resulted in the formulation of structuring concepts, a requirements statement language, process design language, and support software to be used throughout the development cycle. This methodology represents a significant advance in software technology for the development of software for a class of systems such as BMD. The support software has been implemented and is undergoing evaluation.
Applications and Extensions of SADT
Petri net-based object-oriented modelling of distributed systems This paper presents an object-oriented approach for building distributed systems. An example taken from the field of computer integrated manufacturing systems is taken as a guideline. According to this approach a system is built up through three steps: control and synchronization aspects for each class of objects are treated first using PROT nets, which are a high-level extension to Petri nets; then data are introduced specifying the internal states of the objects as well as the messages they send each other; finally the connections between the objects are introduced by means of a data flow diagram between classes. The implementation uses ADA as the target language, exploiting its tasking and structuring mechanisms. The flexibility of the approach and the possibility of using a knowledge-based user interface promote rapid prototyping and reusability.
Software Performance Engineering Performance is critical to the success of today's software systems. However, many software products fail to meet their performance objectives when they are initially constructed. Fixing these problems is costly and causes schedule delays, cost overruns, lost productivity, damaged customer relations, missed market windows, lost revenues, and a host of other difficulties. This chapter presents software performance engineering (SPE), a systematic, quantitative approach to constructing software systems that meet performance objectives. SPE begins early in the software development process to model the performance of the proposed architecture and high-level design. The models help to identify potential performance problems when they can be fixed quickly and economically.
Modeling of Distributed Real-Time Systems in DisCo In this paper we describe the addition of metric real time to joint actions, and to the DisCo specification language and tool that are based on them. No new concepts or constructs are needed: time is represented by variables in objects, and action durations are given by action parameters. Thus, we can reason about real-time properties in the same way as about other properties. The scheduling model is unrestricted in the sense that every logically possible computation gets some scheduling. This is more general than maximal parallelism, and the properties proved under it are less sensitive to small changes in timing. Since real time is handled by existing DisCo constructs, the tool with its execution capabilities can be used to simulate and animate also real-time properties of specifications.
An experiment in technology transfer: PAISLey specification of requirements for an undersea lightwave cable system From May to October 1985 members of the Undersea Systems Laboratory and the Computer Technology Research Laboratory of AT&T Bell Laboratories worked together to apply the executable specification language PAISLey to requirements for the “SL” communications system. This paper describes our experiences and answers three questions based on the results of the experiment: Can SL requirements be specified formally in PAISLey? Can members of the SL project learn to read and write specifications in PAISLey? How would the use of PAISLey affect the productivity of the software-development team and the quality of the resulting software?
Petri nets in software engineering The central issue of this contribution is a methodology for the use of nets in practical systems design. We show how nets of channels and agencies allow for a continuous and systematic transition from informal and unprecise to precise and formal specifications. This development methodology leads to the representation of dynamic systems behaviour (using Pr/T-Nets) which is apt to rapid prototyping and formal correctness proofs.
Are knowledge representations the answer to requirement analysis? A clear distinction between a requirement and a specification is crucial to an understanding of how and why knowledge representation techniques can be useful for the requirement stage. A useful distinction is to divide the requirement analysis phase into problem specification and system specification phases. It is argued that it is necessary first to understand what kind of knowledge is in the requirement analysis process before worrying about representational schemes.
Real-time constraints in a rapid prototyping language This paper presents real-time constraints of a prototyping language and some mechanisms for handling these constraints in rapidly prototyping embedded systems. Rapid prototyping of embedded systems can be accomplished using a Computer Aided Prototyping System (CAPS) and its associated Prototyping Language (PSDL) to aid the designer in handling hard real-time constraints. The language models time critical operations with maximum execution times, maximum response times and minimum periods. The mechanisms for expressing timing constraints in PSDL are described along with their meanings relative to a series of hardware models which include multi-processor configurations. We also describe a language construct for specifying the policies governing real-time behavior under overload conditions.
On Diagram Tokens and Types Rejecting the temptation to make up a list of necessary and sufficient conditions for diagrammatic and sentential systems, we present an important distinction which arises from sentential and diagrammatic features of systems. Importantly, the distinction we will explore in the paper lies at a meta-level. That is, we argue for a major difference in meta-theory between diagrammatic and sentential systems, by showing the necessity of a more fine-grained syntax for a diagrammatic system than for a sentential system. Unlike with sentential systems, a diagrammatic system requires two levels of syntax--token and type. Token-syntax is about particular diagrams instantiated on some physical medium, and type-syntax provides a formal definition with which a concrete representation of a diagram must comply. While these two levels of syntax are closely related, the domains of type-syntax and token-syntax are distinct from each other. Euler diagrams are chosen as a case study to illustrate the following major points of the paper: (i) What kinds of diagrammatic features (as opposed to sentential features) require two different levels of syntax? (ii) What is the relation between these two levels of syntax? (iii) What is the advantage of having a two-tiered syntax?
Verification conditions are code This paper presents a new theoretical result concerning Hoare Logic. It is shown here that the verification conditions that support a Hoare Logic program derivation are themselves sufficient to construct a correct implementation of the given pre-, and post-condition specification. This property is mainly of theoretical interest, though it is possible that it may have some practical use, for example if predicative programming methodology is adopted. The result is shown to hold for both the original, partial correctness, Hoare logic, and also a variant for total correctness derivations.
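For a flavour of the verification conditions in question, consider the standard Hoare assignment rule (a textbook example of ours, not drawn from the paper):

```latex
% Hoare assignment axiom: {Q[e/x]} x := e {Q}.
% Deriving {x >= 0} x := x + 1 {x > 0} via the rule of consequence leaves
% the purely logical verification condition
\[
  x \geq 0 \;\Rightarrow\; x + 1 > 0
\]
% The paper's claim is that the collection of such conditions produced by a
% derivation already contains enough information to reconstruct a correct
% implementation of the pre-/post-condition specification.
```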
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
Scores (score_0–score_13): 1.202145, 0.034225, 0.006135, 0.004433, 0.000837, 0.000398, 0.000119, 0.000043, 0.000007, 0.000001, 0, 0, 0, 0
MoMut::UML Model-Based Mutation Testing for UML
Model-based, mutation-driven test case generation via heuristic-guided branching search. This work introduces a heuristic-guided branching search algorithm for model-based, mutation-driven test case generation. The algorithm is designed towards the efficient and computationally tractable exploration of discrete, non-deterministic models with huge state spaces. Asynchronous parallel processing is a key feature of the algorithm. The algorithm is inspired by the successful path planning algorithm Rapidly exploring Random Trees (RRT). We adapt RRT in several aspects towards test case generation. Most notably, we introduce parametrized heuristics for start and successor state selection, as well as a mechanism to construct test cases from the data produced during search. We implemented our algorithm in the existing test case generation framework MoMuT. We present an extensive evaluation of our heuristics and parameters based on a diverse set of demanding models obtained in an industrial context. In total we continuously utilized 128 CPU cores on three servers for two weeks to gather the experimental data presented. Using statistical methods we determine which heuristics are performing well on all models. With our new algorithm, we are now able to process models consisting of over 2300 concurrent objects. To our knowledge there is no other mutation driven test case generation tool that is able to process models of this magnitude.
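The search loop the abstract adapts from RRT can be sketched generically (a hedged skeleton; the parameter names and the grid demo are ours and do not reflect MoMuT's actual API):

```python
import random

def rrt_style_search(initial, successors, distance, sample_target,
                     goal, max_iters=10000, seed=0):
    """RRT-inspired branching search over a discrete model: grow a tree by
    repeatedly steering its closest node toward a randomly sampled target."""
    rng = random.Random(seed)
    parent = {initial: None}
    nodes = [initial]
    for _ in range(max_iters):
        target = sample_target(rng)                  # start-state heuristic
        near = min(nodes, key=lambda s: distance(s, target))
        succ = successors(near)
        if not succ:
            continue
        # Successor-selection heuristic: move toward the sampled target.
        nxt = min(succ, key=lambda s: distance(s, target))
        if nxt not in parent:
            parent[nxt] = near
            nodes.append(nxt)
            if goal(nxt):                            # e.g. "mutant killed"
                trace = [nxt]                        # rebuild the test case
                while parent[trace[-1]] is not None:
                    trace.append(parent[trace[-1]])
                return list(reversed(trace))
    return None

# Toy demo: steer through a 10x10 grid from (0, 0) to (9, 9).
path = rrt_style_search(
    initial=(0, 0),
    successors=lambda s: [(s[0]+dx, s[1]+dy)
                          for dx, dy in [(1, 0), (0, 1), (-1, 0), (0, -1)]
                          if 0 <= s[0]+dx < 10 and 0 <= s[1]+dy < 10],
    distance=lambda a, b: abs(a[0]-b[0]) + abs(a[1]-b[1]),
    sample_target=lambda rng: (rng.randrange(10), rng.randrange(10)),
    goal=lambda s: s == (9, 9))
print(path is not None)  # True: a state trace (the "test case") was found
```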
Model-Based Mutation Testing of an Industrial Measurement Device.
Killing strategies for model-based mutation testing. This article presents the techniques and results of a novel model-based test case generation approach that automatically derives test cases from UML state machines. The main contribution of this article is the fully automated fault-based test case generation technique together with two empirical case studies derived from industrial use cases. Also, an in-depth evaluation of different fault-based test case generation strategies on each of the case studies is given and a comparison with plain random testing is conducted. The test case generation methodology supports a wide range of UML constructs and is grounded on the formal semantics of Back's action systems and the well-known input-output conformance relation. Mutation operators are employed on the level of the specification to insert faults and generate test cases that will reveal the faults inserted. The effectiveness of this approach is shown and it is discussed how to gain a more expressive test suite by combining cheap but undirected random test case generation with the more expensive but directed mutation-based technique. Finally, an extensive and critical discussion of the lessons learnt is given as well as a future outlook on the general usefulness and practicability of mutation-based test case generation.
Automated Conformance Verification of Hybrid Systems Due to the combination of discrete events and continuous behavior, the validation of hybrid systems is a challenging task. Nevertheless, as for other systems, the correctness of such hybrid systems is a major concern. In this paper we present a new approach for verifying the input-output conformance of two hybrid systems. This approach can be used to generate mutation-based test cases. We specify a hybrid system within the framework of Qualitative Action Systems. Here, besides conventional discrete actions, the continuous dynamics of hybrid systems is described with so-called qualitative actions. This paper then shows how labeled transition systems can be used to describe the trace semantics of Qualitative Action Systems. The labeled transition systems are used to verify the conformance between two Qualitative Action Systems. Finally, we present first experimental results on a water tank system.
Towards Symbolic Model-Based Mutation Testing: Pitfalls in Expressing Semantics as Constraints Model-based mutation testing uses altered models to generate test cases that are able to detect whether a certain fault has been implemented in the system under test. For this purpose, we need to check for conformance between the original and the mutated model. We have developed an approach for conformance checking of action systems using constraints. Action systems are well-suited to specify reactive systems and may involve non-determinism. Expressing their semantics as constraints for the purpose of conformance checking is not totally straightforward. This paper presents some pitfalls that hinder the way to a sound encoding of semantics into constraint satisfaction problems and gives solutions for each problem.
Action Systems with Synchronous Communication This paper shows that a simple extension of the action systems framework, adding procedure declarations to action systems, will give us a very general mechanism for synchronized communication between action systems. Both actions and procedure bodies are guarded commands. When an action in one action system calls a procedure in another action system, the effect is that of a remote procedure call. The calling action and the procedure body involved in the call are executed as a single atomic...
Object-oriented modeling and design
Hierarchical correctness proofs for distributed algorithms This thesis introduces a new model for distributed computation in asynchronous networks, the input-output automaton. This simple, powerful model captures in a novel way the game-theoretical interaction between a system and its environment, and allows fundamental properties of distributed computation such as fair computation to be naturally expressed. Furthermore, this model can be used to construct modular, hierarchical correctness proofs of distributed algorithms. This thesis defines the input-output automaton model, and presents an interesting example of how this model can be used to construct such proofs.
Multistage negotiation for distributed constraint satisfaction A cooperation paradigm and coordination protocol for a distributed planning system consisting of a network of semi-autonomous agents with limited internode communication and no centralized control is presented. A multistage negotiation paradigm for solving distributed constraint satisfaction problems in this kind of system has been developed. The strategies presented enable an agent in a distributed planning system to become aware of the extent to which its own local decisions may have adverse nonlocal impact in planning. An example problem is presented in the context of transmission path restoration for dedicated circuits in a communications network. Multistage negotiation provides an agent with sufficient information about the impact of local decisions on a nonlocal state so that the agent may make local decisions that are correct from a global perspective, without attempting to provide a complete global state to all agents. Through multistage negotiation, an agent is able to recognize when a set of global goals cannot be satisfied, and is able to solve a related problem by finding a way of satisfying a reduced set of goals
On the Lattice of Specifications: Applications to a Specification Methodology In this paper we investigate the lattice properties of the natural ordering between specifications, which expresses that one specification imposes a stronger requirement than another. The lattice-like structure that we uncover is used as a basis for a specification methodology.
Visual Formalisms Revisited The development of an interactive application is a complex task that has to consider data, behavior, intercommunication, architecture and distribution aspects of the modeled system. In particular, it presupposes the successful communication between the customer and the software expert. To enhance this communication most modern software engineering methods recommend specifying the different aspects of a system by visual formalisms. In essence, visual specifications are directed graphs that are interpreted in a particular way for each aspect of the system. They are also intended to be compositional. This means that each node can itself be a graph with a separate meaning. However, the lack of a denotational model for hierarchical graphs often leads to the loss of compositionality. This has severe negative consequences in the development of realistic applications. In this paper we present a simple denotational model (which is by definition compositional) for the architecture and behavior aspects of a system. This model is then used to give a semantics to almost all the concepts occurring in ROOM. Our model also provides a compositional semantics for or-states in statecharts.
Miro: Visual Specification of Security Miro is a set of languages and tools that support the visual specification of file system security. Two visual languages are presented: the instance language, which allows specification of file system access, and the constraint language, which allows specification of security policies. Miro visual languages and tools are used to specify security configurations. A visual language is one whose entities are graphical, such as boxes and arrows, specifying means stating independently of any implementation the desired properties of a system. Security means file system protection: ensuring that files are protected from unauthorized access and granting privileges to some users, but not others. Tools implemented and examples of how these languages can be applied to real security specification problems are described.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
1.071111
0.066667
0.044089
0.043556
0.005429
0.001067
0.000026
0
0
0
0
0
0
0
Acquiring Temporal Knowledge from Schedules This paper presents an economical algorithm for generating conceptual graphs from schedules and timing diagrams. The graphs generated are based on activity concepts associated with intervals of the schedules. The temporal conceptual relations selected here are drawn from both interval and endpoint temporal logics in order to minimize the complexity of the generated graphs to no more than k(n–1) temporal relations in a schedule of n intervals over k timelines (resources). Temporal reasoning and consistency checking in terms of the selected temporal relations are briefly reviewed.
Conceptual Structures: Current Practices, Second International Conference on Conceptual Structures, ICCS '94, College Park, Maryland, USA, August 16-20, 1994, Proceedings
Toward synthesis from English descriptions This paper reports on a research project to design a system for automatically interpreting English specifications of digital systems in terms of design representation formalisms currently employed in CAD systems. The necessary processes involve the machine analysis of English and the synthesis of models from the specifications. The approach being investigated is interactive and consists of syntactic scanning, semantic analysis, interpretation generation, and model integration.
Polynomial Algorithms for Projection and Matching The main purpose of this paper is to develop polynomial algorithms for the projection and matching operations on conceptual graphs. Since all interesting problems related to these operations are at least NP-complete — we will consider here the exhibition of a solution and counting the solutions — we propose to explore polynomial cases by restricting the form of the graphs or relaxing constraints on the operations. We examine the particular conceptual graphs whose underlying structure is a tree. Besides general or injective projections, we define intermediary kinds of projections. We then show how these notions can be extended to matchings.
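As a hedged illustration of why tree-shaped patterns make general (non-injective) projection tractable, the sketch below decides in polynomial time whether a rooted, labeled tree pattern projects into a target graph. It is not the paper's exact algorithm; the label order `order` and all names are assumptions for the example.

    def tree_projection_exists(root, children, target, plabel, tlabel, order):
        """Decide whether a rooted tree pattern projects (maps
        homomorphically, with labels weakening along `order`) into a
        target graph. Memoization makes each (pattern node, target node)
        pair cost O(degree), hence polynomial overall."""
        from functools import lru_cache

        @lru_cache(maxsize=None)
        def match(u, v):
            if not order(tlabel[v], plabel[u]):   # v's label must specialize u's
                return False
            # Children may share images (no injectivity), so each child
            # subtree is matched independently of its siblings.
            return all(any(match(c, w) for w in target[v])
                       for c in children[u])

        return any(match(root, v) for v in target)

    # Tiny usage example with string labels and equality as the order:
    children = {'p0': ['p1'], 'p1': []}            # pattern: p0 -> p1
    target = {'g0': ['g1', 'g2'], 'g1': ['g0'], 'g2': ['g0']}
    plabel = {'p0': 'Cat', 'p1': 'Mat'}
    tlabel = {'g0': 'Cat', 'g1': 'Mat', 'g2': 'Dog'}
    print(tree_projection_exists('p0', children, target, plabel, tlabel,
                                 lambda a, b: a == b))   # True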
Implementing a semantic interpreter using conceptual graphs
Conceptual Structures: Standards and Practices, 7th International Conference on Conceptual Structures, ICCS '99, Blacksburg, Virginia, USA, July 12-15, 1999, Proceedings
Relating Diagrams to Logic Although logic is general enough to describe anything that can be implemented on a digital computer, the unreadability of predicate calculus makes it unpopular as a design language. Instead, many graphic notations have been developed, each for a narrow range of purposes. Conceptual graphs are a graphic system of logic that is as general as predicate calculus, but they are as readable as the special-purpose diagrams. In fact, many popular diagrams can be viewed as special cases of conceptual graphs: type hierarchies, entity-relationship diagrams, parse trees, dataflow diagrams, flow charts, state-transition diagrams, and Petri nets. This paper shows how such diagrams can be translated to conceptual graphs and thence into other systems of logic, such as the Knowledge Interchange Format (KIF).
Guest Editor's Introduction: Knowledge-Management Systems-Converting and Connecting
Verifying task-based specifications in conceptual graphs A conceptual model is a model of real world concepts and application domains as perceived by users and developers. It helps developers investigate and represent the semantics of the problem domain, as well as communicate among themselves and with users. In this paper, we propose the use of task-based specifications in conceptual graphs (TBCG) to construct and verify a conceptual model. Task-based specification methodology is used to serve as the mechanism to structure the knowledge captured in the conceptual model; whereas conceptual graphs are adopted as the formalism to express task-based specifications and to provide a reasoning capability for the purpose of verification. Verifying a conceptual model is performed on model specifications of a task through constraints satisfaction and relaxation techniques, and on process specifications of the task based on operators and rules of inference inherent in conceptual graphs.
Aspects of applicative programming for file systems (Preliminary Version) This paper develops the implications of recent results in semantics for applicative programming. Applying suspended evaluation (call-by-need) to the arguments of file construction functions results in an implicit synchronization of computation and output. The programmer need not participate in the determination of the pace and the extent of the evaluation of his program. Problems concerning multiple input and multiple output files are considered: typical behavior is illustrated with an example of a rudimentary text editor written applicatively. As shown in the trace of this program, the driver of the program is the sequential output device(s). Implications of applicative languages for I/O bound operating systems are briefly considered.
Towards a Deeper Understanding of Quality in Requirements Engineering The notion of quality in requirements specifications is poorly understood, and in most literature only bread and butter lists of useful properties have been provided. However, the recent frameworks of Lindland et al. and Pohl have tried to take a more systematic approach. In this paper, these two frameworks are reviewed and compared. Although they have different outlook, their deeper structures are not contradictory.
Checking Java Programs via Guarded Commands This paper defines a simple guarded-command-like language and its semantics. The language is used as an intermediate language in generating verification conditions for Java. The paper discusses why it is a good idea to generate verification conditions via an intermediate language, rather than directly. Publication history: This paper appears in Formal Techniques for Java Programs, workshop proceedings. Bart Jacobs, Gary T. Leavens, Peter Muller, and Arnd Poetzsch-Heffter, editors. Technical ...
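Since the abstract describes generating verification conditions via a guarded-command-like intermediate language, here is a hedged sketch of the standard Dijkstra-style weakest-precondition rules such a pipeline rests on. This is not the paper's actual implementation; formulas are plain strings and the substitution is deliberately naive (it would also rewrite occurrences of x inside longer identifiers).

    # A tiny weakest-precondition generator over a guarded-command-like
    # core: assignment, assert, assume, sequencing, nondeterministic choice.
    def wp(cmd, post):
        kind = cmd[0]
        if kind == 'assign':              # ('assign', x, e): substitute e for x
            _, x, e = cmd
            return post.replace(x, f'({e})')
        if kind == 'assert':              # ('assert', p): p must hold here
            return f'({cmd[1]}) && ({post})'
        if kind == 'assume':              # ('assume', p): p may be assumed
            return f'({cmd[1]}) ==> ({post})'
        if kind == 'seq':                 # ('seq', c1, c2): compose right to left
            return wp(cmd[1], wp(cmd[2], post))
        if kind == 'choice':              # ('choice', c1, c2): both branches
            return f'({wp(cmd[1], post)}) && ({wp(cmd[2], post)})'
        raise ValueError(kind)

    prog = ('seq', ('assume', 'x >= 0'), ('assign', 'x', 'x + 1'))
    print(wp(prog, 'x > 0'))   # (x >= 0) ==> ((x + 1) > 0)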
Using a structured design approach to reduce risks in end user spreadsheet development Computations performed using end-user developed spreadsheets have resulted in serious errors and represent a major control risk to organizations. Literature suggests that factors contributing to spreadsheet errors include developer inexperience, poor design approaches, application types, problem complexity, time pressure, and presence or absence of review procedures. We explore the impact of using a structured design approach for spreadsheet development. We used two field experiments and found that subjects using the design approach showed a significant reduction in the number of ‘linking errors,’ i.e., mistakes in creating links between values that must connect one area of the spreadsheet to another or from one worksheet to another in a common workbook. Our results provide evidence that design approaches that explicitly identify potential error factors may improve end-user application reliability. We also observed that factors such as gender, application expertise, and workgroup configuration also influenced spreadsheet error rates.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.071329
0.03532
0.027022
0.018566
0.008602
0.002565
0.000453
0.000007
0.000001
0
0
0
0
0
Arithmetic coding using hierarchical dependency context model for H.264/AVC video coding In this paper, a hierarchical dependency context model (HDCM) is firstly proposed to exploit the statistical correlations of DCT (Discrete Cosine Transform) coefficients in H.264/AVC video coding standard, in which the number of non-zero coefficients in a DCT block and the scanned position are used to capture the magnitude varying tendency of DCT coefficients. Then a new binary arithmetic coding using hierarchical dependency context model (HDCMBAC) is proposed. HDCMBAC associates HDCM with binary arithmetic coding to code the syntax elements for a DCT block, which consist of the number of non-zero coefficients, significant flag and level information. Experimental results demonstrate that HDCMBAC can achieve similar coding performance as CABAC at low and high QPs (quantization parameter). Meanwhile the context modeling and the arithmetic decoding in HDCMBAC can be carried out in parallel, since the context dependency only exists among different parts of basic syntax elements in HDCM.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
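A worked statement of the idea in the abstract may help; the following is one standard formulation (a hedged reconstruction, not a quotation from the paper). With programs taken to be predicate transformers and $\rho$ the transformer taking abstract predicates to concrete ones, data refinement of abstract program $A$ by concrete program $C$ can be phrased as

    A \sqsubseteq_{\rho} C
      \;\iff\;
    \forall Q.\;\; \rho\,(A(Q)) \;\Rightarrow\; C(\rho\,(Q)),
    \qquad\text{i.e.}\qquad
    \rho \circ A \;\sqsubseteq\; C \circ \rho .

Read pointwise: every abstract guarantee, once translated through $\rho$, must be met by the concrete program. Proofs then proceed without a separate abstraction-function obligation, which is the simplification the abstract claims.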
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
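The toy sketch below shows the shape of a first-level tabu search for the multiconstraint knapsack instances mentioned above: a single-flip neighbourhood, a recency-based tabu list, and an aspiration criterion that overrides tabu status when a move beats the best solution so far. The paper's specialized choice rules, probabilistic measures, and Target Analysis are not reproduced; all parameters are illustrative.

    import random

    def tabu_knapsack(values, weights, caps, iters=500, tenure=7, seed=0):
        """Maximize sum(values[i]*x[i]) subject to, for each constraint k,
        sum(weights[k][i]*x[i]) <= caps[k], with x[i] in {0, 1}."""
        rnd = random.Random(seed)
        n = len(values)
        x = [0] * n
        tabu = {}                    # item -> iteration until which it is tabu

        def feasible(sol):
            return all(sum(w[i] * sol[i] for i in range(n)) <= c
                       for w, c in zip(weights, caps))

        def value(sol):
            return sum(values[i] * sol[i] for i in range(n))

        best, best_val = x[:], 0
        for t in range(iters):
            cand = None
            for i in rnd.sample(range(n), n):     # single-flip neighbourhood
                y = x[:]
                y[i] ^= 1
                if not feasible(y):
                    continue
                v = value(y)
                aspire = v > best_val             # aspiration overrides tabu
                if (tabu.get(i, -1) < t or aspire) and (cand is None or v > cand[1]):
                    cand = (y, v, i)
            if cand is None:
                break
            x, v, i = cand                        # may be a worsening move
            tabu[i] = t + tenure
            if v > best_val:
                best, best_val = x[:], v
        return best, best_val

    vals = [10, 7, 3, 9]
    wts = [[4, 2, 1, 5], [3, 3, 2, 2]]            # two knapsack constraints
    print(tabu_knapsack(vals, wts, caps=[8, 7]))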
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the timewise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Modelling Hybrid Train Speed Controller using Proof and Refinement The modern radio-based railway signalling systems aim to increase the network's capacity by enabling trains to run closer to each other. At the core of such systems is the train's on-board computer (discrete), which is responsible for computing and controlling the speed (continuous) of the train. Such systems are best captured by hybrid models, which combine the discrete and continuous aspects of a system. Hybrid models are notoriously difficult to model and verify; in our research we address this problem by applying hybrid systems' modelling patterns and stepwise refinement to develop a hybrid train speed controller model.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the timewise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Parameter-Induced Aliasing and Related Problems can be Avoided Aliasing is an old but still unsolved problem, and it is disadvantageous for most aspects of programming languages. We suggest a new model for variables which avoids aliasing by maintaining the property of always having exactly one access path to a variable. In particular, variables have no address. Based on this model, we develop language rules which can be checked in local context and we suggest programming guidelines to prevent alias effects in Ada 95 programs.
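The phenomenon targeted above is easy to reproduce in any language where two parameters can denote one variable; the Python fragment below is a hypothetical illustration, not taken from the paper. With distinct argument lists the procedure meets its intended postcondition, while an aliased call silently violates it, because two access paths lead to the same variable.

    def shift(src, dst):
        """Intended postcondition: dst[i] == src[i-1] for i >= 1,
        stated as if src and dst were independent variables."""
        for i in range(1, len(dst)):
            dst[i] = src[i - 1]

    a, b = [1, 2, 3], [0, 0, 0]
    shift(a, b)
    print(b)        # [0, 1, 2] -- as intended

    c = [1, 2, 3]
    shift(c, c)     # aliased call: src and dst are the same list
    print(c)        # [1, 1, 1] -- the write at i feeds the read at i+1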
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the timewise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
On Diagram Tokens and Types Rejecting the temptation to make up a list of necessary and sufficient conditions for diagrammatic and sentential systems, we present an important distinction which arises from sentential and diagrammatic features of systems. Importantly, the distinction we will explore in the paper lies at a meta-level. That is, we argue for a major difference in meta-theory between diagrammatic and sentential systems, by showing the necessity of a more fine-grained syntax for a diagrammatic system than for a sentential system. Unlike with sentential systems, a diagrammatic system requires two levels of syntax--token and type. Token-syntax is about particular diagrams instantiated on some physical medium, and type-syntax provides a formal definition with which a concrete representation of a diagram must comply. While these two levels of syntax are closely related, the domains of type-syntax and token-syntax are distinct from each other. Euler diagrams are chosen as a case study to illustrate the following major points of the paper: (i) What kinds of diagrammatic features (as opposed to sentential features) require two different levels of syntax? (ii) What is the relation between these two levels of syntax? (iii) What is the advantage of having a two-tiered syntax?
A visual framework for modelling with heterogeneous notations This paper presents a visual framework for organizing models of systems which allows a mixture of notations, diagrammatic or text-based, to be used. The framework is based on the use of templates which can be nested and sometimes flattened. It is modular and can be used to structure the constraint space of the system, making it scalable with appropriate tool support. It is also flexible and extensible: users can choose which notations to use, mix them and add new notations or templates. The goal of this work is to provide more intuitive and expressive languages and frameworks to support the construction and presentation of rich and precise models.
Nesting in Euler Diagrams: syntax, semantics and construction This paper considers the notion of nesting in Euler diagrams, and how nesting affects the interpretation and construction of such diagrams. After setting up the necessary definitions for concrete Euler diagrams (drawn in the plane) and abstract diagrams (having just formal structure), the notion of nestedness is defined at both concrete and abstract levels. The concept of a dual graph is used to give an alternative condition for a drawable abstract Euler diagram to be nested. The natural progression to the diagram semantics is explored and we present a “nested form” for diagram semantics. We describe how this work supports tool-building for diagrams, and how effective we might expect this support to be in terms of the proportion of nested diagrams.
Towards a Formalization of Constraint Diagrams Geared to complement UML and to the specification of large software systems by non-mathematicians, constraint diagrams are a visual language that generalizes the popular and intuitive Venn diagrams and Euler circles, and adds facilities for quantifying over elements and navigating relations. The language design emphasizes scalability and expressiveness while retaining intuitiveness. Spider diagrams form a subset of the notation, leaving out universal quantification and the ability to navigate relations. Spider diagrams have been given a formal definition. This paper extends that definition to encompass the constraint diagram notation. The formalization of constraint diagrams is nontrivial: it exposes subtleties concerned with the implicit ordering of symbols in the visual language, which were not evident before a formal definition of the language was attempted. This has led to an improved design of the language
Drawing graphs nicely using simulated annealing The paradigm of simulated annealing is applied to the problem of drawing graphs “nicely.” Our algorithm deals with general undirected graphs with straight-line edges, and employs several simple criteria for the aesthetic quality of the result. The algorithm is flexible, in that the relative weights of the criteria can be changed. For graphs of modest size it produces good results, competitive with those produced by other methods, notably, the “spring method” and its variants.
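A hedged sketch of the annealing loop described above follows; the weights, preferred edge length, and cooling schedule are invented for the example, and the paper's full set of aesthetic criteria is not reproduced.

    import math, random

    def sa_layout(nodes, edges, iters=5000, t0=1.0, cool=0.999, seed=0):
        """Toy simulated-annealing layout: perturb one node at a time and
        accept worsening moves with probability exp(-delta/T)."""
        rnd = random.Random(seed)
        pos = {v: (rnd.random(), rnd.random()) for v in nodes}

        def energy(p):
            e = 0.0
            for u, v in edges:                      # uniform edge lengths
                d = math.dist(p[u], p[v])
                e += (d - 0.3) ** 2                 # preferred length 0.3
            vs = list(nodes)
            for i in range(len(vs)):                # node-node repulsion
                for j in range(i + 1, len(vs)):
                    d = math.dist(p[vs[i]], p[vs[j]]) + 1e-9
                    e += 0.01 / d
            return e

        t, cur = t0, energy(pos)
        for _ in range(iters):
            v = rnd.choice(list(nodes))
            old = pos[v]
            pos[v] = (old[0] + rnd.gauss(0, 0.05), old[1] + rnd.gauss(0, 0.05))
            new = energy(pos)
            if new > cur and rnd.random() >= math.exp((cur - new) / t):
                pos[v] = old                        # reject worsening move
            else:
                cur = new
            t *= cool                               # geometric cooling
        return pos

    print(sa_layout('abcd', [('a', 'b'), ('b', 'c'), ('c', 'd'), ('d', 'a')]))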
Randomized graph drawing with heavy-duty preprocessing We present a graph drawing system for general undirected graphs with straight-line edges. It carries out a rather complex set of preprocessing steps, designed to produce a topologically good, but not necessarily nice-looking layout, which is then subjected to Davidson and Harel's simulated annealing beautification algorithm. The intermediate layout is planar for planar graphs and attempts to come close to planar for nonplanar graphs. The system's results are significantly better, and much faster, than what the annealing approach is able to achieve on its own.
Statecharts: A visual formalism for complex systems We present a broad extension of the conventional formalism of state machines and state diagrams, that is relevant to the specification and design of complex discrete-event systems, such as multi-computer real-time systems, communication protocols and digital control units. Our diagrams, which we call statecharts, extend conventional state-transition diagrams with essentially three elements, dealing, respectively, with the notions of hierarchy, concurrency and communication. These transform the language of state diagrams into a highly structured and economical description language. Statecharts are thus compact and expressive (small diagrams can express complex behavior) as well as compositional and modular. When coupled with the capabilities of computerized graphics, statecharts enable viewing the description at different levels of detail, and make even very large specifications manageable and comprehensible. In fact, we intend to demonstrate here that statecharts counter many of the objections raised against conventional state diagrams, and thus appear to render specification by diagrams an attractive and plausible approach. Statecharts can be used either as a stand-alone behavioral description or as part of a more general design methodology that deals also with the system's other aspects, such as functional decomposition and data-flow specification. We also discuss some practical experience that was gained over the last three years in applying the statechart formalism to the specification of a particularly complex system.
Object Interaction in Object-Oriented Deductive Conceptual Models We present the main components of an object-oriented deductive approach to conceptual modelling of information systems. This approach does not model object interaction explicitly. However interaction among objects can be derived by means of a formal procedure that we outline.
The Software Development System This paper presents a discussion of the Software Development System (SDS), a methodology addressing the problems involved in the development of software for Ballistic Missile Defense systems. These are large, real-time, automated systems with a requirement for high reliability. The SDS is a broad approach attacking problems arising in requirements generation, software design, coding, and testing. The approach is highly requirements oriented and has resulted in the formulation of structuring concepts, a requirements statement language, process design language, and support software to be used throughout the development cycle. This methodology represents a significant advance in software technology for the development of software for a class of systems such as BMD. The support software has been implemented and is undergoing evaluation.
The Draco Approach to Constructing Software from Reusable Components This paper discusses an approach called Draco to the construction of software systems from reusable software parts. In particular we are concerned with the reuse of analysis and design information in addition to programming language code. The goal of the work on Draco has been to increase the productivity of software specialists in the construction of similar systems. The particular approach we have taken is to organize reusable software components by problem area or domain. Statements of programs in these specialized domains are then optimized by source-to-source program transformations and refined into other domains. The problems of maintaining the representational consistency of the developing program and producing efficient practical programs are discussed. Some examples from a prototype system are also given.
Compound brushing explained This paper proposes a conceptual model called compound brushing for modeling the brushing techniques used in dynamic data visualization. In this approach, brushing techniques are modeled as higraphs with five types of basic entities: data, selection, device, renderer, and transformation. Using this model, a flexible visual programming tool is designed not only to configure and control various common types of brushing techniques currently used in dynamic data visualization but also to investigate new brushing techniques.
Fuzzy logic as a basis for reusing task‐based specifications
LANSF: a protocol modelling environment and its implementation LANSF is a software package that was originally designed as a tool to investigate the behaviour of medium access control (MAC) level protocols. These protocols form an interesting class of distributed computations: timing of events is the key factor in them. The protocol definition language of LANSF is based on C, and protocols are specified (programmed) as collections of communicating, interrupt-driven processes. These specifications are executable: an event-driven emulator of MAC-level communication phenomena forms the foundation of the implementation. Some tools for debugging, testing, and validation of protocol specifications are provided. We present key features of LANSF at the syntactic level, comment informally on the semantics of these features, and highlight some implementation issues. A complete example of a LANSF application is discussed in the Appendix.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.109726
0.109319
0.02384
0.021868
0.002827
0.000763
0.000048
0.000001
0
0
0
0
0
0
A general evaluation measure for document organization tasks A number of key Information Access tasks -- Document Retrieval, Clustering, Filtering, and their combinations -- can be seen as instances of a generic document organization problem that establishes priority and relatedness relationships between documents (in other words, a problem of forming and ranking clusters). As far as we know, no analysis has been made yet on the evaluation of these tasks from a global perspective. In this paper we propose two complementary evaluation measures -- Reliability and Sensitivity -- for the generic Document Organization task which are derived from a proposed set of formal constraints (properties that any suitable measure must satisfy). In addition to being the first measures that can be applied to any mixture of ranking, clustering and filtering tasks, Reliability and Sensitivity satisfy more formal constraints than previously existing evaluation metrics for each of the subsumed tasks. Besides their formal properties, their most salient feature from an empirical point of view is their strictness: a high score according to the harmonic mean of Reliability and Sensitivity ensures a high score with any of the most popular evaluation metrics in all the Document Retrieval, Clustering and Filtering datasets used in our experiments.
A Formal Approach to Effectiveness Metrics for Information Access: Retrieval, Filtering, and Clustering. In this tutorial we present a formal account of evaluation metrics for three of the most salient information related tasks: Retrieval, Clustering, and Filtering. We focus on the most popular metrics and, by exploiting measurement theory, we show some constraints for suitable metrics in each of the three tasks. We also systematically compare metrics according to how they satisfy such constraints, we provide criteria to select the most adequate metric for each specific information access task, and we discuss how to combine and weight metrics.
Axiometrics: Axioms of Information Retrieval Effectiveness Metrics. There are literally dozens (most likely more than one hundred) information retrieval effectiveness metrics, and counting, but a common, general, and formal understanding of their properties is still missing. In this paper we aim at improving and extending the recently published work by Busin and Mizzaro [6]. That paper proposes an axiomatic approach to Information Retrieval (IR) effectiveness metrics, and more in detail: (i) it defines a framework based on the notions of measure, measurement, and similarity; (ii) it provides a general definition of IR effectiveness metric; and (iii) it proposes a set of axioms that every effectiveness metric should satisfy. Here we build on their work and more specifically: we design a different and improved set of axioms, we provide a definition of some common metrics, and we derive some theorems from the axioms.
Sentiment Analysis and Topic Detection of Spanish Tweets: A Comparative Study of NLP Techniques.
A comparison of extrinsic clustering evaluation metrics based on formal constraints There is a wide set of evaluation metrics available to compare the quality of text clustering algorithms. In this article, we define a few intuitive formal constraints on such metrics which shed light on which aspects of the quality of a clustering are captured by different metric families. These formal constraints are validated in an experiment involving human assessments, and compared with other constraints proposed in the literature. Our analysis of a wide range of metrics shows that only BCubed satisfies all formal constraints. We also extend the analysis to the problem of overlapping clustering, where items can simultaneously belong to more than one cluster. As Bcubed cannot be directly applied to this task, we propose a modified version of Bcubed that avoids the problems found with other metrics.
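For reference, here is a minimal sketch of standard BCubed precision and recall for a hard (non-overlapping) clustering; the extended version for overlapping clustering proposed in the article is not reproduced here.

    def bcubed(clusters, labels):
        """BCubed precision/recall for a hard clustering.
        clusters: item -> cluster id; labels: item -> gold category."""
        items = list(clusters)

        def prec(i):
            # Fraction of i's cluster that shares i's gold category.
            cl = [j for j in items if clusters[j] == clusters[i]]
            return sum(labels[j] == labels[i] for j in cl) / len(cl)

        def rec(i):
            # Fraction of i's gold category that shares i's cluster.
            cat = [j for j in items if labels[j] == labels[i]]
            return sum(clusters[j] == clusters[i] for j in cat) / len(cat)

        p = sum(prec(i) for i in items) / len(items)
        r = sum(rec(i) for i in items) / len(items)
        f = 2 * p * r / (p + r) if p + r else 0.0
        return p, r, f

    clusters = {'a': 1, 'b': 1, 'c': 2, 'd': 2}
    labels   = {'a': 'x', 'b': 'x', 'c': 'x', 'd': 'y'}
    print(bcubed(clusters, labels))   # (0.75, 0.666..., 0.705...)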
Overview of RepLab 2014: Author Profiling and Reputation Dimensions for Online Reputation Management.
Axiomatic Thinking for Information Retrieval: And Related Tasks This is the first workshop on the emerging interdisciplinary research area of applying axiomatic thinking to information retrieval (IR) and related tasks. The workshop aims to help foster collaboration of researchers working on different perspectives of axiomatic thinking and encourage discussion and research on general methodological issues related to applying axiomatic thinking to IR and related tasks.
Distributed Representations of Words and Phrases and their Compositionality. The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several extensions that improve both the quality of the vectors and the training speed. By subsampling of the frequent words we obtain significant speedup and also learn more regular word representations. We also describe a simple alternative to the hierarchical softmax called negative sampling. An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of "Canada" and "Air" cannot be easily combined to obtain "Air Canada". Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible.
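The negative-sampling objective this abstract describes can be written compactly for a single (center, context) pair. The following is a hedged illustration; the vector dimensionality, the noise sampler, and the variable names are assumptions, not the paper's reference code:

```python
import numpy as np

def sgns_loss(v_center, v_context, v_negatives):
    """Skip-gram negative-sampling loss for one training pair.

    Rewards a high score sigma(v_context . v_center) for the observed
    pair and low scores for k sampled noise words; returned here as a
    loss (negated log-likelihood) to be minimized.
    """
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    loss = -np.log(sigmoid(v_context @ v_center))
    for v_n in v_negatives:
        loss -= np.log(sigmoid(-(v_n @ v_center)))
    return loss

rng = np.random.default_rng(0)
d = 8  # toy embedding dimension
print(sgns_loss(rng.normal(size=d), rng.normal(size=d),
                [rng.normal(size=d) for _ in range(5)]))
```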
Feedback stabilization of some event graph models The authors introduce several notions of stability for event graph models, timed or not. The stability is similar to the boundedness notion for Petri nets. The event graph models can be controlled by an output feedback which takes information from some observable transitions and can disable some controllable transitions. The controller itself is composed of an event graph. In this framework the authors solve the corresponding stabilization problems, i.e., they ask whether such a controller can prevent the explosion of the number of tokens.
An Effective Implementation for the Generalized Input-Output Construct of CSP
The transformation schema: An extension of the data flow diagram to represent control and timing The data flow diagram has been extensively used to model the data transformation aspects of proposed systems. However, previous definitions of the data flow diagram have not provided a comprehensive way to represent the interaction between the timing and control aspects of a system and its data transformation behavior. This paper describes an extension of the data flow diagram called the transformation schema. The transformation schema provides a notation and formation rules for building a comprehensive system model, and a set of execution rules to allow prediction of the behavior over time of a system modeled in this way. The notation and formation rules allow depiction of a system as a network of potentially concurrent “centers of activity” (transformations), and of data repositories (stores), linked by communication paths (flows). The execution rules provide a qualitative prediction rather than a quantitative one, describing the acceptance of inputs and the production of outputs by the transformations but not input and output values. The transformation schema permits the creation and evaluation of two different types of system models. In the essential (requirements) model, the schema is used to represent a virtual machine with infinite resources. The elements of the schema depict idealized processing and memory components. In the implementation model, the schema is used to represent a real machine with limited resources, and the results of the execution predict the behavior of an implementation of requirements. The transformations of the schema can depict software running on digital processors, hard-wired digital or analog circuits, and so on, and the stores of the schema can depict disk files, tables in memory, and so on.
A Graphical Query Language Based on an Extended E-R Model
The requirements apprentice: an initial scenario The implementation of the Requirements Apprentice has reached the point where it is possible to exhibit a concrete scenario showing the intended basic capabilities of the system. The Requirements Apprentice accepts ambiguous, incomplete, and inconsistent input from a requirements analyst and assists the analyst in creating and validating a coherent requirements description. This processing is supported by a general-purpose reasoning system and a library of requirements cliches that contains reusable descriptions of standard concepts used in requirements.
Use of symmetry in prediction-error field for lossless compression of 3D MRI images Three-dimensional MRI images, which are powerful tools for diagnosis of many diseases, require large storage space. A number of lossless compression schemes exist for this purpose. In this paper we propose a new approach for lossless compression of these images which exploits the inherent symmetry that exists in 3D MRI images. First, an efficient pixel prediction scheme is used to remove correlation between pixel values in an MRI image. Then a block matching routine is employed to take advantage of the symmetry within the prediction error image. Inter-slice correlations are eliminated using another block matching. Results of the proposed approach are compared with the existing standard compression techniques.
1.00434
0.005391
0.004913
0.004865
0.004568
0.004451
0.003774
0.000997
0
0
0
0
0
0
Reasoning in Higraphs with Loose Edges Harel introduces the notion of zooming out as a useful operation in working with higraphs. Zooming out allows us to consider less detailed versions of a higraph by dropping some detail from the description in a structured manner. Although this is a very useful operation, it seems it can be misleading in some circumstances by allowing the user of the zoomed-out higraph to make false inferences given the usual transition system semantics for higraphs. We consider one approach to rectifying this situation by following through Harel's suggestion that, in some circumstances, it may be useful to consider higraphs with edges that have no specific origin or destination. We call these higraphs loose higraphs and show that an appropriate definition of zooming on loose higraphs avoids some of the difficulties arising from the use of zooming. We also consider a logic for connectivity in loose higraphs.
Visual Formalisms Revisited The development of an interactive application is a complex task that has to consider data, behavior, intercommunication, architecture and distribution aspects of the modeled system. In particular, it presupposes the successful communication between the customer and the software expert. To enhance this communication most modern software engineering methods recommend to specify the different aspects of a system by visual formalisms. In essence, visual specifications are directed graphs that are interpreted in a particular way for each aspect of the system. They are also intended to be compositional. This means that each node can itself be a graph with a separate meaning. However, the lack of a denotational model for hierarchical graphs often leads to the loss of compositionality. This has severe negative consequences in the development of realistic applications. In this paper we present a simple denotational model (which is by definition compositional) for the architecture and behavior aspects of a system. This model is then used to give a semantics to almost all the concepts occurring in ROOM. Our model also provides a compositional semantics for or-states in statecharts.
Zooming-out on Higraph-based diagrams - Syntactic and Semantic Issues Computing system representations based on Harel's notion of hierarchical graph, or higraph, have become popular since the invention of Statecharts. Such hierarchical representations support a useful filtering operation, called “zooming-out”, which is used to manage the level of detail presented to the user designing or reasoning about a large and complex system. In the framework of (lightweight) category theory, we develop the mathematics of zooming out for higraphs with loose edges, formalise the transition semantics of such higraphs and conduct an analysis of the effect the operation of zooming out has on the semantic interpretations, as required for the soundness of reasoning arguments depending on zoom-out steps.
Towards a Formalization of Constraint Diagrams Geared to complement UML and to the specification of large software systems by non-mathematicians, constraint diagrams are a visual language that generalizes the popular and intuitive Venn diagrams and Euler circles, and adds facilities for quantifying over elements and navigating relations. The language design emphasizes scalability and expressiveness while retaining intuitiveness. Spider diagrams form a subset of the notation, leaving out universal quantification and the ability to navigate relations. Spider diagrams have been given a formal definition. This paper extends that definition to encompass the constraint diagram notation. The formalization of constraint diagrams is nontrivial: it exposes subtleties concerned with the implicit ordering of symbols in the visual language, which were not evident before a formal definition of the language was attempted. This has led to an improved design of the language.
Formalizing spider diagrams Geared to complement UML and to the specification of large software systems by non-mathematicians, spider diagrams are a visual language that generalizes the popular and intuitive Venn diagrams and Euler circles. The language design emphasized scalability and expressiveness while retaining intuitiveness. In this extended abstract we describe spider diagrams from a mathematical standpoint and show how their formal semantics can be given in terms of logical expressions. We also claim that all spider diagrams are self-consistent.
A Data Type Approach to the Entity-Relationship Approach
Expanding the utility of semantic networks through partitioning An augmentation of semantic networks is presented in which the various nodes and arcs are partitioned into "net spaces." These net spaces delimit the scopes of quantified variables, distinguish hypothetical and imaginary situations from reality, encode alternative worlds considered in planning, and focus attention at particular levels of detail.
Feedback stabilization of some event graph models The authors introduce several notions of stability for event graph models, timed or not. The stability is similar to the boundedness notion for Petri nets. The event graph models can be controlled by an output feedback which takes information from some observable transitions and can disable some controllable transitions. The controller itself is composed of an event graph. In this framework the authors solve the corresponding stabilization problems, i.e., they ask whether such a controller can prevent the explosion of the number of tokens.
Unscented filtering and nonlinear estimation The extended Kalman filter (EKF) is probably the most widely used estimation algorithm for nonlinear systems. However, more than 35 years of experience in the estimation community has shown that it is difficult to implement, difficult to tune, and only reliable for systems that are almost linear on the time scale of the updates. Many of these difficulties arise from its use of linearization. To overcome this limitation, the unscented transformation (UT) was developed as a method to propagate mean and covariance information through nonlinear transformations. It is more accurate, easier to implement, and uses the same order of calculations as linearization. This paper reviews the motivation, development, use, and implications of the UT.
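For orientation, here is a minimal sketch of the unscented transformation this abstract reviews, using the widely cited scaled sigma-point construction. The parameter defaults are conventional choices, not values prescribed by the paper:

```python
import numpy as np

def unscented_transform(f, mean, cov, alpha=1e-3, beta=2.0, kappa=0.0):
    """Propagate a Gaussian (mean, cov) through a nonlinearity f.

    Builds 2n+1 sigma points, pushes each through f, and recombines
    them with the standard scaled weights to estimate the transformed
    mean and covariance; no Jacobian is ever computed.
    """
    n = len(mean)
    lam = alpha ** 2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)  # matrix square root
    sigmas = [mean] + [mean + S[:, i] for i in range(n)] \
                    + [mean - S[:, i] for i in range(n)]
    wm = np.full(2 * n + 1, 0.5 / (n + lam))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1 - alpha ** 2 + beta)
    ys = np.array([f(s) for s in sigmas])
    y_mean = wm @ ys
    diffs = ys - y_mean
    return y_mean, (wc[:, None] * diffs).T @ diffs

m, C = np.array([1.0, 0.5]), np.eye(2) * 0.1
print(unscented_transform(lambda x: np.array([np.sin(x[0]), x[1] ** 2]), m, C))
```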
Multistage negotiation for distributed constraint satisfaction A cooperation paradigm and coordination protocol for a distributed planning system consisting of a network of semi-autonomous agents with limited internode communication and no centralized control is presented. A multistage negotiation paradigm for solving distributed constraint satisfaction problems in this kind of system has been developed. The strategies presented enable an agent in a distributed planning system to become aware of the extent to which its own local decisions may have adverse nonlocal impact in planning. An example problem is presented in the context of transmission path restoration for dedicated circuits in a communications network. Multistage negotiation provides an agent with sufficient information about the impact of local decisions on a nonlocal state so that the agent may make local decisions that are correct from a global perspective, without attempting to provide a complete global state to all agents. Through multistage negotiation, an agent is able to recognize when a set of global goals cannot be satisfied, and is able to solve a related problem by finding a way of satisfying a reduced set of goals
Reasoning and Refinement in Object-Oriented Specification Languages This paper describes a formal object-oriented specification language, Z++, and identifies proof rules and associated specification structuring and development styles for the facilitation of validation and verification of implementations against specifications in this language. We give inference rules for showing that certain forms of inheritance lead to refinement, and for showing that refinements are preserved by constructs such as promotion of an operation from a supplier class to a client class. Extension of these rules to other languages is also discussed.
Software engineering for parallel systems Current approaches to software engineering practice for parallel systems are reviewed. The parallel software designer has not only to address the issues involved in the characterization of the application domain and the underlying hardware platform, but, in many instances, the production of portable, scalable software is desirable. In order to accommodate these requirements, a number of specific techniques and tools have been proposed, and these are discussed in this review in the framework of the parallel software life-cycle. The paper outlines the role of formal methods in the practical production of parallel software, but its main focus is the emergence of development methodologies and environments. These include CASE tools and run-time support systems, as well as the use of methods taken from experience of conventional software development. Because of the particular emphasis on performance of parallel systems, work on performance evaluation and monitoring systems is considered.
Developing Mode-Rich Satellite Software by Refinement in Event B To ensure dependability of on-board satellite systems, the designers should, in particular, guarantee correct implementation of the mode transition scheme, i.e., ensure that the states of the system components are consistent with the global system mode. However, there is still a lack of scalable approaches to formal verification of correctness of complex mode transitions. In this paper we present a formal development of an Attitude and Orbit Control System (AOCS) undertaken within the ICT DEPLOY project. AOCS is a complex mode-rich system, which has an intricate mode-transition scheme. We show that refinement in Event B provides the engineers with a scalable formal technique that enables both development of mode-rich systems and proof-based verification of their mode consistency.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.105
0.1
0.05
0.018182
0.000909
0.000016
0.000008
0
0
0
0
0
0
0
Fully Distributed Nonlinear State Estimation Using Sensor Networks This paper studies the problem of fully distributed state estimation using networked local sensors. Specifically, our previously proposed algorithm, namely, the Distributed Hybrid Information Fusion algorithm is extended to the scenario with nonlinearities involved in both the process model and the local sensing models. The unscented transformation approach is adopted for such an extension so that no computation of Jacobian matrix is needed. Moreover, the extended algorithm requires only one communication iteration between every two consecutive time instants. It is also analytically shown that for the case with linear sensing models, the local estimate errors are bounded in the mean square sense. A simulation example is used to illustrate the effectiveness of the extended algorithm.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
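The fetch-and-add primitive this abstract emphasizes returns the old value and adds to it in a single atomic step, which lets many processors claim unique slots without queuing behind one another. A software stand-in (a lock replaces the Ultracomputer's combining network) illustrates the typical usage:

```python
import threading

class FetchAndAdd:
    """Software sketch of the fetch-and-add synchronization primitive."""

    def __init__(self, value=0):
        self._value = value
        self._lock = threading.Lock()

    def fetch_and_add(self, inc=1):
        # Atomically return the old value and apply the increment.
        with self._lock:
            old = self._value
            self._value += inc
            return old

# Typical use: handing out unique indices to concurrent workers.
counter = FetchAndAdd()
print([counter.fetch_and_add() for _ in range(4)])  # [0, 1, 2, 3]
```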
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema.In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
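A compact, hedged sketch of the generic ingredients this abstract names -- flip moves, a recency-based tabu list, and an aspiration criterion that overrides tabu status when a move beats the incumbent -- applied to a toy multiconstraint knapsack instance. Tenure and iteration counts are illustrative, not the paper's tuned settings:

```python
import random

def tabu_knapsack(values, weights, capacities, iters=200, tenure=7, seed=0):
    """Tabu search sketch for the 0/1 multiconstraint knapsack problem.

    values:     profit of each item
    weights:    weights[c][i] = weight of item i in constraint c
    capacities: capacity of each constraint
    """
    rng = random.Random(seed)
    n = len(values)
    feasible = lambda s: all(sum(w[i] * s[i] for i in range(n)) <= cap
                             for w, cap in zip(weights, capacities))
    score = lambda s: sum(v * b for v, b in zip(values, s))
    x = [0] * n
    best, best_val = x[:], 0
    tabu = {}  # item index -> iteration until which flipping it is tabu
    for t in range(iters):
        candidates = []
        for i in rng.sample(range(n), n):
            y = x[:]
            y[i] ^= 1  # flip move
            if not feasible(y):
                continue
            v = score(y)
            if tabu.get(i, -1) >= t and v <= best_val:
                continue  # tabu, and no aspiration
            candidates.append((v, i, y))
        if not candidates:
            continue
        v, i, x = max(candidates)  # best admissible move
        tabu[i] = t + tenure
        if v > best_val:
            best, best_val = x[:], v
    return best, best_val

print(tabu_knapsack([10, 7, 4, 9], [[3, 2, 1, 4], [2, 3, 2, 1]], [6, 5]))
```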
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the timewise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Secure Tropos: A Security-Oriented Extension Of The Tropos Methodology Although security plays an important role in the development of multiagent systems, a careful analysis of software development processes shows that the definition of security requirements is usually considered after the design of the system. One of the reasons is the fact that agent oriented software engineering methodologies have not integrated security concerns throughout their development stages. The integration of security concerns during the whole range of the development stages can help in the development of more secure multiagent systems. In this paper we introduce extensions to the Tropos methodology to enable it to model security concerns throughout the whole development process. A description of the new concepts and modelling activities is given together with a discussion on how these concepts and modelling activities are integrated to the current stages of Tropos. A real life case study from the health and social care sector is used to illustrate the approach.
Syntax highlighting in business process models Sense-making of process models is an important task in various phases of business process management initiatives. Despite this, there is currently hardly any support in business process modeling tools to adequately support model comprehension. In this paper we adapt the concept of syntax highlighting to workflow nets, a modeling technique that is frequently used for business process modeling. Our contribution is three-fold. First, we establish a theoretical argument to what extent highlighting could improve comprehension. Second, we formalize a concept for syntax highlighting in workflow nets and present a prototypical implementation with the WoPeD modeling tool. Third, we report on the results of an experiment that tests the hypothetical benefits of highlighting for comprehension. Our work can easily be transferred to other process modeling tools and other process modeling techniques.
A comparison of security requirements engineering methods This paper presents a conceptual framework for security engineering, with a strong focus on security requirements elicitation and analysis. This conceptual framework establishes a clear-cut vocabulary and makes explicit the interrelations between the different concepts and notions used in security engineering. Further, we apply our conceptual framework to compare and evaluate current security requirements engineering approaches, such as the Common Criteria, Secure Tropos, SREP, MSRA, as well as methods based on UML and problem frames. We review these methods and assess them according to different criteria, such as the general approach and scope of the method, its validation, and quality assurance capabilities. Finally, we discuss how these methods are related to the conceptual framework and to one another.
Software engineering for security: a roadmap Is there such a thing anymore as a software system that doesn't need to be secure? Almost every software-controlled system faces threats from potential adversaries, from Internet-aware client applications running on PCs, to complex telecommunications and power systems accessible over the Internet, to commodity software with copy protection mechanisms. Software engineers must be cognizant of these threats and engineer systems with credible defenses, while still delivering value to customers. In this paper, we present our perspectives on the research issues that arise in the interactions between software engineering and security.
Security and Privacy Requirements Analysis within a Social Setting Security issues for software systems ultimately concern relationships among social actors - stakeholders, system users, potential attackers - and the software acting on their behalf. This paper proposes a methodological framework for dealing with security and privacy requirements based on i*, an agent-oriented requirements modeling language. The framework supports a set of analysis techniques. In particular, attacker analysis helps identify potential system abusers and their malicious intents. Dependency vulnerability analysis helps detect vulnerabilities in terms of organizational relationships among stakeholders. Countermeasure analysis supports the dynamic decision-making process of defensive system players in addressing vulnerabilities and threats. Finally, access control analysis bridges the gap between security requirement models and security implementation models. The framework is illustrated with an example involving security and privacy concerns in the design of agent-based health information systems. In addition, we discuss model evaluation techniques, including qualitative goal model analysis and property verification techniques based on model checking.
Towards Regulatory Compliance: Extracting Rights and Obligations to Align Requirements with Regulations In the United States, federal and state regulations prescribe stakeholder rights and obligations that must be satisfied by the requirements for software systems. These regulations are typically wrought with ambiguities, making the process of deriving system requirements ad hoc and error prone. In highly regulated domains such as healthcare, there is a need for more comprehensive standards that can be used to assure that system requirements conform to regulations. To address this need, we expound upon a process called Semantic Parameterization previously used to derive rights and obligations from privacy goals. In this work, we apply the process to the Privacy Rule from the U.S. Health Insurance Portability and Accountability Act (HIPAA). We present our methodology for extracting and prioritizing rights and obligations from regulations and show how semantic models can be used to clarify ambiguities through focused elicitation and to balance rights with obligations. The results of our analysis can aid requirements engineers, standards organizations, compliance officers, and stakeholders in assuring systems conform to policy and satisfy requirements.
Goal-Oriented Requirements Engineering: A Guided Tour Goals capture, at different levels of abstraction, the various objectives the system under consideration should achieve. Goal-oriented requirements engineering is concerned with the use of goals for eliciting, elaborating, structuring, specifying, analyzing, negotiating, documenting, and modifying requirements. This area has received increasing attention over the past few years. The paper reviews various research efforts undertaken along this line of research. The arguments in favor of goal orientation are first briefly discussed. The paper then compares the main approaches to goal modeling, goal specification and goal-based reasoning in the many activities of the requirements engineering process. To make the discussion more concrete, a real case study is used to suggest what a goal-oriented requirements engineering method may look like. Experience with such approaches and tool support are briefly discussed as well.
Requirements Engineering in the Year 00: A research perspective Requirements engineering (RE) is concerned with the identification of the goals to be achieved by the envisioned system, the operationalization of such goals into services and constraints, and the assignment of responsibilities for the resulting requirements to agents such as humans, devices, and software. The processes involved in RE include domain analysis, elicitation, specification, assessment, negotiation, documentation, and evolution. Getting high-quality requirements is difficult and critical. Recent surveys have confirmed the growing recognition of RE as an area of utmost importance in software engineering research and practice.The paper presents a brief history of the main concepts and techniques developed to date to support the RE task, with a special focus on modeling as a common denominator to all RE processes. The initial description of a complex safety-critical system is used to illustrate a number of current research trends in RE-specific areas such as goal-oriented requirements elaboration, conflict management, and the handling of abnormal agent behaviors. Opportunities for goal-based architecture derivation are also discussed together with research directions to let the field move towards more disciplined habits.
Specification-based test oracles for reactive systems
No Silver Bullet: Essence and Accidents of Software Engineering
UniProt Knowledgebase: a hub of integrated protein data. The UniProt Knowledgebase (UniProtKB) acts as a central hub of protein knowledge by providing a unified view of protein sequence and functional information. Manual and automatic annotation procedures are used to add data directly to the database while extensive cross-referencing to more than 120 external databases provides access to additional relevant information in more specialized data collections. UniProtKB also integrates a range of data from other resources. All information is attributed to its original source, allowing users to trace the provenance of all data. The UniProt Consortium is committed to using and promoting common data exchange formats and technologies, and UniProtKB data is made freely available in a range of formats to facilitate integration with other databases.
Visualizing Argument Structure Constructing arguments and understanding them is not easy. Visualization of argument structure has been shown to help understanding and improve critical thinking. We describe a visualization tool for understanding arguments. It utilizes a novel hi-tree based representation of the argument’s structure and provides focus based interaction techniques for visualization. We give efficient algorithms for computing these layouts.
Analogical retrieval in reuse-oriented requirements engineering Computational mechanisms are presented for analogical retrieval of domain knowledge as a basis for intelligent tool-based assistance for requirements engineers. A first mechanism, called the domain matcher, retrieves object system models which describe key features for new problems. A second mechanism, called the problem classifier, reasons with analogical mappings inferred by the domain matcher to detect potential incompleteness, overspecification and inconsistencies in entered facts and requirements. Both mechanisms are embedded in AIR, a toolkit that provides co-operative reuse-oriented assistance for requirements engineers.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.05125
0.05
0.025
0.00875
0.004302
0.001476
0.000124
0.000035
0.000017
0.000002
0
0
0
0
Disentangling virtual machine architecture Virtual machine (VM) implementations are made of intricately intertwined subsystems, interacting largely through implicit dependencies. As the degree of crosscutting present in VMs is very high, VM implementations exhibit significant internal complexity. This study proposes an architecture approach for VMs that regards a VM as a composite of service modules coordinated through explicit bidirectional interfaces. Aspect-oriented programming techniques are used to establish these interfaces, to coordinate module interaction, and to declaratively express concrete VM architectures. A VM architecture description language is presented in a case study, illustrating the application of the proposed architectural principles.
Object and reference immutability using Java generics A compiler-checked immutability guarantee provides useful documentation, facilitates reasoning, and enables optimizations. This paper presents Immutability Generic Java (IGJ), a novel language extension that expresses immutability without changing Java's syntax by building upon Java's generics and annotation mechanisms. In IGJ, each class has one additional type parameter that is Immutable, Mutable, or ReadOnly. IGJ guarantees both reference immutability (only mutable references can mutate an object) and object immutability (an immutable reference points to an immutable object). IGJ is the first proposal for enforcing object immutability within Java's syntax and type system, and its reference immutability is more expressive than previous work. IGJ also permits covariant changes of type parameters in a type-safe manner, e.g., a readonly list of integers is a subtype of a readonly list of numbers. IGJ extends Java's type system with a few simple rules. We formalize this type system and prove it sound. Our IGJ compiler works by type-erasure and generates byte-code that can be executed on any JVM without runtime penalty.
It's alive! continuous feedback in UI programming Live programming allows programmers to edit the code of a running program and immediately see the effect of the code changes. This tightening of the traditional edit-compile-run cycle reduces the cognitive gap between program code and execution, improving the learning experience of beginning programmers while boosting the productivity of seasoned ones. Unfortunately, live programming is difficult to realize in practice as imperative languages lack well-defined abstraction boundaries that make live programming responsive or its feedback comprehensible. This paper enables live programming for user interface programming by cleanly separating the rendering and non-rendering aspects of a UI program, allowing the display to be refreshed on a code change without restarting the program. A type and effect system formalizes this separation and provides an evaluation model that incorporates the code update step. By putting live programming on a more formal footing, we hope to enable critical and technical discussion of live programming systems.
The disappearing boundary between development-time and run-time Modern software systems are increasingly embedded in an open world that is constantly evolving, because of changes in the requirements, in the surrounding environment, and in the way people interact with them. The platform itself on which software runs may change over time, as we move towards cloud computing. These changes are difficult to predict and anticipate, and their occurrence is out of control of the application developers. Because of these changes, the applications themselves need to change. Often, changes in the applications cannot be handled off-line, but require the software to self-react by adapting its behavior dynamically, to continue to ensure the desired quality of service. The big challenge in front of us is how to achieve the necessary degrees of flexibility and dynamism required by software without compromising the necessary dependability. This paper advocates that future software engineering research should focus on providing intelligent support to software at run-time, breaking today's rigid boundary between development-time and run-time. Models need to continue to live at run-time and evolve as changes occur while the software is running. To ensure dependability, analysis that the updated system models continue to satisfy the goals must be performed by continuous verification. If verification fails, suitable adjustment policies, supported by model-driven re-derivation of parts of the system, must be activated to keep the system aligned with its expected requirements. The paper presents the background that motivates this research focus, the main existing research directions, and an agenda for future work.
Delegation proxies: the power of propagation Scoping behavioral variations to dynamic extents is useful to support non-functional requirements that otherwise result in cross-cutting code. Unfortunately, such variations are difficult to achieve with traditional reflection or aspects. We show that with a modification of dynamic proxies, called delegation proxies, it becomes possible to reflectively implement variations that propagate to all objects accessed in the dynamic extent of a message send. We demonstrate our approach with examples of variations scoped to dynamic extents that help simplify code related to safety, reliability, and monitoring.
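The core idea, proxies whose wrapping propagates to every object reached through them, can be imitated in plain Python. This is an analogy only: the paper's delegation proxies are a language-level modification of dynamic proxies, not the toy sketch below, and the primitive-type escape hatch is our simplification:

```python
class PropagatingProxy:
    """Toy analogy of a delegation proxy whose variation propagates.

    Wraps a target, runs a hook on every attribute access, and wraps
    method results in the same proxy so the behavioural variation
    follows the dynamic extent. Primitive results stay unwrapped.
    """

    def __init__(self, target, hook):
        object.__setattr__(self, "_target", target)
        object.__setattr__(self, "_hook", hook)

    def __getattr__(self, name):
        self._hook(self._target, name)  # the scoped behavioural variation
        attr = getattr(self._target, name)
        if not callable(attr):
            return attr
        def call(*args, **kwargs):
            result = attr(*args, **kwargs)
            if isinstance(result, (int, float, str, bool, type(None))):
                return result  # don't proxy primitives in this sketch
            return PropagatingProxy(result, self._hook)
        return call

# Usage: trace every attribute access reachable from one entry point.
log = []
proxied = PropagatingProxy([3, 1, 2], lambda target, name: log.append(name))
copy = proxied.copy()  # the returned list is proxied too
copy.sort()            # so this access is also traced
print(log)             # ['copy', 'sort']
```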
Maxine: An approachable virtual machine for, and in, java A highly productive platform accelerates the production of research results. The design of a Virtual Machine (VM) written in the Java™ programming language can be simplified through exploitation of interfaces, type and memory safety, automated memory management (garbage collection), exception handling, and reflection. Moreover, modern Java IDEs offer time-saving features such as refactoring, auto-completion, and code navigation. Finally, Java annotations enable compiler extensions for low-level “systems programming” while retaining IDE compatibility. These techniques collectively make complex system software more “approachable” than has been typical in the past. The Maxine VM, a metacircular Java VM implementation, has aggressively used these features since its inception. A co-designed companion tool, the Maxine Inspector, offers integrated debugging and visualization of all aspects of the VM's runtime state. The Inspector's implementation exploits advanced Java language features, embodies intimate knowledge of the VM's design, and even reuses a significant amount of VM code directly. These characteristics make Maxine a highly approachable VM research platform and a productive basis for research and teaching.
O-O Requirements Analysis: an Agent Perspective In this paper, we present a formal object-oriented specification language designed for capturing requirements expressed on composite real-time systems. The specification describes the system as a society of 'agents', each of them being characterised (i) by its responsibility with respect to actions happening in the system and (ii) by its time-varying perception of the behaviour of the other agents. On top of the language, we also suggest some methodological guidance by considering a general strategy based on a progressive assignment of responsibilities to agents.
Constructing specifications by combining parallel elaborations An incremental approach to construction is proposed, with the virtue of offering considerable opportunity for mechanized support. Following this approach one builds a specification through a series of elaborations that incrementally adjust a simple initial specification. Elaborations perform both refinements, adding further detail, and adaptations, retracting oversimplifications and tailoring approximations to the specifics of the task. It is anticipated that the vast majority of elaborations can be concisely described to a mechanism that will then perform them automatically. When elaborations are independent, they can be applied in parallel, leading to diverging specifications that must later be recombined. The approach is intended to facilitate comprehension and maintenance of specifications, as well as their initial construction.
A distributed alternative to finite-state-machine specifications A specification technique, formally equivalent to finite-state machines, is offered as an alternative because it is inherently distributed and more comprehensible. When applied to modules whose complexity is dominated by control, the technique guides the analyst to an effective decomposition of complexity, encourages well-structured error handling, and offers an opportunity for parallel computation. When applied to distributed protocols, the technique provides a unique perspective and facilitates automatic detection of some classes of error. These applications are illustrated by a controller for a distributed telephone system and the full-duplex alternating-bit protocol for data communication. Several schemes are presented for executing the resulting specifications.
A singleton failures semantics for Communicating Sequential Processes This paper defines a new denotational semantics for the language of Communicating Sequential Processes (CSP). The semantics lies between the existing traces and failures models of CSP, providing a treatment of non-determinism in terms of singleton failures. Although the semantics does not represent a congruence upon the full language, it is adequate for sequential tests of non-deterministic processes. This semantics corresponds exactly to a commonly used notion of data refinement in Z and Object-Z: an abstract data type is refined when the corresponding process is refined in terms of singleton failures. The semantics is used to explore the relationship between data refinement and process refinement, and to derive a rule for data refinement that is both sound and complete.
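For orientation, the semantic domain this abstract describes can be stated in the usual failures notation. A hedged rendering (the paper's precise healthiness conditions may differ): from the full failures of a process, keep only the refusals of size at most one,

```latex
% Singleton-failures model: refusal sets restricted to size <= 1.
\mathcal{SF}\llbracket P \rrbracket \;=\;
  \{\, (s, X) \in \mathit{failures}(P) : |X| \le 1 \,\}
```

so two processes are identified exactly when, after each common trace, they can refuse the same individual events -- which is what a sequential test of a non-deterministic process can observe.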
An Integrated Semantics for UML Class, Object and State Diagrams Based on Graph Transformation This paper studies the semantics of a central part of the Unified Modeling Language UML. It discusses UML class, object and state diagrams and presents a new integrated semantics for both on the basis of graph transformation. Graph transformation is a formal technique having some common ideas with the UML. Graph transformation rules are associated with the operations in class diagrams and with the transitions in state diagrams. The resulting graph transformations are combined into one system in order to obtain a single coherent semantic description.
Specification Diagrams for Actor Systems Specification diagrams (SD's) are a novel form of graphical notation for specifying open distributed object systems. The design goal is to define notation for specifying message-passing behavior that is expressive, intuitively understandable, and that has formal semantic underpinnings. The notation generalizes informal notations such as UML's Sequence Diagrams and broadens their applicability to later in the design cycle. Specification diagrams differ from existing actor and process algebra presentations in that they are not executable per se; instead, like logics, they are inherently more biased toward specification. In this paper we rigorously define the language syntax and semantics and give examples that show the expressiveness of the language, how properties of specifications may be asserted diagrammatically, and how it is possible to reason rigorously and modularly about specification diagrams.
Miro: Visual Specification of Security Miro is a set of languages and tools that support the visual specification of file system security. Two visual languages are presented: the instance language, which allows specification of file system access, and the constraint language, which allows specification of security policies. Miro visual languages and tools are used to specify security configurations. A visual language is one whose entities are graphical, such as boxes and arrows, specifying means stating independently of any implementation the desired properties of a system. Security means file system protection: ensuring that files are protected from unauthorized access and granting privileges to some users, but not others. Tools implemented and examples of how these languages can be applied to real security specification problems are described.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
1.121627
0.069535
0.069535
0.069535
0.036101
0.024775
0.000007
0
0
0
0
0
0
0
Improving the System/Software Engineering Interface for Complex System Development At the 2004 Engineering of Computer Based Systems (ECBS) Technical Committee meeting, the ECBS Executive Committee agreed that a guideline on Integrated System and Software Engineering would be beneficial to engineers working at the interface, and they agreed to work on such a guideline. This paper is written in the hope that it will serve as a basis for a discussion group, initiating such a guideline. This paper seeks to improve the integrated system/software engineering process during the phases when system and software developers work most closely together. These phases are system problem definition, software requirements analysis and specification, system solution analysis, and process planning. Lessons learned during twenty years, while working with system and software engineers to define requirements and solutions on aerospace projects are summarized. During this time, problems were noted and their cause determined. As a result, advice is provided.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system, resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object-oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
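To make the core mechanism concrete, here is a minimal sketch of a tabu-search loop for a 0/1 multiconstraint knapsack: single-variable flip moves, a tabu tenure, and an aspiration criterion that overrides tabu status when a move beats the incumbent. The advanced strategies, learning, and Target Analysis described in the abstract are not reproduced; all names and parameters are illustrative.

```python
import random

def tabu_knapsack(values, weights, capacities, iters=500, tenure=7, seed=0):
    """Toy tabu search for max v.x s.t. W x <= c, x in {0,1}^n.

    A move flips one variable; a flipped variable stays tabu for `tenure`
    iterations unless the move improves on the best solution found so far
    (the aspiration criterion)."""
    rng = random.Random(seed)
    n = len(values)
    x = [0] * n

    def feasible(sol):
        return all(sum(w[i] * sol[i] for i in range(n)) <= c
                   for w, c in zip(weights, capacities))

    def value(sol):
        return sum(values[i] * sol[i] for i in range(n))

    best, best_val = x[:], value(x)
    tabu_until = [0] * n              # iteration until which a flip stays tabu
    for it in range(iters):
        candidates = []
        for i in range(n):
            y = x[:]
            y[i] = 1 - y[i]
            if not feasible(y):
                continue
            v = value(y)
            # Aspiration: a tabu move is allowed if it beats the incumbent.
            if it < tabu_until[i] and v <= best_val:
                continue
            candidates.append((v, i, y))
        if not candidates:
            break
        v, i, y = max(candidates, key=lambda t: (t[0], rng.random()))
        x = y
        tabu_until[i] = it + tenure
        if v > best_val:
            best, best_val = x[:], v
    return best, best_val

if __name__ == "__main__":
    vals = [10, 13, 7, 11, 5]
    W = [[2, 3, 1, 4, 2], [3, 1, 2, 2, 4]]   # two knapsack constraints
    caps = [7, 8]
    print(tabu_knapsack(vals, W, caps))
```

Accepting the best non-improving move, rather than only improving ones, is what lets the search climb out of local optima; the tabu list then prevents it from immediately undoing that move.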
Procedures and atomicity refinement The introduction of an early return from a (remote) procedure call can increase the degree of parallelism in a parallel or distributed algorithm modeled by an action system. We define a return statement for procedures in an action systems framework and show that it corresponds to carrying out an atomicity refinement.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Entity Linking meets Word Sense Disambiguation: a Unified Approach.
Entity Disambiguation by Knowledge and Text Jointly Embedding.
Learning Sentiment-Specific Word Embedding For Twitter Sentiment Classification In this paper we present a method that learns word embeddings for Twitter sentiment classification. Most existing algorithms for learning continuous word representations typically only model the syntactic context of words but ignore the sentiment of text. This is problematic for sentiment analysis as they usually map words with similar syntactic context but opposite sentiment polarity, such as good and bad, to neighboring word vectors. We address this issue by learning sentiment-specific word embedding (SSWE), which encodes sentiment information in the continuous representation of words. Specifically, we develop three neural networks to effectively incorporate the supervision from sentiment polarity of text (e.g. sentences or tweets) in their loss functions. To obtain large-scale training corpora, we learn the sentiment-specific word embedding from massive distant-supervised tweets collected by positive and negative emoticons. Experiments on applying SSWE to a benchmark Twitter sentiment classification dataset in SemEval 2013 show that (1) the SSWE feature performs comparably with hand-crafted features in the top-performing system; (2) the performance is further improved by concatenating SSWE with the existing feature set.
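The key idea, letting sentiment supervision flow back into the embedding table, can be sketched as follows. This is not the paper's three-network architecture: it simply averages a tweet's word vectors and backpropagates a sentiment loss into the embeddings; the vocabulary, data, and hyperparameters are invented for illustration.

```python
import torch
import torch.nn as nn

# Toy vocabulary and distantly-labelled tweets (1.0 = positive emoticon).
vocab = {"<pad>": 0, "good": 1, "bad": 2, "movie": 3, "really": 4}
tweets = [([1, 3, 0], 1.0),   # "good movie"
          ([4, 2, 3], 0.0)]   # "really bad movie"

class SentimentEmbedding(nn.Module):
    def __init__(self, vocab_size, dim=8):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim, padding_idx=0)
        self.out = nn.Linear(dim, 1)          # sentiment head
    def forward(self, ids):
        # Average word vectors, then score the tweet's polarity.
        return self.out(self.emb(ids).mean(dim=1)).squeeze(-1)

model = SentimentEmbedding(len(vocab))
opt = torch.optim.Adam(model.parameters(), lr=0.05)
loss_fn = nn.BCEWithLogitsLoss()

ids = torch.tensor([t for t, _ in tweets])
labels = torch.tensor([y for _, y in tweets])
for _ in range(200):                          # sentiment loss shapes the embeddings
    opt.zero_grad()
    loss = loss_fn(model(ids), labels)
    loss.backward()
    opt.step()

# After training, "good" and "bad" embeddings are pushed apart even though
# they occur in similar syntactic contexts.
good, bad = model.emb.weight[1], model.emb.weight[2]
print(torch.cosine_similarity(good, bad, dim=0).item())
```

The printed cosine similarity should come out low or negative, which is exactly the separation that purely context-based embeddings fail to provide.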
Monitoring Reputation in the Wild Online West.
Sentiment strength detection for the social web Sentiment analysis is concerned with the automatic extraction of sentiment-related information from text. Although most sentiment analysis addresses commercial tasks, such as extracting opinions from product reviews, there is increasing interest in the affective dimension of the social web, and Twitter in particular. Most sentiment analysis algorithms are not ideally suited to this task because they exploit indirect indicators of sentiment that can reflect genre or topic instead. Hence, such algorithms used to process social web texts can identify spurious sentiment patterns caused by topics rather than affective phenomena. This article assesses an improved version of the algorithm SentiStrength for sentiment strength detection across the social web that primarily uses direct indications of sentiment. The results from six diverse social web data sets (MySpace, Twitter, YouTube, Digg, RunnersWorld, BBCForums) indicate that SentiStrength 2 is successful in the sense of performing better than a baseline approach for all data sets in both supervised and unsupervised cases. SentiStrength is not always better than machine-learning approaches that exploit indirect indicators of sentiment, however, and is particularly weaker for positive sentiment in news-related discussions. Overall, the results suggest that, even unsupervised, SentiStrength is robust enough to be applied to a wide variety of different social web contexts.
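A minimal illustration of scoring from direct indications of sentiment: a term-strength lexicon combined with a booster and a negation rule, reporting separate positive and negative strengths the way SentiStrength's dual scale does. The lexicon entries and rules below are invented for the example, not taken from SentiStrength's resources.

```python
# Tiny illustrative lexicon: term -> strength (positive up to 5, negative to -5).
LEXICON = {"love": 4, "great": 3, "happy": 3, "hate": -4, "awful": -4, "sad": -3}
BOOSTERS = {"very": 1, "really": 1}
NEGATORS = {"not", "never"}

def sentiment_strength(text):
    """Return (positive, negative) strengths on a dual SentiStrength-like scale."""
    pos, neg = 1, -1                            # neutral baselines
    boost, negate = 0, False
    for tok in text.lower().split():
        if tok in BOOSTERS:
            boost += BOOSTERS[tok]
            continue
        if tok in NEGATORS:
            negate = True
            continue
        s = LEXICON.get(tok)
        if s is not None:
            s += boost if s > 0 else -boost     # boosters amplify strength
            if negate:
                s = -s                          # crude negation flip
            s = max(-5, min(5, s))
            pos, neg = max(pos, s), min(neg, s)
        boost, negate = 0, False                # modifiers apply to next term only

    return pos, neg

print(sentiment_strength("really great movie"))    # (4, -1)
print(sentiment_strength("not happy about this"))  # (1, -3)
```

Because the score comes from explicit sentiment terms rather than topic-correlated features, it avoids the spurious topic-driven patterns the abstract warns about, at the cost of missing sentiment expressed without lexicon words.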
TwitterEcho: a distributed focused crawler to support open research with twitter data Modern social network analysis relies on vast quantities of data to infer new knowledge about human relations and communication. In this paper we describe TwitterEcho, an open source Twitter crawler for supporting this kind of research, which is characterized by a modular distributed architecture. Our crawler enables researchers to continuously collect data from particular user communities, while respecting Twitter's imposed limits. We present the core modules of the crawling server, some of which were specifically designed to focus the crawl on the Portuguese Twittosphere. Additional modules can be easily implemented, thus changing the focus to a different community. Our evaluation of the system shows high crawling performance and coverage.
POPSTAR at RepLab 2013: Polarity for Reputation Classification.
Simulating simple user behavior for system effectiveness evaluation Information retrieval effectiveness evaluation typically takes one of two forms: batch experiments based on static test collections, or lab studies measuring actual users interacting with a system. Test collection experiments are sometimes viewed as introducing too many simplifying assumptions to accurately predict the usefulness of a system to its users. As a result, there is great interest in creating test collections and measures that better model user behavior. One line of research involves developing measures that include a parameterized user model; choosing a parameter value simulates a particular type of user. We propose that these measures offer an opportunity to more accurately simulate the variance due to user behavior, and thus to analyze system effectiveness over a simulated user population. We introduce a Bayesian procedure for producing sampling distributions from click data, and show how to use statistical tools to quantify the effects of variance due to parameter selection.
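The Bayesian step can be illustrated with a conjugate model: treat a user-model parameter, say the probability of continuing past a result, as Beta-distributed, update it with observed continue/stop counts from click logs, and sample parameter values to simulate a user population. Rank-biased precision is used here as one example of a measure with a parameterized user model; the prior and counts are invented.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical click data: how often users continued past a result vs. stopped.
continued, stopped = 130, 45

# Beta(1, 1) prior over the "continue" probability p of the user model;
# the posterior is Beta(1 + continued, 1 + stopped).
def sample_posterior(n):
    return rng.beta(1 + continued, 1 + stopped, size=n)

# Evaluate rank-biased precision (RBP) of one ranking under each sampled
# persistence value: RBP = (1 - p) * sum_i rel_i * p**i (0-indexed ranks).
rels = np.array([1, 0, 1, 1, 0])               # relevance of the top-5 results
def rbp(p):
    ranks = np.arange(len(rels))
    return (1 - p) * np.sum(rels * p ** ranks)

users = sample_posterior(10_000)               # a simulated user population
scores = np.array([rbp(p) for p in users])
print(f"mean={scores.mean():.3f}  95% interval="
      f"({np.quantile(scores, 0.025):.3f}, {np.quantile(scores, 0.975):.3f})")
```

The interval over sampled users, rather than a single point estimate at one fixed parameter value, is what quantifies the variance attributable to parameter selection.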
A field study of the software design process for large systems The problems of designing large software systems were studied through interviewing personnel from 17 large projects. A layered behavioral model is used to analyze how three of these problems—the thin spread of application domain knowledge, fluctuating and conflicting requirements, and communication bottlenecks and breakdowns—affected software productivity and quality through their impact on cognitive, social, and organizational processes.
Queue-based multi-processing LISP As the need for high-speed computers increases, the need for multi-processors will become more apparent. One of the major stumbling blocks to the development of useful multi-processors has been the lack of a good multi-processing language—one which is both powerful and understandable to programmers. Among the most compute-intensive programs are artificial intelligence (AI) programs, and researchers hope that the potential degree of parallelism in AI programs is higher than in many other applications. In this paper we propose multi-processing extensions to Lisp. Unlike other proposed multi-processing Lisps, this one provides only a few very powerful and intuitive primitives rather than a number of parallel variants of familiar constructs.
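The flavour of "a few powerful primitives" can be conveyed with a rough analogue: spawn subcomputations, then synchronize on their values. This is not the paper's Lisp syntax or primitive set, only an illustration of the pattern in Python.

```python
from concurrent.futures import ThreadPoolExecutor

def fib(n):
    """Deliberately compute-heavy toy workload."""
    return n if n < 2 else fib(n - 1) + fib(n - 2)

# Two primitives carry the whole pattern: submit (spawn) and result (synchronize).
# Threads illustrate the synchronization structure; real CPU-bound speedup in
# Python would need processes because of the GIL.
with ThreadPoolExecutor() as pool:
    tasks = [pool.submit(fib, n) for n in (20, 22, 24)]   # spawn
    print([t.result() for t in tasks])                    # synchronize
```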
A superimposition control construct for distributed systems A control structure called a superimposition is proposed. The structure contains schematic abstractions of processes called roletypes in its declaration. Each roletype may be bound to processes from a basic distributed algorithm, and the operations of the roletype will then execute interleaved with those of the basic processes, over the same state space. This structure captures a kind of modularity natural for distributed programming, which previously has been treated using a macro-like implantation of code. The elements of a superimposition are identified, a syntax is suggested, correctness criteria are defined, and examples are presented.
Randomized graph drawing with heavy-duty preprocessing We present a graph drawing system for general undirected graphs with straight-line edges. It carries out a rather complex set of preprocessing steps, designed to produce a topologically good, but not necessarily nice-looking layout, which is then subjected to Davidson and Harel's simulated annealing beautification algorithm. The intermediate layout is planar for planar graphs and attempts to come close to planar for nonplanar graphs. The system's results are significantly better, and much faster, than what the annealing approach is able to achieve on its own.
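The beautification stage alone can be sketched as follows: perturb one node position at a time and accept worsening moves with a temperature-dependent probability. The toy energy below mixes only node repulsion and edge length; Davidson and Harel's full criteria also include border distance and edge crossings. All parameters are illustrative.

```python
import math
import random

def anneal_layout(nodes, edges, iters=20000, t0=1.0, cooling=0.9995, seed=1):
    """Toy simulated-annealing layout in the unit square."""
    rng = random.Random(seed)
    pos = {v: (rng.random(), rng.random()) for v in nodes}

    def energy(p):
        e = 0.0
        for u in nodes:                     # node-node repulsion
            for v in nodes:
                if u < v:
                    d2 = (p[u][0]-p[v][0])**2 + (p[u][1]-p[v][1])**2
                    e += 1.0 / (d2 + 1e-9)
        for u, v in edges:                  # short edges preferred
            e += 50 * ((p[u][0]-p[v][0])**2 + (p[u][1]-p[v][1])**2)
        return e

    cur, t = energy(pos), t0
    for _ in range(iters):
        v = rng.choice(nodes)
        old = pos[v]
        pos[v] = (min(1, max(0, old[0] + rng.uniform(-0.05, 0.05))),
                  min(1, max(0, old[1] + rng.uniform(-0.05, 0.05))))
        new = energy(pos)
        if new < cur or rng.random() < math.exp((cur - new) / t):
            cur = new                       # accept, possibly a worsening move
        else:
            pos[v] = old                    # reject and restore
        t *= cooling                        # geometric cooling schedule
    return pos

print(anneal_layout(list(range(4)), [(0, 1), (1, 2), (2, 3), (3, 0)], iters=2000))
```

The system's point is precisely that this stage works far better when started from a topologically sensible intermediate layout than from the random start used in this toy.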
A proof-based approach to verifying reachability properties This paper presents a formal approach to proving temporal reachability properties, expressed in CTL, on B systems. We are particularly interested in demonstrating that a system can reach a given state by executing a sequence of actions (or operation calls) called a path. Starting with a path, the proposed approach consists in calculating the proof obligations that must be discharged to prove that the path allows the system to evolve so as to satisfy the desired property. Since these proof obligations are expressed as first-order logic formulas without any temporal operator, they can be discharged using the prover of AtelierB. Our proposal is illustrated through a case study.
Analysis and Design of Secure Massive MIMO Systems in the Presence of Hardware Impairments. To keep the hardware costs of future communications systems manageable, the use of low-cost hardware components is desirable. This is particularly true for the emerging massive multiple-input multiple-output (MIMO) systems which equip base stations (BSs) with a large number of antenna elements. However, low-cost transceiver designs will further accentuate the hardware impairments, which are presen...
1.122
0.122
0.12
0.12
0.072
0.04
0.006
0.000226
0
0
0
0
0
0
I-structures: data structures for parallel computing It is difficult to achieve elegance, efficiency, and parallelism simultaneously in functional programs that manipulate large data structures. We demonstrate this through careful analysis of program examples using three common functional data-structuring approaches: lists using Cons, arrays using Update (both fine-grained operators), and arrays using make-array (a “bulk” operator). We then present I-structures as an alternative and show elegant, efficient, and parallel solutions for the program examples in Id, a language with I-structures. The parallelism in Id is made precise by means of an operational semantics for Id as a parallel reduction system. I-structures make the language nonfunctional, but do not lose determinacy. Finally, we show that even in the context of purely functional languages, I-structures are invaluable for implementing functional data abstractions.
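The defining property, each cell written at most once while reads block until the write arrives, can be imitated outside Id. A sketch using Python threads follows; the class and method names are illustrative, and the single-assignment check is not race-free (a robust version would guard it with a lock).

```python
import threading

class IStructure:
    """Array of write-once cells; reads block until the cell is written."""
    def __init__(self, n):
        self._vals = [None] * n
        self._ready = [threading.Event() for _ in range(n)]

    def put(self, i, v):
        if self._ready[i].is_set():
            raise ValueError(f"cell {i} already written")  # single assignment
        self._vals[i] = v
        self._ready[i].set()

    def get(self, i):
        self._ready[i].wait()         # deferred read: block until produced
        return self._vals[i]

xs = IStructure(2)
producer = threading.Thread(target=lambda: xs.put(0, 42))
consumer = threading.Thread(target=lambda: print("read:", xs.get(0)))
consumer.start()                      # starts first, blocks inside get(0)
producer.start()                      # the put unblocks the consumer
consumer.join()
producer.join()
```

Because every read sees exactly the one value ever written to its cell, the result is the same under any interleaving, which is the determinacy the abstract emphasizes.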
Diverse executable semantics definitions in NUSL and an implementation of functional types Several different semantics definitions of a sample language, SAL, are given, all in NUSL. Their differences and similarities, and more generally, different approaches to the definition of programming languages are discussed. The implementation of functional types is explored.
Fortran 90 arrays Excellent application performance should not require tour de force programming efforts by users. Fortran 90, in an attempt to move from a scalar orientation to an array notation, has adopted some of the early concepts of APL, such as array operations. The introduction of these ideas is shown to be inadequate for meeting the algorithmic needs of programmers in terms of expressiveness, consistency, and conciseness. Comparisons with APL show Fortran 90 to be a mongrel, neither scalar- nor array-oriented, unable to achieve the levels of productivity, performance, reliability, and maintainability required by computer users in the 1990s.
BaLinda Lisp: a parallel list-processing language The authors describe BaLinda (Biddle and Linda) Lisp, a parallel execution Lisp dialect designed to take advantage of the architectural capabilities of Biddle (bidirectional data driven Lisp engine). The Future construct is used to initiate parallel execution threads, which may communicate through Linda-like commands operating on a tuple space. These features provide good support for parallel execution, and blend together well with notational consistency and simplicity. Unstructured task initiation and termination commands are avoided, while mandatory and speculative parallelisms (lazy versus eager executions) are both supported.
A case study of parallel execution of a rule-based expert system We report on a case study of the potential for parallel execution of the inference engine of EMYCIN, a rule-based expert system. Multilisp, which supports parallel execution of tasks by means of the future construct, is used to implement the parallel version of the backwards-chaining inference engine. The study uses explicit specification of parallel execution and synchronization to attain parallel execution. It suggests some general techniques for obtaining parallel execution in expert systems and other applications.
Environments as first class objects We describe a programming language called Symmetric Lisp that treats environments as first-class objects. Symmetric Lisp allows programmers to write expressions that evaluate to environments, and to create and denote variables and constants of type environment as well. One consequence is that the roles filled in other languages by a variety of limited, special purpose environment forms like records, structures, closures, modules, classes and abstract data types are filled instead by a single versatile and powerful structure. In addition to being its fundamental structuring tool, environments also serve as the basic functional object in the language. Because the elements of an environment are evaluated in parallel, Symmetric Lisp is a parallel programming language; because they may be assembled dynamically as well as statically, Symmetric Lisp accommodates an unusually flexible and simple (parallel) interpreter as well as other history-sensitive applications requiring dynamic environments. We show that first-class environments bring about fundamental changes in a language's structure: conventional distinctions between declarations and expressions, data structures and program structures, passive modules and active processes disappear.
Verification of conceptual models based on linguistic knowledge As conceptual models reflect people's perception of real-world phenomena, they are closely tied to the way people describe and talk about these phenomena in natural language. The models, used for developing database and information systems and for presenting or explaining databases, usually include linguistic notions to help interpret their contents. In this paper, we show how linguistic knowledge from a semantically based lexicon can be used to check the quality of conceptual models. The check is a natural extension of traditional verification checks, and its purpose is to ensure that words and phrases included in the models are used in a linguistically meaningful way. Linguistic expressions must be of the correct type, and the relationships between expressions in the model must be acceptable with respect to the semantic constraints indicated in the lexicon.
Integrating non-interfering versions of programs The need to integrate several versions of a program into a common one arises frequently, but it is a tedious and time consuming task to integrate programs by hand. To date, the only available tools for assisting with program integration are variants of text-based differential file comparators; these are of limited utility because one has no guarantees about how the program that is the product of an integration behaves compared to the programs that were integrated.
Requirements Specification for Process-Control Systems The paper describes an approach to writing requirements specifications for process-control systems, a specification language that supports this approach, and an example application of the approach and the language on an industrial aircraft collision avoidance system (TCAS II). The example specification demonstrates: the practicality of writing a formal requirements specification for a complex, process-control system; and the feasibility of building a formal model of a system using a specification language that is readable and reviewable by application experts who are not computer scientists or mathematicians. Some lessons learned in the process of this work, which are applicable both to forward and reverse engineering, are also presented.
The ESTEREL Synchronous Programming Language and its Mathematical Semantics
The Software Development System This paper presents a discussion of the Software Development System (SDS), a methodology addressing the problems involved in the development of software for Ballistic Missile Defense systems. These are large, real-time, automated systems with a requirement for high reliability. The SDS is a broad approach attacking problems arising in requirements generation, software design, coding, and testing. The approach is highly requirements oriented and has resulted in the formulation of structuring concepts, a requirements statement language, process design language, and support software to be used throughout the development cycle. This methodology represents a significant advance in software technology for the development of software for a class of systems such as BMD. The support software has been implemented and is undergoing evaluation.
Drawing Hypergraphs in the Subset Standard (Short Demo Paper) We report an experience on a practical system for drawing hypergraphs in the subset standard. The PATATE system is based on the application of a classical force directed method to a dynamic graph, which is deduced, at a given iteration time, from the hypergraph structure and particular vertex locations. Different strategies to define the dynamic underlying graph are presented. We illustrate in particular the method when the graph is obtained by computing an Euclidean Steiner tree.
Temporal predicate transforms and fair termination It is usually assumed that implementations of nondeterministic programs may resolve the nondeterminacy arbitrarily. In some circumstances, however, we may wish to assume that the implementation is in some sense fair, by which we mean that in its long-term behaviour it does not show undue bias in forever favouring some nondeterministic choices over others. Under the assumption of fairness many otherwise failing programs become terminating. We construct various predicate transformer semantics of such fairly-terminating programs. The approach is based on formulating the familiar temporal operators always, eventually, and infinitely often as predicate transformers. We use these operators to construct a framework that accommodates many kinds of fairness, including varieties of so-called weak and strong fairness in both their all-levels and top-level forms. Our formalization of the notion of fairness does not exploit the syntactic shape of programs, and allows the familiar nondeterminacy and fair nondeterminacy to be arbitrarily combined in the one program. Invariance theorems for reasoning about fairly terminating programs are proved. The semantics admits probabilistic implementations provided that unbounded fairness is excluded.
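The three temporal operators mentioned have standard fixpoint characterizations as predicate transformers; the sketch below uses one common form for a step S with weakest-precondition transformer wp(S, .), and the paper's exact definitions may differ.

```latex
% Fixpoint characterizations (a sketch, not quoted from the paper):
\Box P            \;=\; \nu X.\; \big(P \wedge \mathrm{wp}(S, X)\big)
   % always: greatest fixpoint
\Diamond P        \;=\; \mu X.\; \big(P \vee \mathrm{wp}(S, X)\big)
   % eventually: least fixpoint
\Box \Diamond P   \;=\; \nu Y.\; \big(\Diamond P \wedge \mathrm{wp}(S, Y)\big)
   % infinitely often: always eventually
```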
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.038356
0.047753
0.044444
0.015918
0.008622
0.004123
0.000743
0.000245
0.000119
0.000022
0
0
0
0
Measuring the quality of data models: an empirical evaluation of the use of quality metrics in practice This paper describes the empirical evaluation of a set of proposed metrics for evaluating the quality of data models. A total of twenty-nine candidate metrics were originally proposed, each of which measured a different aspect of quality of a data model. Action research was used to evaluate the usefulness of the metrics in five application development projects in two private sector organisations. Of the metrics originally proposed, only three "survived" the empirical validation process, and two new metrics were discovered. The result was a set of five metrics which participants felt were manageable to apply in practice. An unexpected finding was that subjective ratings of quality and qualitative descriptions of quality issues were perceived to be much more useful than the metrics. While the idea of using metrics to quantify the quality of data models seems good in theory, the results of this study seem to indicate that it is not quite so useful in practice. The conclusion is that using a combination of "hard" and "soft" information (metrics, subjective ratings, qualitative description of issues) provides the most effective solution to the problem of evaluating the quality of data models, and that moves towards increased quantification may be counterproductive.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096-processor system using 1990s technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
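Fetch-and-add returns the old value while atomically incrementing, which lets many processors claim distinct queue slots without a serial bottleneck. A small illustration of the usage pattern follows; in Python a lock stands in for the hardware primitive, and the names are illustrative.

```python
import threading

class FetchAndAdd:
    """Software stand-in for the hardware fetch-and-add primitive."""
    def __init__(self):
        self._v = 0
        self._lock = threading.Lock()

    def faa(self, inc=1):
        with self._lock:              # the hardware would do this atomically
            old = self._v
            self._v += inc
            return old

# Many workers claim distinct slots of a shared buffer concurrently.
counter, slots = FetchAndAdd(), [None] * 100

def worker(wid):
    for _ in range(25):
        slots[counter.faa()] = wid    # each index is handed out exactly once

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert all(s is not None for s in slots)
print("all 100 slots claimed exactly once")
```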
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system, resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object-oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Procedures and atomicity refinement The introduction of an early return from a (remote) procedure call can increase the degree of parallelism in a parallel or distributed algorithm modeled by an action system. We define a return statement for procedures in an action systems framework and show that it corresponds to carrying out an atomicity refinement.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Multichannel SVD-based image de-noising In this paper, we propose a multichannel SVD-based image de-noising algorithm. The IntDCT is employed to decorrelate the image into sixteen subbands. The SVD is then applied to each of the subbands and the additive noise is reduced by truncating the eigenvalues. The simulation results illustrate that this technique can effectively filter the noisy images without assuming any statistics of the image by using a data compression technique.
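The pipeline the abstract describes, decorrelate into subbands and then truncate each subband's SVD, can be sketched as follows. The sketch substitutes a 2x2 block DCT (four subbands) for the paper's IntDCT with sixteen, and a fixed rank for a noise-adaptive truncation threshold; the parameters are illustrative.

```python
import numpy as np
from scipy.fft import dctn, idctn

def svd_denoise(img, rank=8):
    """Sketch: 2x2 DCT subband split, rank truncation per subband, inverse."""
    h, w = img.shape
    # Group pixels into 2x2 blocks and DCT each block: the four coefficient
    # positions form four subbands (roughly LL, LH, HL, HH).
    blocks = img.reshape(h // 2, 2, w // 2, 2).transpose(0, 2, 1, 3)
    coef = dctn(blocks, axes=(2, 3), norm="ortho")
    out = np.empty_like(coef)
    for a in range(2):
        for b in range(2):
            band = coef[:, :, a, b]
            U, s, Vt = np.linalg.svd(band, full_matrices=False)
            s[rank:] = 0.0                    # drop small singular values
            out[:, :, a, b] = (U * s) @ Vt
    rec = idctn(out, axes=(2, 3), norm="ortho")
    return rec.transpose(0, 2, 1, 3).reshape(h, w)

rng = np.random.default_rng(0)
clean = np.outer(np.sin(np.linspace(0, 3, 64)), np.cos(np.linspace(0, 3, 64)))
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
print("noise in: ", np.abs(noisy - clean).mean())
print("noise out:", np.abs(svd_denoise(noisy) - clean).mean())
```

Truncation works here because the image content concentrates in a few large singular values per subband while additive noise spreads across all of them, so no statistical model of the image is needed.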
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096-processor system using 1990s technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system, resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object-oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the timewise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In refining an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Classification of resource management approaches in fog/edge paradigm and future research prospects: a systematic review The fog paradigm extends the cloud capabilities at the edge of the network. Fog computing-based real-time applications (Online gaming, 5G, Healthcare 4.0, Industrial IoT, autonomous vehicles, virtual reality, augmented reality, and many more) are growing at a very fast pace. There are limited resources at the fog layer compared to the cloud, which leads to resource constraint problems. Edge resources need to be utilized efficiently to fulfill the growing demand for a large number of IoT devices. Lots of work has been done for the efficient utilization of edge resources. This paper provided a systematic review of fog resource management literature from the year 2016–2021. In this review paper, the fog resource management approaches are divided into 9 categories which include resource scheduling, application placement, load balancing, resource allocation, resource estimation, task offloading, resource provisioning, resource discovery, and resource orchestration. These resource management approaches are further subclassified based on the technology used, QoS factors, and data-driven strategies. Comparative analysis of existing articles is provided based on technology, tools, application area, and QoS factors. Further, future research prospects are discussed in the context of QoS factors, technique/algorithm, tools, applications, mobility support, heterogeneity, AI-based, distributed network, hierarchical network, and security. A systematic literature review of existing survey papers is also included. At the end of this work, key findings are highlighted in the conclusion section.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096-processor system using 1990s technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system, resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object-oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the timewise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In refining an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Dissipativity and passivity analysis for memristor-based neural networks with leakage and two additive time-varying delays. In this paper, the problems of dissipativity and passivity analysis for memristor-based neural networks (MNNs) with both time-varying leakage delay and two additive time-varying delays are studied. By introducing an improved Lyapunov–Krasovskii functional (LKF) with triple integral terms, and combining the reciprocally convex combination technique, Wirtinger-based integral inequality with free-weighting matrices technique, some less conservative delay-dependent dissipativity and passivity criteria are obtained. The proposed criteria that depend on the upper bounds of the leakage and additive time-varying delays are given in terms of linear matrix inequalities (LMI), which can be solved by MATLAB LMI Control Toolbox. Meanwhile, the criteria for the system with a single time-varying delay are also provided. Finally, some examples are given to illustrate the effectiveness and superiority of the obtained results.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096-processor system using 1990s technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
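In symbols, one common reading of the definition sketched above (notation varies across the literature, so treat this as an assumption rather than the paper's exact formulation): with \(\alpha\) the predicate transformer taking abstract predicates to concrete ones, a concrete program \(C\) data-refines an abstract program \(A\) through \(\alpha\) when

    \[ \alpha\big(\mathrm{wp}(A)(\varphi)\big) \;\Rightarrow\; \mathrm{wp}(C)\big(\alpha(\varphi)\big) \quad \text{for every abstract postcondition } \varphi, \]

that is, \( \alpha \circ \mathrm{wp}(A) \sqsubseteq \mathrm{wp}(C) \circ \alpha \). The single condition mentions no concrete invariant, which is what discharges the usual proof obligation.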
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system, resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object-oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
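A minimal sketch of the kind of tabu search the abstract describes, specialized to a multiconstraint knapsack. The function and parameter names are hypothetical, and the advanced-level strategies, probabilistic measures and Target Analysis are omitted:

    def tabu_knapsack(values, weights, capacities, iters=1000, tenure=7):
        """Toy tabu search for the 0/1 multiconstraint knapsack.

        values:     profit per item
        weights:    weights[c][i] = weight of item i under constraint c
        capacities: capacity per constraint
        """
        n = len(values)
        x = [0] * n                     # start from the empty knapsack
        tabu = {}                       # item index -> iteration until which it is tabu
        best_x, best_val = x[:], 0

        def feasible(sol):
            return all(sum(w[i] * sol[i] for i in range(n)) <= cap
                       for w, cap in zip(weights, capacities))

        def value(sol):
            return sum(v * s for v, s in zip(values, sol))

        for it in range(iters):
            best_move, best_move_val = None, float("-inf")
            for i in range(n):          # examine all feasible single-bit flips
                x[i] ^= 1
                if feasible(x):
                    v = value(x)
                    # Aspiration: a tabu move is allowed if it beats the incumbent.
                    if (tabu.get(i, -1) < it or v > best_val) and v > best_move_val:
                        best_move, best_move_val = i, v
                x[i] ^= 1               # undo the trial flip
            if best_move is None:
                break
            x[best_move] ^= 1
            tabu[best_move] = it + tenure
            if best_move_val > best_val:
                best_x, best_val = x[:], best_move_val
        return best_x, best_val

    # One knapsack constraint, five items:
    print(tabu_knapsack([10, 7, 5, 4, 3], [[6, 5, 4, 3, 2]], [10]))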
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time-wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
Scores: 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Axiomatic Thinking for Information Retrieval: And Related Tasks This is the first workshop on the emerging interdisciplinary research area of applying axiomatic thinking to information retrieval (IR) and related tasks. The workshop aims to help foster collaboration of researchers working on different perspectives of axiomatic thinking and encourage discussion and research on general methodological issues related to applying axiomatic thinking to IR and related tasks.
An Axiomatic Analysis of Diversity Evaluation Metrics: Introducing the Rank-Biased Utility Metric. Many evaluation metrics have been defined to evaluate the effectiveness of ad-hoc retrieval and search result diversification systems. However, it is often unclear which evaluation metric should be used to analyze the performance of retrieval systems given a specific task. Axiomatic analysis is an informative mechanism to understand the fundamentals of metrics and their suitability for particular scenarios. In this paper, we define a constraint-based axiomatic framework to study the suitability of existing metrics in search result diversification scenarios. The analysis informed the definition of Rank-Biased Utility (RBU) -- an adaptation of the well-known Rank-Biased Precision metric -- that takes into account redundancy and the user effort associated with the inspection of documents in the ranking. Our experiments over standard diversity evaluation campaigns show that the proposed metric captures quality criteria reflected by different metrics, being suitable in the absence of knowledge about particular features of the scenario under study.
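RBU adapts Rank-Biased Precision, whose definition is standard (Moffat and Zobel): each rank is discounted geometrically by a persistence parameter p. A minimal sketch of plain RBP follows; RBU's redundancy and effort terms are specific to the paper above and are not reproduced here:

    def rbp(relevances, p=0.8):
        """Rank-Biased Precision: (1 - p) * sum_i rel_i * p**(i-1).

        relevances: binary (or graded in [0, 1]) relevance per rank, top first.
        p: persistence, the probability the user inspects the next document.
        """
        return (1 - p) * sum(rel * p ** i for i, rel in enumerate(relevances))

    print(rbp([1, 0, 1, 1, 0]))   # 0.4304 with p = 0.8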
Are we on the Right Track?: An Examination of Information Retrieval Methodologies. The unpredictability of user behavior and the need for effectiveness make it difficult to define a suitable research methodology for Information Retrieval (IR). In order to tackle this challenge, we categorize existing IR methodologies along two dimensions: (1) empirical vs. theoretical, and (2) top-down vs. bottom-up. The strengths and drawbacks of the resulting categories are characterized according to 6 desirable aspects. The analysis suggests that different methodologies are complementary and therefore equally necessary. The categorization of the 167 full papers published in the last SIGIR (2016 and 2017) and ICTIR (2017) conferences suggests that most existing work is empirical bottom-up, indicating a lack of some desirable aspects. With the hope of improving IR research practice, we propose a general methodology for IR that integrates the strengths of existing research methods.
Towards a Formal Framework for Utility-oriented Measurements of Retrieval Effectiveness In this paper we present a formal framework to define and study the properties of utility-oriented measurements of retrieval effectiveness, like AP, RBP, ERR and many other popular IR evaluation measures. The proposed framework is laid in the wake of the representational theory of measurement, which provides the foundations of the modern theory of measurement in both physical and social sciences, thus contributing to explicitly link IR evaluation to a broader context. The proposed framework is minimal, in the sense that it relies on just one axiom, from which other properties are derived. Finally, it contributes to a better understanding and a clear separation of what issues are due to the inherent problems in comparing systems in terms of retrieval effectiveness and what others are due to the expected numerical properties of a measurement.
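One of the utility-oriented measures named above, Expected Reciprocal Rank, has a simple cascade form: the user stops at rank r with a probability determined by that document's grade, having not been satisfied earlier. A sketch of the standard definition (Chapelle et al.), independent of the paper's framework:

    def err(grades, max_grade=4):
        """Expected Reciprocal Rank over graded judgments (top of ranking first)."""
        p_continue, score = 1.0, 0.0
        for rank, g in enumerate(grades, start=1):
            r = (2 ** g - 1) / 2 ** max_grade   # probability this doc satisfies the user
            score += p_continue * r / rank
            p_continue *= 1 - r
        return score

    print(err([3, 0, 4]))   # documents graded on a 0..4 scale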
An Overview of KRL, a Knowledge Representation Language
Integrating noninterfering versions of programs The need to integrate several versions of a program into a common one arises frequently, but it is a tedious and time consuming task to integrate programs by hand. To date, the only available tools for assisting with program integration are variants of text-based differential file comparators; these are of limited utility because one has no guarantees about how the program that is the product of an integration behaves compared to the programs that were integrated.This paper concerns the design of a semantics-based tool for automatically integrating program versions. The main contribution of the paper is an algorithm that takes as input three programs A, B, and Base, where A and B are two variants of Base. Whenever the changes made to Base to create A and B do not “interfere” (in a sense defined in the paper), the algorithm produces a program M that integrates A and B. The algorithm is predicated on the assumption that differences in the behavior of the variant programs from that of Base, rather than differences in the text, are significant and must be preserved in M. Although it is undecidable whether a program modification actually leads to such a difference, it is possible to determine a safe approximation by comparing each of the variants with Base. To determine this information, the integration algorithm employs a program representation that is similar (although not identical) to the dependence graphs that have been used previously in vectorizing and parallelizing compilers. The algorithm also makes use of the notion of a program slice to find just those statements of a program that determine the values of potentially affected variables.The program-integration problem has not been formalized previously. It should be noted, however, that the integration problem examined here is a greatly simplified one; in particular, we assume that expressions contain only scalar variables and constants, and that the only statements used in programs are assignment statements, conditional statements, and while-loops.
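The integration algorithm above relies on dependence graphs and program slices; a minimal sketch of the slice operation it builds on (backward reachability over a dependence graph — the toy graph here is illustrative, not the paper's exact representation):

    def backward_slice(deps, target):
        """Backward slice: all statements the target (data- or control-) depends on,
        computed as reverse reachability over a dependence graph.

        deps: dict mapping a statement id to the ids it depends on.
        """
        seen, stack = set(), [target]
        while stack:
            s = stack.pop()
            if s not in seen:
                seen.add(s)
                stack.extend(deps.get(s, ()))
        return seen

    # s1: x = 1;  s2: y = 2;  s3: z = x + 1;  s4: print(z)
    deps = {"s3": ["s1"], "s4": ["s3"]}
    print(backward_slice(deps, "s4"))   # {'s4', 's3', 's1'} -- s2 is irrelevant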
Object-oriented development in an industrial environment Object-oriented programming is a promising approach to the industrialization of the software development process. However, it has not yet been incorporated in a development method for large systems. The approaches taken are merely extensions of well-known techniques for 'programming in the small' and do not stand on the firm experience of existing development methods for large systems. One such technique, called block design, has been used within the telecommunication industry and relies on a paradigm similar to object-oriented programming. The two techniques, together with a third technique, conceptual modeling, used for requirements modeling of information systems, have been unified into a method for the development of large systems.
Optimal, efficient, recursive edge detection filters The design of an optimal, efficient, infinite-impulse-response (IIR) edge detection filter is described. J. Canny (1986) approached the problem by formulating three criteria desired in any edge detection filter: good detection, good localization, and low spurious response. He maximized the product of the first two criteria while keeping the spurious response criterion constant. Using the variational approach, he derived a set of finite extent step edge detection filters corresponding to various values of the spurious response criterion, approximating the filters by the first derivative of a Gaussian. A more direct approach is described in this paper. The three criteria are formulated as appropriate for a filter of infinite impulse response, and the calculus of variations is used to optimize the composite criteria. Although the filter derived is also well approximated by the first derivative of a Gaussian, a superior recursively implemented approximation is achieved directly. The approximating filter is separable into two linear filters operating in two orthogonal directions, allowing for parallel edge detection processing. The implementation is very simple and computationally efficient.
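The derived filter is well approximated by the first derivative of a Gaussian; the sketch below implements that FIR approximation with separable filtering. The paper's actual contribution, the recursive IIR realization, is not reproduced here, and the helper names are illustrative:

    import numpy as np

    def dog_kernels(sigma):
        """Smoothing Gaussian and its first derivative, as 1-D FIR kernels."""
        radius = max(1, int(3 * sigma))
        x = np.arange(-radius, radius + 1, dtype=float)
        g = np.exp(-x**2 / (2 * sigma**2))
        g /= g.sum()
        dg = -x / sigma**2 * g          # derivative of the normalized Gaussian
        return g, dg

    def edge_magnitude(image, sigma=1.5):
        """Separable derivative-of-Gaussian filtering in two orthogonal directions."""
        g, dg = dog_kernels(sigma)
        conv = lambda a, k, axis: np.apply_along_axis(np.convolve, axis, a, k, 'same')
        gx = conv(conv(image, dg, 1), g, 0)   # d/dx, then smooth along y
        gy = conv(conv(image, dg, 0), g, 1)   # d/dy, then smooth along x
        return np.hypot(gx, gy)

    edges = edge_magnitude(np.random.rand(64, 64))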
Combining angels, demons and miracles in program specifications The complete lattice of monotonic predicate transformers is interpreted as a command language with a weakest precondition semantics. This command lattice contains Dijkstra's guarded commands as well as miracles. It also permits unbounded nondeterminism and angelic nondeterminism. The language is divided into sublanguages using criteria of demonic and angelic nondeterminism, termination and absence of miracles. We investigate dualities between the sublanguages and how they can be generated from simple primitive commands. The notions of total correctness and refinement are generalized to the command lattice.
Beyond models and metaphors: visual formalisms in user interface design The user interface has both syntactic functions-supplying commands and arguments to programs-and semantic functions-visually presenting application semantics and supporting problem solving cognition. The authors argue that though both functions are important, it is time to devote more resources to the problems of the semantic interface. Complex problem solving activities, e.g. for design and analysis tasks, benefit from clear visualizations of application semantics in the user interface. Designing the semantic interface requires computational building blocks capable of representing and visually presenting application semantics in a clear, precise way. The authors argue that neither mental models nor metaphors provide a basis for designing and implementing such building blocks, but that visual formalisms do. They compare the benefits of mental models, metaphors and visual formalisms as the basis for designing the user interface, with particular attention to the practical solutions each provides to application developers.
Knowledge-based and statistical approaches to text retrieval Major research issues in information retrieval are reviewed, and developments in knowledge-based approaches are described. It is argued that although a fair amount of work has been done, the effectiveness of this approach has yet to be demonstrated. It is suggested that statistical techniques and knowledge-based approaches should be viewed as complementary, rather than competitive.
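As a minimal illustration of the statistical side the review discusses, a toy tf-idf ranker (textbook weighting, not a method from the survey itself):

    import math
    from collections import Counter

    def tfidf_scores(query, docs):
        """Score documents against a query with tf * idf weighting."""
        tokenized = [d.lower().split() for d in docs]
        n = len(docs)
        df = Counter(t for doc in tokenized for t in set(doc))
        scores = []
        for doc in tokenized:
            tf = Counter(doc)
            s = sum(tf[t] * math.log(n / df[t])
                    for t in query.lower().split() if t in df)
            scores.append(s)
        return scores

    docs = ["retrieval of text documents", "knowledge based systems", "text text retrieval"]
    print(tfidf_scores("text retrieval", docs))   # third document scores highest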
S/NET: A High-Speed Interconnect for Multiple Computers This paper describes S/NET (symmetric network), a high-speed small area interconnect that supports effective multiprocessing using message-based communication. This interconnect provides low latency, bounded contention time, and high throughput. It further provides hardware support for low level flow control and signaling. The interconnect is a star network with an active switch. The computers connect to the switch through full duplex fiber links. The S/NET provides a simple memory addressable interface to the processors and appears as a logical bus interconnect. The switch provides fast, fair, and deterministic contention resolution. It further supports high priority signals to be sent unimpeded in the presence of data traffic (this can be viewed as equivalent to interrupts on a conventional memory bus). The initial implementation supports a mix of VAX computers and Motorola 68000 based single board computers up to a maximum of 12. The switch throughput is 80 Mbits/s and the fiber links operate at a data rate of 10 Mbits/s. The kernel-to-kernel latency is only 100 μs. We present a description of the architecture and discuss the performance of current systems.
Verifying task-based specifications in conceptual graphs A conceptual model is a model of real world concepts and application domains as perceived by users and developers. It helps developers investigate and represent the semantics of the problem domain, as well as communicate among themselves and with users. In this paper, we propose the use of task-based specifications in conceptual graphs (TBCG) to construct and verify a conceptual model. Task-based specification methodology is used to serve as the mechanism to structure the knowledge captured in the conceptual model; whereas conceptual graphs are adopted as the formalism to express task-based specifications and to provide a reasoning capability for the purpose of verification. Verifying a conceptual model is performed on model specifications of a task through constraints satisfaction and relaxation techniques, and on process specifications of the task based on operators and rules of inference inherent in conceptual graphs.
Trading Networks with Bilateral Contracts. We consider general networks of bilateral contracts that include supply chains. We define a new stability concept, called trail stability, and show that any network of bilateral contracts has a trail-stable outcome whenever agents' preferences satisfy full substitutability. Trail stability is a natural extension of chain stability, but is a stronger solution concept in general contract networks. Trail-stable outcomes are not immune to deviations of arbitrary sets of firms. In fact, we show that outcomes satisfying an even more demanding stability property -- full trail stability -- always exist. We pin down conditions under which trail-stable and fully trail-stable outcomes have a lattice structure. We then completely describe the relationships between all stability concepts. When contracts specify trades and prices, we also show that competitive equilibrium exists in networked markets even in the absence of fully transferrable utility. The competitive equilibrium outcome is trail-stable.
Scores: 1.2, 0.1, 0.1, 0.028571, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Database design with common sense business reasoning and learning Automated database design systems embody knowledge about the database design process. However, their lack of knowledge about the domains for which databases are being developed significantly limits their usefulness. A methodology for acquiring and using general world knowledge about business for database design has been developed and implemented in a system called the Common Sense Business Reasoner, which acquires facts about application domains and organizes them into a hierarchical, context-dependent knowledge base. This knowledge is used to make intelligent suggestions to a user about the entities, attributes, and relationships to include in a database design. A distance function approach is employed for integrating specific facts, obtained from individual design sessions, into the knowledge base (learning) and for applying the knowledge to subsequent design problems (reasoning).
Distributed Intelligent Agents In Retsina, the authors have developed a distributed collection of software agents that cooperate asynchronously to perform goal-directed information retrieval and integration for supporting a variety of decision-making tasks. Examples for everyday organizational decision making and financial portfolio management demonstrate its effectiveness.
Validating Requirements for Fault Tolerant Systems using Model Checking Model checking is shown to be an effective tool in validating the behavior of a fault tolerant embedded spacecraft controller. The case study presented here shows that by judiciously abstracting away extraneous complexity, the state space of the model could be exhaustively searched, allowing critical functional requirements to be validated down to the design level. Abstracting away detail not germane to the problem of interest leaves by definition a partial specification behind. The success of this procedure shows that it is feasible to effectively validate a partial specification with this technique. Three anomalies were found in the system. One was an error in the detailed requirements, and the other two were missing/ambiguous requirements. Because the method allows validation of partial specifications, it is also an effective approach for maintaining fidelity between a co-evolving specification and an implementation.
Repository support for multi-perspective requirements engineering Relationships among different modeling perspectives have been systematically investigated focusing either on given notations (e.g. UML) or on domain reference models (e.g. ARIS/SAP). In contrast, many successful informal methods for business analysis and requirements engineering (e.g. JAD) emphasize team negotiation, goal orientation and flexibility of modeling notations. This paper addresses the question of how much formal and computerized support can be provided in such settings without destroying their creative tenor. Our solution is based on a novel modeling language, M-Telos, that integrates the adaptability and analysis advantages of the logic-based meta modeling language Telos with a module concept covering the structuring mechanisms of scalable software architectures. It comprises four components: (1) A modular conceptual modeling formalism organizes individual perspectives and their interrelationships. (2) Perspective schemata are linked to a conceptual meta meta model of shared domain terms, thus giving the architecture a semantic meaning and enabling adaptability and extensibility of the network of perspectives. (3) Inconsistency management across perspectives is handled in a goal-oriented manner, by formalizing analysis goals as meta rules which are automatically customized to perspective schemata. (4) Continuous incremental maintenance of inconsistency information is provided by exploiting recent view maintenance techniques from deductive databases. The approach has been implemented as an extension to the ConceptBase meta database management system (available through the web site http://www-i5.Informatik.RWTH-Aachen.de/Cbdor/index.html) and has been applied in a number of real-world requirements engineering projects.
Designing And Building A Negotiating Automated Agent Negotiations are very important in a multiagent environment, particularly, in an environment where there are conflicts between the agents, and cooperation would be beneficial. We have developed a general structure for a Negotiating Automated Agent that consists of five modules: a Prime Minister, a Ministry of Defense, a Foreign Office, a Headquarters and Intelligence. These modules are implemented using a dynamic set of local agents belonging to the different modules. We used this structure to develop a Diplomacy player, Diplomat. Playing Diplomacy involves a certain amount of technical skills as in other board games, but the capacity to negotiate, explain, convince, promise, keep promises or break them, is an essential ingredient in good play. Diplomat was evaluated and consistently played better than human players.
A metamodel approach for the management of multiple models and the translation of schemes A metamodel approach is proposed as a framework for the definition of different data models and the management of translations of schemes from one model to another. This notion is useful in an environment for the support of the design and development of information systems, since different data models can be used and schemes referring to different models need to be exchanged. The approach is based on the observation that the constructs used in the various models can be classified into a limited set of basic types, such as lexical type, abstract type, aggregation, function. It follows that the translations of schemes can be specified on the basis of translations of the involved types of constructs: this is effectively performed by means of a procedural language and a number of predefined modules that express the standard translations between the basic constructs.
Explicit integration of goals in heuristic algorithm design We describe a transformational derivation system that semi-automatically derives a simplified version of Mycin's therapy selection algorithm. It uses general transformation rules to explicitly integrate the multiple, sometimes conflicting goals that govern the design of heuristic algorithms. The generality of its transformations is demonstrated by using them to derive a variation based on formulating and integrating the same design goals differently.
Monitoring software requirements using instrumented code Ideally, software is derived from requirements whose properties have been established as good. However, it is difficult to define and analyze requirements. Moreover, derivation of software from requirements is error prone. Finally, the installation and use of compiled software can introduce errors. Thus, it can be difficult to provide assurances about the state of a software's execution. We present a framework to monitor the requirements of software as it executes. The framework is general, and allows for automated support. The current implementation uses a combination of assertions and model checking to inform the monitor. We focus on two issues: (1) the expression of "suspect requirements", and (2) the transparency of the software and its environment to the monitor. We illustrate these issues with the widely known problems of the Dining Philosophers and the CCITT X.509 authentication. Each is represented as a Java program which is then instrumented and monitored.
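A toy rendering of the instrumented-monitoring idea for the Dining Philosophers example mentioned above, in Python rather than the paper's Java and with hypothetical event names: the instrumented program reports events, and the monitor asserts a suspect requirement over them.

    class RequirementMonitor:
        """Runtime monitor for an instrumented Dining Philosophers program.
        Suspect requirement R1: no two adjacent philosophers eat simultaneously."""

        def __init__(self, n):
            self.n, self.eating = n, set()

        def on_event(self, event, i):
            # The instrumented program reports (event, philosopher-id) pairs.
            if event == "start_eating":
                left, right = (i - 1) % self.n, (i + 1) % self.n
                assert left not in self.eating and right not in self.eating, \
                    f"R1 violated: philosopher {i} eats next to an eating neighbour"
                self.eating.add(i)
            elif event == "stop_eating":
                self.eating.discard(i)

    m = RequirementMonitor(5)
    m.on_event("start_eating", 0)
    m.on_event("start_eating", 2)   # fine: philosophers 0 and 2 are not adjacent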
Supporting conflict resolution in cooperative design systems Complex modern-day artifacts are designed cooperatively by groups of experts, each with their own areas of expertise. The interaction of such experts inevitably involves conflict. This paper presents an implemented computational model, based on studies of human cooperative design, for supporting the resolution of such conflicts. This model is based centrally on the insights that general conflict resolution expertise exists separately from domain-level design expertise, and that this expertise can be instantiated in the context of particular conflicts into specific advice for resolving those conflicts. Conflict resolution expertise consists of a taxonomy of design conflict classes in addition to associated general advice suitable for resolving conflicts in these classes. The abstract nature of conflict resolution expertise makes it applicable to a wide variety of design domains. This paper describes this conflict resolution model and provides examples of its operation from an implemented cooperative design system for local area network design that uses machine-based design agents. How this model is being extended to support and learn from collaboration of human design agents is also discussed.
PC-RE: a method for personal and contextual requirements engineering with some experience A method for requirements analysis is proposed that accounts for individual and personal goals, and the effect of time and context on personal requirements. First a framework to analyse the issues inherent in requirements that change over time and location is proposed. The implications of the framework on system architecture are considered as three implementation pathways: functional specifications, development of customisable features and automatic adaptation by the system. These pathways imply the need to analyse system architecture requirements. A scenario-based analysis method is described for specifying requirements goals and their potential change. The method addresses goal setting for measurement and monitoring, and conflict resolution when requirements at different layers (group, individual) and from different sources (personal, advice from an external authority) conflict. The method links requirements analysis to design by modelling alternative solution pathways. Different implementation pathways have cost–benefit implications for stakeholders, so cost–benefit analysis techniques are proposed to assess trade-offs between goals and implementation strategies. The use of the framework is illustrated with two case studies in assistive technology domains: e-mail and a personalised navigation system. The first case study illustrates personal requirements to help cognitively disabled users communicate via e-mail, while the second addresses personal and mobile requirements to help disabled users make journeys on their own, assisted by a mobile PDA guide. In both case studies the experience from requirements analysis to implementation, requirements monitoring, and requirements evolution is reported.
Diagrams based on structural object perception Most diagrams, particularly those used in software engineering, are line drawings consisting of nodes drawn as rectangles or circles, and edges drawn as lines linking them. In the present paper we review some of the literature on human perception to develop guidelines for effective diagram drawing. Particular attention is paid to structural object recognition theory. According to this theory, as objects are perceived they are decomposed into a 3D set of primitives called geons, together with the skeleton structure connecting them. We present a set of guidelines for drawing variations on node-link diagrams using geon-like primitives, and provide some examples. Results from three experiments are reported that evaluate 3D geon diagrams in comparison with 2D UML (Unified Modeling Language) diagrams. The first experiment measures the time and accuracy for a subject to recognize a sub-structure of a diagram represented either using geon primitives or UML primitives. The second and third experiments compare the accuracy of recalling geon vs. UML diagrams. The results of these experiments show that geon diagrams can be visually analyzed more rapidly, with fewer errors, and can be remembered better in comparison with equivalent UML diagrams.
Programming languages for distributed computing systems When distributed systems first appeared, they were programmed in traditional sequential languages, usually with the addition of a few library procedures for sending and receiving messages. As distributed applications became more commonplace and more sophisticated, this ad hoc approach became less satisfactory. Researchers all over the world began designing new programming languages specifically for implementing distributed applications. These languages and their history, their underlying principles, their design, and their use are the subject of this paper.We begin by giving our view of what a distributed system is, illustrating with examples to avoid confusion on this important and controversial point. We then describe the three main characteristics that distinguish distributed programming languages from traditional sequential languages, namely, how they deal with parallelism, communication, and partial failures. Finally, we discuss 15 representative distributed languages to give the flavor of each. These examples include languages based on message passing, rendezvous, remote procedure call, objects, and atomic transactions, as well as functional languages, logic languages, and distributed data structure languages. The paper concludes with a comprehensive bibliography listing over 200 papers on nearly 100 distributed programming languages.
Program Construction by Parts. Given a specification that includes a number of user requirements, we wish to focus on the requirements in turn, and derive a partly defined program for each; then combine all the partly defined programs into a single program that satisfies all the requirements simultaneously. In this paper we introduce a mathematical basis for solving this problem, and we illustrate it by means of a simple example. 1 Introduction and Motivation We propose a program construction method whereby, given a...
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
Scores: 1.111269, 0.100335, 0.100335, 0.100335, 0.100335, 0.100335, 0.050204, 0.025258, 0.010255, 0.00009, 0.000001, 0, 0, 0
The Requirements Problem for Adaptive Systems. Requirements Engineering (RE) focuses on eliciting, modeling, and analyzing the requirements and environment of a system-to-be in order to design its specification. The design of the specification,...
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news also carries over when we consider the program complexity of module checking. As good news, we show that for the commonly used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system, resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object-oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time-wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
Scores: 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Prediction-based control of LTI systems with input and output time-varying delays. The stability of a prediction-based controller is studied in the presence of time-varying delays both in the input and in the output. Thanks to the reduction method and a Lyapunov–Krasovskii analysis, stability conditions are derived. A comparison is also made between the single input delay and single output delay cases. It is shown that this method can be applied to stabilize output delay systems without restriction on the delay rate. The results are illustrated numerically on a double integrator.
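For orientation, the reduction method the abstract invokes, in its textbook constant-delay form (the paper itself treats time-varying input and output delays, which this special case does not capture): for \(\dot{x}(t) = A x(t) + B u(t-h)\), define the predictor state

    \[ z(t) = x(t) + \int_{t-h}^{t} e^{A(t-h-s)}\, B\, u(s)\, ds , \]

which satisfies \(\dot{z}(t) = A z(t) + e^{-Ah} B u(t)\); a feedback \(u = Kz\) then stabilizes the delayed loop whenever \(A + e^{-Ah} B K\) is Hurwitz, reducing the delayed problem to a delay-free one.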
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news also carries over when we consider the program complexity of module checking. As good news, we show that for the commonly used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system, resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object-oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time-wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
Scores: 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Fixed parameter algorithms for restricted coloring problems: acyclic, star, nonrepetitive, harmonious and clique colorings
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news also carries over when we consider the program complexity of module checking. As good news, we show that for the commonly used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
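A minimal illustration of the basic tabu-search loop on a multiconstraint knapsack instance (a sketch under our own simplifications; the paper's choice rules, aspiration criteria, and target analysis are far richer):

```python
import random

def tabu_knapsack(values, weights, capacities, iters=500, tenure=7, seed=0):
    """Toy tabu search for the multiconstraint 0/1 knapsack. Moves flip
    one variable; a flipped variable is tabu for `tenure` iterations
    unless the move improves on the incumbent (aspiration)."""
    rng = random.Random(seed)
    n, m = len(values), len(capacities)
    x = [0] * n
    tabu_until = [0] * n

    def feasible(sol):
        return all(sum(weights[i][j] * sol[j] for j in range(n)) <= capacities[i]
                   for i in range(m))

    def value(sol):
        return sum(values[j] * sol[j] for j in range(n))

    best, best_val = x[:], value(x)
    for it in range(1, iters + 1):
        candidates = []
        for j in range(n):
            y = x[:]
            y[j] ^= 1                      # flip one variable
            if not feasible(y):
                continue
            v = value(y)
            # Aspiration: accept a tabu move if it beats the incumbent.
            if it >= tabu_until[j] or v > best_val:
                candidates.append((v, j, y))
        if not candidates:
            break
        v, j, x = max(candidates, key=lambda c: (c[0], rng.random()))
        tabu_until[j] = it + tenure
        if v > best_val:
            best, best_val = x[:], v
    return best, best_val

# Tiny instance: 4 items, 2 knapsack constraints.
vals = [10, 7, 5, 9]
wts = [[3, 2, 4, 1],   # constraint 1 coefficients
       [2, 3, 1, 4]]   # constraint 2 coefficients
caps = [6, 7]
print(tabu_knapsack(vals, wts, caps))
```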
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
A near-lossless image compression algorithm suitable for hardware design in wireless endoscopy system In order to decrease the communication bandwidth and save transmitting power in the wireless endoscopy capsule, this paper presents a new near-lossless image compression algorithm, based on the Bayer format image, that is suitable for hardware design. The algorithm provides a low average compression rate (2.12 bits/pixel) with high image quality (larger than 53.11 dB) for endoscopic images. In particular, it has low hardware complexity (only two line buffers) and supports real-time compression. The algorithm can also provide lossless compression for the region of interest (ROI) and high-quality compression for other regions; the ROI can be selected arbitrarily by varying ROI parameters. The VLSI architecture of the compression algorithm is also presented, and its hardware design has been implemented in a 0.18 µm CMOS process.
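The near-lossless-plus-ROI idea reduces to quantizing prediction residuals with a per-pixel error bound: zero inside the ROI (lossless) and some delta outside. A small sketch of that mechanism (hypothetical code and names, not the paper's hardware algorithm):

```python
def near_lossless_encode(pixels, predictor, roi, delta=2):
    """Quantize prediction residuals with maximum error `delta` outside
    the region of interest and keep them exact (delta = 0) inside it.
    The decoder reconstructs from the same predictor, so |p - r| <= d."""
    encoded, recon = [], []
    for i, p in enumerate(pixels):
        pred = predictor(recon, i)
        err = p - pred
        d = 0 if roi(i) else delta
        q = (err + d) // (2 * d + 1) if d else err   # uniform quantizer
        encoded.append(q)
        recon.append(pred + q * (2 * d + 1))         # decoder-side value
    return encoded, recon

# Previous-pixel predictor; ROI = first half of the scanline.
prev = lambda recon, i: recon[i - 1] if i else 128
pixels = [120, 121, 119, 140, 150, 149, 151, 148]
enc, rec = near_lossless_encode(pixels, prev, roi=lambda i: i < 4, delta=2)
assert all(abs(p - r) <= (0 if i < 4 else 2)
           for i, (p, r) in enumerate(zip(pixels, rec)))
```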
A new near-lossless image compression algorithm suitable for hardware design in wireless endoscopy system In order to decrease the communication bandwidth and save transmitting power in the wireless endoscopy capsule, this paper presents a new near-lossless image compression algorithm suitable for hardware design based on the Bayer format image. The algorithm provides a low average compression rate (2.12 bits/pixel) with high image quality (larger than 53.11 dB) for endoscopic images. In particular, it has low hardware complexity and supports real-time compression. In addition, the algorithm can provide lossless compression for the region of interest (ROI) and high-quality compression for other regions. The ROI can be selected arbitrarily by varying ROI parameters.
A linear edge model and its application in lossless image coding A linear edge model for the prediction of edge pixel values is first proposed, and a gradient-adjusted predictor based on this model for context-based lossless image coding is then developed. Theoretical analysis and experimental results show that the performance of the proposed predictor is better than that of the state-of-the-art predictors for most test images.
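For context, a simplified gradient-adjusted predictor in the same spirit (this follows the well-known GAP of CALIC rather than the paper's own edge-model predictor, whose details differ; thresholds and names here are illustrative):

```python
def gap_predict(img, r, c):
    """Simplified gradient-adjusted prediction: horizontal and vertical
    activity estimates steer the prediction toward the neighbor on the
    far side of a detected edge. Requires r >= 2 and 2 <= c < width-1."""
    W, N = img[r][c - 1], img[r - 1][c]
    NW, NE = img[r - 1][c - 1], img[r - 1][c + 1]
    WW, NN = img[r][c - 2], img[r - 2][c]
    dh = abs(W - WW) + abs(N - NW) + abs(N - NE)   # horizontal activity
    dv = abs(W - NW) + abs(N - NN)                 # vertical activity
    if dv - dh > 80:                               # sharp horizontal edge
        return W
    if dh - dv > 80:                               # sharp vertical edge
        return N
    pred = (W + N) / 2 + (NE - NW) / 4             # smooth-region estimate
    if dv - dh > 32:
        pred = (pred + W) / 2                      # weak horizontal edge
    elif dh - dv > 32:
        pred = (pred + N) / 2                      # weak vertical edge
    return pred

img = [[10] * 6 for _ in range(6)]
print(gap_predict(img, 3, 3))   # flat region -> prediction is 10
```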
Lossless image coding using a switching predictor with run-length encodings In this paper, we propose a switching adaptive predictor (FSWAP) with run-length encoding for lossless image coding. The proposed FSWAP system has two operation modes: run mode and regular mode. If the members in the texture context of the coding pixel have identical grey values, the run mode is used; otherwise the regular mode is used. The run mode, using run-length coding with an arithmetic coder, is very useful for images with flat regions. The regular mode borrows the switching predictor structure of SWAP (Lih-Jen Kau et al., IEEE Trans. Fuzzy Systems) with some modifications. Experiments show that simplified context clustering is very useful in error modeling for prediction refinement. Furthermore, the execution time of FSWAP can be accelerated, with minor degradation in the bit rates, by the associated modifications. Comparisons of the proposed system to existing state-of-the-art predictive coders are given to demonstrate its coding efficiency.
Efficient high-performance ASIC implementation of JPEG-LS encoder This paper introduces an innovative design which implements a high-performance JPEG-LS encoder. The encoding process follows the principles of the JPEG-LS lossless mode. The proposed implementation consists of an efficient pipelined JPEG-LS encoder, which operates at a significantly higher encoding rate than any other JPEG-LS hardware or software implementation while keeping area small.
Virtually lossless compression of astrophysical images We describe an image compression strategy potentially capable of preserving the scientific quality of astrophysical data, simultaneously allowing a consistent bandwidth reduction to be achieved. Unlike strictly lossless techniques, by which moderate compression ratios are attainable, and conventional lossy techniques, in which the mean square error of the decoded data is globally controlled by users, near-lossless methods are capable of locally constraining the maximum absolute error, based on users' requirements. An advanced lossless/near-lossless differential pulse code modulation (DPCM) scheme, recently introduced by the authors and relying on a causal spatial prediction, is adjusted to the specific characteristics of astrophysical image data (high radiometric resolution, generally low noise, etc.). The background noise is preliminarily estimated to drive the quantization stage for high quality, which is the primary concern in most astrophysical applications. Extensive experimental results of lossless, near-lossless, and lossy compression of astrophysical images acquired by the Hubble space telescope show the advantages of the proposed method compared to standard techniques like JPEG-LS and JPEG2000. Eventually, the rationale of virtually lossless compression, that is, a noise-adjusted lossless/near-lossless compression, is highlighted and found to be in accordance with concepts well established in the astronomical community.
An Efficient Lossless Embedded Compression Engine Using Compacted-Felics Algorithm The memory bandwidth and capacity have become a critical design issue in display media chip for high-end display applications. In this paper, the lossless embedded compression engine using compacted-FELICS algorithm, which primarily consists of adjusted binary code and Golomb-Rice code, is proposed to handle this scenario. The encoding capability of its VLSI architecture can achieve Full-HD 1080p@60Hz. The prototype chip is implemented by TSMC 0.18-um with Artisan cell library, and its core size is 0.98mm x 0.97mm.
Compression of map images by multilayer context tree modeling. We propose a method for compressing color map images by context tree modeling and arithmetic coding. We consider multicomponent map images with semantic layer separation and images that are divided into binary layers by color separation. The key issue in the compression method is the utilization of interlayer correlations, and to solve the optimal ordering of the layers. The interlayer dependencies are acquired by optimizing the context tree for every pair of image layers. The resulting cost matrix of the interlayer dependencies is considered as a directed spanning tree problem and solved by an algorithm based on Edmonds' algorithm for optimum branching and by the optimal selection and removal of the background color. The proposed method gives results 50% better than JBIG and 25% better than single-layer context tree modeling.
Specifying software requirements for complex systems: new techniques and their application This paper concerns new techniques for making requirements specifications precise, concise, unambiguous, and easy to check for completeness and consistency. The techniques are well-suited for complex real-time software systems; they were developed to document the requirements of existing flight software for the Navy's A-7 aircraft. The paper outlines the information that belongs in a requirements document and discusses the objectives behind the techniques. Each technique is described and illustrated with examples from the A-7 document. The purpose of the paper is to introduce the A-7 document as a model of a disciplined approach to requirements specification; the document is available to anyone who wishes to see a fully worked-out example of the approach.
An incremental constraint solver An incremental constraint solver, the DeltaBlue algorithm, maintains an evolving solution to the constraint hierarchy as constraints are added and removed. DeltaBlue minimizes the cost of finding a new solution after each change by exploiting its knowledge of the last solution.
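A drastically simplified sketch of the setting (hypothetical code; it re-plans from scratch on every change, whereas DeltaBlue's actual contribution is to patch only the part of the dataflow the change touches):

```python
REQUIRED, STRONG, WEAK = 0, 1, 2          # lower number = stronger

class Constraint:
    def __init__(self, strength, output, inputs, fn):
        self.strength, self.output = strength, output
        self.inputs, self.fn = inputs, fn

class TinySolver:
    """Constraint hierarchy over named variables: the strongest
    constraint writing each variable wins. Names here are ours."""
    def __init__(self, initial):
        self.values = dict(initial)
        self.constraints = []

    def add(self, c):
        self.constraints.append(c)
        self._replan()

    def remove(self, c):
        self.constraints.remove(c)
        self._replan()

    def _replan(self):
        claimed = set()
        for c in sorted(self.constraints, key=lambda k: k.strength):
            if c.output not in claimed:
                claimed.add(c.output)
                self.values[c.output] = c.fn(*(self.values[v] for v in c.inputs))

s = TinySolver({"celsius": 0.0, "fahrenheit": 0.0})
s.add(Constraint(STRONG, "fahrenheit", ["celsius"], lambda c: c * 9 / 5 + 32))
s.add(Constraint(REQUIRED, "celsius", [], lambda: 100.0))
print(s.values["fahrenheit"])   # 212.0: the required constraint propagates
```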
Usability analysis with Markov models How hard do users find interactive devices to use to achieve their goals, and how can we get this information early enough to influence design? We show that Markov modeling can obtain suitable measures, and we provide formulas that can be used for a large class of systems. We analyze and consider alternative designs for various real examples. We introduce a “knowledge/usability graph,” which shows the impact of even a small amount of user knowledge, and the extent to which designers' knowledge may bias their views of usability. Markov models can be built into design tools, and can therefore be made very convenient for designers to utilize. One would hope that in the future, design tools would include such mathematical analysis, and no new design skills would be required to evaluate devices. A particular concern of this paper is to make the approach accessible. Complete program code and all the underlying mathematics are provided in appendices to enable others to replicate and test all results shown.
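The core calculation behind measures of this kind is the standard absorbing-Markov-chain expectation: make the goal state absorbing and solve (I - Q)t = 1 for the expected number of actions from each state. A sketch (function names and the toy device are ours):

```python
import numpy as np

def expected_steps_to_goal(P, goal):
    """Expected number of user actions to reach `goal` from each state
    of a device modeled as a Markov chain with transition matrix P
    (rows sum to 1). Treat the goal as absorbing and solve the linear
    system (I - Q) t = 1 over the transient states."""
    n = P.shape[0]
    others = [s for s in range(n) if s != goal]
    Q = P[np.ix_(others, others)]            # transitions among non-goal states
    t = np.linalg.solve(np.eye(n - 1) - Q, np.ones(n - 1))
    steps = np.zeros(n)
    steps[others] = t
    return steps

# 3-state toy device: a confused user presses buttons uniformly at random.
P = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])
print(expected_steps_to_goal(P, goal=2))   # 4 expected presses from states 0 and 1
```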
The S/Net's Linda kernel (extended abstract) No abstract available.
Ontology, Metadata, and Semiotics The Internet is a giant semiotic system. It is a massive collection of Peirce's three kinds of signs: icons, which show the form of something; indices, which point to something; and symbols, which represent something according to some convention. But current proposals for ontologies and metadata have overlooked some of the most important features of signs. A sign has three aspects: it is (1) an entity that represents (2) another entity to (3) an agent. By looking only at the signs themselves, some metadata proposals have lost sight of the entities they represent and the agents (human, animal, or robot) which interpret them. With its three branches of syntax, semantics, and pragmatics, semiotics provides guidelines for organizing and using signs to represent something to someone for some purpose. Besides representation, semiotics also supports methods for translating patterns of signs intended for one purpose to other patterns intended for different but related purposes. This article shows how the fundamental semiotic primitives are represented in semantically equivalent notations for logic, including controlled natural languages and various computer languages.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.03979
0.041903
0.025
0.025
0.013141
0.005
0.000137
0.000005
0
0
0
0
0
0
Robust passivity analysis for uncertain neural networks with leakage delay and additive time-varying delays by using general activation function. This article deals with the robust passivity analysis problem for uncertain neural networks with both leakage delay and additive time-varying delays by using a more general activation function technique. Information about the activation function that is ignored in existing results is taken into account in this paper. Based on Lyapunov stability theory, a proper Lyapunov–Krasovskii functional (LKF) with some new terms is constructed. Less conservative delay-dependent stability criteria are obtained by applying a newly developed integral inequality that includes Jensen's inequality and a Wirtinger-based integral inequality as special cases. Some sufficient conditions are derived to guarantee the stability and passivity of the addressed system model. All the proposed results are formulated as linear matrix inequalities (LMIs). Finally, three numerical cases are simulated to show the effectiveness and benefits of the proposed method.
Neutral-type of delayed inertial neural networks and their stability analysis using the LMI Approach. A theoretical investigation of neutral-type delayed inertial neural networks using Lyapunov stability theory and the Linear Matrix Inequality (LMI) approach is presented. Based on a suitable variable transformation, an inertial neural network consisting of second-order differential equations can be converted into a first-order differential model. Sufficient conditions for the delayed inertial neural network are derived by constructing suitable Lyapunov functional candidates, introducing new free weighting matrices, and utilizing the Wirtinger integral inequality. Through the LMI solution, we analyse the global asymptotic stability condition of the resulting delayed inertial neural network. Simulation examples are presented to demonstrate the effectiveness of the derived analytical results.
Dissipativity and passivity analysis of Markovian jump impulsive neural networks with time delays. This paper discusses the issue of dissipativity and passivity analysis for a class of impulsive neural networks with both Markovian jump parameters and mixed time delays. The jumping parameters are modelled as a continuous-time discrete-state Markov chain. Based on a multiple integral inequality technique, a novel delay-dependent dissipativity criterion is established via a suitable Lyapunov functional involving the multiple integral terms. The proposed dissipativity and passivity conditions for the impulsive neural networks are represented by means of linear matrix inequalities. Finally, three numerical examples are given to show the effectiveness of the proposed criteria.
An Overview of KRL, a Knowledge Representation Language
Implementing Remote procedure calls Remote procedure calls (RPC) are a useful paradigm for providing communication across a network between programs written in a high level language. This paper describes a package, written as part of the Cedar project, providing a remote procedure call facility. The paper describes the options that face a designer of such a package, and the decisions we made. We describe the overall structure of our RPC mechanism, our facilities for binding RPC clients, the transport level communication protocol, and some performance measurements. We include descriptions of some optimisations we used to achieve high performance and to minimize the load on server machines that have many clients. Our primary aim in building an RPC package was to make the building of distributed systems easier. Previous protocols were sufficiently hard to use that only members of a select group of communication experts were willing to undertake the construction of distributed systems. We hoped to overcome this by providing a communication paradigm as close as possible to the familiar facilities of our high level languages. To achieve this aim, we concentrated on making remote calls efficient, and on making the semantics of remote calls as close as possible to those of local calls.
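A minimal sketch of the stub-and-marshalling structure such a package automates (illustrative Python over TCP and JSON; Cedar's binding machinery, transport protocol, and performance optimisations are of course far more elaborate, and all names below are ours):

```python
import json
import socket
import threading

def serve(sock, procedures):
    """Server side: accept one call, dispatch to the registered
    procedure, and return the marshalled result."""
    conn, _ = sock.accept()
    with conn:
        req = json.loads(conn.recv(4096))
        result = procedures[req["proc"]](*req["args"])
        conn.sendall(json.dumps({"result": result}).encode())

def remote_call(addr, proc, *args):
    """Client stub: marshal the call, send it, unmarshal the reply.
    To the caller it reads like a local procedure call."""
    with socket.create_connection(addr) as s:
        s.sendall(json.dumps({"proc": proc, "args": args}).encode())
        return json.loads(s.recv(4096))["result"]

server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
addr = server.getsockname()
threading.Thread(target=serve, args=(server, {"add": lambda a, b: a + b}),
                 daemon=True).start()
print(remote_call(addr, "add", 2, 3))   # -> 5, with the network hidden in the stub
```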
Alloy: a lightweight object modelling notation Alloy is a little language for describing structural properties. It offers a declaration syntax compatible with graphical object models, and a set-based formula syntax powerful enough to express complex constraints and yet amenable to a fully automatic semantic analysis. Its meaning is given by translation to an even smaller (formally defined) kernel. This paper presents the language in its entirety, and explains its motivation, contributions and deficiencies.
Semantic grammar: an engineering technique for constructing natural language understanding systems One of the major stumbling blocks to more effective use of computers by naive users is the lack of natural means of communication between the user and the computer system. This report discusses a paradigm for constructing efficient and friendly man-machine interface systems involving subsets of natural language for limited domains of discourse. As such, this work falls somewhere between highly constrained formal language query systems and unrestricted natural language understanding systems. The primary purpose of this research is not to advance our theoretical understanding of natural language but rather to put forth a set of techniques for embedding both semantic/conceptual and pragmatic information into a useful natural language interface module. Our intent has been to produce a front-end system which enables the user to concentrate on his problem or task rather than making him worry about how to communicate his ideas or questions to the machine.
Recursive functions of symbolic expressions and their computation by machine, Part I This paper in LaTeX, partly supported by ARPA (ONR) grant N00014-94-1-0775 to Stanford University, where John McCarthy has been since 1962. Copied with minor notational changes from CACM, April 1960. If you want the exact typography, look there. Current address: John McCarthy, Computer Science Department, Stanford, CA 94305 (email: jmc@cs.stanford.edu, URL: http://www-formal.stanford.edu/jmc/) ... by starting with the class of expressions called S-expressions and the functions called...
A study of cross-validation and bootstrap for accuracy estimation and model selection We review accuracy estimation methods and compare the two most common methods, cross-validation and bootstrap. Recent experimental results on artificial data and theoretical results in restricted settings have shown that for selecting a good classifier from a set of classifiers (model selection), ten-fold cross-validation may be better than the more expensive leave-one-out cross-validation. We report on a large-scale experiment--over half a million runs of C4.5 and a Naive-Bayes algorithm--to estimate the effects of different parameters on these algorithms on real-world datasets. For cross-validation we vary the number of folds and whether the folds are stratified or not; for bootstrap, we vary the number of bootstrap samples. Our results indicate that for real-world datasets similar to ours, the best method to use for model selection is ten-fold stratified cross-validation, even if computation power allows using more folds.
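A compact version of the recommended estimator, ten-fold stratified cross-validation (a sketch; the helper names and the toy majority-class "model" are ours):

```python
import random
from collections import defaultdict

def stratified_kfold_accuracy(X, y, train_fn, k=10, seed=0):
    """Stratified k-fold cross-validation: folds preserve class
    proportions, each fold serves once as the test set, and the k
    accuracies are averaged. train_fn(X, y) returns a predictor."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for i, label in enumerate(y):
        by_class[label].append(i)
    folds = [[] for _ in range(k)]
    for idxs in by_class.values():           # deal each class round-robin
        rng.shuffle(idxs)
        for j, i in enumerate(idxs):
            folds[j % k].append(i)
    accs = []
    for f in folds:
        train = [i for g in folds if g is not f for i in g]
        model = train_fn([X[i] for i in train], [y[i] for i in train])
        accs.append(sum(model(X[i]) == y[i] for i in f) / len(f))
    return sum(accs) / k

# Toy "classifier" that always predicts the majority training class.
majority = lambda X, y: (lambda _: max(set(y), key=y.count))
data, labels = list(range(100)), [i % 2 for i in range(100)]
print(stratified_kfold_accuracy(data, labels, majority))   # ~0.5 on balanced labels
```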
A Theory of Prioritizing Composition An operator for the composition of two processes, where one process has priority over the other process, is studied. Processes are described by action systems, and data refinement is used for transforming processes. The operator is shown to be compositional, i.e. monotonic with respect to refinement. It is argued that this operator is adequate for modelling priorities as found in programming languages and operating systems. Rules for introducing priorities and for raising and lowering priorities of processes are given. Dynamic priorities are modelled with special priority variables which can be freely mixed with other variables and the prioritising operator in program development. A number of applications show the use of prioritising composition for modelling and specification in general.
Inheritance of proofs The Curry-Howard isomorphism, a fundamental property shared by many type theories, establishes a direct correspondence between programs and proofs. This suggests that the same structuring principles that ease programming should be useful for proving as well. To exploit object-oriented structuring mechanisms for verification, we extend the object-model of Pierce and Turner, based on the higher-order typed λ-calculus F≤ω, with a logical component. By enriching the (functional) signature of objects with a specification, methods and their correctness proofs are packed together in objects. The uniform treatment of methods and proofs gives rise in a natural way to object-oriented proving principles - including inheritance of proofs, late binding of proofs, and encapsulation of proofs - as analogues to object-oriented programming principles. We have used Lego, a type-theoretic proof checker, to explore the feasibility of this approach. (C) 1998 John Wiley & Sons, Inc.
Software engineering for parallel systems Current approaches to software engineering practice for parallel systems are reviewed. The parallel software designer has not only to address the issues involved in the characterization of the application domain and the underlying hardware platform, but, in many instances, the production of portable, scalable software is desirable. In order to accommodate these requirements, a number of specific techniques and tools have been proposed, and these are discussed in this review in the framework of the parallel software life-cycle. The paper outlines the role of formal methods in the practical production of parallel software, but its main focus is the emergence of development methodologies and environments. These include CASE tools and run-time support systems, as well as the use of methods taken from experience of conventional software development. Because of the particular emphasis on performance of parallel systems, work on performance evaluation and monitoring systems is considered.
Developing Mode-Rich Satellite Software by Refinement in Event B To ensure dependability of on-board satellite systems, the designers should, in particular, guarantee correct implementation of the mode transition scheme, i.e., ensure that the states of the system components are consistent with the global system mode. However, there is still a lack of scalable approaches to formal verification of correctness of complex mode transitions. In this paper we present a formal development of an Attitude and Orbit Control System (AOCS) undertaken within the ICT DEPLOY project. AOCS is a complex mode-rich system, which has an intricate mode-transition scheme. We show that refinement in Event B provides the engineers with a scalable formal technique that enables both development of mode-rich systems and proof-based verification of their mode consistency.
Generalized Jensen Inequalities with Application to Stability Analysis of Systems with Distributed Delays over Infinite Time-Horizons. The Jensen inequality has been recognized as a powerful tool to deal with the stability of time-delay systems. Recently, a new inequality that encompasses the Jensen inequality was proposed for the stability analysis of systems with finite delays. In this paper, we first present a generalized integral inequality and its double integral extension. It is shown how these inequalities can be applied to improve the stability result for linear continuous-time systems with gamma-distributed delays. Then, for the discrete-time counterpart we provide an extended Jensen summation inequality with infinite sequences, which leads to less conservative stability conditions for linear discrete-time systems with Poisson-distributed delays. The improvements obtained by the introduced generalized inequalities are demonstrated through examples.
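For reference, the classical single- and double-integral Jensen inequalities that the paper's generalized inequalities subsume (standard statements for integrable f and R = Rᵀ ≻ 0; the notation is ours):

```latex
\Bigl(\int_{a}^{b} f(s)\,ds\Bigr)^{\!T} R \,\Bigl(\int_{a}^{b} f(s)\,ds\Bigr)
  \;\le\; (b-a)\int_{a}^{b} f^{T}(s)\,R\,f(s)\,ds,
\qquad
\Bigl(\int_{a}^{b}\!\!\int_{s}^{b} f(u)\,du\,ds\Bigr)^{\!T} R\,
\Bigl(\int_{a}^{b}\!\!\int_{s}^{b} f(u)\,du\,ds\Bigr)
  \;\le\; \frac{(b-a)^{2}}{2}\int_{a}^{b}\!\!\int_{s}^{b} f^{T}(u)\,R\,f(u)\,du\,ds.
```

The paper's contribution is to extend statements of this shape to infinite integration horizons (and their summation analogues), which is what gamma- and Poisson-distributed delays require.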
1.2
0.1
0.1
0
0
0
0
0
0
0
0
0
0
0
A framework for object oriented design and prototyping of manufacturing systems Object-oriented programming, through the use of a library of standard classes and their inheritance relationships, offers a consistent framework to specify, design, and prototype the software architecture of complex discrete event control systems. This work applies such a framework to computer integrated manufacturing, proposing a unified view to represent the hierarchical control structure of a commonly accepted reference model. The results are a library of classes (called G++ and based on the C++ programming language) to express concurrency and to support an extended client-server paradigm, and a new design methodology.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096-processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news also carries over when we consider the program complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Support software for the development of programmable logic controller applications Enhancements to the software aids, used for the development of Programmable Logic Controller programs, are proposed in this work. A possible architecture of the software realizing these enhancements and the language constructs required for its configuration to a specific application are also presented. Based on this architecture, experimental software aids have been developed to demonstrate that it is quite feasible to provide the major services considered in this proposal. These services allow the program developer to emulate the dynamic operation of a specific programmable controller to alternative scenarios of input variations and relationships over a defined time horizon, and configure displays of graphics and timing diagrams which may assist him in following up and testing the program execution.
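The central service described, emulating a controller's reaction to input scenarios over a time horizon, rests on the PLC scan cycle: latch the inputs, evaluate the whole program against that snapshot, then commit the outputs. A toy emulation of that cycle (hypothetical code, not the paper's tool):

```python
def simulate_plc(program, scenario):
    """Run one scan per tick: inputs are latched into a snapshot, the
    whole program is evaluated against it, and outputs are committed,
    so within a single scan every rung sees the same inputs."""
    state = {}
    trace = []
    for inputs in scenario:                 # one dict of input bits per tick
        snapshot = {**state, **inputs}      # inputs latched for this scan
        state = {**inputs, **program(snapshot)}
        trace.append(dict(state))
    return trace

# Rung: the motor runs when start is pressed and latches until stop.
def program(io):
    run = (io.get("start") or io.get("motor")) and not io.get("stop")
    return {"motor": bool(run)}

scenario = [{"start": 1, "stop": 0},
            {"start": 0, "stop": 0},
            {"start": 0, "stop": 1}]
for t, s in enumerate(simulate_plc(program, scenario)):
    print(t, s["motor"])    # True, True, False: the classic latch timing diagram
```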
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096-processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news also carries over when we consider the program complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Dynamic configuration management in a graph-oriented distributed programming environment Dynamic configuration is a desirable property of a distributed system where dynamic modification and extension to the system and the applications are required. It allows the system configuration to be specified and changed while the system is executing. This paper describes a software platform that facilitates a novel approach to the dynamically configurable programming of parallel and distributed applications and systems. This platform is based on a graph-oriented model and it provides support for constructing reconfigurable distributed programs. We describe the design and implementation of a dynamic configuration manager for the graph-oriented distributed programming environment. The requirements and services for dynamic reconfiguration are identified. The architectural design of a dynamic configuration manager is presented, and a parallel virtual machine-based prototypical implementation of the manager, on a local area network of workstations, is described.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096-processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news also carries over when we consider the program complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
3rd international workshop on software evolution through transformations: embracing change Transformation-based techniques such as refactoring, model transformation and model-driven development, architectural reconfiguration, etc. are at the heart of many software engineering activities, making it possible to cope with an ever changing environment. This workshop provides a forum for discussing these techniques, their formal foundations and applications.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the Sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
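For intuition, here is a hypothetical sketch of the enabled-set idea (class and method names are invented; this is not the Rosette implementation): an object explicitly names the set of messages it is currently willing to accept, and recomputes that set after every method, which is what makes the mechanism first-class and easy to specialize in subclasses.

```python
# Hypothetical enabled-set style synchronization on a bounded buffer.
class BoundedBuffer:
    def __init__(self, capacity):
        self.items, self.capacity = [], capacity
        self.enabled = {"put"}                  # empty buffer: only put

    def _next_enabled(self):
        s = set()
        if len(self.items) < self.capacity:
            s.add("put")
        if self.items:
            s.add("get")
        return s

    def dispatch(self, msg, *args):
        if msg not in self.enabled:
            raise RuntimeError(f"{msg} not in enabled set {self.enabled}")
        result = getattr(self, msg)(*args)
        self.enabled = self._next_enabled()     # becomes the new enabled set
        return result

    def put(self, x):
        self.items.append(x)

    def get(self):
        return self.items.pop(0)

b = BoundedBuffer(1)
b.dispatch("put", 42)
print(b.enabled)            # {'get'} -- put is disabled while full
print(b.dispatch("get"))    # 42
```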
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
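For reference, one standard way such a definition is written (the notation below is a common formulation and may differ from the paper's own): a concrete program C data-refines an abstract program A through a predicate transformer rho, taking abstract predicates to concrete ones, exactly when rho distributes over the weakest preconditions.

```latex
% A common formulation (illustrative notation, not necessarily the paper's):
\[
  A \sqsubseteq_{\rho} C
  \quad\iff\quad
  \forall a.\;\; \rho\,(\mathit{wp}(A, a)) \;\Rightarrow\; \mathit{wp}(C, \rho\,a)
\]
```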
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system, resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object-oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
A Conceptual Graph Model for W3C Resource Description Framework With the aim of building a "Semantic Web", the content of the documents must be explicitly represented through metadata in order to enable contents-guided search. Our approach is to exploit a standard language (RDF, recommended by W3C) for expressing such metadata and to interpret these metadata in conceptual graphs (CG) in order to exploit querying and inferencing capabilities enabled by CG formalism. The paper presents our mapping of RDF into CG and its interest in the context of the semantic Web.
Procedures and atomicity refinement The introduction of an early return from a (remote) procedure call can increase the degree of parallelism in a parallel or distributed algorithm modeled by an action system. We define a return statement for procedures in an action systems framework and show that it corresponds to carrying out an atomicity refinement.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Practical Stability and Event-Triggered Load Frequency Control of Networked Power Systems Practical stability analysis for delayed systems is discussed. Weak practical stability conditions are derived by using Halanay’s inequality and the Lyapunov method. The results are applied to address the problem of practical stability of a power system over a delay-induced communication network. An event-detection-based control scheme is proposed to reduce the communication burdens. Based on the obtained practical stability conditions and the proposed event-detection scheme, sufficient practical stability conditions for load frequency control of a networked power system are given. Furthermore, a new design approach to the event-detection-based controller is presented. Finally, a numerical simulation is given to show the effectiveness and advantage of the obtained results.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the Sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system, resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object-oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced-level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve fully general 0/1 MIP problems and thus contains no problem-domain-specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special-purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power-aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the timewise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
1
0
0
0
0
0
0
0
0
0
0
0
0
0
ASSET: A life cycle verification and visibility system This paper describes the Automated Systems and Software Engineering Technology (ASSET) System, a system of techniques and tools aiding in the management and control of product development and maintenance. Improved verification techniques are applied throughout the entire life cycle and management visibility is greatly enhanced. The paper discusses the critical need for improving upon past and present management methodology, and describes the ASSET verification methodology, the ASSET system architecture, and the current ASSET development status.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the Sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system, resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object-oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced-level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve fully general 0/1 MIP problems and thus contains no problem-domain-specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special-purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power-aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the timewise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Semantic analysis of Larch Interface Specifications
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the Sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system, resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object-oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced-level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve fully general 0/1 MIP problems and thus contains no problem-domain-specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special-purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power-aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the timewise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Computing similarity in a reuse library system: an AI-based approach This paper presents an AI-based library system for software reuse, called AIRS, that allows a developer to browse a software library in search of components that best meet some stated requirement. A component is described by a set of (feature, term) pairs. A feature represents a classification criterion, and is defined by a set of related terms. The system also allows the representation of packages (logical units that group a set of components), which are likewise described in terms of features. Candidate reuse components and packages are selected from the library based on the degree of similarity between their descriptions and a given target description. Similarity is quantified by a nonnegative magnitude (distance) proportional to the effort required to obtain the target given a candidate. Distances are computed by comparator functions based on the subsumption, closeness, and package relations. We present a formalization of the concepts on which the AIRS system is based. The functionality of a prototype implementation of the AIRS system is illustrated by application to two different software libraries: a set of Ada packages for data structure manipulation, and a set of C components for use in Command, Control, and Information Systems. Finally, we discuss some of the ideas we are currently exploring to automate the construction of AIRS classification libraries.
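A hypothetical sketch of a distance computation over (feature, term) descriptions follows. The helper names and the toy term distance are invented for illustration; the actual AIRS comparators, based on the subsumption, closeness, and package relations, are considerably richer.

```python
# Toy distance between (feature, term) descriptions (illustrative only).
def description_distance(target, candidate, term_distance):
    """Sum of per-feature distances; a missing feature costs a fixed penalty."""
    MISSING = 1.0
    total = 0.0
    for feature, wanted in target.items():
        have = candidate.get(feature)
        total += MISSING if have is None else term_distance(wanted, have)
    return total

# Toy term distance: 0 if equal, 0.5 if in the same hand-written group, else 1.
GROUPS = {"stack": "linear", "queue": "linear", "tree": "hierarchic"}
def toy_term_distance(a, b):
    if a == b:
        return 0.0
    if a in GROUPS and b in GROUPS and GROUPS[a] == GROUPS[b]:
        return 0.5
    return 1.0

target = {"structure": "stack", "language": "Ada"}
candidate = {"structure": "queue", "language": "Ada"}
print(description_distance(target, candidate, toy_term_distance))  # 0.5
```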
Indexing hypertext documents in context
Aligning an Enterprise System with Enterprise Requirements: An Iterative Process Keywords: Enterprise systems, Business process alignment, Object-Process Methodology Abstract: Aligning an off-the-shelf software package with the business processes of the enterprise implementing it is one of the main problems in the implementation of enterprise systems. The paper proposes an iterative alignment process, which takes a requirement-driven approach. It benefits from reusing business process design without being restricted by predefined solutions and criteria. The process employs an automated matching between a model of the enterprise requirements and a model of the enterprise system capabilities. It identifies possible matches between the two models and evaluates the gaps between them despite differences in their completeness and detail level. Thus it provides the enterprise with a set of feasible combinations of requirements that can be satisfied by the system as a basis for making implementation decisions. The automated matching is applied iteratively, until a satisfactory solution is found. Object Process Methodology (OPM) is applied for modeling both the system and the enterprise requirements, which are inputs for the automated matching. The alignment process has been tested in an experimental study, whose encouraging results demonstrate its ability to provide a satisfactory solution to the alignment problem.
The use of lexical affinities in requirements extraction The use of lexical affinities to help a human requirements analyst find abstractions in problem descriptions is explored. It is hoped that a lexical affinities finding tool can be used as part of an environment to help organize the sentences and phrases of a natural language problem description to aid the requirements analyst in the extraction of requirements. An experiment to confirm its effectiveness is described. The first steps in the development of any computational system should be the writing of requirements with the client's help. It may be necessary to build a prototype first, but ultimately before building a production-quality version, it is necessary to agree upon what is to be in the system. Winchester and Estrin [34] list a number of requirements for the requirements themselves. The main of these, from the programmer-client perspective, are that the requirements must be understandable to both the customers and the designers and builders; the parts of the requirements must be consistent with each other; and the requirements must be complete so that the designers and builders do not have to make unintended value judgements during their work. This paper deals ultimately with, describes, and determines the effectiveness of one tool designed to assist in one part of the process of writing requirements. It is essential that the reader understand the context in which this tool is expected to operate. Hence, Sections 2 through 5 are devoted to briefly describing this context. [...] desired system should do. These views range from being totally unrelated to each other to being totally inconsistent with each other. It is no wonder that the distillation of these views into a consistent, complete, and unambiguous statement of the requirements, albeit in natural language, is a major part of the problem of developing software which meets the client's needs. Therefore, it is essential to have methods and tools that help in distilling these many views into coherent requirements. 3. PAST WORK There are already a variety of systems, tools, and methods for dealing with requirements. These include SADT [28,27], IORL [31], PSL/PSA [32], RDL [34], RSL [5,6,7], RML [11] and Burstin's prototype [12] tool. The first two are graphically oriented, and the second of these is automated. The remainder work from highly constrained subsets of English consisting of sentences, each of which states one requirement to which the final implementation must adhere. These sentences can be considered as relations in a database. Those which are automated have tools for working with the sentences and abstractions of the requirements document once these sentences and abstractions have been recognized and stated. Due to space limitations, only those having a direct impact on this work are described in detail herein.
Matching conceptual graphs as an aid to requirements re-use The types of knowledge used during requirements acquisition are identified and a tool to aid in this process, ReqColl (Requirements Collector) is introduced. The tool uses conceptual graphs to represent domain concepts and attempts to recognise new concepts through the use of a matching facility. The overall approach to requirements capture is first described and the approach to matching illustrated informally. The detailed procedure for matching conceptual graphs is then given. Finally ReqColl is compared to similar work elsewhere and some future research directions indicated.
A Task-Based Methodology for Specifying Expert Systems A task-based specification methodology for expert system specification that is independent of the problem solving architecture, that can be applied to many expert system applications, that focuses on what the knowledge is, not how it is implemented, that introduces the major concepts involved gradually, and that supports verification and validation is discussed. To evaluate the methodology, a specification of R1/SOAR, an expert system that reimplements a major portion of the R1 expert system, was reverse engineered.
Multistage negotiation for distributed constraint satisfaction A cooperation paradigm and coordination protocol for a distributed planning system consisting of a network of semi-autonomous agents with limited internode communication and no centralized control is presented. A multistage negotiation paradigm for solving distributed constraint satisfaction problems in this kind of system has been developed. The strategies presented enable an agent in a distributed planning system to become aware of the extent to which its own local decisions may have adverse nonlocal impact in planning. An example problem is presented in the context of transmission path restoration for dedicated circuits in a communications network. Multistage negotiation provides an agent with sufficient information about the impact of local decisions on a nonlocal state so that the agent may make local decisions that are correct from a global perspective, without attempting to provide a complete global state to all agents. Through multistage negotiation, an agent is able to recognize when a set of global goals cannot be satisfied, and is able to solve a related problem by finding a way of satisfying a reduced set of goals
An exploratory contingency model of user participation and MIS use A model is proposed of the relationship between user participation and degree of MIS usage. The model has four dimensions: participation characteristics, system characteristics, system initiator, and the system development environment. Stages of the System Development Life Cycle are considered as a participation characteristic, task complexity as a system characteristic, and top management support and user attitudes as parts of the system development environment. The data are from a cross-sectional survey in Korea, covering 134 users of 77 different information systems in 32 business firms. The results of the analysis support the proposed model in general. Several implications of this for MIS managers are then discussed.
An executable visual formalism for object-oriented conceptual modeling Conceptual modeling aims at establishing the conceptual knowledge necessary for proper communication between a development team and users. This paper presents an executable visual formalism for object-oriented conceptual modeling of information systems. This formalism is an integration of the Entity-Relationship approach, Petri nets, relational calculus, and time temporal logic. It supports integrated and encapsulated modeling of the structural and behavioral aspects of objects, and object...
Representing Software Engineering Knowledge We argue that one important role that Artificial Intelligence can play in Software Engineering is to act as a source of ideas about representing knowledge that can improve the state-of-the-art in software information management, rather than just building intelligent computer assistants. Among others, such techniques can lead to new approaches for capturing, recording, organizing, and retrieving knowledge about a software system. Moreover, this knowledge can be stored in a software knowledge base, which serves as “corporate memory”, facilitating the work of developers, maintainers and users alike. We pursue this central theme by focusing on requirements engineering knowledge, illustrating it with ideas originally reported in (Greenspan et al., 1982; Borgida et al., 1993; Yu, 1993) and (Chung, 1993b). The first example concerns the language RML, designed on a foundation of ideas from frame- and logic-based knowledge representation schemes, to offer a novel (at least for its time) formal requirements modeling language. The second contribution adapts solutions of the frame problem originally proposed in the context of AI planning in order to offer a better formulation of the notion of state change caused by an activity, which appears in most formal requirements modeling languages. The final contribution imports ideas from multi-agent planning systems to propose a novel ontology for capturing organizational intentions in requirements modeling. In each case we examine alterations that have been made to knowledge representation ideas in order to adapt them for Software Engineering use.
Prototyping as a tool in the specification of user requirements One of the major problems in developing new computer applications is specifying the user's requirements such that the Requirements Specification is correct, complete and unambiguous. Although prototyping is often considered too expensive, correcting ambiguities and misunderstandings at the specification stage is significantly cheaper than correcting a system after it has gone into production. This paper describes how a prototype was used to help specify the requirements of a computer system to manage and control a semiconductor processing facility. The cost of developing and running the prototype was less than 10% of the total software development cost.
Design and analysis of high-throughput lossless image compression engine using VLSI-oriented FELICS algorithm In this paper, the VLSI-oriented fast, efficient, lossless image compression system (FELICS) algorithm, which consists of simplified adjusted binary code and Golomb-Rice code with storage-less k parameter selection, is proposed to provide a lossless compression method for high-throughput applications. The simplified adjusted binary code reduces the number of arithmetic operations and improves processing speed. According to theoretical analysis, the storage-less k parameter selection applies a fixed k value in the Golomb-Rice code to remove the data dependency and the extra storage for the cumulation table. Besides, color difference preprocessing is also proposed to improve coding efficiency with simple arithmetic operations. Based on the VLSI-oriented FELICS algorithm, the proposed hardware architecture features a compactly regular data flow, and two-level parallelism with four-stage pipelining is adopted as the framework of the architecture. The chip is fabricated in TSMC 0.13-µm 1P8M CMOS technology with the Artisan cell library. Experimental results reveal that the proposed architecture presents superior parallelism-efficiency and power-efficiency compared with other existing works that target high-speed lossless compression. The maximum throughput can achieve 4.36 Gb/s. Regarding high-definition (HD) display applications, the encoding capability can achieve a high-quality specification of full-HD 1080p at 60 Hz with complete red, green, blue color components. Furthermore, configured with multilevel parallelism, the proposed architecture can be applied to advanced HD display specifications, which demand very high throughput.
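As background for the coding stage, here is a minimal sketch of Golomb-Rice coding with a fixed parameter k, the simplification that storage-less k selection exploits (k = 2 below is an arbitrary illustrative choice, not the paper's tuned value):

```python
# Golomb-Rice coding with a fixed k (power-of-two Golomb parameter).
def rice_encode(n, k):
    """Encode a non-negative integer as unary(quotient) + k binary bits."""
    q, r = n >> k, n & ((1 << k) - 1)
    return "1" * q + "0" + format(r, f"0{k}b")

def rice_decode(bits, k):
    q = 0
    while bits[q] == "1":            # unary part: count leading 1s
        q += 1
    r = int(bits[q + 1 : q + 1 + k], 2)
    return (q << k) | r, bits[q + 1 + k:]

code = rice_encode(13, k=2)          # 13 = 3*4 + 1 -> "1110" + "01"
print(code)                          # 111001
print(rice_decode(code, k=2))        # (13, '')
```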
Conceptual Structures: Knowledge Visualization and Reasoning, 16th International Conference on Conceptual Structures, ICCS 2008, Toulouse, France, July 7-11, 2008, Proceedings
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
1.046373
0.034278
0.033333
0.017658
0.011564
0.005575
0.001227
0.00055
0.000281
0.00009
0.000002
0
0
0
How to Draw a Planar Clustered Graph In this paper, we introduce and show how to draw a practical graph structure known as clustered graphs. We present an algorithm which produces planar, straight-line, convex drawings of clustered graphs in O(n^2.5) time. We also demonstrate an area lower bound and an angle upper bound for straight-line convex drawings of C-planar graphs. We show that such drawings require Ω(2^n) area and the smallest angle is O(1/n). Our bounds are unlike the area and angle bounds of classical graph drawing conventions, in which the area bound is Θ(n^2) and angle bounds are functions of the maximum degree of the graph. Our results indicate an important tradeoff between line straightness and area, and between region convexity and area.
Planarity for Clustered Graphs In this paper, we introduce a new graph model known as clustered graphs, i.e. graphs with recursive clustering structures. This graph model has many applications in informational and mathematical sciences. In particular, we study C-planarity of clustered graphs. Given a clustered graph, the C-planarity testing problem is to determine whether the clustered graph can be drawn without edge crossings, or edge-region crossings. In this paper, we present efficient algorithms for testing C-planarity and finding C-planar embeddings of clustered graphs.
New Layout Techniques for Entity-Relationship Diagrams
Maintaining hierarchical graph views We formalize the problem of maintaining views of graphs. These are graphs induced by the contraction of vertex subsets that are defined by associated hierarchies. We provide data structures that allow applications to refine and coarsen such views interactively and efficiently, in time linear in the number of changes induced by any exploration operation. The problem is motivated by applications in graph visualization.
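To illustrate what such a view is, here is a naive sketch that recomputes the contracted graph from scratch (deliberately ignoring the paper's efficient data structures; all names are invented):

```python
# A "view" induced by contracting clusters: map each vertex to its visible
# cluster leader and keep only edges between distinct clusters.
def view_edges(edges, leader):
    out = set()
    for u, v in edges:
        cu, cv = leader[u], leader[v]
        if cu != cv:
            out.add((min(cu, cv), max(cu, cv)))   # undirected, deduplicated
    return out

edges = [(1, 2), (2, 3), (3, 4)]
leader = {1: "A", 2: "A", 3: "B", 4: "B"}         # two contracted clusters
print(view_edges(edges, leader))                  # {('A', 'B')}
```

Refining or coarsening a view here just means changing the leader map and rerunning the function; the point of the paper's data structures is to avoid exactly this full recomputation.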
Graph Layout Adjustment Strategies When adjusting a graph layout, it is often desirable to preserve various properties of the original graph in the adjusted view. Pertinent properties may include straightness of lines, graph topology, orthogonalities and proximities. A layout adjustment algorithm which can be used to create fisheye views of nested graphs is introduced. The SHriMP (Simple Hierarchical Multi-Perspective) visualization technique uses this algorithm to create fisheye views of nested graphs. This algorithm...
Multilevel Visualization of Clustered Graphs Clustered graphs are graphs with recursive clustering structures over the vertices. This type of structure appears in many systems. Examples include CASE tools, management information systems, VLSI design tools, and reverse engineering systems. Existing layout algorithms represent the clustering structure as recursively nested regions in the plane. However, as the structure becomes more and more complex, two dimensional plane representations tend to be insufficient. In this paper, firstly, we describe some two dimensional plane drawing algorithms for clustered graphs; then we show how to extend two dimensional plane drawings to three dimensional multilevel drawings. We consider two conventions: straight-line convex drawings and orthogonal rectangular drawings; and we show some examples.
Query Optimization Techniques Utilizing Path Indexes in Object-Oriented Database Systems We propose query optimization techniques that fully utilize the advantages of path indexes in object-oriented database systems. Although path indexes provide an efficient access to complex objects, little research has been done on query optimization that fully utilizes path indexes. We first devise a generalized index intersection technique, adapted to the structure of the path index extended from conventional indexes, for utilizing multiple (path) indexes to access each class in a query. We...
Proving Liveness Properties of Concurrent Programs
RSF: a formalism for executable requirement specifications RSF is a formalism for specifying and prototyping systems with time constraints. Specifications are given via a set of transition rules. The application of a transition rule is dependent upon certain events. The occurrence times of the events and the data associated with them must satisfy given properties. As a consequence of the application of a rule, some events are generated and others are scheduled to occur in the future, after given intervals of time. Specifications can be queried, and the computation of answers to queries provides a generalized form of rapid prototyping. Executability is obtained by mapping the RSF rules into logic programming. The rationale, a definition of the formalism, the execution techniques which support the general notion of rapid prototyping and a few examples of its use are presented.
The Three Dimensions of Requirements Engineering Requirements engineering (RE) is perceived as an area of growing importance. Due to the increasing effort spent on research in this area, many contributions to solve different problems within RE exist. The purpose of this paper is to identify the main goals to be reached during the requirements engineering process in order to develop a framework for RE. This framework consists of the three dimensions:
Improvements to Platt's SMO Algorithm for SVM Classifier Design This article points out an important source of inefficiency in Platt's sequential minimal optimization (SMO) algorithm that is caused by the use of a single threshold value. Using clues from the KKT conditions for the dual problem, two threshold parameters are employed to derive modifications of SMO. These modified algorithms perform significantly faster than the original SMO on all benchmark data sets tried.
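The following sketch, under our own naming and with an arbitrary Gram matrix K, shows the two-threshold idea the abstract refers to: instead of a single bias, maintain b_up and b_low from the KKT conditions and declare optimality when b_low <= b_up + 2*tol. It is a fragment of the optimality check only, not the full modified SMO algorithm.

    import numpy as np

    def dual_thresholds(alpha, y, K, C, eps=1e-8):
        # F_i = sum_j alpha_j y_j K_ij - y_i, from the SVM dual.
        F = K @ (alpha * y) - y
        # Index sets derived from the KKT conditions (the I_up / I_low idea).
        up = ((y > 0) & (alpha < C - eps)) | ((y < 0) & (alpha > eps))
        low = ((y > 0) & (alpha > eps)) | ((y < 0) & (alpha < C - eps))
        return F[up].min(), F[low].max()    # (b_up, b_low)

    def is_optimal(alpha, y, K, C, tol=1e-3):
        b_up, b_low = dual_thresholds(alpha, y, K, C)
        return b_low <= b_up + 2 * tol      # otherwise a violating pair exists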
An Overview of JPEG-2000 JPEG-2000 is an emerging standard for still image compression. This paper provides a brief history of the JPEG-2000 standardization process, an overview of the standard, and some description of the capabilities provided by the standard. Part I of the JPEG-2000 standard specifies the minimum compliant decoder, while Part II describes optional, value-added extensions. Although the standard specifies only the decoder and bitstream syntax, in this paper we describe JPEG-2000 from the point of view of encoding. We take this approach, as we believe it is more amenable to a compact description that is more easily understood by most readers.
Compositional noninterference from first principles The recently formulated Shadow Semantics for noninterference-style security of sequential programs avoids the Refinement Paradox by preserving demonic nondeterminism in those cases where reducing it would compromise security. The construction (originally) of the semantic domain for The Shadow, and the interpretation of programs in it, relied heavily on intuition, guesswork and the advice of others. That being so, it is natural after the fact to try to reconstruct an idealised “inevitable” path from first principles to where we actually ended up: not only does one learn (more) about semantic principles by doing so, but the “rational reconstruction” helps to expose the choices made, along the way, and to legitimise the decisions that resolved them. Unlike our other papers on noninterference, this one does not contain a significant case study: instead its aim is to provide the most accessible account we can of the methods we use and why our model, in its details, has turned out the way it has. In passing, it might give some insight into the general role and significance of compositionality and testing-with-context for program semantics. Finally, a technical contribution here is a new “Transfer Principle” that captures uniformly a large class of classical refinements that remain valid when noninterference is taken into account in our style.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.039048
0.024013
0.01525
0.012648
0.004246
0.000283
0.000059
0.000002
0
0
0
0
0
0
Inheritance of proofs The Curry-Howard isomorphism, a fundamental property shared by many type theories, establishes a direct correspondence between programs and proofs. This suggests that the same structuring principles that ease programming should be useful for proving as well. To exploit object-oriented structuring mechanisms for verification, we extend the object-model of Pierce and Turner, based on the higher-order typed λ-calculus F≤ω, with a logical component. By enriching the (functional) signature of objects with a specification, methods and their correctness proofs are packed together in objects. The uniform treatment of methods and proofs gives rise in a natural way to object-oriented proving principles - including inheritance of proofs, late binding of proofs, and encapsulation of proofs - as analogues to object-oriented programming principles. We have used Lego, a type-theoretic proof checker, to explore the feasibility of this approach. (C) 1998 John Wiley & Sons, Inc.
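Curry-Howard in miniature, for readers unfamiliar with the correspondence the abstract builds on: under propositions-as-types, a proof of "A and B implies B and A" is simply a total function of the matching type. Python type hints are our stand-in here; the paper itself works in a typed λ-calculus checked by Lego.

    from typing import Tuple, TypeVar

    A = TypeVar("A")
    B = TypeVar("B")

    # A value of type Tuple[A, B] is a proof of the conjunction A /\ B,
    # so this function *is* a proof of A /\ B -> B /\ A.
    def swap(p: Tuple[A, B]) -> Tuple[B, A]:
        a, b = p
        return (b, a)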
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Towards an Automatic Integration of Statecharts The integration of statecharts is part of an integration methodology for object-oriented views. Statecharts are the most important language for the representation of the behaviour of objects and are used in many object-oriented modeling techniques, e.g. in UML ([23]). In this paper we focus on the situation where the behaviour of an object type is represented in several statecharts, which have to be integrated into a single statechart. The presented approach allows an automatic integration process but gives the designer the possibility to make their own decisions in order to guide the integration process and to achieve qualitative design goals.
A Conceptual Graph Model for W3C Resource Description Framework With the aim of building a "Semantic Web", the content of the documents must be explicitly represented through metadata in order to enable contents-guided search. Our approach is to exploit a standard language (RDF, recommended by W3C) for expressing such metadata and to interpret these metadata in conceptual graphs (CG) in order to exploit querying and inferencing capabilities enabled by CG formalism. The paper presents our mapping of RDF into CG and its interest in the context of the semantic Web.
Procedures and atomicity refinement The introduction of an early return from a (remote) procedure call can increase the degree of parallelism in a parallel or distributed algorithm modeled by an action system. We define a return statement for procedures in an action systems framework and show that it corresponds to carrying out an atomicity refinement.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Exponential stability of impulsive systems with application to uncertain sampled-data systems We establish exponential stability of nonlinear time-varying impulsive systems by employing Lyapunov functions with discontinuity at the impulse times. Our stability conditions have the property that when specialized to linear impulsive systems, the stability tests can be formulated as Linear Matrix Inequalities (LMIs). Then we consider LTI uncertain sampled-data systems in which there are two sources of uncertainty: the values of the process parameters can be unknown while satisfying a polytopic condition and the sampling intervals can be uncertain and variable. We model such systems as linear impulsive systems and we apply our theorem to the analysis and state-feedback stabilization. We find a positive constant which determines an upper bound on the sampling intervals for which the stability of the closed loop is guaranteed. The control design LMIs also provide controller gains that can be used to stabilize the process. We also consider sampled-data systems with constant sampling intervals and provide results that are less conservative than the ones obtained for variable sampling intervals.
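The paper's LMI conditions are not reproduced here; as a much weaker but runnable companion, the sketch below checks Schur stability of the sampled-data closed loop for one fixed sampling interval h, via exact zero-order-hold discretization and a discrete Lyapunov equation. The example matrices are ours; the uncertain and variable-interval analysis in the paper requires an LMI solver rather than this spot check.

    import numpy as np
    from scipy.linalg import expm, solve_discrete_lyapunov

    def sampled_closed_loop(A, B, K, h):
        n, m = A.shape[0], B.shape[1]
        # Exact ZOH discretization via the block-matrix exponential trick.
        M = expm(np.block([[A, B], [np.zeros((m, n + m))]]) * h)
        A_d, B_d = M[:n, :n], M[:n, n:]
        return A_d + B_d @ K

    def is_schur_stable(Acl):
        # Solve Acl' P Acl - P = -I; Acl is Schur iff P is positive definite.
        P = solve_discrete_lyapunov(Acl.T, np.eye(Acl.shape[0]))
        return bool(np.all(np.linalg.eigvalsh((P + P.T) / 2) > 0))

    A = np.array([[0.0, 1.0], [0.0, -0.1]])
    B = np.array([[0.0], [1.0]])
    K = np.array([[-1.0, -1.0]])
    print(is_schur_stable(sampled_closed_loop(A, B, K, h=0.3)))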
On robust stability of aperiodic sampled-data systems - An integral quadratic constraint approach This manuscript is concerned with stability analysis of sampled-data systems with non-uniform sampling patterns. The stability problem is tackled from a continuous-time point of view, via the so-called “input delay approach”, where the “aperiodic sampling operation” is modelled by an “average delay-difference” operator for which a characterization based on integral quadratic constraints (IQC) is identified. The system is then viewed as a feedback interconnection of a stable linear time-varying system and the “average delay-difference” operator. With the IQCs identified for the “average delay-difference” operator, the IQC theory is applied to derive convex stability conditions. Results of numerical tests are given to illustrate the effectiveness of the proposed approach.
Stability analysis of some classes of input-affine nonlinear systems with aperiodic sampled-data control. In this paper we investigate the stability analysis of nonlinear sampled-data systems, which are affine in the input. We assume that a stabilizing controller is designed using the emulation technique. We intend to provide sufficient stability conditions for the resulting sampled-data system. This allows us to find an estimate of the upper bound on the asynchronous sampling intervals for which stability is ensured. The main idea of the paper is to address the stability problem in a new framework inspired by the dissipativity theory. Furthermore, the result is shown to be constructive. Numerically tractable criteria are derived using linear matrix inequalities for polytopic systems and using the sum of squares technique for the class of polynomial systems.
Stability analysis of systems with aperiodic sample-and-hold devices Motivated by the widespread use of networked and embedded control systems, improved stability conditions are derived for sampled-data feedback control systems with uncertainly time-varying sampling intervals. The results are derived by exploiting the passivity-type property of the operator arising in the input-delay approach to the system in addition to the gain of the operator, and are hence less conservative than existing ones.
An IQC Approach to Robust Stability of Aperiodic Sampled-Data Systems. Conditions for robust stability of sampled-data systems with non-uniform sampling patterns and structural uncertainties are derived. The problem is tackled under the integral quadratic constraint (IQC) framework, where the aperiodic sampling operation is modelled by a delay-integration operator. A characterization based on integral quadratic constraints (IQC) is identified for this operator and the IQC theory is applied to derive convex stability criteria. Compared to the dominating Lyapunov approach, where the candidate Lyapunov-Krasovskii functionals or looped functionals need to be tailored to the systems under consideration and therefore the stability conditions need to be re-derived whenever additional uncertainties are considered, the proposed approach has the advantage of avoiding such endeavor. Numerical examples are given to illustrate this main point and the effectiveness of the proposed approach.
Recent developments on the stability of systems with aperiodic sampling: An overview. This article presents basic concepts and recent research directions about the stability of sampled-data systems with aperiodic sampling. We focus mainly on the stability problem for systems with arbitrary time-varying sampling intervals which has been addressed in several areas of research in Control Theory. Systems with aperiodic sampling can be seen as time-delay systems, hybrid systems, Input/Output interconnections, discrete-time systems with time-varying parameters, etc. The goal of the article is to provide a structural overview of the progress made on the stability analysis problem. Without being exhaustive, which would be neither possible nor useful, we try to bring together results from diverse communities and present them in a unified manner. For each of the existing approaches, the basic concepts, fundamental results, converse stability theorems (when available), and relations with the other approaches are discussed in detail. Results concerning extensions of Lyapunov and frequency domain methods for systems with aperiodic sampling are recalled, as they allow to derive constructive stability conditions. Furthermore, numerical criteria are presented while indicating the sources of conservatism, the problems that remain open and the possible directions of improvement. At last, some emerging research directions, such as the design of stabilizing sampling sequences, are briefly discussed.
Time-delay systems: an overview of some recent advances and open problems After presenting some motivations for the study of time-delay systems, this paper recalls modifications (models, stability, structure) arising from the presence of the delay phenomenon. A brief overview of some control approaches is then provided, the sliding mode and time-delay controls in particular. Lastly, some open problems are discussed: the constructive use of the delayed inputs, the digital implementation of distributed delays, the control via the delay, and the handling of information related to the delay value.
Hierarchy of stability criterion for time-delay systems based on multiple integral approach. Taking a class of time-delay systems as its research object, this brief aims at developing theoretical support for the hierarchy of stability criteria derived by the multiple integral approach and the free-weighting matrix technique. The hierarchy implies that the conservatism of a stability criterion can be reduced by increasing the ply of integral terms in the Lyapunov-Krasovskii functional (LKF). The hierarchy of the stability criteria is further demonstrated through three numerical experiments.
On delay-dependent approach for robust stability and stabilization of T-S fuzzy systems with constant delay and uncertainties This paper investigates robust stability analysis and stabilization of delay and uncertain systems approximated by a Takagi-Sugeno (T-S) fuzzy model. An innovative approach is proposed to develop delay-dependent stability criteria of the systems, which makes use of less-redundant information to construct Lyapunov function, employs an integral equation method to handle the cross-product terms, and alleviates the requirements of the bounding technique and model transformations that have been popularly adopted in many existing references. This leads to significant improvement in the stability performance with far fewer unknown variables in the stability computation. From the derived stability criteria, a new memoryless state-feedback control is further developed. The controller gain and the maximum allowable delay bound of the closed-loop control system can be obtained simultaneously by solving an optimization problem. Numerical examples are also given to demonstrate the theoretical results.
Distributed Estimation for Moving Target Based on State-Consensus Strategy This technical note studies the distributed estimation problem for a continuous-time moving target under switching interconnection topologies. A recursive distributed estimation algorithm is proposed by using state-consensus strategy, where a common gain is assigned to adjust the innovative and state-consensus information for each sensor in the network. Under mild conditions on observability and connectivity, the stability of the distributed estimation algorithm is analyzed. An upper bound and lower bound for the total mean square estimation error (TMSEE) are obtained by virtue of the common Lyapunov method and Kalman-Bucy filtering theory, respectively. Then a numerical simulation is given to verify the effectiveness of the proposed algorithm.
An incremental constraint solver An incremental constraint solver, the DeltaBlue algorithm, maintains an evolving solution to the constraint hierarchy as constraints are added and removed. DeltaBlue minimizes the cost of finding a new solution after each change by exploiting its knowledge of the last solution.
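DeltaBlue's contribution is incremental repair of only the affected part of the dataflow; the sketch below shows merely the constraint-hierarchy semantics it maintains (stronger constraints win), by re-solving to a fixpoint on every add or remove. All names and the fixpoint strategy are ours, not the algorithm's.

    STRENGTHS = ["required", "strong", "medium", "weak"]   # descending

    class Solver:
        def __init__(self):
            self.values = {}
            self.constraints = []          # (strength, variable, function)

        def add(self, strength, var, fn):
            self.constraints.append((strength, var, fn))
            self.resolve()

        def remove(self, constraint):
            self.constraints.remove(constraint)
            self.resolve()

        def resolve(self):
            for _ in range(len(self.constraints) + 1):   # iterate to fixpoint
                old, done = dict(self.values), set()
                for s in STRENGTHS:        # strongest first; first writer wins
                    for strength, var, fn in self.constraints:
                        if strength == s and var not in done:
                            self.values[var] = fn(self.values)
                            done.add(var)
                if self.values == old:
                    break

    s = Solver()
    s.add("weak", "x", lambda v: 0)                    # default value
    s.add("required", "y", lambda v: v.get("x", 0) + 1)
    edit = ("strong", "x", lambda v: 42)               # simulated user edit
    s.add(*edit)
    print(s.values)                                    # {'x': 42, 'y': 43}
    s.remove(edit)
    print(s.values)                                    # {'x': 0, 'y': 1}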
Theory-W Software Project Management Principles and Examples A software project management theory is presented, called Theory W: make everyone a winner. The authors explain the key steps and guidelines underlying the Theory W statement and its two subsidiary principles: plan the flight and fly the plan; and, identify and manage your risks. Theory W's fundamental principle holds that software project managers will be fully successful if and only if they make winners of all the other participants in the software process: superiors, subordinates, customers, users, maintainers, etc. Theory W characterizes a manager's primary role as a negotiator between his various constituencies, and a packager of project solutions with win conditions for all parties. Beyond this, the manager is also a goal-setter, a monitor of progress towards goals, and an activist in seeking out day-to-day win-lose or lose-lose project conflicts, confronting them, and changing them into win-win situations. Several examples illustrate the application of Theory W. An extensive case study is presented and analyzed: the attempt to introduce new information systems to a large industrial corporation in an emerging nation. The analysis shows that Theory W and its subsidiary principles do an effective job both in explaining why the project encountered problems, and in prescribing ways in which the problems could have been avoided.
Lossless Microarray Image Compression using Region Based Predictors Microarray image technology is a powerful tool for monitoring the expression of thousands of genes simultaneously. Each microarray experiment produces a large amount of image data, hence efficient compression routines that exploit microarray image structures are required. In this paper we introduce a lossless image compression method which segments the pixels of the image into three categories: background, foreground, and spot edges. The segmentation is performed by finding a threshold value which minimizes the weighted sum of the standard deviations of the foreground and background pixels. Each segment of the image is compressed using a separate predictor. The results of the implementation of the method show its superiority compared to the well-known microarray compression schemes as well as to the general lossless image compression standards.
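A sketch of the segmentation step described above: choose the intensity threshold minimizing the count-weighted sum of the standard deviations of the resulting background and foreground pixels. The exact weighting and the synthetic image are our assumptions; the per-segment predictors that follow in the paper are not shown.

    import numpy as np

    def best_threshold(pixels):
        pixels = np.asarray(pixels, dtype=float).ravel()
        best_t, best_cost = None, np.inf
        for t in np.unique(pixels)[:-1]:       # candidate thresholds
            bg, fg = pixels[pixels <= t], pixels[pixels > t]
            cost = len(bg) * bg.std() + len(fg) * fg.std()
            if cost < best_cost:
                best_t, best_cost = t, cost
        return best_t

    rng = np.random.default_rng(0)
    img = np.concatenate([rng.normal(80, 5, 900),      # background
                          rng.normal(600, 50, 100)])   # bright spots
    print(best_threshold(img))    # lands between the two intensity modes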
Lossless Compression of Hyperspectral Imagery Via Lookup Tables and Classified Linear Spectral Prediction This paper presents a novel algorithm suitable for the lossless compression of hyperspectral imagery. The algorithm generalizes two previous algorithms, in which the concept of nearest-neighbor (NN) prediction implemented through lookup tables (LUTs) was introduced. Here, the set of LUTs, two or more, say M, on each band is allowed to span more than one previous band, say N bands, and the decision among one of the N·M possible prediction values is based on the closeness of the value contained in the LUT to an advanced prediction, spanning N previous bands as well, provided by a top-performing scheme recently developed by the authors and featuring a classified spectral prediction. Experimental results carried out on the AVIRIS '97 dataset show improvements up to 15% over the baseline LUT-NN algorithm. However, preliminary results carried out on raw data show that all LUT-based methods are not suitable for on-board compression, since they take advantage uniquely of the sparseness of data histograms, which is originated by the on-ground calibration procedure.
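To make the baseline concrete, the sketch below implements a plain single-LUT nearest-neighbour predictor of the kind the abstract generalizes: per band, a table maps the co-located pixel value in the previous band to the value most recently seen with it, and the residual would then be entropy coded. The fallback rule and the synthetic data are our assumptions; the paper's multi-LUT decision against a classified advanced prediction is omitted.

    import numpy as np

    def lut_nn_residuals(prev_band, band, maxval=255):
        lut = np.zeros(maxval + 1, dtype=int)    # ref value -> last value seen
        seen = np.zeros(maxval + 1, dtype=bool)
        residuals = np.empty(band.size, dtype=int)
        for k, (r, v) in enumerate(zip(prev_band.ravel(), band.ravel())):
            pred = lut[r] if seen[r] else r      # fall back to the ref value
            residuals[k] = v - pred
            lut[r], seen[r] = v, True
        return residuals

    rng = np.random.default_rng(1)
    prev = rng.integers(0, 20, size=(8, 8))
    cur = np.clip(prev * 2 + rng.integers(-1, 2, size=(8, 8)), 0, 255)
    print(np.abs(lut_nn_residuals(prev, cur)).mean())   # small: LUT learns v ~ 2r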
1.008038
0.008837
0.008159
0.007757
0.005196
0.002521
0.000734
0.000099
0.000035
0.000001
0
0
0
0
Co-simulating event-B and continuous models via FMI We present a generic co-simulation approach between discrete-event models, developed in the Event-B formal method, and continuous models, exported via the Functional Mock-up Interface for Co-simulation standard. The concept is implemented into a simulation extension for the Rodin platform, thus leveraging powerful capabilities of refinement-based modelling and deductive verification in Event-B while introducing a continuous-time aspect and simulation-based validation for the development of complex hybrid systems.
An incremental development of the Mondex system in Event-B A development of the Mondex system was undertaken using Event-B and its associated proof tools. An incremental approach was used whereby the refinement between the abstract specification of the system and its detailed design was verified through a series of refinements. The consequence of this incremental approach was that we achieved a very high degree of automatic proof. The essential features of our development are outlined. We also present some modelling and proof guidelines that we found helped us gain a deep understanding of the system and achieve the high degree of automatic proof.
Developing topology discovery in Event-B We present a formal development in Event-B of a distributed topology discovery algorithm. Distributed topology discovery is at the core of several routing algorithms and is the problem of each node in a network discovering and maintaining information on the network topology. One of the key challenges in developing this algorithm is specifying the problem itself. We provide a specification that includes both safety properties, formalizing invariants that should hold in all system states, and liveness properties that characterize when the system reaches stable states. We prove these properties by appropriately combining proofs of invariants, event refinement, event convergence, and deadlock freedom. The combination of these features is novel and should be useful for formalizing and developing other kinds of semi-reactive systems, which are systems that react to, but do not modify, their environment. Our entire development has been formalized and machine checked using the Rodin tool.
ProB: A Model Checker for B We present ProB, an animation and model checking tool for the B method. ProB's animation facilities allow users to gain confidence in their specifications, and unlike the animator provided by the B-Toolkit, the user does not have to guess the right values for the operation arguments or choice variables. ProB contains a model checker and a constraint-based checker, both of which can be used to detect various errors in B specifications. We present our first experiences in using ProB on several case studies, highlighting that ProB enables users to uncover errors that are not easily discovered by existing tools.
Formal Derivation of Strongly Correct Concurrent Programs. Summary: A method is described for deriving concurrent programs which are consistent with the problem specifications and free from deadlock and from starvation. The programs considered are expressed by nondeterministic repetitive selections of pairs of synchronizing conditions and subsequent actions. An iterative, convergent calculus is developed for synthesizing the invariant and synchronizing conditions which guarantee strong correctness. These conditions are constructed as limits of recurrences associated with the specifications and the actions. An alternative method for deriving starvation-free programs by use of auxiliary variables is also given. The applicability of the techniques presented is discussed through various examples; their use for verification purposes is illustrated as well.
A mathematical perspective for software measures research Basic principles which necessarily underlie software measures research are analysed. In the prevailing paradigm for the validation of software measures, there is a fundamental assumption that the sets of measured documents are ordered and that measures should report these orders. The authors describe mathematically the nature of such orders. Consideration of these orders suggests a hierarchy of software document measures, a methodology for developing new measures and a general approach to the analytical evaluation of measures. They also point out the importance of units for any type of measurement and stress the perils of equating document structure complexity and psychological complexity.
Distributed snapshots: determining global states of distributed systems This paper presents an algorithm by which a process in a distributed system determines a global state of the system during a computation. Many problems in distributed systems can be cast in terms of the problem of detecting global states. For instance, the global state detection algorithm helps to solve an important class of problems: stable property detection. A stable property is one that persists: once a stable property becomes true it remains true thereafter. Examples of stable properties are “computation has terminated,” “the system is deadlocked” and “all tokens in a token ring have disappeared.” The stable property detection problem is that of devising algorithms to detect a given stable property. Global state detection can also be used for checkpointing.
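The abstract describes the Chandy-Lamport snapshot algorithm; below is a minimal single-threaded simulation of its marker rules on FIFO channels. Scheduling, failures, and the collection phase are elided, and all names are ours.

    from collections import deque

    MARKER = "MARKER"

    class Process:
        def __init__(self, pid, state, peers):
            self.pid, self.state, self.peers = pid, state, peers
            self.snapshot = None     # recorded local state (None = not yet)
            self.chan_rec = {}       # src -> list (recording) or tuple (done)
            self.outbox = deque()    # FIFO of (dst, msg) pairs

        def record(self):
            self.snapshot = self.state
            for p in self.peers:
                self.outbox.append((p, MARKER))     # marker on every channel
                self.chan_rec.setdefault(p, [])     # start recording inputs

        def receive(self, src, msg):
            if msg == MARKER:
                if self.snapshot is None:
                    self.record()
                self.chan_rec[src] = tuple(self.chan_rec[src])  # close channel
            else:
                self.state += msg    # stand-in for application behaviour
                rec = self.chan_rec.get(src)
                if isinstance(rec, list):
                    rec.append(msg)  # message was in flight at snapshot time

    p = {i: Process(i, 10 * i, [1 - i]) for i in (0, 1)}
    p[0].outbox.append((1, 5))       # an application message already in flight
    p[0].record()                    # process 0 initiates the snapshot
    while any(q.outbox for q in p.values()):
        pid = next(i for i in p if p[i].outbox)
        dst, msg = p[pid].outbox.popleft()
        p[dst].receive(pid, msg)
    # Prints a consistent cut: {0: (0, {1: ()}), 1: (15, {0: ()})}
    print({i: (q.snapshot, q.chan_rec) for i, q in p.items()})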
ACE: building interactive graphical applications
Duality in specification languages: a lattice-theoretical approach A very general lattice-based language of commands, based on the primitive operations of substitution and test for equality, is constructed. This base language permits unbounded nondeterminism, demonic and angelic nondeterminism. A dual language permitting miracles is constructed. Combining these two languages yields an extended base language which is complete, in the sense that all monotonic predicate transformers can be constructed in it. The extended base language provides a unifying framework for various specification languages; we show how two Dijkstra-style specification languages can be embedded in it. —Authors' Abstract
Abstract Syntax and Semantics of Visual Languages The effective use of visual languages requires a precise understanding of their meaning. Moreover, it is impossible to prove properties of visual languages like soundness of transformation rules or correctness results without having a formal language definition. Although this sounds obvious, it is surprising that only little work has been done about the semantics of visual languages, and even worse, there is no general framework available for the semantics specification of different visual languages. We present such a framework that is based on a rather general notion of abstract visual syntax. This framework allows a logical as well as a denotational approach to visual semantics, and it facilitates the formal reasoning about visual languages and their properties. We illustrate the concepts of the proposed approach by defining abstract syntax and semantics for the visual languages VEX, Show and Tell and Euler circles. We demonstrate the semantics in action by proving a rule for visual reasoning with Euler circles and by showing the correctness of a Show and Tell program.
A Software Development Environment for Improving Productivity
Software engineering for parallel systems Current approaches to software engineering practice for parallel systems are reviewed. The parallel software designer has not only to address the issues involved in the characterization of the application domain and the underlying hardware platform, but, in many instances, the production of portable, scalable software is desirable. In order to accommodate these requirements, a number of specific techniques and tools have been proposed, and these are discussed in this review in the framework of the parallel software life-cycle. The paper outlines the role of formal methods in the practical production of parallel software, but its main focus is the emergence of development methodologies and environments. These include CASE tools and run-time support systems, as well as the use of methods taken from experience of conventional software development. Because of the particular emphasis on performance of parallel systems, work on performance evaluation and monitoring systems is considered.
Developing Mode-Rich Satellite Software by Refinement in Event B To ensure dependability of on-board satellite systems, the designers should, in particular, guarantee correct implementation of the mode transition scheme, i.e., ensure that the states of the system components are consistent with the global system mode. However, there is still a lack of scalable approaches to formal verification of correctness of complex mode transitions. In this paper we present a formal development of an Attitude and Orbit Control System (AOCS) undertaken within the ICT DEPLOY project. AOCS is a complex mode-rich system, which has an intricate mode-transition scheme. We show that refinement in Event B provides the engineers with a scalable formal technique that enables both development of mode-rich systems and proof-based verification of their mode consistency.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.1
0.05
0.02
0.003846
0
0
0
0
0
0
0
0
0
0
Hierarchical modeling via optimal context quantization Optimal context quantization with respect to the minimum conditional entropy (MCECQ) is proven to be an efficient way for high-order statistical modeling and model complexity reduction in data compression systems. The MCECQ merges together contexts that have similar statistics to reduce the size of the original model. In this technique, the number of output clusters (the model size) must be set before quantization. The optimal model size for the given data is not usually known in advance. We extend the MCECQ technique to a multi-model approach for context modeling, which overcomes this problem and gives the possibility of better fitting the model to the actual data. The method is primarily intended for image compression algorithms. In our experiments, we applied the proposed technique to embedded conditional bit-plane entropy coding of wavelet transform coefficients. We show that the performance of the proposed modeling achieves the performance of the optimal model of fixed size found individually for the given data using MCECQ (and in most cases it is even slightly better).
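A sketch of the minimum-conditional-entropy merging the abstract builds on: greedily merge the pair of contexts whose union costs the least extra code length until the requested model size K is reached. The greedy strategy (rather than optimal quantization) and all names are our simplifications; the paper's multi-model extension over several sizes is not shown.

    import numpy as np

    def code_length_bits(clusters):
        """Total code length when each cluster is coded with its own model."""
        total = 0.0
        for c in clusters:
            n = c.sum()
            p = c[c > 0] / n
            total += -n * np.sum(p * np.log2(p))
        return total

    def quantize_contexts(counts, K):
        clusters = [np.asarray(c, dtype=float) for c in counts]
        while len(clusters) > K:
            best = None
            for i in range(len(clusters)):
                for j in range(i + 1, len(clusters)):
                    trial = [c for k, c in enumerate(clusters) if k not in (i, j)]
                    trial.append(clusters[i] + clusters[j])
                    cost = code_length_bits(trial)
                    if best is None or cost < best[0]:
                        best = (cost, i, j)
            _, i, j = best
            clusters = [c for k, c in enumerate(clusters)
                        if k not in (i, j)] + [clusters[i] + clusters[j]]
        return clusters

    # Four binary contexts with [count of 0s, count of 1s]; the two similar
    # pairs get merged, as MCECQ intends.
    counts = [[90, 10], [85, 15], [20, 80], [25, 75]]
    print([c.tolist() for c in quantize_contexts(counts, K=2)])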
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system, resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object-oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
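A toy tabu search in the spirit of the abstract, on the multiconstraint knapsack problem: single-bit flip moves over feasible 0/1 vectors, a recency-based tabu list, and an aspiration rule that overrides tabu status when a move beats the best value found. The extreme-point machinery, learning, and Target Analysis of the paper are not reproduced; the instance and parameters are made up.

    import random

    def tabu_knapsack(values, weights, capacities, iters=500, tenure=7, seed=0):
        rng, n = random.Random(seed), len(values)

        def feasible(sol):
            return all(sum(w[i] * sol[i] for i in range(n)) <= c
                       for w, c in zip(weights, capacities))

        def value(sol):
            return sum(values[i] * sol[i] for i in range(n))

        x, best, best_val = [0] * n, [0] * n, 0
        tabu = {}                            # flipped index -> release iteration
        for it in range(iters):
            moves = []
            for i in range(n):
                y = x[:]
                y[i] ^= 1                    # single-bit flip neighbourhood
                if feasible(y):
                    moves.append((value(y), rng.random(), i, y))
            if not moves:
                break
            moves.sort(reverse=True)         # best move first, random tie-break
            for val, _, i, y in moves:
                if tabu.get(i, -1) <= it or val > best_val:    # aspiration
                    x, tabu[i] = y, it + tenure
                    if val > best_val:
                        best, best_val = y[:], val
                    break
        return best, best_val

    values = [10, 7, 6, 3]
    weights = [[4, 3, 2, 1],                 # one row per knapsack constraint
               [2, 3, 4, 1]]
    print(tabu_knapsack(values, weights, [7, 6]))   # ([1, 1, 0, 0], 17)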
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the timewise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Dissipative Control For Singular Time-Delay System With Actuator Saturation Via State Feedback And Output Feedback This paper is devoted to the problem of dissipative control for a class of singular time-delay systems with actuator saturation via state feedback and output feedback. First, by tuning the Wirtinger-based integral inequality and the double integral inequality, a sufficient condition is derived to guarantee that the singular time-delay system is regular, impulse free, asymptotically stable and strictly (Q, S, R)-dissipative. Then, based on the derived condition, and applying linear matrix inequality techniques, the dissipative state feedback and output feedback controllers are synthesised. Moreover, the maximal estimate of the domain of attraction is obtained by solving an optimisation problem. Finally, some simulation examples are provided to verify the effectiveness of the obtained theoretic results.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system, resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object-oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the timewise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), interactions and parallelism (among systems).
1
0
0
0
0
0
0
0
0
0
0
0
0
0
A taxonomy of virtual worlds usage in education Virtual worlds are an important tool in modern education practices as well as providing socialisation, entertainment and a laboratory for collaborative work. This paper focuses on the uses of virtual worlds for education and synthesises over 100 published academic papers, reports and educational websites from around the world. A taxonomy is then derived from these papers, delineating current theoretical and practical work on virtual world usage, specifically in the field of education. The taxonomy identifies rich veins of current research and practice in associated educational theory and in simulated worlds or environments, yet it also demonstrates the paucity of work in important areas such as evaluation, grading and accessibility.
Development and evaluation of a system enhancing Second Life to support synchronous role-based collaborative learning Research and commercial interest toward 3D virtual worlds are recently growing because they probably represent the new direction for the next generation of web applications. Although these environments present several features that are useful for informal collaboration, structured collaboration is required to effectively use them in a working or in a didactical setting. This paper presents a system supporting synchronous collaborative learning by naturally enriching Learning Management System services with meeting management and multimedia features. Monitoring and moderation of discussions are also managed at the single-group level and at the teaching level. The Second Life (SL) environment has been integrated with two ad hoc developed Moodle plug-ins, and SL objects have been designed, modeled, and programmed to support synchronous role-based collaborative activities. We also enriched SL with tools to support the capturing and displaying of textual information during collaborative sessions for successive retrieval. In addition, the multimedia support has been enhanced with functionalities for navigating multimedia contents. We also report on an empirical study aiming at evaluating the use of the proposed SL collaborative learning as compared with face-to-face group collaboration. Results show that the two approaches are statistically indistinguishable in terms of performance, comfort with communication, and overall satisfaction. Copyright © 2009 John Wiley & Sons, Ltd.
Effectiveness of virtual reality-based instruction on students' learning outcomes in K-12 and higher education: A meta-analysis The purpose of this meta-analysis is to examine the overall effect, as well as the impact of selected instructional design principles, of virtual reality technology-based instruction (i.e. games, simulations, virtual worlds) in K-12 or higher education settings. A total of 13 studies (N = 3081) in the category of games, 29 studies (N = 2553) in the category of simulations, and 27 studies (N = 2798) in the category of virtual worlds were meta-analyzed. The key inclusion criteria were that the study came from K-12 or higher education settings, used experimental or quasi-experimental research designs, and used a learning outcome measure to evaluate the effects of the virtual reality-based instruction. Results suggest games (FEM = 0.77; REM = 0.51), simulations (FEM = 0.38; REM = 0.41), and virtual worlds (FEM = 0.36; REM = 0.41) were effective in improving learning outcome gains. The homogeneity analysis of the effect sizes was statistically significant, indicating that the studies were different from each other. Therefore, we conducted moderator analysis using the 13 variables used to code the studies. Key findings included that games show higher learning gains than simulations and virtual worlds. For simulation studies, elaborate-explanation feedback is more suitable for declarative tasks, whereas knowledge of correct response is more appropriate for procedural tasks. Students' performance is enhanced when they conduct the game play individually rather than in a group. In addition, we found an inverse relationship between the number of treatment sessions and learning gains for games. With regard to virtual worlds, we found that repeatedly measuring students deteriorates their learning outcome gains. We discuss the results to highlight the importance of considering instructional design principles when designing virtual reality-based instruction.
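The FEM figures quoted above are fixed-effect summaries, i.e. inverse-variance weighted means of the per-study effect sizes. A minimal sketch with illustrative numbers, not the studies analyzed here:

```python
def fixed_effect_mean(effects, variances):
    """Fixed-effect (FEM) summary: the inverse-variance weighted mean
    of per-study effect sizes g_i, i.e. sum(w_i * g_i) / sum(w_i)
    with weights w_i = 1 / var_i."""
    weights = [1.0 / v for v in variances]
    return sum(w * g for w, g in zip(weights, effects)) / sum(weights)

# illustrative effect sizes and variances only
print(round(fixed_effect_mean([0.5, 0.9, 0.7], [0.04, 0.09, 0.05]), 3))
```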
Automatic speech recognition - an approach for designing inclusive games Computer games are now a part of our modern culture. However, certain categories of people are excluded from this form of entertainment and social interaction because they are unable to use the interface of the games. The reason for this can be deficits in motor control, vision or hearing. By using automatic speech recognition systems (ASR), voice-driven commands can be used to control the game, which can thus open up the possibility for people with motor system difficulties to be included in game communities. This paper aims at finding a standard way of using voice commands in games, one that uses a speech recognition system in the backend and can be universally applied for designing inclusive games. Present speech recognition systems, however, do not support emotions, attitudes, tones, etc. This is a drawback because such expressions can be vital for gaming. Taking multiple existing genres of games into account and analyzing their voice command requirements, a general ASRS module is proposed which can work as a common platform for designing inclusive games. A fuzzy logic controller is then proposed to enhance the system. The standard voice-driven module can be based on the algorithm or the fuzzy controller, which can be used to design software plug-ins or be included in a microchip. It can then be integrated with game engines, creating the possibility of voice-driven universal access for controlling games.
Presence and engagement in an interactive drama In this paper we present the results of a qualitative, empirical study exploring the impact of immersive technologies on presence and engagement, using the interactive drama Façade as the object of study. In this drama, players are situated in a married couple's apartment, and interact primarily through conversation with the characters and manipulation of objects in the space. We present participants' experiences across three different versions of Façade -- augmented reality (AR) and two desktop computing based implementations, one where players communicate using speech and the other using typed keyboard input. Through interviews and observations of players, we find that immersive AR can create an increased sense of presence, confirming generally held expectations. However, we demonstrate that increased presence does not necessarily lead to more engagement. Rather, mediation may be necessary for some players to fully engage with certain interactive media experiences.
Three-way decisions based on neutrosophic sets and AHP-QFD framework for supplier selection problem. The neutrosophic set is an excellent tool for dealing with vague and inconsistent information effectively. Consequently, by studying the concept of three-way decisions based on neutrosophic sets, we can find a suitable way to reach a reasonable decision. In this article, we suggest two rules of three-way decisions based on the three membership degrees of a neutrosophic set. A new evaluation function is presented to calculate the weights of alternatives, for choosing the best one. We also study a supplier selection problem (selecting suppliers to obtain the indispensable materials for assisting the outputs of companies). The best suppliers need to be selected to enhance quality and service, to reduce cost, and to control time. The most widely used technique for determining the requirements of a company is Quality Function Deployment (QFD). Since the traditional QFD technique does not prioritize stakeholders' requirements and fails to deal with vague and inconsistent information, this research also integrates it with the Analytic Hierarchy Process (AHP) in a neutrosophic environment. A case study is presented to illustrate the effectiveness of the proposed model.
An incremental ant colony optimization based approach to task assignment to processors for multiprocessor scheduling. Optimized task scheduling is one of the most important challenges to achieve high performance in multiprocessor environments such as parallel and distributed systems. Most introduced task-scheduling algorithms are based on the so-called list scheduling technique. The basic idea behind list scheduling is to prepare a sequence of nodes in the form of a list for scheduling by assigning them some priority measurements, and then repeatedly removing the node with the highest priority from the list and allocating it to the processor providing the earliest start time (EST). Therefore, it can be inferred that the makespans obtained are dominated by two major factors: (1) which order of tasks should be selected (sequence subproblem); (2) how the selected order should be assigned to the processors (assignment subproblem). A number of good approaches for overcoming the task sequence dilemma have been proposed in the literature, while the task assignment problem has not been studied much. The results of this study prove that assigning tasks to the processors using the traditional EST method is not optimum; in addition, a novel approach based on the ant colony optimization algorithm is introduced, which can find far better solutions.
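The EST rule that this abstract argues is suboptimal is easy to state in code. Below is a minimal sketch of the classical list-scheduling assignment step, assuming the priority order is already given, the tasks form a DAG expressed as a dependency map, and communication costs are ignored; all names are illustrative:

```python
def est_list_schedule(order, deps, durations, n_procs):
    """Classical list-scheduling assignment step: walk a priority-ordered
    task list and place each task on the processor giving the earliest
    start time (EST), respecting precedence constraints."""
    proc_free = [0] * n_procs          # time each processor becomes free
    finish = {}                        # task -> finish time
    placement = {}                     # task -> (processor, start time)
    for task in order:                 # order must be topological
        ready = max((finish[d] for d in deps.get(task, ())), default=0)
        # EST rule: pick the processor where the task can start soonest
        p = min(range(n_procs), key=lambda q: max(proc_free[q], ready))
        start = max(proc_free[p], ready)
        proc_free[p] = start + durations[task]
        finish[task] = proc_free[p]
        placement[task] = (p, start)
    return placement, max(finish.values())   # schedule and makespan

# toy DAG: b and c depend on a; d joins b and c
deps = {"b": ["a"], "c": ["a"], "d": ["b", "c"]}
durations = {"a": 2, "b": 3, "c": 1, "d": 2}
print(est_list_schedule(["a", "b", "c", "d"], deps, durations, 2))
```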
Robust and Imperceptible Dual Watermarking for Telemedicine Applications In this paper, the effects of different error correction codes on the robustness and imperceptibility of a discrete wavelet transform and singular value decomposition based dual watermarking scheme are investigated. Text and image watermarks are embedded into a cover radiological image for their potential application in secure and compact medical data transmission. Four different error correcting codes, namely Hamming; Bose, Ray-Chaudhuri, Hocquenghem (BCH); Reed-Solomon; and a hybrid error correcting code (BCH and repetition code), are considered for encoding the text watermark in order to achieve additional robustness for sensitive text data such as the patient identification code. Performance of the proposed algorithm is evaluated against a number of signal processing attacks by varying the strength of watermarking and the cover image modalities. The experimental results demonstrate that this algorithm provides better robustness without affecting the quality of the watermarked image. This algorithm combines the advantages and removes the disadvantages of the two transform techniques. Of the three basic error correcting codes tested, Reed-Solomon shows the best performance. Further, a hybrid model of two of the error correcting codes (BCH and repetition code) is concatenated and implemented. It is found that the hybrid code achieves better results in terms of robustness. This paper provides a detailed analysis of the obtained experimental results.
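Of the codes compared above, the repetition component of the hybrid scheme is the simplest to illustrate. A minimal majority-vote sketch for the text-watermark bits; the repetition factor is an illustrative choice:

```python
def repetition_encode(bits, r=3):
    """Repeat each watermark bit r times before embedding."""
    return [b for bit in bits for b in [bit] * r]

def repetition_decode(coded, r=3):
    """Majority vote over each group of r received bits; tolerates
    up to (r - 1) // 2 bit flips per group caused by attacks/noise."""
    return [int(sum(coded[i:i + r]) > r // 2)
            for i in range(0, len(coded), r)]

payload = [1, 0, 1, 1]                     # e.g. patient-ID bits
channel = repetition_encode(payload, r=5)
channel[3] ^= 1                            # one bit corrupted by an attack
assert repetition_decode(channel, r=5) == payload
```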
Separation and information hiding We investigate proof rules for information hiding, using the recent formalism of separation logic. In essence, we use the separating conjunction to partition the internal resources of a module from those accessed by the module's clients. The use of a logical connective gives rise to a form of dynamic partitioning, where we track the transfer of ownership of portions of heap storage between program components. It also enables us to enforce separation in the presence of mutable data structures with embedded addresses that may be aliased.
Incorporating usability into requirements engineering tools The development of a computer system requires the definition of a precise set of properties or constraints that the system must satisfy with maximum economy and efficiency. This definition process requires a significant amount of communication between the requestor and the developer of the system. In recent years, several methodologies and tools have been proposed to improve this communication process. This paper establishes a framework for examining the methodologies and techniques, charting the progress made, and identifying opportunities to improve the communication capabilities of a requirements engineering tool.
Non-interference through determinism The standard approach to the specification of a secure system is to present a (usually state-based) abstract security model separately from the specification of the system's functional requirements, and to establish a correspondence between the two specifications. This complex treatment has resulted in development methods distinct from those usually advocated for general applications. We provide a novel and intellectually satisfying formulation of security properties in a process algebraic framework, and show that these are preserved under refinement. We relate the results to a more familiar state-based (Z) specification methodology. There are efficient algorithms for verifying our security properties using model checking.
Matching language and hardware for parallel computation in the Linda Machine The Linda Machine is a parallel computer that has been designed to support the Linda parallel programming environment in hardware. Programs in Linda communicate through a logically shared associative memory called tuple space. The goal of the Linda Machine project is to implement Linda's high-level shared-memory abstraction efficiently on a nonshared-memory architecture. The authors describe the machine's special-purpose communication network and its associated protocols, the design of the Linda coprocessor, and the way its interaction with the network supports global access to tuple space. The Linda Machine is in the process of fabrication. The authors discuss the machine's projected performance and compare this to software versions of Linda.
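The tuple-space abstraction that the Linda Machine supports in hardware can be sketched in a few lines of software. Below is a minimal, sequential illustration of the out/rd/in operations with wildcard matching; a real Linda kernel would block on unmatched templates and run concurrently, which this sketch deliberately omits:

```python
class TupleSpace:
    """Minimal sequential sketch of Linda's associative tuple space:
    out() deposits a tuple, in() withdraws a matching tuple, rd() reads
    one without removing it. None in a template acts as a wildcard."""
    def __init__(self):
        self.tuples = []

    def out(self, *tup):
        self.tuples.append(tup)

    def _match(self, template, tup):
        return len(template) == len(tup) and all(
            t is None or t == v for t, v in zip(template, tup))

    def rd(self, *template):
        for tup in self.tuples:
            if self._match(template, tup):
                return tup
        return None                     # real Linda would block here

    def in_(self, *template):           # 'in' is a Python keyword
        tup = self.rd(*template)
        if tup is not None:
            self.tuples.remove(tup)
        return tup

ts = TupleSpace()
ts.out("count", 42)
print(ts.rd("count", None))   # ('count', 42), tuple stays in the space
print(ts.in_("count", None))  # ('count', 42), tuple is withdrawn
```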
Refinement in Object-Z and CSP In this paper we explore the relationship between refinement in Object-Z and refinement in CSP. We prove with a simple counterexample that refinement within Object-Z, established using the standard simulation rules, does not imply failures-divergences refinement in CSP. This contradicts accepted results.Having established that data refinement in Object-Z and failures refinement in CSP are not equivalent we identify alternative refinement orderings that may be used to compare Object-Z classes and CSP processes. When reasoning about concurrent properties we need the strength of the failures-divergences refinement ordering and hence identify equivalent simulation rules for Object-Z. However, when reasoning about sequential properties it is sufficient to work within the simpler relational semantics of Object-Z. We discuss an alternative denotational semantics for CSP, the singleton failures semantic model, which has the same information content as the relational model of Object-Z.
Reversible data hiding by adaptive group modification on histogram of prediction errors. In this work, the conventional histogram shifting (HS) based reversible data hiding (RDH) methods are first analyzed and discussed. Then, a novel HS based RDH method is put forward by using the proposed Adaptive Group Modification (AGM) on the histogram of prediction errors. Specifically, in the proposed AGM method, multiple bins are vacated based on their magnitudes and frequencies of occurrence by employing an adaptive strategy. The design goals are to maximize hiding elements while minimizing shifting and modification elements, maintaining high image quality by giving priority to the histogram bins utilized for hiding. Furthermore, instead of hiding only one bit at a time, the payload is decomposed into segments and each segment is hidden by modifying a triplet of prediction errors to suppress distortion. Experimental results show that the proposed AGM technique outperforms the current state-of-the-art HS based RDH methods. As a representative result, the proposed method achieves an improvement of 4.30 dB in terms of PSNR when 105,000 bits are hidden in the test Lenna image.
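AGM generalizes conventional single-bin histogram shifting. For context, here is a minimal sketch of that baseline HS embed/extract on prediction errors, assuming the payload exactly fills the peak bin (practical schemes also transmit the payload length and handle overflow):

```python
import numpy as np

def hs_embed(errors, bits, peak=0):
    """Baseline histogram-shifting embed on a 1-D array of prediction
    errors: shift every error > peak right by 1 to vacate the bin
    peak+1, then hide one bit at each occurrence of the peak value
    (0 -> stay at peak, 1 -> move to peak+1). Fully reversible."""
    out = errors.copy()
    out[out > peak] += 1                  # vacate bin peak+1
    it = iter(bits)
    for i in range(len(out)):
        if out[i] == peak:
            try:
                out[i] += next(it)        # embed one payload bit
            except StopIteration:
                break
    return out

def hs_extract(marked, peak=0):
    """Recover the payload and restore the original errors."""
    bits = [int(e == peak + 1) for e in marked if e in (peak, peak + 1)]
    restored = marked.copy()
    restored[restored == peak + 1] = peak
    restored[restored > peak + 1] -= 1
    return bits, restored

errs = np.array([0, 2, 0, -1, 0, 3])
marked = hs_embed(errs, [1, 0, 1])
bits, restored = hs_extract(marked)
assert bits == [1, 0, 1] and (restored == errs).all()
```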
1.22
0.22
0.22
0.01
0.005
0.004
0.001667
0.000278
0
0
0
0
0
0
A framework for formally defining the syntax of visual languages This paper outlines a framework for syntax definition. The purpose of the framework is to enable the construction of grammatical formalisms which are suitable for defining the syntax of visual languages. The paper considers a number of existing formalisms and illustrates how they may be expressed within the framework. The framework is then used to design a new grammatical formalism for defining the syntax of languages of Venn diagrams. It is concluded that the framework demonstrates potential to aid the development of grammatical formalisms for defining the syntax of visual languages
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer - Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096-processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news also carries over when we consider the program complexity of module checking. As good news, we show that for the commonly used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system, resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object-oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced-level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve general 0/1 MIP problems and thus contains no problem-domain-specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special-purpose methods created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power-aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the timewise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
PPM: One Step to Practicality A new mechanism for the PPM data compression scheme is invented. Simple procedures for adaptive escape estimation are proposed. A practical implementation of these methods is described, and it is shown that this implementation gives the best results to date at a complexity comparable with widespread LZ77- and BWT-based algorithms.
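As background for the escape-estimation problem the paper addresses, here is a minimal sketch of the classical PPM method C estimate for a single context, the kind of fixed rule that adaptive escape estimation replaces:

```python
from collections import Counter

def ppmc_probabilities(context_counts: Counter):
    """PPM method C escape estimation for one context: the escape
    symbol gets probability d / (n + d), where n is the total number
    of symbols seen in the context and d the number of distinct
    symbols; each seen symbol s gets count(s) / (n + d)."""
    n = sum(context_counts.values())
    d = len(context_counts)
    total = n + d
    probs = {s: c / total for s, c in context_counts.items()}
    probs["<ESC>"] = d / total      # fall back to a shorter context
    return probs

# after seeing "abracadabra" in some order-0 context:
print(ppmc_probabilities(Counter("abracadabra")))
```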
Hypergraph Lossless Image Compression Hypergraphs are a very powerful tool and can represent many problems. In this paper we define a new image representation based on hypergraphs. This representation leads to a new lossless compression algorithm for images called HLC. We present the algorithm and give some experimental results proving its efficiency. Finally we show that this algorithm can be generalized to three-dimensional images and to parametric lossy compression.
GIFTS - the precursor geostationary satellite component of the future Earth Observing System The Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS) combines advanced technologies to observe surface thermal properties and atmospheric weather and chemistry variables in four dimensions. Large area format Focal Plane detector Arrays (LFPAs) provide near instantaneous large area coverage with high horizontal resolution. A Fourier Transform Spectrometer (FTS) enables atmospheric radiance spectra to be observed simultaneously for all LFPA detector elements thereby providing high vertical resolution temperature and moisture sounding information. The fourth dimension, time, is provided by the geosynchronous satellite platform, which enables near continuous imaging of the atmosphere's three-dimensional structure. The key advances that GIFTS achieves beyond current geosynchronous capabilities are: (1) the water-vapor winds will be altitude-resolved throughout the troposphere, (2) surface temperature and atmospheric soundings will be achieved with high spatial and temporal resolution, and (3) the transport of tropospheric pollutant gases (i.e. CO and O3) will be observed. GIFTS will be launched in 2005 as NASA's third New Millennium Program (NMP) Earth Observing (EO-3) satellite mission, and will serve as the prototype of sounding systems to fly on future operational geosynchronous satellites. After a one-year validation period in view of North America, the GIFTS will be repositioned to become the Navy's Indian Ocean METOC Imager (IOMI). We describe the GIFTS technology and provide examples of the GIFTS remote sensing capabilities using aircraft interferometer data. The GIFTS is an important step in implementing the NASA Earth Science Enterprise vision of a sensor web for future Earth observations.
PCIF: An Algorithm for Lossless True Color Image Compression An efficient algorithm for compressing true color images is proposed. The technique uses a combination of simple and computationally cheap operations. The three main steps consist of predictive image filtering, decomposition of data, and data compression through the use of run length encoding, Huffman coding and grouping the values into polyominoes. The result is a practical scheme that achieves good compression while providing fast decompression. The approach has performance comparable to, and often better than, competing standards such as JPEG 2000 and JPEG-LS.
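Of the three cheap coders PCIF combines, run-length encoding is the simplest to show. A minimal sketch of an RLE pass over a flat sequence of filtered pixel values (illustrative only; PCIF's grouping into polyominoes is more involved):

```python
def rle_encode(seq):
    """Run-length encode a flat sequence of filtered pixel values:
    collapse each run of equal values into a (value, count) pair."""
    out = []
    for v in seq:
        if out and out[-1][0] == v:
            out[-1][1] += 1           # extend the current run
        else:
            out.append([v, 1])        # start a new run
    return [tuple(p) for p in out]

print(rle_encode([0, 0, 0, 5, 5, 1]))  # [(0, 3), (5, 2), (1, 1)]
```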
Performance Analysis of the JPEG 2000 Image Coding Standard Some of the major objectives of the JPEG 2000 still image coding standard were compression and memory efficiency, lossy to lossless coding, support for continuous-tone to bi-level images, error resilience, and random access to regions of interest. This paper will provide readers with some insight on various features and functionalities supported by a baseline JPEG 2000-compliant codec. Three JPEG 2000 software implementations (Kakadu, JasPer, JJ2000) are compared with several other codecs, including JPEG, JBIG, JPEG-LS, MPEG-4 VTC and H.264 intra coding. This study can serve as a guideline for users to estimate the effectiveness of JPEG 2000 for various applications, and to select optimal parameters according to specific application requirements.
Compression of Hyperspectral Images Using Discrete Wavelet Transform and Tucker Decomposition The compression of hyperspectral images (HSIs) has recently become a very attractive issue for remote sensing applications because of their volumetric data. In this paper, an efficient method for hyperspectral image compression is presented. The proposed algorithm, based on Discrete Wavelet Transform and Tucker Decomposition (DWT-TD), exploits both the spectral and the spatial information in the images. The core idea behind our proposed technique is to apply TD on the DWT coefficients of the spectral bands of HSIs. We use DWT to effectively separate HSIs into different sub-images and TD to efficiently compact the energy of the sub-images. We evaluate the effect of the proposed method on real HSIs and also compare the results with well-known compression methods. The obtained results show a better performance of the proposed method. Moreover, we show the impact of compressing HSIs on supervised classification and linear unmixing.
The LOCO-I lossless image compression algorithm: principles and standardization into JPEG-LS LOCO-I (LOw COmplexity LOssless COmpression for Images) is the algorithm at the core of the new ISO/ITU standard for lossless and near-lossless compression of continuous-tone images, JPEG-LS. It is conceived as a “low complexity projection” of the universal context modeling paradigm, matching its modeling unit to a simple coding unit. By combining simplicity with the compression potential of context models, the algorithm “enjoys the best of both worlds.” It is based on a simple fixed context model, which approaches the capability of the more complex universal techniques for capturing high-order dependencies. The model is tuned for efficient performance in conjunction with an extended family of Golomb (1966) type codes, which are adaptively chosen, and an embedded alphabet extension for coding of low-entropy image regions. LOCO-I attains compression ratios similar or superior to those obtained with state-of-the-art schemes based on arithmetic coding. Moreover, it is within a few percentage points of the best available compression ratios, at a much lower complexity level. We discuss the principles underlying the design of LOCO-I, and its standardization into JPEG-LS.
Partitioned vector quantization: application to lossless compression of hyperspectral images A novel design for a vector quantizer that uses multiple codebooks of variable dimensionality is proposed. High dimensional source vectors are first partitioned into two or more subvectors of (possibly) different length and then, each subvector is individually encoded with an appropriate codebook. Further redundancy is exploited by conditional entropy coding of the subvectors indices. This scheme allows practical quantization of high dimensional vectors in which each vector component is allowed to have different alphabet and distribution. This is typically the case of the pixels representing a hyperspectral image. We present experimental results in the lossless and near-lossless encoding of such images. The method can be easily adapted to lossy coding.
Optimal source codes for geometrically distributed integer alphabets (Corresp.) Let P(i) = (1 - θ)θ^i be a probability assignment on the set of nonnegative integers, where θ is an arbitrary real number, 0 < θ < 1. We show that an optimal binary source code for this probability assignment is constructed as follows. Let l be the integer satisfying θ^l + θ^(l+1) <= 1 < θ^l + θ^(l-1), and represent each nonnegative integer i as i = lj + r, where j = floor(i/l), the integer part of i/l, and r = i mod l. Encode j by a unary code (i.e., j zeros followed by a single one), and encode r by a Huffman code, using codewords of length floor(log2 l) for r < 2^(floor(log2 l)+1) - l, and length floor(log2 l) + 1 otherwise. An optimal code for the nonnegative integers is the concatenation of those two codes.
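The construction just described is the Golomb code with parameter l, and it translates directly into code. A sketch following the recipe above, including the stated condition for choosing l:

```python
def golomb_encode(i, l):
    """Encode nonnegative integer i per the construction above: write
    i = l*j + r, emit j in unary (j zeros then a one), then emit r with
    floor(log2 l) bits if r < 2**(floor(log2 l)+1) - l and with
    floor(log2 l) + 1 bits otherwise (a truncated binary code)."""
    j, r = divmod(i, l)
    unary = "0" * j + "1"
    k = l.bit_length() - 1            # floor(log2 l)
    cut = 2 ** (k + 1) - l            # short codewords: r < cut
    if r < cut:
        binary = format(r, f"0{k}b") if k else ""
    else:
        binary = format(r + cut, f"0{k + 1}b")
    return unary + binary

def best_l(theta):
    """Smallest l with theta**l + theta**(l+1) <= 1; for that l the
    other half of the optimality condition holds automatically."""
    l = 1
    while theta ** l + theta ** (l + 1) > 1:
        l += 1
    return l

theta = 0.8
l = best_l(theta)                      # l = 3 for theta = 0.8
print(l, [golomb_encode(i, l) for i in range(6)])
```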
I-structures: data structures for parallel computing It is difficult to achieve elegance, efficiency, and parallelism simultaneously in functional programs that manipulate large data structures. We demonstrate this through careful analysis of program examples using three common functional data-structuring approaches: lists using Cons, arrays using Update (both fine-grained operators), and arrays using make-array (a “bulk” operator). We then present I-structures as an alternative and show elegant, efficient, and parallel solutions for the program examples in Id, a language with I-structures. The parallelism in Id is made precise by means of an operational semantics for Id as a parallel reduction system. I-structures make the language nonfunctional, but do not lose determinacy. Finally, we show that even in the context of purely functional languages, I-structures are invaluable for implementing functional data abstractions.
Visual support for reengineering work processes
A relation algebraic model of robust correctness We propose a new and uniform abstract relational approach to demonic nondeterminism and robust correctness similar to Hoare's chaos semantics. It is based on a specific set of relations on flat lattices. This set forms a complete lattice. Furthermore, we deal with the refinement of programs. Among other things, we show the correctness of the unfold/fold method for demonic nondeterminism and robust correctness as refinement relation, and investigate relationships to Dijkstra's wp-calculus and Morgan's specification statement.
Thue choosability of trees A vertex colouring of a graph G is nonrepetitive if for any path P = (v_1, v_2, ..., v_2r) in G, the first half is coloured differently from the second half. The Thue choice number π_ch(G) of G is the least integer l such that for every l-list assignment L of G, there exists a nonrepetitive L-colouring of G. We prove that for any positive integer l, there is a tree T with π_ch(T) > l. On the other hand, it is proved that if G' is a graph of maximum degree Δ, and G is obtained from G' by attaching to each vertex v of G' a connected graph of tree-depth at most z rooted at v, then π_ch(G) <= c(Δ, z) for some constant c(Δ, z) depending only on Δ and z.
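The nonrepetitive property is easy to test on a single path. A small brute-force checker of the definition above (the graph property quantifies over all paths, so this is only the per-path test):

```python
def is_nonrepetitive_path(colours):
    """Check the defining property on a single path: no contiguous
    block of even length 2r has its first half coloured the same,
    position by position, as its second half."""
    n = len(colours)
    for start in range(n):
        for r in range(1, (n - start) // 2 + 1):
            block = colours[start:start + 2 * r]
            if block[:r] == block[r:]:
                return False
    return True

print(is_nonrepetitive_path([1, 2, 1, 3, 1, 2]))  # True
print(is_nonrepetitive_path([1, 2, 1, 2]))        # False: (1, 2) repeats
```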
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.026956
0.02875
0.025398
0.025
0.008333
0.002778
0.000274
0.000038
0.000003
0
0
0
0
0
Parallel image normalization on a mesh connected array processor Image normalization is a basic operation in various image processing tasks. A parallel algorithm for fast binary image normalization is proposed for a mesh connected array processor. The principal operation in this algorithm is pixel mapping. The basic idea of parallel pixel mapping is to utilize a store and forward mechanism which routes pixels from their source locations to destinations in parallel along paths of minimum length. The routing is based on a simple yet powerful concept of flow control patterns. This can form the basis for designing other parallel algorithms for low-level image processing. The normalization process is decomposed into three procedures: translation, rotation and scaling. In each procedure, a mapping algorithm is employed to route the object pixels from source locations to destinations. Simulation results for the parallel image normalization on generated images are provided.
Antialiasing Scan-Line Data The output of a scan-line visible-surface algorithm is a collection of scan-line segments with associated simple shading functions, which together define the shading as a function of a continuous variable along each scan line. A hybrid antialiasing method that uses this information fully is presented. The method extends to the case where an image transform maps the scan lines into slanted lines in the output raster coordinates. Edge-slope information can be used to infer data along extra scan lines to improve antialiasing. Results obtained with the method are given.
Synthetic texturing using digital filters
A nonaliasing, real-time spatial transform technique A two-pass spatial transform technique that does not exhibit the aliasing artifacts associated with techniques for spatial transform of discrete sampled images is possible through the use of a complete and continuous resampling interpolation algorithm. The algorithm is complete in the sense that all the pixels of the input image under the map of the output image fully contribute to the output image. It is continuous in the sense that no gaps or overlaps exist in the sampling of the input pixels and that the sampling can be performed with arbitrary precision. The technique is real time in the sense that it can be guaranteed to operate for any arbitrary transform within a given time limit. Because of the complete and continuous nature of the resampling algorithm, the resulting image is free of the classic sampling artifacts such as graininess, degradation, and edge aliasing.
Discrete techniques for computer transformations of digital images and patterns Images and patterns through a cycle conversion T^(-1)T are discussed and facilitated by the combined algorithms, where the transformation is T: (ξ, η) → (x, y) with the linear or nonlinear functions x = x(ξ, η) and y = y(ξ, η). A new Area Method is presented for images through T and T^(-1)T of linear transformations, and three combinations of the Splitting-Shooting Method and the Splitting-Integrating Method are proposed for images through T^(-1)T of linear and nonlinear transformations. Furthermore, both error analysis and graphical experiments are given, proving the importance of those combinations to computer vision, image processing, graphs and pattern recognition.
Moment images, polynomial fit filters, and the problem of surface interpolation A uniform hierarchical procedure for processing incomplete image data is described. It begins with the computation of local moments within windows centered on each output sample point. Arrays of such measures, called moment images, are computed efficiently through the application of a series of small-kernel filters. A polynomial surface is then fit to the available image data within a local neighborhood of each sample point. Best-fit polynomials are obtained from the corresponding local moments. The procedure, hierarchical polynomial fit filtering, yields a multiresolution set of low-pass filtered images. The set of low-pass images is combined by multiresolution interpolation to form a smooth surface passing through the original image data.
Information fractals for evidential pattern classification Proposed is a novel model of belief functions based on fractal theory. The model is first justified in qualitative, intuitive terms, then formally defined. Also, the application of the model to the design of an evidential classifier is described. The proposed classification scheme is illustrated by a simple example dealing with robot sensing. The approach followed is motivated by applications to the design of intelligent systems, such as sensor-based dexterous manipulators, that must operate in unstructured, highly uncertain environments. Sensory data are assumed to be (1) incomplete and (2) gathered at multiple levels of resolution
Geometric attacks on image watermarking systems Synchronization errors can lead to significant performance loss in image watermarking methods, as the geometric attacks in the Stirmark benchmark software show. The authors describe the most common types of geometric attacks and survey proposed solutions.
A universal algorithm for sequential data compression A universal algorithm for sequential data compression is presented. Its performance is investigated with respect to a nonprobabilistic model of constrained sources. The compression ratio achieved by the proposed universal code uniformly approaches the lower bounds on the compression ratios attainable by block-to-variable codes and variable-to-block codes designed to match a completely specified source.
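The scheme described here became known as LZ77. A compact greedy-parsing sketch with a matching decoder; the window size, triple representation and greedy match choice are illustrative simplifications:

```python
def lz77_parse(data, window=4096):
    """Greedy LZ77 parsing: emit (offset, length, next_char) triples,
    where (offset, length) points at the longest match for the upcoming
    text inside the sliding window of already-seen text."""
    i, out = 0, []
    while i < len(data):
        best_off, best_len = 0, 0
        lo = max(0, i - window)
        for j in range(lo, i):
            k = 0
            # matches may extend past i (self-referential copies)
            while i + k < len(data) - 1 and data[j + k] == data[i + k]:
                k += 1
            if k > best_len:
                best_off, best_len = i - j, k
        out.append((best_off, best_len, data[i + best_len]))
        i += best_len + 1
    return out

def lz77_unparse(triples):
    s = []
    for off, length, ch in triples:
        for _ in range(length):
            s.append(s[-off])        # byte-wise copy handles overlaps
        s.append(ch)
    return "".join(s)

text = "abracadabra abracadabra"
triples = lz77_parse(text)
assert lz77_unparse(triples) == text
```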
Logical foundations of object-oriented and frame-based languages We propose a novel formalism, called Frame Logic (abbr., F-logic), that accounts in a clean and declarative fashion for most of the structural aspects of object-oriented and frame-based languages. These features include object identity, complex objects, inheritance, polymorphic types, query methods, encapsulation, and others. In a sense, F-logic stands in the same relationship to the object-oriented paradigm as classical predicate calculus stands to relational programming. F-logic has a model-theoretic semantics and a sound and complete resolution-based proof theory. A small number of fundamental concepts that come from object-oriented programming have direct representation in F-logic; other, secondary aspects of this paradigm are easily modeled as well. The paper also discusses semantic issues pertaining to programming with a deductive object-oriented language based on a subset of F-logic.
The anchored version of the temporal framework In this survey paper we present some of the recent developments in the temporal formal system for the specification, verification and development of reactive programs. While the general methodology remains very much the one presented in some earlier works on the subject, such as [MP83c,MP83a,Pnu86], there have been several technical improvements and gained insights in understanding the computational model, the logic itself, the proof system and its presentation, and connections with alternative formalisms, such as finite automata. In this paper we explicate some of these improvements and extensions.
Dependence Directed Reasoning and Learning in Systems Maintenance Support The maintenance of large information systems involves continuous modifications in response to evolving business conditions or changing user requirements. Based on evidence from a case study, it is shown that the system maintenance activity would benefit greatly if the process knowledge reflecting the teleology of a design could be captured and used to reason about the consequences of changing conditions or requirements. A formalism called REMAP (representation and maintenance of process knowledge) that accumulates design process knowledge to manage systems evolution is described. To accomplish this, REMAP acquires and maintains dependencies among the design decisions made during a prototyping process, and is able to learn general domain-specific design rules on which such dependencies are based. This knowledge can not only be applied to prototype refinement and systems maintenance, but can also support the reuse of existing design or software fragments to construct similar ones using analogical reasoning techniques.
A knowledge representation language for requirements engineering Requirements engineering, the phase of software development where the users' needs are investigated, is more and more shifting its concern from the target system towards its environment. A new generation of languages is needed to support the definition of application domain knowledge and the behavior of the universe around the computer. This paper assesses the applicability of classical knowledge representation techniques to this purpose. Requirements engineers insist, however, more on natural representation, whereas expert systems designers insist on efficient automatic use of the knowledge. Given this priority of expressiveness, two candidates emerge: the semantic networks and the techniques based on logic. They are combined in a language called the ERAE model, which is illustrated on examples, and compared to other requirements engineering languages.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.048101
0.071138
0.04851
0.029632
0.018681
0.001108
0.000226
0.000011
0
0
0
0
0
0
Normal form approach to compiler design This paper demonstrates how reduction to normal form can help in the design of a correct compiler for Dijkstra's guarded command language. The compilation strategy is to transform a source program, by a series of algebraic manipulations, into a normal form that describes the behaviour of a stored-program computer. Each transformation eliminates high-level language constructs in favour of lower-level constructs. The correctness of the compiler follows from the correctness of each of the algebraic transformations.
Developing Correct Systems The goal of the Provably Correct Systems project (ProCoS) is to develop a mathematical basis for the development of embedded, real-time computer systems. This survey paper introduces novel specification languages and verification techniques for four levels of development: requirements definition and design; program specifications and their transformation to parallel programs; compilation of programs to hardware; and compilation of real-time programs to conventional processors.
A Case Study in Transformational Design of Concurrent Systems We explain a transformational approach to the design and verification of communicating concurrent systems. The transformations start from specifications that combine trace-based with state-based assertional reasoning about the desired communication behaviour, and yield concurrent implementations. We illustrate our approach by a case study proving correctness of implementations of safe and regular registers allowing concurrent writing and reading phases, originally due to Lamport.
A Tactic Driven Refinement Tool
Reasoning Algebraically about Loops We show here how to formalize different kinds of loop constructs within the refinement calculus, and how to use this formalization to derive general loop transformation rules. The emphasis is on using algebraic methods for reasoning about equivalence and refinement of loops, rather than looking at operational ways of reasoning about loops in terms of their execution sequences. We apply the algebraic reasoning techniques to derive a collection of different loop transformation rules that have been found important in practical program derivations: merging and reordering of loops, data refinement of loops with stuttering transitions and atomicity refinement of loops.
Informal Strategies in Design by Refinement To become more widely accepted, formal development methods must come to be seen to complement existing systems design techniques, rather than to replace them. This paper proposes one way in which this can take place—in a formal development framework, closely based on the refinement calculus but simultaneously accommodating some important informal design strategies.
Software development: two approaches to animation of Z specifications using Prolog Formal methods rely on the correctness of the formal requirements specification, but this correctness cannot be proved. This paper discusses the use of software tools to assist in the validation of formal specifications and advocates a system by which Z specifications may be animated as Prolog programs. Two Z/Prolog translation strategies are explored: formal program synthesis and structure simulation. The paper explains why the former proved to be unsuccessful and describes the techniques developed for implementing the latter approach, with the aid of case studies.
Biting the silver bullet: toward a brighter future for system development The author responds to two discouraging position papers by F.B. Brooks, Jr. (see ibid., vol.20, no.4, p 10-19, 1987) and D.L. Parnas (see Commun. ACM, vol.28, no.12, p.1326-35, 1985) regarding the potential of software engineering. While agreeing with most of the specific points made in both papers, he illuminates the brighter side of the coin, emphasizing developments in the field that were too recent or immature to have influenced Brooks and Parnas when they wrote their manuscripts. He reviews their arguments, and then considers a class of systems that has been termed reactive, which are widely considered to be particularly problematic. He reviews a number of developments that have taken place in the past several years and submits that they combine to form the kernel of a solid general-purpose approach to the development of complex reactive systems.
On the Lattice of Specifications: Applications to a Specification Methodology In this paper we investigate the lattice properties of the natural ordering between specifications, which expresses that a specification expresses a stronger requirement than another specification. The lattice-like structure that we uncover is used as a basis for a specification methodology.
Specifying software requirements for complex systems: new techniques and their application This paper concerns new techniques for making requirements specifications precise, concise, unambiguous, and easy to check for completeness and consistency. The techniques are well-suited for complex real-time software systems; they were developed to document the requirements of existing flight software for the Navy's A-7 aircraft. The paper outlines the information that belongs in a requirements document and discusses the objectives behind the techniques. Each technique is described and illustrated with examples from the A-7 document. The purpose of the paper is to introduce the A-7 document as a model of a disciplined approach to requirements specification; the document is available to anyone who wishes to see a fully worked-out example of the approach.
Goal-directed requirements acquisition Requirements analysis includes a preliminary acquisition step where a global model for the specification of the system and its environment is elaborated. This model, called requirements model, involves concepts that are currently not supported by existing formal specification languages, such as goals to be achieved, agents to be assigned, alternatives to be negotiated, etc. The paper presents an approach to requirements acquisition which is driven by such higher-level concepts. Requirements models are acquired as instances of a conceptual meta-model. The latter can be represented as a graph where each node captures an abstraction such as, e.g., goal, action, agent, entity, or event, and where the edges capture semantic links between such abstractions. Well-formedness properties on nodes and links constrain their instances—that is, elements of requirements models. Requirements acquisition processes then correspond to particular ways of traversing the meta-model graph to acquire appropriate instances of the various nodes and links according to such constraints. Acquisition processes are governed by strategies telling which way to follow systematically in that graph; at each node specific tactics can be used to acquire the corresponding instances. The paper describes a significant portion of the meta-model related to system goals, and one particular acquisition strategy where the meta-model is traversed backwards from such goals. The meta-model and the strategy are illustrated by excerpts of a university library system.
Property Based Coordination For a multiagent system (MAS), coordination is the assumption that agents are able to adapt their behavior according to those of the other agents. The principle of Property Based Coordination (PBC) is to represent each entity composing the MAS by its observable properties, and to organize their perception by the agents. The main result is to enable the agents to have contextual behaviors. In this paper, we instantiate the PBC principle by a model, called EASI (Environment as Active Support of Interaction), which is inspired by the Symbolic Data Analysis theory. It enables building up an interaction as a connection point between the needs of the initiator, those of the receptor(s) and a given context. We demonstrate that thanks to PBC, EASI is expressive enough to instantiate other solutions to the connection problem. Our proposition has been used in the traveler information domain to develop an Agent Information Server dynamically parameterized by its users.
Should Concurrency be Specified?
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.034449
0.022584
0.010004
0.007389
0.002444
0.001111
0.000265
0.000041
0.000005
0
0
0
0
0
Requirements critiquing using domain abstractions Reusing domain abstractions representing key domain features has been shown to aid requirements specification; however, their role in requirements engineering has not been investigated thoroughly. This paper proposes domain abstractions to aid requirements critiquing as well as specification, thus maximising the payoff from retrieving domain abstractions. The requirements critic is part of a prototype intelligent requirements engineering toolkit being developed as part of the Nature project, ESPRIT basic research action 6353. The critic retrieves domain abstractions to validate requirement specifications for problems including incompleteness, inconsistencies and ambiguities. Intelligent, mixed-initiative dialogue between the critic and the requirements engineer permits requirements critiquing at the right time and level of abstraction.
Approaches to interface design The current literature on interface design is reviewed. Four major approaches to interface design are identified; craft, cognitive engineering, enhanced software engineering and technologist. The aim of this classification framework is not to split semantic hairs, but to provide a comprehensive overview of a complex field and to clarify some of the issues involved. The paper goes on to discuss the source of quality in interface design and concludes with some recommendations on how to improve HCI methods.
A co-operative scenario based approach to acquisition and validation of system requirements: How exceptions can help! Scenarios, in most situations, are descriptions of required interactions between a desired system and its environment, which detail normative system behaviour. Our studies of current scenario use in requirements engineering have revealed that there is considerable interest in the use of scenarios for acquisition, elaboration and validation of system requirements. However, scenarios have seldom bee...
Integrating methods of human-computer interface design with structured systems development Various methods for specification and design of the human-computer interface have been proposed but practice of such methods is not widespread. Possible reasons for this may be the lack of integration of human-computer interface design with software engineering and the specialized nature of HCI methods. A method of interface design is proposed which integrates the development of the human-computer interface with structured systems analysis and design. The method covers task and user analysis, interface specification and dialogue design. A case study of a library system is used to illustrate the method which is discussed in relation to different approaches that have been adopted for interface specification and design. It is argued that software design methods should cover all aspects of process design and the human-computer interface.
Distributed Intelligent Agents In Retsina, the authors have developed a distributed collection of software agents that cooperate asynchronously to perform goal-directed information retrieval and integration for supporting a variety of decision-making tasks. Examples for everyday organizational decision making and financial portfolio management demonstrate its effectiveness.
An organisation ontology for enterprise modeling: Preliminary concepts for linking structure and behaviour The paper presents our preliminary exploration into an organisation ontology for the TOVE enterprise model. The ontology puts forward a number of conceptualizations for modeling organisations: activities, agents, roles, positions, goals, communication, authority, commitment. Its primary focus has been in linking structure and behaviour through the concept of empowerment. Empowerment is the right of an organisation agent to perform status changing actions. This linkage is critical to the unification of enterprise models and their executability.
A program integration algorithm that accommodates semantics-preserving transformations Given a program Base and two variants, A and B, each created by modifying separate copies of Base, the goal of program integration is to determine whether the modifications interfere, and if they do not, to create an integrated program that includes both sets of changes as well as the portions of Base preserved in both variants. Text-based integration techniques, such as the one used by the UNIX diff3 utility, are obviously unsatisfactory because one has no guarantees about how the execution behavior of the integrated program relates to the behaviors of Base, A, and B. The first program-integration algorithm to provide such guarantees was developed by Horwitz, Prins, and Reps. However, a limitation of that algorithm is that it incorporates no notion of semantics-preserving transformations. This limitation causes the algorithm to be overly conservative in its definition of interference. For example, if one variant changes the way a computation is performed (without changing the values computed) while the other variant adds code that uses the result of the computation, the algorithm would classify those changes as interfering. This paper describes a new integration algorithm that is able to accommodate semantics-preserving transformations.
Negotiation in distributed artificial intelligence: drawing from human experience Distributed artificial intelligence and cooperative problem solving deal with agents who allocate resources, make joint decisions and develop plans. Negotiation may be important for interacting agents who make sequential decisions. We discuss the use of negotiation in conflict resolution in distributed AI and select those elements of human negotiations that can help artificial agents better to resolve conflicts. Problem evolution, a basic aspect of negotiation, can be represented using the restructurable modelling method of developing decision support systems. Restructurable modelling is implemented in the knowledge-based generic decision analysis and simulation system Negoplan. Experiments show that Negoplan can effectively help resolve individual and group conflicts in negotiation.
Reusing requirements through a modeling and composition support tool This paper presents the concepts and tools for reusing requirements being designed and implemented within the ITHACA project. The RECAST (REquirements Collection And Specification Tool) tool guides the Application Developer in the requirement specification process by providing suggestions to the reuse of components. To this aim, RECAST includes a meta-level of definitions; here, meta-level classes associated to components contain design suggestions about the reuse of these components and about the design actions to be performed during the subsequent application development phases.
Role of data dictionaries in information resource management The role of information resource dictionary systems (data dictionary systems) is important in two major phases of information resource management. First, information requirements analysis and specification, which is a complex activity requiring data dictionary support: the end result is the specification of an “Enterprise Model,” which embodies the major activities, processes, information flows, organizational constraints, and concepts. This role is examined in detail after analyzing the existing approaches to requirements analysis and specification. Second, information modeling, which uses the information in the Enterprise Model to construct a formal implementation independent database specification: several information models and support tools that may aid in transforming the initial requirements into the final logical database design are examined. The metadata — knowledge about both data and processes — contained in the data dictionary can be used to provide views of data for the specialized tools that make up the database design workbench. The role of data dictionary systems in the integration of tools is discussed.
Data Flow Structures for System Specification and Implementation Data flow representations are used increasingly as a formal modeling tool for the specification of systems. While we often think of systems in this form, developers have been reluctant to implement data flow constructs directly because languages and operating systems have traditionally not encouraged (or supported) such an approach. This paper describes the use of data flow structures for system analysis and a set of facilities that make direct implementation of data flow convenient and natural.
From MooZ to Eiffel - A Rigorous Approach to System Development We propose a method for refining MooZ specifications into Eiffel programs. MooZ is an object-oriented extension of the Z model based specification language and Eiffel is a programming language which is also based on the object-oriented paradigm. We present the refinement method and then we illustrate its application to part of an Industrial Maintenance System.
Grounded Conceptual Graph Models The ability to represent real-world objects is an important feature of a practical knowledge system. Most knowledge systems involve informal or ad-hoc mappings from their internal symbols to objects and concepts in their environment. This work introduces a framework for formally associating symbols to their meanings, a process we call grounding. Two kinds of grounding are discussed with respect to conceptual graphs --- active grounding, which involves actors to provide mappings to the environment, and terminological grounding, which involves actors that establish the basic elements of meaning with respect to a subject field's agreed-upon terminology. The work incorporates active knowledge systems and international terminological standards.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
1.034922
0.031373
0.023826
0.018824
0.01181
0.01181
0.006964
0.004725
0.002139
0.000249
0.000004
0
0
0
Smart imaging to empower brain-wide neuroscience at single-cell levels A deep understanding of the neuronal connectivity and networks with detailed cell typing across brain regions is necessary to unravel the mechanisms behind the emotional and memory functions as well as to develop treatments for brain impairment. Brain-wide imaging with single-cell resolution provides unique advantages for accessing morphological features of a neuron and investigating the connectivity of neuron networks, which has led to exciting discoveries over the past years based on animal models, such as rodents. Nonetheless, high-throughput systems are in urgent demand to support studies of neural morphologies at larger scale and more detailed level, as well as to enable research on non-human primates (NHP) and human brains. The advances in artificial intelligence (AI) and computational resources bring great opportunities to ‘smart’ imaging systems, i.e., to automate, speed up, optimize and upgrade the imaging systems with AI and computational strategies. In this light, we review the important computational techniques that can support smart systems in brain-wide imaging at single-cell resolution.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
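To illustrate the semantics of the fetch-and-add primitive mentioned above, here is a lock-based Python simulation (names are invented; on the Ultracomputer the primitive is combined in the switching network rather than implemented with a lock). The classic use shown is many producers claiming distinct slots of a shared queue without software-level contention on the index.

```python
import threading

class FetchAndAddCell:
    """Simulates fetch-and-add: atomically return the old value while
    adding delta. A lock stands in for the combining network hardware."""
    def __init__(self, value=0):
        self._value = value
        self._lock = threading.Lock()

    def fetch_and_add(self, delta):
        with self._lock:
            old = self._value
            self._value += delta
            return old

tail = FetchAndAddCell()
slots = [None] * 8

def producer(item):
    # Each caller receives a unique index, so no two writes collide.
    slots[tail.fetch_and_add(1)] = item

threads = [threading.Thread(target=producer, args=(i,)) for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert sorted(slots) == list(range(8))
```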
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
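As a hedged illustration of the abstract above, one common simulation-style way of stating data refinement through a predicate transformer; the symbols and the direction of composition are chosen for exposition and may differ from the paper's own formulation.

```latex
% \rho is the predicate transformer generalising the abstraction
% function: it takes abstract predicates to concrete ones. A is the
% abstract program, C the concrete one (illustrative notation).
\[
  A \preceq_{\rho} C
  \quad\text{iff}\quad
  \rho \circ A \;\sqsubseteq\; C \circ \rho ,
\]
% where \sqsubseteq is the pointwise order on predicate transformers:
% S \sqsubseteq T iff S\,q \Rightarrow T\,q for every postcondition q.
```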
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
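A minimal sketch of the first-level mechanism only (single-flip moves with a recency-based tabu list), assuming a 0/1 multiconstraint knapsack instance; the aspiration criteria, advanced strategies and Target Analysis described above are omitted, and all names are illustrative.

```python
import random

def tabu_knapsack(values, weights, capacities, iters=500, tenure=7):
    """First-level tabu search for a 0/1 multiconstraint knapsack:
    flip one variable per move, forbid re-flipping it for `tenure`
    iterations, and remember the best feasible solution seen."""
    n = len(values)
    x = [0] * n
    tabu = [0] * n                    # iteration until which a flip is tabu
    best, best_val = x[:], 0

    def feasible(sol):
        return all(sum(w[i] * sol[i] for i in range(n)) <= c
                   for w, c in zip(weights, capacities))

    for it in range(iters):
        candidates = []
        for i in range(n):            # evaluate all admissible flips
            y = x[:]
            y[i] ^= 1
            if feasible(y) and tabu[i] <= it:
                candidates.append((sum(v * b for v, b in zip(values, y)), i, y))
        if not candidates:            # escape move when everything is tabu
            i = random.randrange(n)
            x[i] ^= 1
            tabu[i] = it + tenure
            continue
        val, i, x = max(candidates)   # best flip, even if non-improving
        tabu[i] = it + tenure
        if val > best_val:
            best, best_val = x[:], val
    return best, best_val

# Toy instance: 5 items, 2 resource constraints.
print(tabu_knapsack([10, 7, 4, 9, 6],
                    [[3, 2, 1, 4, 2], [2, 3, 2, 1, 3]],
                    [7, 6]))
```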
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the timewise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Towards a Formalization of Constraint Diagrams Geared to complement UML and to support the specification of large software systems by non-mathematicians, constraint diagrams are a visual language that generalizes the popular and intuitive Venn diagrams and Euler circles, and adds facilities for quantifying over elements and navigating relations. The language design emphasizes scalability and expressiveness while retaining intuitiveness. Spider diagrams form a subset of the notation, leaving out universal quantification and the ability to navigate relations. Spider diagrams have been given a formal definition. This paper extends that definition to encompass the constraint diagram notation. The formalization of constraint diagrams is nontrivial: it exposes subtleties concerned with the implicit ordering of symbols in the visual language, which were not evident before a formal definition of the language was attempted. This has led to an improved design of the language.
A visual framework for modelling with heterogeneous notations This paper presents a visual framework for organizing models of systems which allows a mixture of notations, diagrammatic or text-based, to be used. The framework is based on the use of templates which can be nested and sometimes flattened. It is modular and can be used to structure the constraint space of the system, making it scalable with appropriate tool support. It is also flexible and extensible: users can choose which notations to use, mix them and add new notations or templates. The goal of this work is to provide more intuitive and expressive languages and frameworks to support the construction and presentation of rich and precise models.
Visual Formalisms Revisited The development of an interactive application is a complex task that has to consider data, behavior, intercommunication, architecture and distribution aspects of the modeled system. In particular, it presupposes the successful communication between the customer and the software expert. To enhance this communication most modern software engineering methods recommend specifying the different aspects of a system by visual formalisms. In essence, visual specifications are directed graphs that are interpreted in a particular way for each aspect of the system. They are also intended to be compositional. This means that each node can itself be a graph with a separate meaning. However, the lack of a denotational model for hierarchical graphs often leads to the loss of compositionality. This has severe negative consequences in the development of realistic applications. In this paper we present a simple denotational model (which is by definition compositional) for the architecture and behavior aspects of a system. This model is then used to give a semantics to almost all the concepts occurring in ROOM. Our model also provides a compositional semantics for or-states in statecharts.
Drawing Graphs in Euler Diagrams We describe a method for drawing graph-enhanced Euler diagrams using a three stage method. The first stage is to lay out the underlying Euler diagram using a multicriteria optimizing system. The second stage is to find suitable locations for nodes in the zones of the Euler diagram using a force based method. The third stage is to minimize edge crossings and total edge length by swapping the location of nodes that are in the same zone with a multicriteria hill climbing method. We show a working version of the software that draws spider diagrams. Spider diagrams represent logical expressions by superimposing graphs upon an Euler diagram. This application requires an extra step in the drawing process because the embedded graphs only convey information about the connectedness of nodes and so a spanning tree must be chosen for each maximally connected component. Similar notations to Euler diagrams enhanced with graphs are common in many applications and our method is generalizable to drawing Hypergraphs represented in the subset standard, or to drawing Higraphs where edges are restricted to connecting with only atomic nodes.
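A toy sketch of the second stage only: force-based node placement in the spirit of Fruchterman and Reingold, ignoring the Euler-diagram zone boundaries and the later crossing-minimization stage. All names are invented, and a fuller version would add zone constraints as extra forces.

```python
import math
import random

def force_place(nodes, edges, iters=200, k=1.0, step=0.05):
    """Place nodes by simulated forces: all pairs repel, adjacent
    nodes attract. Returns a dict of node -> (x, y) positions."""
    pos = {v: (random.random(), random.random()) for v in nodes}
    for _ in range(iters):
        force = {v: [0.0, 0.0] for v in nodes}
        for u in nodes:                        # pairwise repulsion
            for v in nodes:
                if u == v:
                    continue
                dx, dy = pos[u][0] - pos[v][0], pos[u][1] - pos[v][1]
                d = math.hypot(dx, dy) or 1e-9
                f = k * k / d
                force[u][0] += f * dx / d
                force[u][1] += f * dy / d
        for u, v in edges:                     # attraction along edges
            dx, dy = pos[u][0] - pos[v][0], pos[u][1] - pos[v][1]
            d = math.hypot(dx, dy) or 1e-9
            f = d * d / k
            force[u][0] -= f * dx / d
            force[u][1] -= f * dy / d
            force[v][0] += f * dx / d
            force[v][1] += f * dy / d
        for v in nodes:                        # damped position update
            pos[v] = (pos[v][0] + step * force[v][0],
                      pos[v][1] + step * force[v][1])
    return pos

print(force_place(["a", "b", "c"], [("a", "b"), ("b", "c")]))
```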
An Algebraic Foundation for Higraphs Higraphs, which are structures extending graphs by permitting a hierarchy of nodes, underlie a number of diagrammatic formalisms popular in computing. We provide an algebraic account of higraphs (and of a mild extension), with our main focus being on the mathematical structures underlying common operations, such as those required for understanding the semantics of higraphs and Statecharts, and for implementing sound software tools which support them.
Degrees of acyclicity for hypergraphs and relational database schemes Database schemes (which, intuitively, are collections of table skeletons) can be viewed as hypergraphs. (A hypergraph is a generalization of an ordinary undirected graph, such that an edge need not contain exactly two nodes, but can instead contain an arbitrary nonzero number of nodes.) A class of "acyclic" database schemes was recently introduced. A number of basic desirable properties of database schemes have been shown to be equivalent to acyclicity. This shows the naturalness of the concept. However, unlike the situation for ordinary, undirected graphs, there are several natural, nonequivalent notions of acyclicity for hypergraphs (and hence for database schemes). Various desirable properties of database schemes are considered and it is shown that they fall into several equivalence classes, each completely characterized by the degree of acyclicity of the scheme. The results are also of interest from a purely graph-theoretic viewpoint. The original notion of acyclicity has the counterintuitive property that a subhypergraph of an acyclic hypergraph can be cyclic. This strange behavior does not occur for the new degrees of acyclicity that are considered.
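The original notion of acyclicity is decidable by a simple reduction; as an illustration, here is a small Python sketch of the classical GYO-style test for it, assuming hyperedges are given as vertex sets (this is not code from the paper).

```python
from collections import Counter

def is_alpha_acyclic(hyperedges):
    """GYO reduction: repeatedly (1) delete vertices occurring in
    exactly one hyperedge and (2) delete hyperedges contained in
    another (or left empty). Acyclic iff the hypergraph empties."""
    edges = [set(e) for e in hyperedges]
    changed = True
    while changed:
        changed = False
        counts = Counter(v for e in edges for v in e)
        for e in edges:                       # rule 1: isolated vertices
            lonely = {v for v in e if counts[v] == 1}
            if lonely:
                e -= lonely
                changed = True
        kept = []
        for i, e in enumerate(edges):         # rule 2: covered/empty edges
            covered = any(e <= f for j, f in enumerate(edges)
                          if i != j and not (e == f and j < i))
            if e and not covered:
                kept.append(e)
            else:
                changed = True
        edges = kept
    return not edges

print(is_alpha_acyclic([{1, 2}, {2, 3}, {1, 2, 3}]))   # True
print(is_alpha_acyclic([{1, 2}, {2, 3}, {1, 3}]))      # False: the triangle
```

The triangle is the standard example of the counterintuitive behavior the abstract mentions: it is cyclic, yet adding the edge {1, 2, 3} makes the whole hypergraph acyclic even though the triangle remains as a subhypergraph.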
Constraint Matching for Diagram Design: Qualitative Visual Languages This paper examines diagrams which exploit qualitative spatial relations (QSRs) for representation. Our point of departure is the theory that such diagram systems are most effective when their formal properties match those of the domains that they represent (e.g. [1, 2, 3]). We argue that this is true in certain cases (e.g. when a user is constructing diagrammatic representations of a certain kind) but that formal properties cannot be studied in isolation from an account of the cognitive capacities of diagram users to detect and categorize diagram objects and relations. We discuss a cognitively salient repertoire of elements in qualitative visual languages, which is different from the set of primitives in mathematical topology, and explore how this repertoire affects the expressivity of the languages in terms of their vocabulary and the possible spatial relations between diagram elements. We then give a detailed analysis of the formal properties of relations between the diagram elements. It is shown that the analysis can be exploited systematically for the purposes of designing a diagram system and analysing expressivity. We demonstrate this methodology with reference to several domains, e.g. diagrams for file systems and set theory (see e.g. [4]).
Towards Event-Driven Modelling for Database Design
From E-R to "A-R" - Modelling Strategic Actor Relationships for Business Process Reengineering
Program families: program construction by context independent refinements The concept of program families is a generalisation of the conventional stepwise refinement paradigm. We formalise program families by allowing Hoare-triplets to be parameterized. Next we derive a simple calculus to develop programs which are known a priori to be correct with respect to explicitly formulated pre- and postconditions. Program families deal with at least two important problems of conventional refinement steps, i.e. program families are not context dependent and they apply just as well to top-down decomposition as to the bottom-up or middle-out approach. It turns out that the meaning of a pseudostatement in the context of program families is quite different from its meaning in the conventional refinement process. A couple of examples illustrate the technique: the 1000 primes problem, a palindrome filter and a sorting routine. The discussion relates program families to Morgan's refinement calculus, Knuth's literate programming and Soloway's programming plans.
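As a hedged guess at the shape of the formalisation, in illustrative notation only (the paper's own definition may differ):

```latex
% A program family as a Hoare triple parameterized by a context c
% ranging over some set \mathcal{C} of contexts.
\[
  \forall c \in \mathcal{C}:\quad \{\, P(c) \,\}\; S(c) \;\{\, Q(c) \,\}
\]
% Each instantiation of c yields one member of the family, correct with
% respect to its own pre- and postcondition; refinement steps valid for
% every c are exactly the context independent ones of the title.
```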
On the interplay between consistency, completeness, and correctness in requirements evolution The initial expression of requirements for a computer-based system is often informal and possibly vague. Requirements engineers need to examine this often incomplete and inconsistent brief expression of needs. Based on the available knowledge and expertise, assumptions are made and conclusions are deduced to transform this ‘rough sketch’ into more complete, consistent, and hence correct requirements. This paper addresses the question of how to characterize these properties in an evolutionary framework, and what relationships link these properties to a customer's view of correctness. Moreover, we describe in rigorous terms the different kinds of validation checks that must be performed on different parts of a requirements specification in order to ensure that errors (i.e. cases of inconsistency and incompleteness) are detected and marked as such, leading to better quality requirements.
Behavioral Subtyping, Specification Inheritance, and Modular Reasoning 2006 CR Categories: D. 2.2 [Software Engineering] Design Tools and Techniques, Object-oriented design methods; D. 2.3 [Software Engineering] Coding Tools and Techniques, Object-oriented programming; D. 2.4 [Software Engineering] Software/Program Verification, Class invariants, correctness proofs, formal methods, programming by contract, reliability, tools, Eiffel, JML; D. 2.7 [Software Engineering] Distribution, Maintenance, and Enhancement, Documentation; D. 3.1 [Programming Languages] Formal Definitions and Theory, Semantics; D. 3.2 [Programming Languages] Language Classifications, Object-oriented languages; D. 3.3 [Programming Languages] Language Constructs and Features, classes and objects, inheritance; F. 3.1 [Logics and Meanings of Programs] Specifying and Verifying and Reasoning about Programs, Assertions, invariants, logics of programs, pre-and post-conditions, specification techniques;
Abstractions of non-interference security: probabilistic versus possibilistic. The Shadow Semantics (Morgan, Math Prog Construction, vol 4014, pp 359–378, 2006; Morgan, Sci Comput Program 74(8):629–653, 2009) is a possibilistic (qualitative) model for noninterference security. Subsequent work (McIver et al., Proceedings of the 37th international colloquium conference on Automata, languages and programming: Part II, 2010) presents a similar but more general quantitative model that treats probabilistic information flow. Whilst the latter provides a framework to reason about quantitative security risks, that extra detail entails a significant overhead in the verification effort needed to achieve it. Our first contribution in this paper is to study the relationship between those two models (qualitative and quantitative) in order to understand when qualitative Shadow proofs can be “promoted” to quantitative versions, i.e. in a probabilistic context. In particular we identify a subset of the Shadow’s refinement theorems that, when interpreted in the quantitative model, still remain valid even in a context where a passive adversary may perform probabilistic analysis. To illustrate our technique we show how a semantic analysis together with a syntactic restriction on the protocol description, can be used so that purely qualitative reasoning can nevertheless verify probabilistic refinements for an important class of security protocols. We demonstrate the semantic analysis by implementing the Shadow semantics in Rodin, using its special-purpose refinement provers to generate (and discharge) the required proof obligations (Abrial et al., STTT 12(6):447–466, 2010). We apply the technique to some small examples based on secure multi-party computations.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.040083
0.039559
0.036364
0.022545
0.007273
0.000995
0.000121
0.000017
0
0
0
0
0
0
Prototyping interactive information systems Applying prototype-oriented development processes to computerized application systems significantly improves the likelihood that useful systems will be developed and that the overall development cycle will be shortened. The prototype development methodology and development tool presented here have been widely applied to the development of interactive information systems in the commercial data processing setting. The effectiveness and relationship to other applications is discussed.
DMS: A comprehensive system for managing human-computer dialogue As the complexity of human-computer interfaces increases, those who use these interfaces as well as those responsible for their design have recognized an urgent need for substantive research in the human factors of software development [2], [5]. Because of the magnitude of the task of producing software for individual human-computer interfaces, appropriate tools are needed for defining and improving such interfaces, both in research and production environments. This paper describes the research being carried out to construct DMS (Dialogue Management System), which is a complete system for defining, modifying, executing, and metering human-computer dialogues.
Problem-solution mapping in object-oriented design Six expert Smalltalk programmers and three expert procedural programmers were observed as they worked on a gourmet shopping design problem; they were asked to think aloud about what was going through their minds as they worked. These verbal protocols were recorded and examined for ways in which the programmers' understanding of the problem domain affected the design process; most of our examples are from the three Smalltalk programmers who focussed most on the mapping from problem to solution. We characterize the problem entities that did appear as solution objects, the active nature of the mapping process, and ways in which the resultant objects went beyond their problem analogs.
Seven basic principles of software engineering This paper attempts to distill the large number of individual aphorisms on good software engineering into a small set of basic principles. Seven principles have been determined which form a reasonably independent and complete set. These are: (1) manage using a phased life-cycle plan; (2) perform continuous validation; (3) maintain disciplined product control; (4) use modern programming practices; (5) maintain clear accountability for results; (6) use better and fewer people; (7) maintain a commitment to improve the process. The overall rationale behind this set of principles is discussed, followed by a more detailed discussion of each of the principles.
Toward objective, systematic design-method comparisons Software design methodologies (SDMs) suggest ways to improve productivity and quality. They are collections of complementary design methods and rules for applying them. A base framework and modeling formalism are presented to help designers compare SDMs and define what design issues different SDMs address, which of their components address similar design issues, and ways to integrate the best characteristics of each to make a cleaner, more comprehensive and flexible SDM. The use of the formalism and framework, and the evaluation of objectivity and completeness using the type and function frameworks, are described.
Characterizing visual languages A better understanding of the visual character of languages is important in developing one's ability to exploit the human visual system. The author briefly outlines Goodman's (1976) distinction between notational and analog languages, and describes its use in developing the notion of syntactic and semantic density as the defining characteristic of visual languages. Several languages are evaluated for their use of density. He concludes that practical languages are most visually effective when their layout is constrained by an important semantic domain
Constraining Pictures with Pictures
Comparison of analysis techniques for information requirement determination A comparison of systems analysis techniques, the Data Flow Diagram (DFD) and part of the Integrated Definition Method (IDEF0), is done using a new developmental framework.
A constructive approach to the design of distributed systems The underlying model of distributed systems is that of loosely coupled components running in parallel and communicating by message passing. Description, construction and evolution of these systems are facilitated by separating the system structure, as a set of components and their interconnections, from the functional description of individual component behaviour. Furthermore, component reuse and structuring flexibility are enhanced if components are context independent, i.e. self-contained with a well defined interface for component interaction. The Conic environment for distributed programming supports this model. In particular, Conic provides a separate configuration language for the description, construction and evolution of distributed systems. The Conic environment has demonstrated a working environment which supports system distribution, reconfiguration and extension. We had initially supposed that Conic might pose difficult challenges for us as software designers. For example, what design techniques should we employ to develop a system that exploits the Conic facilities? In fact we have experienced quite the opposite. The principles of explicit system structure and context independent components that underlie Conic have led us naturally to a design approach which differs from that of both current industrial practice and current research. Our approach is termed "constructive" since it emphasises the satisfaction of system requirements by composition of components. In this paper we describe the approach and illustrate its use by application to an example, a model airport shuttle system which has been implemented in Conic.
Model Checking Complete Requirements Specifications Using Abstraction Although model checking has proven remarkably effective in detecting errors in hardware designs, its success in the analysis of software specifications has been limited. Model checking algorithms for hardware verification commonly use Binary Decision Diagrams (BDDs) to represent predicates involving the many Boolean variables commonly found in hardware descriptions. Unfortunately, BDD representations may be less effective for analyzing software specifications, which usually contain not only Booleans but variables spanning a wide range of data types. Further, software specifications typically have huge, sometimes infinite, state spaces that cannot be model checked directly using conventional symbolic methods. One promising but largely unexplored approach to model checking software specifications is to apply mathematically sound abstraction methods. Such methods extract a reduced model from the specification, thus making model checking feasible. Currently, users of model checkers routinely analyze reduced models but often generate the models in ad hoc ways. As a result, the reduced models may be incorrect. This paper, an expanded version of (Bharadwaj and Heitmeyer, 1997), describes how one can model check a complete requirements specification expressed in the SCR (Software Cost Reduction) tabular notation. Unlike previous approaches which applied model checking to mode transition tables with Boolean variables, we use model checking to analyze properties of a complete SCR specification with variables ranging over many data types. The paper also describes two sound and, under certain conditions, complete methods for producing abstractions from requirements specifications. These abstractions are derived from the specification and the property to be analyzed. Finally, the paper describes how SCR requirements specifications can be translated into the languages of Spin, an explicit state model checker, and SMV, a symbolic model checker, and presents the results of model checking two sample SCR specifications using our abstraction methods and the two model checkers.
A superimposition control construct for distributed systems A control structure called a superimposition is proposed. The structure contains schematic abstractions of processes called roletypes in its declaration. Each roletype may be bound to processes from a basic distributed algorithm, and the operations of the roletype will then execute interleaved with those of the basic processes, over the same state space. This structure captures a kind of modularity natural for distributed programming, which previously has been treated using a macro-like implantation of code. The elements of a superimposition are identified, a syntax is suggested, correctness criteria are defined, and examples are presented.
Graph Drawing Methods Many structures in Information Technology can be modeled as graphs, and the success of the model depends on the appearance of the graph: a good drawing can be worth a thousand words, a poor drawing can confuse and obscure the model. This paper surveys recently developed methods for automatic graph drawing.
Parallel image normalization on a mesh connected array processor Image normalization is a basic operation in various image processing tasks. A parallel algorithm for fast binary image normalization is proposed for a mesh connected array processor. The principal operation in this algorithm is pixel mapping. The basic idea of parallel pixel mapping is to utilize a store and forward mechanism which routes pixels from their source locations to destinations in parallel along the paths of minimum length. The routing is based on a simple yet powerful concept of flow control patterns. This can form the basis for designing other parallel algorithms for low level image processing. The normalization process is decomposed into three procedures: translation, rotation and scaling. In each procedure, a mapping algorithm is employed to route the object pixels from source locations to destinations. Simulation results for the parallel image normalization on generated images are provided.
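A sequential Python sketch of the translation/rotation/scaling decomposition on a binary image; the mesh-parallel pixel routing via flow control patterns is the paper's contribution and is not modeled here, and the names, the 0.8 margin and the moment-based orientation estimate are illustrative choices. The image is assumed non-empty.

```python
import numpy as np

def normalize_binary(img, out_shape=(64, 64)):
    """Normalize a non-empty binary image by (1) translating its
    centroid to the centre, (2) rotating its principal axis to the
    horizontal, (3) scaling to a fixed size. Each step is one
    pixel-mapping pass, the unit the mesh algorithm parallelizes."""
    ys, xs = np.nonzero(img)
    cy, cx = ys.mean(), xs.mean()                 # (1) centroid shift
    y, x = ys - cy, xs - cx
    theta = 0.5 * np.arctan2(2 * (x * y).mean(),  # (2) axis from moments
                             (x * x).mean() - (y * y).mean())
    c, s = np.cos(-theta), np.sin(-theta)
    xr, yr = c * x - s * y, s * x + c * y
    span = max(np.ptp(xr), np.ptp(yr)) or 1.0     # (3) scale to fit
    scale = 0.8 * min(out_shape) / span
    out = np.zeros(out_shape, dtype=img.dtype)
    oy = np.clip((yr * scale + out_shape[0] / 2).round().astype(int),
                 0, out_shape[0] - 1)
    ox = np.clip((xr * scale + out_shape[1] / 2).round().astype(int),
                 0, out_shape[1] - 1)
    out[oy, ox] = 1                               # route pixels to targets
    return out

img = np.zeros((32, 32), dtype=np.uint8)
img[10:14, 4:28] = 1                              # a horizontal bar
print(normalize_binary(img).sum())
```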
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.037116
0.034439
0.017234
0.012407
0.004762
0.000833
0.000102
0.00002
0.000005
0.000001
0
0
0
0
Global resynchronization-based image watermarking resilient to geometric attacks. How to resist geometric attacks effectively and improve the watermark embedding capacity are still challenging tasks. An algorithm with better capabilities to resist cropping and combined attacks, as well as having a larger embedding capacity, is proposed. Feature points from the attacked image are obtained with the Speeded-Up Robust Features (SURF) algorithm. Then, they are matched with a few feature points obtained from the watermarked image. The matching point pairs are used to estimate the affine matrix, and then the geometric attacks are corrected by the inverse affine transform. Some positioning watermarks are embedded in the spatial domain to improve the accuracy of the resynchronization. Finally, the watermark is encoded by the fountain code such that the anti-cropping performance of the algorithm can be improved. The experimental results show that the proposed algorithm not only has a larger embedding capacity but also is resistant to many kinds of attacks.
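The resynchronization step, estimating an affine matrix from matched point pairs and applying its inverse, can be sketched as a least-squares fit with numpy; the SURF matching itself is omitted and all names are illustrative.

```python
import numpy as np

def estimate_affine(src, dst):
    """Least-squares affine transform mapping src -> dst from n >= 3
    matched feature-point pairs (n x 2 arrays). Returns a 3x3 matrix
    whose inverse can be used to correct the geometric attack."""
    n = len(src)
    A = np.zeros((2 * n, 6))
    b = dst.reshape(-1)                 # [x0, y0, x1, y1, ...]
    A[0::2, 0:2] = src                  # rows for a*sx + b*sy + c = dx
    A[0::2, 2] = 1
    A[1::2, 3:5] = src                  # rows for d*sx + e*sy + f = dy
    A[1::2, 5] = 1
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.array([[p[0], p[1], p[2]],
                     [p[3], p[4], p[5]],
                     [0.0,  0.0,  1.0]])

# Synthetic check: recover a known affine "attack" from 4 matches.
src = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], float)
M_true = np.array([[0.9, -0.2, 3.0], [0.1, 1.1, -2.0], [0, 0, 1]])
dst = (M_true @ np.c_[src, np.ones(4)].T).T[:, :2]
M = estimate_affine(src, dst)
print(np.allclose(M, M_true))           # True: inverse of M undoes the attack
```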
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the timewise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Impediments to Regulatory Compliance of Requirements in Contractual Systems Engineering Projects: A Case Study Large-scale contractual systems engineering projects often need to comply with myriad government regulations and standards as part of contractual obligations. A key activity in the requirements engineering (RE) process for such a project is to demonstrate that all relevant requirements have been elicited from the regulatory documents and have been traced to the contract as well as to the target system components. That is, the requirements have met regulatory compliance. However, there are impediments to achieving this level of compliance due to such complexity factors as voluminous contract, large number of regulatory documents, and multiple domains of the system. Little empirical research has been conducted in the scientific community on identifying these impediments. Knowing these impediments is a driver for change in the solutions domain (i.e., creating improved or new methods, tools, processes, etc.) to deal with such impediments. Through a case study of an industrial RE project, we have identified a number of key impediments to achieving regulatory compliance in a large-scale, complex, systems engineering project. This project is an upgrade of a rail infrastructure system. The key contribution of the article is a number of hitherto uncovered impediments described in qualitative and quantitative terms. The article also describes an artefact model, depicting key artefacts and relationships involved in such a compliance project. This model was created from data gathered and observations made in this compliance project. In addition, the article describes emergent metrics on regulatory compliance of requirements that can possibly be used for estimating the effort needed to achieve regulatory compliance of system requirements.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
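One standard way to render the definition sketched above, assuming Dijkstra-style wp semantics and writing \(\rho\) for the function taking abstract predicates to concrete ones (this is the usual reading, not a quotation from the paper): a concrete program \(C\) data-refines an abstract program \(A\) through \(\rho\) iff

\[
\rho\big(\mathrm{wp}(A, Q)\big) \;\Rightarrow\; \mathrm{wp}\big(C, \rho(Q)\big)
\quad\text{for every abstract postcondition } Q .
\]

Since \(\rho\) maps predicates to predicates, it is itself a predicate transformer, generalising the classical abstraction function exactly as the abstract claims.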
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system, resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
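As a concrete illustration of the first-level mechanisms described above (flip moves, a recency-based tabu list, and a simple aspiration criterion), here is a minimal tabu search sketch for the multiconstraint knapsack special case. The function name, parameter values, and move structure are our own choices; the paper's advanced-level strategies, probabilistic measures, and Target Analysis are not modelled.

```python
def tabu_knapsack(values, weights, capacities, iters=1000, tenure=7):
    """Minimal tabu search for the 0/1 multiconstraint knapsack:
    flip one variable per move, forbid re-flipping it for `tenure`
    iterations, and override tabu status only when a move improves
    on the best solution so far (a simple aspiration rule)."""
    n = len(values)
    x = [0] * n                # start from the empty knapsack
    tabu_until = [0] * n       # iteration until which a flip stays tabu

    def feasible(sol):
        return all(sum(w[i] * sol[i] for i in range(n)) <= c
                   for w, c in zip(weights, capacities))

    def value(sol):
        return sum(v * s for v, s in zip(values, sol))

    best, best_val = x[:], value(x)
    for it in range(1, iters + 1):
        candidates = []
        for i in range(n):
            y = x[:]
            y[i] = 1 - y[i]
            if not feasible(y):
                continue
            v = value(y)
            if it >= tabu_until[i] or v > best_val:  # aspiration override
                candidates.append((v, i, y))
        if not candidates:
            continue               # all admissible moves tabu or infeasible
        v, i, x = max(candidates)  # take the best admissible move
        tabu_until[i] = it + tenure
        if v > best_val:
            best, best_val = x[:], v
    return best, best_val

# e.g. tabu_knapsack([6, 5, 4], [[5, 4, 3]], [9]) finds [1, 1, 0]
# with value 11, the optimum for this tiny instance.
```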
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the timewise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Results on passivity and design of passive controller for fuzzy neural networks with additive time-varying delays The problem of designing a T–S fuzzy passive controller for fuzzy fractional-order neural networks (FOFNNs) with additive time-varying delays is taken up in this work. The novelty of this work lies in addressing all the challenges faced in deriving an order- and delay-dependent LMI criterion for incommensurate FONNs. Based on an indirect Lyapunov approach, sufficient conditions which ensure passivity of the considered FOFNNs are found. Distinct from the works already done, the proposed results include both the fractional order of the system and the delay bounds. Further, the derived results are extended to incommensurate FOFNNs, thereby giving order- and delay-dependent LMI-based passivity conditions for incommensurate FOFNNs for the first time in the literature. Also a result on the existence of a passive controller for the considered NNs is derived. Finally, the proposed theory is verified by three numerical examples.
On Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system, resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the timewise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Probabilistic Transmission Scheme for Distributed Filtering Over Randomly Lossy Sensor Networks in the Presence of Eavesdropper This article studies distributed secure estimation in sensor networks with packet losses and an eavesdropper. The message is transmitted through communication channels between sensors, which can be overheard by the eavesdropper with a certain probability. Two types of sensor networks under two cooperative-filtering algorithms are considered, and probabilistic transmission schemes to defend against the eavesdropper are proposed. For collectively detectable sensor networks under a consensus Kalman filter, a sufficient distributed detectability condition on the transmission probabilities is identified to guarantee that the estimation errors of the sensors are statistically bounded. Furthermore, a necessary and sufficient security condition is obtained to guarantee unboundedness of the eavesdropper’s estimation error. For neighborhood-detectable nodes under a standard Kalman filter, a sufficient distributed detectability condition and a necessary security condition are provided on the transmission probabilities. Simulation examples are given to illustrate the results.
On Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system, resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the timewise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
1
0
0
0
0
0
0
0
0
0
0
0
0
0
State feedback synchronization control of impulsive neural networks with mixed delays and linear fractional uncertainties. This study examines the synchronization problem of impulsive neural networks with mixed time-varying delays and linear fractional uncertainties. The mixed time-varying delays include distributed leakage, discrete and distributed time-varying delays. Moreover, the restriction that the derivatives of the time-varying delays have upper bounds smaller than one is relaxed by introducing free weight matrices. Based on suitable Lyapunov–Krasovskii functionals and integral inequalities, a linear matrix inequality approach is used to derive the sufficient conditions that guarantee the synchronization criteria of impulsive neural networks via delay-dependent state feedback control. Finally, three numerical examples are given to show the effectiveness of the theoretical results.
On Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system, resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the timewise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Splitting-Integrating Method for Normalizing Images by Inverse Transformations The splitting-integrating method is a technique developed for the normalization of images by inverse transformation. It does not require solving nonlinear algebraic equations and is much simpler than any existing algorithm for the inverse nonlinear transformation. Moreover, its solutions have a high order of convergence, and the images obtained through T⁻¹ are free from superfluous holes and blanks, which often occur in transforming digitized images by other approaches. Application of the splitting-integrating method can be extended to supersampling in computer graphics, such as picture transformations by antialiasing, inverse nonlinear mapping, etc.
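On our reading, the trick that avoids solving nonlinear equations is to work in the output domain: split each output pixel into subpixels, map each subpixel centre forward through T into the source image, and average the greyness found there. The sketch below is an illustrative reconstruction of that idea under those assumptions, not the authors' exact algorithm; all names are ours.

```python
import numpy as np

def sim_inverse_image(src, T, out_shape, n_split=4):
    """Form an image under T^{-1} without inverting T: split each
    output pixel into n_split x n_split subpixels, map each subpixel
    centre forward through T into the source image, and average the
    sampled greyness. (Hedged reconstruction, nearest-neighbour
    sampling for brevity.)"""
    h, w = out_shape
    out = np.zeros(out_shape)
    offsets = (np.arange(n_split) + 0.5) / n_split
    for i in range(h):
        for j in range(w):
            total = 0.0
            for di in offsets:
                for dj in offsets:
                    x, y = T(i + di, j + dj)           # forward map only
                    xi = min(max(int(x), 0), src.shape[0] - 1)
                    yj = min(max(int(y), 0), src.shape[1] - 1)
                    total += src[xi, yj]
            out[i, j] = total / n_split ** 2           # integrate subpixels
    return out

# Example: undo a small rotation by mapping forward through it.
theta, c = 0.1, 7.5
rot = lambda i, j: (c + np.cos(theta) * (i - c) - np.sin(theta) * (j - c),
                    c + np.sin(theta) * (i - c) + np.cos(theta) * (j - c))
img = np.random.rand(16, 16)
normalized = sim_inverse_image(img, rot, (16, 16))
```

Because every output pixel is covered by construction, the result has none of the holes and blanks that forward pixel-shooting can leave.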
Discrete Techniques For 3-D Digital Images And Patterns Under Transformation Three-dimensional (3-D) digital images and patterns under transformations are facilitated by the splitting-shooting method (SSM) and the splitting-integration method (SIM). The combination (CSIM) of using both SSM and SIM and two combinations (CIIM) of using SIM only are proposed for a cycle conversion T⁻¹T, where T is a nonlinear transformation, and T⁻¹ is its inverse transformation. This paper focuses on exploitation of accuracy of obtained image greyness. In our discrete algorithms, letting a 3-D pixel be split into N³ subpixels, the convergence rates O(1/N), O(1/N²), and O(1/N³) of sequential error can be achieved by the three combinations respectively. High convergence rates indicate less CPU time needed. Both error bounds and computation of pixel greyness have shown the significance of the proposed new algorithms.
Discrete techniques for 3-D digital images and patterns under transformation Three-dimensional (3-D) digital images and patterns under transformations are facilitated by the splitting-shooting method (SSM) and the splitting-integration method (SIM). The combination (CSIM) of using both SSM and SIM and two combinations (CIIM) of using SIM only are proposed for a cycle conversion T⁻¹T, where T is a nonlinear transformation, and T⁻¹ is its inverse transformation. This paper focuses on exploitation of accuracy of obtained image greyness. In our discrete algorithms, letting a 3-D pixel be split into N³ subpixels, the convergence rates O(1/N), O(1/N²), and O(1/N³) of sequential error can be achieved by the three combinations respectively. High convergence rates indicate less CPU time needed. Both error bounds and computation of pixel greyness have shown the significance of the proposed new algorithms.
Robust contour decomposition using a constant curvature criterion The problem of decomposing an extended boundary or contour into simple primitives is addressed with particular emphasis on Laplacian-of-Gaussian zero-crossing contours. A technique is introduced for partitioning such contours into constant curvature segments. A nonlinear 'blip' filter matched to the impairment signature of the curvature computation process, an overlapped voting scheme, and a sequential contiguous segment extraction mechanism are used. This technique is insensitive to reasonable changes in algorithm parameters and robust to noise and minor viewpoint-induced distortions in the contour shape, such as those encountered between stereo image pairs. The results vary smoothly with the data, and local perturbations induce only local changes in the result. Robustness and insensitivity are experimentally verified.
Decomposition of convex polygonal morphological structuring elements into neighborhood subsets A discussion is presented of the decomposition of convex polygon-shaped structuring elements into neighborhood subsets. Such decompositions will lead to efficient implementation of corresponding morphological operations on neighborhood-processing-based parallel image computers. It is proved that all convex polygons are decomposable. Efficient decomposition algorithms are developed for different machine structures. An O(1) time algorithm, with respect to the image size, is developed for the four-neighbor-connected mesh machines; a linear time algorithm for determining the optimal decomposition is provided for the machines that can quickly perform 3×3 morphological operations
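The practical payoff of such decompositions is that one dilation by a large convex structuring element can be realized as a sequence of small-neighborhood dilations, which is exactly what a 3×3 neighborhood-processing machine executes quickly. The check below illustrates the decomposition property itself (a 5×5 square is the Minkowski sum of a 3×3 square with itself); it is not the paper's decomposition algorithm, and the array sizes are arbitrary.

```python
import numpy as np
from scipy.ndimage import binary_dilation

# Dilating by a 5x5 square equals dilating twice by a 3x3 square,
# so the large element decomposes into repeated 3x3 neighborhood
# operations. (Property check, not the decomposition algorithm.)
rng = np.random.default_rng(0)
image = rng.random((32, 32)) > 0.9      # sparse random binary image

big = np.ones((5, 5), dtype=bool)
small = np.ones((3, 3), dtype=bool)

direct = binary_dilation(image, structure=big)
composed = binary_dilation(binary_dilation(image, structure=small),
                           structure=small)
assert np.array_equal(direct, composed)
```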
Discrete techniques for computer transformations of digital images and patterns Images and patterns through a cycle conversion T⁻¹T are discussed and facilitated by the combined algorithms, where the transformation is T: (ξ, η) → (x, y) with the linear or nonlinear functions x = x(ξ, η) and y = y(ξ, η). A new Area Method is presented for images through T and T⁻¹T of linear transformations, and three combinations of the Splitting-Shooting Method and the Splitting-Integrating Method are proposed for images through T⁻¹T of linear and nonlinear transformations. Furthermore, both the error analysis and the graphical experiments given prove the importance of these combinations to computer vision, image processing, graphics and pattern recognition.
A nonaliasing, real-time spatial transform technique A two-pass spatial transform technique that does not exhibit the aliasing artifacts associated with techniques for spatial transform of discrete sampled images is possible through the use of a complete and continuous resampling interpolation algorithm. The algorithm is complete in the sense that all the pixels of the input image under the map of the output image fully contribute to the output image. It is continuous in the sense that no gaps or overlaps exist in the sampling of the input pixels and that the sampling can be performed with arbitrary precision. The technique is real time in the sense that it can be guaranteed to operate for any arbitrary transform within a given time limit. Because of the complete and continuous nature of the resampling algorithm, the resulting image is free of the classic sampling artifacts such as graininess, degradation, and edge aliasing.
Adaptive Membership Functions for Handwritten Character Recognition by Voronoi-Based Image Zoning In the field of handwritten character recognition, image zoning is a widespread technique for feature extraction since it is rightly considered to be able to cope with handwritten pattern variability. As a matter of fact, the problem of zoning design has attracted many researchers who have proposed several image-zoning topologies, according to static and dynamic strategies. Unfortunately, little attention has been paid so far to the role of feature-zone membership functions that define the way in which a feature influences different zones of the zoning method. The result is that the membership functions defined to date follow nonadaptive, global approaches that are unable to model local information on feature distributions. In this paper, a new class of zone-based membership functions with adaptive capabilities is introduced and its effectiveness is shown. The basic idea is to select, for each zone of the zoning method, the membership function best suited to exploit the characteristics of the feature distribution of that zone. In addition, a genetic algorithm is proposed to determine—in a unique process—the most favorable membership functions along with the optimal zoning topology, described by Voronoi tessellation. The experimental tests show the superiority of the new technique with respect to traditional zoning methods.
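To make the notion of a feature-zone membership function concrete, here is a small hypothetical sketch: Voronoi zones are induced by their centres, and a feature contributes to each zone with a weight that decays with distance. Both the function and its parameter are placeholders for illustration; the paper's point is precisely that each zone should get the membership function best suited to its own feature distribution, selected here by a genetic algorithm.

```python
import numpy as np

def zone_memberships(point, centers, k=2.0):
    """Hypothetical zone-membership sketch: a feature at `point`
    influences each Voronoi zone (given by its centre) with a
    weight that decays exponentially with distance. k is an
    arbitrary decay parameter, one of many possible choices."""
    d = np.linalg.norm(centers - point, axis=1)
    w = np.exp(-k * d)
    return w / w.sum()          # normalized influence over all zones

centers = np.array([[0.25, 0.25], [0.75, 0.25], [0.5, 0.75]])
print(zone_memberships(np.array([0.3, 0.3]), centers))
```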
Arithmetic coding for data compression The state of the art in data compression is arithmetic coding, not the better-known Huffman method. Arithmetic coding gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding.
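The core of arithmetic coding is interval narrowing: each symbol shrinks a working interval in proportion to its probability, and any number inside the final interval encodes the whole message. A toy encoder under a fixed model follows; floating point is used for clarity only, since practical coders use scaled integer arithmetic and adaptive models, and the model values here are arbitrary.

```python
def arithmetic_encode(symbols, model):
    """Toy arithmetic encoder: narrow [low, high) by each symbol's
    cumulative-probability subinterval, then return one number in
    the final interval. Floats for clarity; real coders use scaled
    integer arithmetic to avoid precision loss."""
    low, high = 0.0, 1.0
    for s in symbols:
        span = high - low
        cum_lo, cum_hi = model[s]
        low, high = low + span * cum_lo, low + span * cum_hi
    return (low + high) / 2

# Fixed model: P(a)=0.5, P(b)=0.3, P(c)=0.2 as cumulative intervals.
model = {"a": (0.0, 0.5), "b": (0.5, 0.8), "c": (0.8, 1.0)}
code = arithmetic_encode("aab", model)
assert 0.0 <= code < 0.5  # "aab" lands inside the interval for "a"
```

More probable symbols shrink the interval less, so they cost fewer bits, which is where the compression advantage over Huffman's whole-bit codes comes from.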
A calculus of refinements for program derivations A calculus of program refinements is described, to be used as a tool for the step-by-step derivation of correct programs. A derivation step is considered correct if the new program preserves the total correctness of the old program. This requirement is expressed as a relation of (correct) refinement between nondeterministic program statements. The properties of this relation are studied in detail. The usual sequential statement constructors are shown to be monotone with respect to this relation and it is shown how refinement between statements can be reduced to a proof of total correctness of the refining statement. A special emphasis is put on the correctness of replacement steps, where some component of a program is replaced by another component. A method by which assertions can be added to statements to justify replacements in specific contexts is developed. The paper extends the weakest precondition technique of Dijkstra to proving correctness of larger program derivation steps, thus providing a unified framework for the axiomatic, the stepwise refinement and the transformational approach to program construction and verification.
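The refinement relation described above has a standard weakest-precondition formulation (assuming Dijkstra-style wp; this is the usual reading, not a quotation from the paper):

\[
S \;\sqsubseteq\; S' \quad\text{iff}\quad \forall Q.\; \big(\mathrm{wp}(S, Q) \Rightarrow \mathrm{wp}(S', Q)\big),
\]

so every total-correctness property of \(S\) carries over to \(S'\), and the monotonicity of the sequential constructors is what makes replacing a component by its refinement sound in any surrounding context.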
Subsumption between queries to object-oriented databases Most work on query optimization in relational and object-oriented databases has concentrated on tuning algebraic expressions and the physical access to the database contents. The attention to semantic query optimization, however, has been restricted due to its inherent complexity. We take a second look at semantic query optimization in object-oriented databases and find that reasoning techniques for concept languages developed in Artificial Intelligence apply to this problem because concept...
A fixpoint theory for non-monotonic parallelism This paper studies parallel recursion. The trace specification language used in this paper incorporates sequentiality, nondeterminism, reactiveness (including infinite traces), three forms of parallelism (including conjunctive, fair-interleaving and synchronous parallelism) and general recursion. In order to use Tarski's theorem to determine the fixpoints of recursions, we need to identify a well-founded partial order. Several orders are considered, including a new order called the lexical order, which tends to simulate the execution of a recursion in a similar manner as the Egli-Milner order. A theorem of this paper shows that no appropriate order exists for the language. Tarski's theorem alone is not enough to determine the fixpoints of parallel recursions. Instead of using Tarski's theorem directly, we reason about the fixpoints of terminating and nonterminating behaviours separately. Such reasoning is supported by the laws of a new composition called partition. We propose a fixpoint technique called the partitioned fixpoint, which is the least fixpoint of the nonterminating behaviours after the terminating behaviours reach their greatest fixpoint. The surprising result is that although a recursion may not be lexical-order monotonic, it must have the partitioned fixpoint, which is equal to the least lexical-order fixpoint. Since the partitioned fixpoint is well defined in any complete lattice, the results are applicable to various semantic models. Existing fixpoint techniques simply become special cases of the partitioned fixpoint. For example, an Egli-Milner-monotonic recursion has its least Egli-Milner fixpoint, which can be shown to be the same as the partitioned fixpoint. The new technique is more general than the least Egli-Milner fixpoint in that the partitioned fixpoint can be determined even when a recursion is not Egli-Milner monotonic. Examples of non-monotonic recursions are studied. Their partitioned fixpoints are shown to be consistent with our intuition.
Fair polyline networks for constrained smoothing of digital terrain elevation data In this paper, a framework for smoothing gridlike digital terrain elevation data, which achieves a fair shape by means of minimizing an energy functional, is presented. The minimization is performed under the side condition of hard constraints, which comes from available horizontal and vertical accuracy bounds in the standard elevation specification. In this paper, the framework is introduced, and...
Trading Networks with Bilateral Contracts. We consider general networks of bilateral contracts that include supply chains. We define a new stability concept, called trail stability, and show that any network of bilateral contracts has a trail-stable outcome whenever agents' preferences satisfy full substitutability. Trail stability is a natural extension of chain stability, but is a stronger solution concept in general contract networks. Trail-stable outcomes are not immune to deviations of arbitrary sets of firms. In fact, we show that outcomes satisfying an even more demanding stability property -- full trail stability -- always exist. We pin down conditions under which trail-stable and fully trail-stable outcomes have a lattice structure. We then completely describe the relationships between all stability concepts. When contracts specify trades and prices, we also show that competitive equilibrium exists in networked markets even in the absence of fully transferrable utility. The competitive equilibrium outcome is trail-stable.
1.042756
0.043872
0.043514
0.040744
0.040744
0.0224
0.001502
0.00024
0
0
0
0
0
0
Cooperating evolving components - A rigorous approach to evolving large software systems Large software systems have a large number of components and are developed over a long time period, frequently by a large number of people. We describe a framework approach to evolving such systems based on an integration of product and process modelling. The evolving system is represented as a Product Tower, a hierarchy of components which provides views of the product at multiple levels of refinement. The evolution process is component based, with the cooperation between components being mediated by the Product Tower. This ensures that the evolution process is scalable and that it maintains, and evolves, the design model. We illustrate our approach with an example, outlining an evolution both of the product and of the process. The reflexive facilities of the process are shown to be key in ensuring the framework's ability to evolve.
On Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system, resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced-level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve general 0/1 MIP problems and thus contains no problem-domain-specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special-purpose methods that have been created to exploit the special structure of these problems.
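To make the first-level TS mechanics concrete, here is a minimal hypothetical Python sketch for the multiconstraint knapsack special case, using flip moves, a recency-based tabu list, and an aspiration-by-objective rule; the paper's actual choice rules, infeasibility measures, and learning strategies are considerably richer than this.

    import random

    def tabu_knapsack(values, weights, capacities, iters=1000, tenure=7, seed=0):
        """Minimal recency-based tabu search for the multiconstraint knapsack.

        values:     list of item profits
        weights:    weights[k][i] = weight of item i in constraint k
        capacities: capacity of each constraint
        """
        rng = random.Random(seed)
        n = len(values)
        x = [0] * n                       # start from the empty knapsack
        tabu_until = [0] * n              # iteration until which a flip is tabu

        def feasible(sol):
            return all(sum(w[i] * sol[i] for i in range(n)) <= c
                       for w, c in zip(weights, capacities))

        def value(sol):
            return sum(v * s for v, s in zip(values, sol))

        best, best_val = x[:], value(x)
        for t in range(1, iters + 1):
            candidates = []
            for i in rng.sample(range(n), n):       # evaluate flip moves
                x[i] ^= 1
                if feasible(x):
                    val = value(x)
                    # aspiration: a tabu move is allowed if it beats the best
                    if t >= tabu_until[i] or val > best_val:
                        candidates.append((val, i))
                x[i] ^= 1                           # undo the trial flip
            if not candidates:
                continue
            val, i = max(candidates)                # best admissible move
            x[i] ^= 1
            tabu_until[i] = t + tenure              # forbid reversing the flip
            if val > best_val:
                best, best_val = x[:], val
        return best, best_val

    # toy instance: 2 constraints, 5 items
    print(tabu_knapsack([10, 7, 9, 4, 8],
                        [[3, 2, 4, 1, 3], [2, 3, 2, 2, 4]],
                        [7, 8]))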
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the timewise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In refining an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Revisiting The Evaluation Of Diversified Search Evaluation Metrics With User Preferences To validate the credibility of diversity evaluation metrics, a number of methods that "evaluate evaluation metrics" are adopted in diversified search evaluation studies, such as Kendall's tau, Discriminative Power, and the Intuitiveness Test. These methods have been widely adopted and have aided us in gaining much insight into the effectiveness of evaluation metrics. However, they also follow certain types of user behaviors or statistical assumptions and do not take the information of users' actual search preferences into consideration. With multi-grade user preference judgments collected for diversified search result lists displayed in parallel, we take user preferences as the ground truth to investigate the evaluation of diversity metrics. We find that user preferences at the subtopic level give similar results to those at the topic level, which means we can use user preferences at the topic level, with much less human effort, in future experiments. We further find that most existing evaluation metrics correlate well with user preferences for result lists with large performance differences, no matter whether the difference is detected by the metric or by the users. According to these findings, we then propose a preference-weighted correlation, the Multi-grade User Preference (MUP) method, to evaluate diversity metrics based on user preferences. The experimental results reveal that MUP evaluates diversity metrics from real users' perspective, which may differ from other methods. In addition, we find that the relevance of the search result is more important than its diversity in the diversified search evaluation of our experiments.
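The MUP idea, weighting metric/user agreement by the strength of graded preferences, can be sketched as follows; the formula here is an illustrative stand-in, since the abstract does not give the exact definition.

    def mup_score(metric_deltas, user_prefs):
        """Illustrative preference-weighted agreement between a diversity
        metric and multi-grade user preferences (stand-in formula, not the
        paper's exact MUP definition).

        metric_deltas[i]: metric(list A) - metric(list B) for pair i
        user_prefs[i]:    graded preference for pair i; positive means the
                          users preferred A, with magnitude = strength
        """
        num = sum(abs(p) * (1 if d * p > 0 else -1)
                  for d, p in zip(metric_deltas, user_prefs)
                  if d != 0 and p != 0)
        den = sum(abs(p) for d, p in zip(metric_deltas, user_prefs)
                  if d != 0 and p != 0)
        return num / den if den else 0.0

    # agreement on the strongly-preferred pair outweighs one disagreement
    print(mup_score([0.12, -0.03, 0.05], [3, 1, -1]))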
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
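The fetch-and-add primitive the abstract emphasizes is easy to illustrate: it atomically returns a counter's old value while adding to it, which lets many processors claim distinct work items or queue slots without locks. A minimal Python emulation follows (on the Ultracomputer this is a single atomic operation combined inside the network; here a lock merely stands in for that atomicity).

    import threading

    class FetchAndAdd:
        """Software emulation of the fetch-and-add primitive."""
        def __init__(self, value=0):
            self._value = value
            self._lock = threading.Lock()

        def fetch_and_add(self, delta):
            # Atomically return the old value and add delta.
            with self._lock:
                old, self._value = self._value, self._value + delta
                return old

    # each worker claims a unique slot with one fetch_and_add call
    counter, slots = FetchAndAdd(), [None] * 8

    def worker(name):
        i = counter.fetch_and_add(1)      # old value = my private index
        slots[i] = name

    threads = [threading.Thread(target=worker, args=(f"w{k}",)) for k in range(8)]
    for t in threads: t.start()
    for t in threads: t.join()
    print(slots)                          # every slot claimed exactly once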
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system, resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object-oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced-level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve general 0/1 MIP problems and thus contains no problem-domain-specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special-purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the timewise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In refining an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Difficulties in integrating multiview development systems Drawbacks of current approaches to integrating multiple perspectives in a development environment are discussed. An integrated environment is defined as one in which a dynamic collection of tools can work together on a single system so that changes made to the system by one tool can be seen by other tools, and integration criteria are set forth. Five representative approaches to systems integration-shared file systems, selective broadcasting, simple databases, view-oriented databases, and canonical representation-are examined, and their relative strengths and weaknesses are summarized. None of the integration mechanisms is shown to be uniformly superior to the others. The issue of environment evolution and its effect on integration is addressed.
Views of mathematical programming models and their instances Large-scale mathematical models are built, managed and applied by people with different cognitive skills. This poses a challenge for the design of a multi-view architecture of a system that accommodates these differences. A primary objective of mathematical modeling is providing insights into problem behavior, and there are many constituencies who require different views for different questions. One constituency is composed of modellers who have different views of basic model components. Another constituency is composed of problem owners for whom models are built. These two constituencies, which are not exhaustive, have significantly different needs and skills. This paper addresses this issue of multiview architecture by presenting a formal framework for the design of a view creation and management system. Specific views we consider include algebraic, block schematic, graphic, and textual. Both form and content are relevant to view creation, and the merits of views are determined by their value in aiding comprehension and insight. The need for a central, formal structure to create and manage views is demonstrated by the inadequacy of direct mappings from any of the popular systems that are typically designed to support only one view of linear programming models and their instances.
A software design framework or how to support real designers The problems inherent in capturing a designer's ideas about software provide a major difficulty for software development. In the paper, both observed designer practices and the procedural forms that are embodied in software design methods are examined. From these an integrated set of representation forms is proposed that can be used with an opportunistic design strategy. The features necessary for the realisation of this set of notations through the experimental GOOSE system, as well as their use for such activities as design execution, are described. Finally, there is an informal assessment of how well the eventual form of GOOSE has met its own design goals.
Requirements specification of real-time systems: temporal parameters and timing-constraints The development of high-quality real-time systems depends on their correct requirements specification, which includes the analysis and specification of timing issues. This paper focuses on requirements specification of real-time systems, presenting a set of temporal parameters and timing-constraints related to the execution of systems processes. Timing-constraints are expressed by formulas, being useful for defining, representing, and validating the system temporal behavior, particularly in hard real-time systems specifications. The primary contribution over previous studies is the proposal of a more generic and complete set of timing-constraints, applied to the area of requirements engineering for real-time systems, which has not been sufficiently explored.
Process integration in CASE environments Research in CASE environments has focused on two kinds of integration: tool and object. A higher level of integration, process integration, which represents development activities explicitly in a software process model to guide and coordinate development and to integrate tools and objects, is proposed. Process integration uses software process models (SPMs), a process driver, a tool set, and interfaces for both developers and managers to form the backbone of a process-driven CASE environment. The developer's interface, a working environment that lets developers enact an SPM, and the manager's interface, which gives managers and analysts the tools to define, monitor, and control the SPMs that developers are working on concurrently, are discussed. The Softman environment experiment, an implementation of process-driven CASE environments with existing CASE environments, is reviewed.
An approach to conceptual feedback in multiple viewed software requirements modeling This paper outlines part of an approach to these multiple-viewed requirements that provides some structure for integrating and validating multiple views. Most recent research has acknowledged the presence of multiple views, but only a few have explicitly modeled them as distinct views. The work of Nissen, et al [Nissen96] is an example of a practical technique that is used in commercial settings to form a framework for discussion and negotiation among participants. Its biggest drawbacks are (a) ...
Microanalysis: Acquiring Database Semantics in Conceptual Graphs Relational databases are in widespread use, yet they suffer from serious limitations when one uses them for reasoning about real-world enterprises. This is due to the fact that database relations possess no inherent semantics. This paper describes an approach called microanalysis that we have used to effectively capture database semantics represented by conceptual graphs. The technique prescribes a manual knowledge acquisition process whereby each relation schema is captured in a single conceptual graph. The schema's graph can then easily be instantiated for each tuple in the database forming a set of graphs representing the entire database's semantics. Although our technique originally was developed to capture semantics in a restricted domain of interest, namely database inference detection, we believe that domain-directed microanalysis is a general approach that can be of significant value for databases in many domains. We describe the approach and give a brief example.
Being Suspicious: Critiquing Problem Specifications One should look closely at problem specifications before attempting solutions: we may find that the specifier has only a vague or even erroneous notion of what is required, that the solution of a more general or more specific problem may be of more use, or simply that the problem as given is misstated. Using software development as an example, we present a knowledge-based system for critiquing one form of problem specification, that of a formal software specification.
Abstract interpretation of reactive systems The advent of ever more complex reactive systems in increasingly critical areas calls for the development of automated verification techniques. Model checking is one such technique, which has proven quite successful. However, the state-explosion problem remains a major stumbling block. Recent experience indicates that solutions are to be found in the application of techniques for property-preserving abstraction and successive approximation of models. Most such applications have so far been based solely on the property-preserving characteristics of simulation relations. A major drawback of all these results is that they do not offer a satisfactory formalization of the notion of precision of abstractions. The theory of Abstract Interpretation offers a framework for the definition and justification of property-preserving abstractions. Furthermore, it provides a method for the effective computation of abstract models directly from the text of a program, thereby avoiding the need for intermediate storage of a full-blown model. Finally, it formalizes the notion of optimality, while allowing precision to be traded for speed by computing suboptimal approximations. For a long time, applications of Abstract Interpretation have mainly focused on the analysis of universal safety properties, i.e., properties that hold in all states along every possible execution path. In this article, we extend Abstract Interpretation to the analysis of both existential and universal reactive properties, as expressible in the modal μ-calculus. It is shown how abstract models may be constructed by symbolic execution of programs. A notion of approximation between abstract models is defined while conditions are given under which optimal models can be constructed. Examples are given to illustrate this. We indicate conditions under which also falsehood of formulae is preserved. Finally, we compare our approach to those based on simulation relations.
An experiment in technology transfer: PAISLey specification of requirements for an undersea lightwave cable system From May to October 1985 members of the Undersea Systems Laboratory and the Computer Technology Research Laboratory of AT&T Bell Laboratories worked together to apply the executable specification language PAISLey to requirements for the “SL” communications system. This paper describes our experiences and answers three questions based on the results of the experiment: Can SL requirements be specified formally in PAISLey? Can members of the SL project learn to read and write specifications in PAISLey? How would the use of PAISLey affect the productivity of the software-development team and the quality of the resulting software?
A Scenario Construction Process use cases should evolve from concrete use cases, not the other way round. Extends associations let us capture the functional requirements of a complex system, in the same way we learn about any new subject: First we understand the basic functions, then we introduce complexity."Gough et al. [28] follow an approach closer to the one proposed in this article regarding their heuristics:`1. Creation of natural language documents: project scope documents, customer needs documents, service needs...
A comparison of normalization methods for high density oligonucleotide array data based on variance and bias. Motivation: When running experiments that involve multiple high density oligonucleotide arrays, it is important to remove sources of variation between arrays of non-biological origin. Normalization is a process for reducing this variation. It is common to see non-linear relations between arrays and the standard normalization provided by Affymetrix does not perform well in these situations. Results: We present three methods of performing normalization at the probe intensity level. These methods are called complete data methods because they make use of data from all arrays in an experiment to form the normalizing relation. These algorithms are compared to two methods that make use of a baseline array: a one number scaling based algorithm and a method that uses a non-linear normalizing relation by comparing the variability and bias of an expression measure. Two publicly available datasets are used to carry out the comparisons. The simplest and quickest complete data method is found to perform favorably. Availability: Software implementing all three of the complete data normalization methods is available as part of the R package Affy, which is a part of the Bioconductor project http://www.bioconductor.org. Contact: bolstad@stat.berkeley.edu Supplementary information: Additional figures may be found at http://www.stat.berkeley.edu/~bolstad/normalize/index.html.
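Of the complete data methods compared, quantile normalization is the simplest to state: force every array to share the same empirical distribution by averaging across arrays at each rank. A minimal numpy sketch (assuming no ties, for clarity; the R package affy implements the full method):

    import numpy as np

    def quantile_normalize(X):
        """Quantile normalization of probe intensities.

        X: (probes x arrays) matrix. Each column (array) is replaced so
        that all columns share the same empirical distribution, namely
        the row-wise mean of the sorted columns. Assumes no ties.
        """
        order = np.argsort(X, axis=0)              # sort each array
        ranks = np.argsort(order, axis=0)          # rank of each probe
        mean_quantiles = np.sort(X, axis=0).mean(axis=1)
        return mean_quantiles[ranks]               # map ranks back to means

    X = np.array([[5.0, 4.0, 3.0],
                  [2.0, 1.0, 4.0],
                  [3.0, 4.5, 6.0],
                  [4.0, 2.0, 8.0]])
    print(quantile_normalize(X))   # columns now share one distribution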
Heuristic search in PARLOG using replicated worker style parallelism Most concurrent logic programming languages hide the distribution of processes among physical processors from the programmer. For parallel applications based on heuristic search, however, it is important for the programmer to accurately control this distribution. With such applications, an inferior distribution strategy easily leads to enormous search overheads, thus decreasing speedup on parallel hardware.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
1.032966
0.043636
0.036364
0.018182
0.008007
0.003714
0.00082
0.000106
0.000068
0.00003
0.000002
0
0
0
On the Secrecy Capacity of Fading Channels We consider the secure transmission of information over an ergodic fading channel in the presence of an eavesdropper. Our eavesdropper can be viewed as the wireless counterpart of Wyner's wiretapper. The secrecy capacity of such a system is characterized under the assumption of asymptotically long coherence intervals. We first consider the full channel state information (CSI) case, where the transmitter has access to the channel gains of both the legitimate receiver and the eavesdropper. The secrecy capacity under this full CSI assumption serves as an upper bound for the secrecy capacity when only the CSI of the legitimate receiver is known at the transmitter, which is characterized next. In each scenario, the perfect secrecy capacity is obtained along with the optimal power and rate allocation strategies. We then propose a low-complexity on/off power allocation strategy that achieves near-optimal performance with only the main channel CSI. More specifically, this scheme is shown to be asymptotically optimal as the average signal-to-noise ratio (SNR) goes to infinity, and interestingly, is shown to attain the secrecy capacity under the full CSI assumption. Overall, channel fading has a positive impact on the secrecy capacity, and rate adaptation, based on the main channel CSI, is critical in facilitating secure communications over slow fading channels.
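For orientation, the full-CSI characterization in such settings takes the following schematic form (notation assumed here, not quoted from the paper): with main and eavesdropper channel power gains $g_M$, $g_E$ known at the transmitter and an average power budget $\bar{P}$,

\[
C_s \;=\; \max_{P(\cdot):\; \mathbb{E}[P(g_M, g_E)] \le \bar{P}} \;
\mathbb{E}\!\left[ \Bigl( \log\bigl(1 + g_M\, P(g_M, g_E)\bigr) - \log\bigl(1 + g_E\, P(g_M, g_E)\bigr) \Bigr)^{+} \right],
\]

so the optimal policy transmits only when the main channel is stronger than the eavesdropper's. This is why fading can help: even an eavesdropper channel that is stronger on average leaves fading states in which a positive secure rate is achievable.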
Massive MIMO Systems With Non-Ideal Hardware: Energy Efficiency, Estimation, and Capacity Limits The use of large-scale antenna arrays can bring substantial improvements in energy and/or spectral efficiency to wireless systems due to the greatly improved spatial resolution and array gain. Recent works in the field of massive multiple-input multiple-output (MIMO) show that the user channels decorrelate when the number of antennas at the base stations (BSs) increases, thus strong signal gains are achievable with little interuser interference. Since these results rely on asymptotics, it is important to investigate whether the conventional system models are reasonable in this asymptotic regime. This paper considers a new system model that incorporates general transceiver hardware impairments at both the BSs (equipped with large antenna arrays) and the single-antenna user equipments (UEs). As opposed to the conventional case of ideal hardware, we show that hardware impairments create finite ceilings on the channel estimation accuracy and on the downlink/uplink capacity of each UE. Surprisingly, the capacity is mainly limited by the hardware at the UE, while the impact of impairments in the large-scale arrays vanishes asymptotically and interuser interference (in particular, pilot contamination) becomes negligible. Furthermore, we prove that the huge degrees of freedom offered by massive MIMO can be used to reduce the transmit power and/or to tolerate larger hardware impairments, which allows for the use of inexpensive and energy-efficient antenna elements.
Cooperative wireless communications: a cross-layer approach This article outlines one way to address these problems by using the notion of cooperation between wireless nodes. In cooperative communications, multiple nodes in a wireless network work together to form a virtual antenna array. Using cooperation, it is possible to exploit the spatial diversity of the traditional MIMO techniques without each node necessarily having multiple antennas. Multihop networks use some form of cooperation by enabling intermediate nodes to forward the message from source to destination. However, cooperative communication techniques described in this article are fundamentally different in that the relaying nodes can forward the information fully or in part. Also, the destination receives multiple versions of the message from the source and one or more relays, and combines these to obtain a more reliable estimate of the transmitted signal as well as higher data rates. The main advantages of cooperative communications are presented.
Secure Connectivity Using Randomize-and-Forward Strategy in Cooperative Wireless Networks. In this letter, we study the problem of secure connectivity against colluding eavesdroppers using the randomize-and-forward (RF) strategy in cooperative wireless networks, where the distribution of the eavesdroppers is a homogenous Poisson point process (PPP). Considering the case of a fixed relay, the exact expression for the secure connectivity probability is obtained. Then we obtain the lower bound and find that the lower bound gives an accurate approximation of the exact secure connectivity probability when the eavesdropper density is small. Based on the lower bound expression, we obtain the optimal area of relay location and the approximate farthest secure distance between the source and destination for a given secure connectivity probability in the small eavesdropper density regime. Furthermore, we extend the model of fixed relay to random relay, and get the lower bound expression for the secure connectivity probability.
How Much Does I/Q Imbalance Affect Secrecy Capacity? Radio frequency front ends constitute a fundamental part of both conventional and emerging wireless systems. However, in spite of their importance, they are often assumed ideal, although they are practically subject to certain detrimental impairments, such as amplifier nonlinearities, phase noise, and in-phase and quadrature (I/Q) imbalance (IQI). This letter is devoted to the quantification and e...
Cognitive radio network with secrecy and interference constraints. In this paper, we investigate the physical-layer security of a secure communication in single-input multiple-output (SIMO) cognitive radio networks (CRNs) in the presence of two eavesdroppers. In particular, both primary user (PU) and secondary user (SU) share the same spectrum, but they face different eavesdroppers who are equipped with multiple antennas. In order to protect the PU communication from the interference of the SU and the risks of eavesdropping, the SU must have a reasonable adaptive transmission power which is set on the basis of channel state information, interference and security constraints of the PU. Accordingly, an upper bound and a lower bound for the SU transmission power are derived. Furthermore, a power allocation policy, which is calculated on the convex combination of the upper and lower bound of the SU transmission power, is proposed. On this basis, we investigate the impact of the PU transmission power and channel mean gains on the security and system performance of the SU. Closed-form expressions for the outage probability, probability of non-zero secrecy capacity, and secrecy outage probability are obtained. Interestingly, our results show that the strong channel mean gain of the PU transmitter to the PU's eavesdropper in the primary network can enhance the SU performance.
Relay Placement for Physical Layer Security: A Secure Connection Perspective This work studies the problem of secure connection in cooperative wireless communication with two relay strategies, decode-and-forward (DF) and randomize-and-forward (RF). The four-node scenario and cellular scenario are considered. For the typical four-node (source, destination, relay, and eavesdropper) scenario, we derive the optimal power allocation for the DF strategy and find that the RF strategy is always better than DF for enhancing secure connection. In cellular networks, we show that without a relay, it is difficult to establish secure connections from the base station to the cell edge users. The effect of relay placement for the cell edge users is demonstrated by simulation. For both scenarios, we find that the benefit of relay transmission increases as path loss becomes more severe.
Formal verification for fault-tolerant architectures: prolegomena to the design of PVS PVS is the most recent in a series of verification systems developed at SRI. Its design was strongly influenced, and later refined, by our experiences in developing formal specifications and mechanically checked verifications for the fault-tolerant architecture, algorithms, and implementations of a model “reliable computing platform” (RCP) for life-critical digital flight-control applications, and by a collaborative project to formally verify the design of a commercial avionics processor called AAMP5. Several of the formal specifications and verifications performed in support of RCP and AAMP5 are individually of considerable complexity and difficulty. But in order to contribute to the overall goal, it has often been necessary to modify completed verifications to accommodate changed assumptions or requirements, and people other than the original developer have often needed to understand, review, build on, modify, or extract part of an intricate verification. In this paper, we outline the verifications performed, present the lessons learned, and describe some of the design decisions taken in PVS to better support these large, difficult, iterative, and collaborative verifications.
Scale & Affine Invariant Interest Point Detectors In this paper we propose a novel approach for detecting interest points invariant to scale and affine transformations. Our scale and affine invariant detectors are based on the following recent results: (1) Interest points extracted with the Harris detector can be adapted to affine transformations and give repeatable results (geometrically stable). (2) The characteristic scale of a local structure is indicated by a local extremum over scale of normalized derivatives (the Laplacian). (3) The affine shape of a point neighborhood is estimated based on the second moment matrix. Our scale invariant detector computes a multi-scale representation for the Harris interest point detector and then selects points at which a local measure (the Laplacian) is maximal over scales. This provides a set of distinctive points which are invariant to scale, rotation and translation as well as robust to illumination changes and limited changes of viewpoint. The characteristic scale determines a scale invariant region for each point. We extend the scale invariant detector to affine invariance by estimating the affine shape of a point neighborhood. An iterative algorithm modifies location, scale and neighborhood of each point and converges to affine invariant points. This method can deal with significant affine transformations including large scale changes. The characteristic scale and the affine shape of neighborhood determine an affine invariant region for each point. We present a comparative evaluation of different detectors and show that our approach provides better results than existing methods. The performance of our detector is also confirmed by excellent matching results; the image is described by a set of scale/affine invariant descriptors computed on the regions associated with our points.
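The scale-selection step the abstract describes, picking at each candidate point the scale where the scale-normalized Laplacian peaks, can be sketched directly (a simplified illustration, not the authors' full Harris-Laplace detector):

    import numpy as np
    from scipy.ndimage import gaussian_laplace

    def characteristic_scale(image, y, x, sigmas=1.2 ** np.arange(1, 13)):
        """Pick the characteristic scale at pixel (y, x) as the sigma where
        the scale-normalized Laplacian |sigma^2 * LoG| attains its maximum
        over the scale range (simplified: global instead of local extremum).
        """
        responses = [abs((s ** 2) * gaussian_laplace(image, s)[y, x])
                     for s in sigmas]
        return sigmas[int(np.argmax(responses))]

    # a bright Gaussian blob of sigma ~6: the characteristic scale tracks it
    yy, xx = np.mgrid[0:64, 0:64]
    blob = np.exp(-((yy - 32) ** 2 + (xx - 32) ** 2) / (2 * 6.0 ** 2))
    print(characteristic_scale(blob, 32, 32))   # roughly 6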
Specifying dynamic support for collaborative work within WORLDS In this paper, we present a specification language developed for WORLDS, a next generation computer-supported collaborative work system. Our specification language, called Introspect, employs a meta-level architecture to allow run-time modifications to specifications. We believe such an architecture is essential to WORLDS' ability to provide dynamic support for collaborative work in an elegant fashion.
Reasoning and Refinement in Object-Oriented Specification Languages This paper describes a formal object-oriented specification language, Z++, and identifies proof rules and associated specification structuring and development styles for the facilitation of validation and verification of implementations against specifications in this language. We give inference rules for showing that certain forms of inheritance lead to refinement, and for showing that refinements are preserved by constructs such as promotion of an operation from a supplier class to a client class. Extension of these rules to other languages is also discussed.
3rd international workshop on software evolution through transformations: embracing change Transformation-based techniques such as refactoring, model transformation and model-driven development, architectural reconfiguration, etc. are at the heart of many software engineering activities, making it possible to cope with an ever changing environment. This workshop provides a forum for discussing these techniques, their formal foundations and applications.
One VM to rule them all Building high-performance virtual machines is a complex and expensive undertaking; many popular languages still have low-performance implementations. We describe a new approach to virtual machine (VM) construction that amortizes much of the effort in initial construction by allowing new languages to be implemented with modest additional effort. The approach relies on abstract syntax tree (AST) interpretation where a node can rewrite itself to a more specialized or more general node, together with an optimizing compiler that exploits the structure of the interpreter. The compiler uses speculative assumptions and deoptimization in order to produce efficient machine code. Our initial experience suggests that high performance is attainable while preserving a modular and layered architecture, and that new high-performance language implementations can be obtained by writing little more than a stylized interpreter.
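The core trick, AST nodes that replace themselves with more specialized versions and fall back to general ones when a speculation fails, can be sketched in a few lines (an illustrative miniature under assumed names, far from the actual machinery the paper describes):

    class AddNode:
        """Self-specializing AST node for `left + right` (miniature sketch).

        Starts generic; on first execution rewrites itself into an
        int-specialized node, and deoptimizes back to generic if that
        speculation later fails.
        """
        def __init__(self, left, right):
            self.left, self.right = left, right
            self.execute = self._execute_generic   # current specialization

        def _execute_generic(self, env):
            return env[self.left] + env[self.right]

        def _execute_int(self, env):
            a, b = env[self.left], env[self.right]
            if type(a) is int and type(b) is int:
                return a + b
            self.execute = self._execute_generic   # deoptimize
            return self._execute_generic(env)

        def first_execute(self, env):
            a, b = env[self.left], env[self.right]
            if type(a) is int and type(b) is int:
                self.execute = self._execute_int   # rewrite to specialized node
            return self.execute(env)

    node = AddNode("x", "y")
    print(node.first_execute({"x": 1, "y": 2}))    # specializes to int add
    print(node.execute({"x": 1, "y": 2}))          # fast path
    print(node.execute({"x": "a", "y": "b"}))      # deoptimizes, still correct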
New results on stability analysis for systems with discrete distributed delay The integral inequality technique is widely used to derive delay-dependent conditions, and various integral inequalities have been developed to reduce the conservatism of the conditions derived. In this study, a new integral inequality was devised that is tighter than existing ones. It was used to investigate the stability of linear systems with a discrete distributed delay, and a new stability condition was established. The results can be applied to systems with a delay belonging to an interval, which may be unstable when the delay is small or nonexistent. Three numerical examples demonstrate the effectiveness and the smaller conservatism of the method.
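For orientation, the classical baseline that refined integral inequalities tighten is Jensen's inequality: for a matrix $R \succ 0$ and delay $h > 0$,

\[
-\int_{t-h}^{t} \dot{x}^{\top}(s)\, R\, \dot{x}(s)\, \mathrm{d}s
\;\le\;
-\frac{1}{h} \left( \int_{t-h}^{t} \dot{x}(s)\, \mathrm{d}s \right)^{\!\top} R \left( \int_{t-h}^{t} \dot{x}(s)\, \mathrm{d}s \right).
\]

Tighter inequalities (Wirtinger-based ones and their successors) add correction terms to this bound; whether the inequality proposed in this paper takes exactly such a form is not stated in the abstract.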
1.061677
0.05936
0.048
0.048
0.048
0.048
0.03936
0
0
0
0
0
0
0
Lossless compression of regions-of-interest from retinal images This paper presents a lossless compression method performing separately the compression of the vessels and of the remaining part of the eye fundus in retinal images. Retinal images contain valuable information sources for several distinct medical diagnosis tasks, where the features of interest can be, e.g., the cotton wool spots in the eye fundus, or the volume of the vessels over concentric circular regions. It is assumed that one of the existent segmentation methods provided the segmentation of the vessels. The proposed compression method losslessly transmits the segmentation image, and then transmits the eye fundus part, or the vessels image, or both, conditional on the vessels segmentation. The independent compression of the two color image segments is performed using a sparse predictive method. Experiments are provided over a database of retinal images containing manual and estimated segmentations. The codelength of encoding the overall image, including the segmentation and the image segments, proves to be smaller than the codelength for the entire image obtained by JPEG2000 and other publicly available compressors.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system, resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object-oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced-level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve general 0/1 MIP problems and thus contains no problem-domain-specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special-purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the timewise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In refining an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Similar Chinese Characters Recognition Using Multiresolution Feature Space The large number of similar Chinese characters is one of the main causes of high reject and substitution rates. In this paper, an innovative similar-character recognition method based on a multi-resolution feature space is proposed. The process starts by extracting global feature vectors, then progressively, dynamically and recursively adds finer local feature vectors to improve recognition ability, until a result satisfying the conditions is reached. In this way, it is not necessary to decide similar character sets manually, since the method automatically chooses the subspace in which similar characters differ most to construct the new feature space. The effectiveness of this method was demonstrated by experiments, in which it effectively improved the recognition rate for similar characters.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system, resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
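To make the search scheme in the entry above concrete, here is a minimal, illustrative tabu-search sketch for the 0/1 multiconstraint knapsack case. The function name, tenure value, and single-flip neighbourhood are assumptions for illustration only; the paper's actual method adds specialized choice rules, aspiration criteria, and learning, none of which are reproduced here.

```python
def tabu_knapsack(values, weights, capacities, iters=1000, tenure=7):
    """Toy tabu search for the 0/1 multiconstraint knapsack problem.
    weights[j][i] is the weight of item i under constraint j."""
    n = len(values)
    x = [0] * n                          # start from the empty knapsack
    tabu = {}                            # item -> iteration until which flipping it is tabu
    best, best_val = x[:], 0

    def feasible(sol):
        return all(sum(w[i] * sol[i] for i in range(n)) <= c
                   for w, c in zip(weights, capacities))

    def value(sol):
        return sum(v * s for v, s in zip(values, sol))

    for it in range(iters):
        cand = None
        for i in range(n):               # evaluate all single-flip neighbours
            y = x[:]
            y[i] = 1 - y[i]
            if not feasible(y):
                continue
            v = value(y)
            aspiration = v > best_val    # aspiration criterion: override tabu on a new best
            if tabu.get(i, -1) >= it and not aspiration:
                continue
            if cand is None or v > cand[1]:
                cand = (y, v, i)
        if cand is None:                 # no admissible move; a real solver would diversify
            break
        x, v, i = cand
        tabu[i] = it + tenure            # forbid flipping item i back for a while
        if v > best_val:
            best, best_val = x[:], v
    return best, best_val

# tiny usage example with two knapsack constraints
vals = [10, 7, 3, 9]
ws = [[4, 3, 2, 5], [2, 4, 1, 3]]
print(tabu_knapsack(vals, ws, capacities=[8, 6]))   # ([1, 1, 0, 0], 17)
```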
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
The Draco Approach to Constructing Software from Reusable Components This paper discusses an approach called Draco to the construction of software systems from reusable software parts. In particular we are concerned with the reuse of analysis and design information in addition to programming language code. The goal of the work on Draco has been to increase the productivity of software specialists in the construction of similar systems. The particular approach we have taken is to organize reusable software components by problem area or domain. Statements of programs in these specialized domains are then optimized by source-to-source program transformations and refined into other domains. The problems of maintaining the representational consistency of the developing program and producing efficient practical programs are discussed. Some examples from a prototype system are also given.
An Integrated Life-Cycle Model for Software Maintenance An integrated life-cycle model is presented for use in a software maintenance environment. The model represents information about the development and maintenance of software systems, emphasizing relationships between different phases of the software life cycle. It provides the basis for automated tools to assist maintenance personnel in making changes to existing software systems. The model is independent of particular specification, design, and programming languages because it represents only certain 'basic' semantic properties of software systems: control flow, data flow, and data structure. The software development processes by which one phase of the software life cycle is derived from another are represented by graph rewriting rules, which indicate how various components of a software system have been implemented. This approach permits analysis of the basic properties of a software system throughout the software life cycle. Examples are given to illustrate the integrated software life-cycle model during evolution.
Weakest Precondition for General Recursive Programs Formalized in Coq This paper describes a formalization of the weakest precondition, wp, for general recursive programs using the type-theoretical proof assistant Coq. The formalization is a deep embedding using the computational power intrinsic to type theory. Since Coq accepts only structural recursive functions, the computational embedding of general recursive programs is non-trivial. To justify the embedding, an operational semantics is defined and the equivalence between wp and the operational semantics is proved. Three major healthiness conditions, namely: Strictness, Monotonicity and Conjunctivity are proved as well.
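As a rough illustration of what a wp predicate transformer computes, here is a shallow Python embedding over a toy command language. This is an assumption-laden sketch, not the paper's Coq deep embedding, and it omits general recursion, which is exactly what makes the Coq formalization non-trivial.

```python
# Predicates are modeled as functions from a state (dict) to bool;
# wp is computed compositionally over a tiny command language.

def wp(cmd, post):
    kind = cmd[0]
    if kind == "skip":
        return post
    if kind == "assign":                 # ("assign", var, expr)
        _, var, expr = cmd
        return lambda s: post({**s, var: expr(s)})
    if kind == "seq":                    # ("seq", c1, c2): wp(c1; c2, Q) = wp(c1, wp(c2, Q))
        _, c1, c2 = cmd
        return wp(c1, wp(c2, post))
    if kind == "if":                     # ("if", guard, c1, c2)
        _, b, c1, c2 = cmd
        t, f = wp(c1, post), wp(c2, post)
        return lambda s: t(s) if b(s) else f(s)
    raise ValueError(kind)

# wp(x := x + 1; x := 2 * x, x > 4) should hold exactly when x > 1
prog = ("seq", ("assign", "x", lambda s: s["x"] + 1),
               ("assign", "x", lambda s: 2 * s["x"]))
pre = wp(prog, lambda s: s["x"] > 4)
assert pre({"x": 2}) and not pre({"x": 1})
```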
Reusing requirements through a modeling and composition support tool This paper presents the concepts and tools for reusing requirements being designed and implemented within the ITHACA project. The RECAST (REquirements Collection And Specification Tool) tool guides the Application Developer in the requirement specification process by providing suggestions for the reuse of components. To this aim, RECAST includes a meta-level of definitions; here, meta-level classes associated with components contain design suggestions about the reuse of these components and about the design actions to be performed during the subsequent application development phases.
The requirements apprentice: an initial scenario The implementation of the Requirements Apprentice has reached the point where it is possible to exhibit a concrete scenario showing the intended basic capabilities of the system. The Requirements Apprentice accepts ambiguous, incomplete, and inconsistent input from a requirements analyst and assists the analyst in creating and validating a coherent requirements description. This processing is supported by a general-purpose reasoning system and a library of requirements cliches that contains reusable descriptions of standard concepts used in requirements.
Integration of domain analysis and analogical approach for software reuse Reusability has the potential to improve productivity by an order of magnitude or more. Reusability can also improve software quality and reliability. Techniques and concepts in domain analysis and analogy can facilitate software reuse. Domain analysis is a process of identifying, capturing, representing, classifying and organizing the information and knowledge used in developing a software system in order to make the information and knowledge reusable when developing new systems in the same application domain. Analogy is a mapping from the base domain to the target domain. Through analogy, software reuse is possible even for different domains. In this paper, the integration of domain analysis and the analogical approach is presented. The integrated approach will first investigate the requirements and features of analogical reasoning. Object behaviors and system dynamics are incorporated into the analogy approach as additional mapping constraints. After the identification of the requirements and demands of analogy, steps and products of the domain analysis method are defined to meet these needs. The integrated method can not only support better understanding of a particular application domain, but can also promote the mapping across different domains.
Capturing more world knowledge in the requirements specification The view is adopted that software requirements involve the representation (modeling) of considerable real-world knowledge, not just functional specifications. A framework (RMF) for requirements models is presented and its main features are illustrated. RMF allows information about three types of conceptual entities (objects, activities, and assertions) to be recorded uniformly using the notion of properties. By grouping all entities into classes or metaclasses, and by organizing classes into generalization (specialization) hierarchies, RMF supports three abstraction principles (classification, aggregation, and generalization) which appear to be of universal importance in the development and organization of complex descriptions. Finally, by providing a mathematical model underlying our terminology, we achieve both unambiguity and the potential to verify consistency of the model.
CREWS-SAVRE: Scenarios for Acquiring and Validating Requirements This paper reports research into semi-automatic generation of scenarios for validating software-intensive system requirements. The research was undertaken as part of the ESPRIT IV 21903 ‘CREWS’ long-term research project. The paper presents the underlying theoretical models of domain knowledge, computational mechanisms and user-driven dialogues needed for scenario generation. It describes how CREWS draws on theoretical results from the ESPRIT III 6353 ‘NATURE’ basic research action, that is, object system models which are abstractions of the fundamental features of different categories of problem domain. CREWS uses these models to generate normal course scenarios, then draws on theoretical and empirical research from cognitive science, human-computer interaction, collaborative systems and software engineering to generate alternative courses for these scenarios. The paper describes a computational mechanism for deriving use cases from object system models, simple rules to link actions in a use case, taxonomies of classes of exceptions which give rise to alternative courses in scenarios, and a computational mechanism for generation of multiple scenarios from a use case specification.
Abstract interpretation of reactive systems The advent of ever more complex reactive systems in increasingly critical areas calls for the development of automated verification techniques. Model checking is one such technique, which has proven quite successful. However, the state-explosion problem remains a major stumbling block. Recent experience indicates that solutions are to be found in the application of techniques for property-preserving abstraction and successive approximation of models. Most such applications have so far been based solely on the property-preserving characteristics of simulation relations. A major drawback of all these results is that they do not offer a satisfactory formalization of the notion of precision of abstractions. The theory of Abstract Interpretation offers a framework for the definition and justification of property-preserving abstractions. Furthermore, it provides a method for the effective computation of abstract models directly from the text of a program, thereby avoiding the need for intermediate storage of a full-blown model. Finally, it formalizes the notion of optimality, while allowing to trade precision for speed by computing suboptimal approximations. For a long time, applications of Abstract Interpretation have mainly focused on the analysis of universal safety properties, i.e., properties that hold in all states along every possible execution path. In this article, we extend Abstract Interpretation to the analysis of both existential and universal reactive properties, as expressible in the modal μ-calculus. It is shown how abstract models may be constructed by symbolic execution of programs. A notion of approximation between abstract models is defined while conditions are given under which optimal models can be constructed. Examples are given to illustrate this. We indicate conditions under which also falsehood of formulae is preserved. Finally, we compare our approach to those based on simulation relations.
The Model Checker SPIN SPIN is an efficient verification system for models of distributed software systems. It has been used to detect design errors in applications ranging from high-level descriptions of distributed algorithms to detailed code for controlling telephone exchanges. This paper gives an overview of the design and structure of the verifier, reviews its theoretical foundation, and gives an overview of significant practical applications.
Hierarchical correctness proofs for distributed algorithms This thesis introduces a new model for distributed computation in asynchronous networks, the input-output automaton. This simple, powerful model captures in a novel way the game-theoretical interaction between a system and its environment, and allows fundamental properties of distributed computation such as fair computation to be naturally expressed. Furthermore, this model can be used to construct modular, hierarchical correctness proofs of distributed algorithms. This thesis defines the input-output automaton model, and presents an interesting example of how this model can be used to construct such proofs.
Run-length encodings (Corresp.)
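A minimal sketch of the encoding named in this entry; the list-of-pairs output format is an arbitrary illustrative choice.

```python
def rle_encode(data):
    """Run-length encode a sequence into (symbol, run_length) pairs."""
    runs = []
    for sym in data:
        if runs and runs[-1][0] == sym:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([sym, 1])     # start a new run
    return [(s, n) for s, n in runs]

def rle_decode(runs):
    """Expand (symbol, run_length) pairs back into the original sequence."""
    return [s for s, n in runs for _ in range(n)]

msg = list("aaabccccd")
assert rle_decode(rle_encode(msg)) == msg
print(rle_encode(msg))   # [('a', 3), ('b', 1), ('c', 4), ('d', 1)]
```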
Enhancing Human Face Detection by Resampling Examples Through Manifolds As a large-scale database of hundreds of thousands of face images collected from the Internet and digital cameras becomes available, how to utilize it to train a well-performed face detector is a quite challenging problem. In this paper, we propose a method to resample a representative training set from a collected large-scale database to train a robust human face detector. First, in a high-dimensional space, we estimate geodesic distances between pairs of face samples/examples inside the collected face set by isometric feature mapping (Isomap) and then subsample the face set. After that, we embed the face set to a low-dimensional manifold space and obtain the low-dimensional embedding. Subsequently, in the embedding, we interweave the face set based on the weights computed by locally linear embedding (LLE). Furthermore, we resample nonfaces by Isomap and LLE likewise. Using the resulting face and nonface samples, we train an AdaBoost-based face detector and run it on a large database to collect false alarms. We then use the false detections to train a one-class support vector machine (SVM). Combining the AdaBoost and one-class SVM-based face detector, we obtain a stronger detector. The experimental results on the MIT + CMU frontal face test set demonstrated that the proposed method significantly outperforms the other state-of-the-art methods.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
1.017073
0.025
0.02
0.01304
0.00568
0.003472
0.000935
0.000172
0.000071
0.000035
0.000004
0
0
0
Organizing the tasks in complex design projects This research is aimed at structuring complex design projects in order to develop better products more quickly. We use a matrix representation to capture both the sequence of and the technical relationships among the many design tasks to be performed. These relationships define the technical structure of a design project which is then analyzed in order to find alternative sequences and/or definitions of the design tasks. Such improved design procedures offer opportunities to speed development progress by streamlining the inter-task coordination. After using this technique to model design processes in several organizations, we have developed a design management strategy which focuses attention on the essential information transfer requirements of a technical project. We expect that this research will benefit not only new design tasks that have never been structured before but also long-standing, often repeated design tasks that may have drifted into poor organizational patterns over many years.
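One hedged sketch of the kind of analysis described above: treating the design structure matrix as a dependency graph, a strongly-connected-components pass separates tasks that can be sequenced from coupled blocks that must be iterated together. Kosaraju's algorithm here is a standard stand-in, not the authors' specific procedure, and the function name and input format are assumptions.

```python
def sequence_tasks(deps):
    """DSM-style sequencing sketch. deps[t] = set of tasks whose output
    task t needs; every task must appear as a key. Returns execution
    blocks in order: singletons, or coupled groups to iterate together."""
    tasks = list(deps)
    seen, order = set(), []

    def dfs(u, graph, out):
        # iterative post-order depth-first search
        seen.add(u)
        stack = [(u, iter(graph.get(u, ())))]
        while stack:
            node, it = stack[-1]
            for v in it:
                if v not in seen:
                    seen.add(v)
                    stack.append((v, iter(graph.get(v, ()))))
                    break
            else:
                stack.pop()
                out.append(node)

    for t in tasks:                      # pass 1: finish order on the deps graph
        if t not in seen:
            dfs(t, deps, order)
    rev = {t: set() for t in tasks}      # pass 2: reversed graph
    for t, ds in deps.items():
        for d in ds:
            rev[d].add(t)
    seen, blocks = set(), []
    for t in reversed(order):
        if t not in seen:
            comp = []
            dfs(t, rev, comp)
            blocks.append(comp)
    # Kosaraju discovers SCCs dependents-first; execution order wants
    # prerequisites first, so reverse the block order.
    return blocks[::-1]

# A, B, C depend on each other in a cycle (one coupled block); D needs C
print(sequence_tasks({"A": {"C"}, "B": {"A"}, "C": {"B"}, "D": {"C"}}))
# e.g. [['C', 'B', 'A'], ['D']]
```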
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
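A software model of the fetch-and-add primitive highlighted in the entry above. On the Ultracomputer these operations are combined in the switching network; the lock below merely stands in for that hardware atomicity, so this is an illustrative sketch rather than the machine's mechanism.

```python
import threading

class FetchAndAdd:
    """Model of fetch-and-add: atomically return the old value
    and add an increment."""
    def __init__(self, value=0):
        self._value = value
        self._lock = threading.Lock()

    def fetch_and_add(self, inc=1):
        with self._lock:
            old = self._value
            self._value += inc
            return old

# classic use: handing out disjoint work indices to many threads
counter = FetchAndAdd()
results = []

def worker():
    while True:
        i = counter.fetch_and_add(1)   # each index is claimed exactly once
        if i >= 100:
            return
        results.append(i)

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert sorted(results) == list(range(100))
```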
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system, resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Procedures and atomicity refinement The introduction of an early return from a (remote) procedure call can increase the degree of parallelism in a parallel or distributed algorithm modeled by an action system. We define a return statement for procedures in an action systems framework and show that it corresponds to carrying out an atomicity refinement.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Reachable set estimation for Markovian jump neural networks with time-varying delay. This paper is concerned with the reachable set estimation for Markovian jump neural networks with time-varying delay and bounded peak inputs. The objective is to find a description of a reachable set that contains all reachable states starting from the origin. In the framework of the Lyapunov–Krasovskii functional method, an appropriate Lyapunov–Krasovskii functional is constructed first. Then, by using the Wirtinger-based integral inequality and the extended reciprocally convex matrix inequality, an ellipsoidal description of the reachable set for the considered neural networks is derived. Finally, a numerical example with simulation results is provided to verify the effectiveness of our results.
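As a brief worked note on the kind of set description meant above (a standard construction with generic symbols, not the paper's exact functional): an ellipsoidal bound takes the form $\mathcal{E}(P)=\{x \in \mathbb{R}^{n} : x^{T}Px \le 1\}$ with $P \succ 0$, and a commonly used sufficient condition is that a Lyapunov-Krasovskii functional $V$ with $V \ge x^{T}Px$ and $V(0)=0$ satisfies

$$\dot{V}(x_t) + \alpha V(x_t) - \frac{\alpha}{w_m^{2}}\,w^{T}(t)\,w(t) \le 0$$

for some $\alpha > 0$ and all inputs with $w^{T}(t)\,w(t) \le w_m^{2}$; this forces $V \le 1$ along every trajectory from the origin, and hence containment of the reachable set in $\mathcal{E}(P)$.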
New reliable nonuniform sampling control for uncertain chaotic neural networks under Markov switching topologies. This paper studies the stochastic exponential synchronization problem for uncertain chaotic neural networks (UCNNs) with probabilistic faults (PFs) and randomly occurring time-varying parameter uncertainties (ROTVPUs). To reflect more realistic control behaviors, a new stochastic reliable nonuniform sampling controller with Markov switching topologies is designed for the first time. First, by taking into full account more information on the sawtooth structural sampling pattern, the time delay and its variation, a novel loose-looped Lyapunov–Krasovskii functional (LLLKF) is developed via introducing matrices-refined-function and adjustable parameters. Second, with the aid of the novel LLLKF and a relaxed Wirtinger-based integral inequality (RWBII), new synchronization algorithms are established to guarantee that UCNNs are exponentially synchronized under probabilistic actuator and sensor faults. Third, based on the proposed optimization algorithm, the desired reliable sampled-data controller can be achieved under a larger exponential decay rate. Finally, two numerical examples are given to illustrate the effectiveness and advantages of the designed algorithms.
A New Approach to Stochastic Stability of Markovian Neural Networks With Generalized Transition Rates. This paper investigates the stability problem of Markovian neural networks (MNNs) with time delay. First, to reflect more realistic behaviors, more generalized transition rates are considered for MNNs, where all transition rates of some jumping modes are completely unknown. Second, a new approach, namely time-delay-dependent-matrix (TDDM) approach, is proposed for the first time. The TDDM approach is associated with both time delay and its time derivative. Thus, the TDDM approach can fully capture the information of time delay and would play a key role in deriving less conservative results. Third, based on the TDDM approach and applying Wirtinger's inequality and improved reciprocally convex inequality, stability criteria are derived. In comparison with some existing results, our results are not only less conservative but also involve lower calculation complexity. Finally, numerical examples are provided to show the effectiveness and advantages of the proposed results.
Impulsive Synchronization of Unbounded Delayed Inertial Neural Networks With Actuator Saturation and Sampled-Data Control and its Application to Image Encryption The article considers the impulsive synchronization of inertial neural networks with unbounded delay and actuator saturation via sampled-data control. Based on an impulsive differential inequality, the difficulties caused by unbounded delay and impulsive effects may be effectively avoided. By applying the polytopic representation technique, the actuator saturation term is considered in the design of the impulsive controller for the first time, and less conservative linear matrix inequality (LMI) criteria that guarantee asymptotical synchronization of the considered model via hybrid control are given. As special cases, the asymptotical synchronization of the considered model via sampled-data control and saturating impulsive control are also studied, respectively. Numerical simulations are presented to confirm the effectiveness of the theoretical analysis. A new image encryption algorithm that utilizes the synchronization theory of hybrid control is proposed. The validity of the image encryption algorithm is demonstrated by experiments.
Reliable asynchronous sampled-data filtering of T–S fuzzy uncertain delayed neural networks with stochastic switched topologies This paper investigates the issue of reliable asynchronous sampled-data filtering of Takagi–Sugeno (T–S) fuzzy delayed neural networks with stochastic intermittent faults, randomly occurring time-varying parameter uncertainties and controller gain fluctuation. The asynchronous phenomenon occurs between the system modes and controller modes. First, in order to reduce the utilization rate of communication bandwidth, a novel alterable sampled-data terminal method is considered via variable sampling rates. Second, based on the fuzzy-model-based control approach, an improved reciprocally convex inequality and a new parameter-time-dependent discontinuous Lyapunov approach, several relaxed conditions are derived and compared with existing work. Third, an intermittent fault-tolerance scheme is also taken fully into account in designing a reliable asynchronous sampled-data controller, which ensures that the resultant neural network is asymptotically stable. Finally, two numerical examples are presented to illustrate the effectiveness and advantages of the theoretical results.
Stability of time-delay systems via Wirtinger-based double integral inequality Based on the Wirtinger-based integral inequality, a double integral form of the Wirtinger-based integral inequality (hereafter called as Wirtinger-based double integral inequality) is introduced in this paper. To show the effectiveness of the proposed inequality, two stability criteria for systems with discrete and distributed delays are derived within the framework of linear matrix inequalities (LMIs). The advantage of employing the proposed inequalities is illustrated via two numerical examples.
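For reference, the single-integral Wirtinger-based inequality that this entry builds on is commonly stated as follows (generic notation, not necessarily the paper's own symbols): for $R \succ 0$ and a differentiable signal $x$,

$$\int_a^b \dot{x}^{T}(s)\,R\,\dot{x}(s)\,ds \;\ge\; \frac{1}{b-a}\,\omega_1^{T}R\,\omega_1 + \frac{3}{b-a}\,\omega_2^{T}R\,\omega_2,$$

where $\omega_1 = x(b)-x(a)$ and $\omega_2 = x(b)+x(a)-\frac{2}{b-a}\int_a^b x(s)\,ds$. The paper's contribution is a double-integral analogue of this bound, applied to delay-dependent stability via LMIs.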
Statecharts: A visual formalism for complex systems Abstract. We present a broad extension of the conventional formalism of state machines and state diagrams, that is relevant to the specification and design of complex discrete-event systems, such as multi-computer real-time systems, communication protocols and digital control units. Our diagrams, which we call statecharts, extend conventional state-transition diagrams with essentially three elements, dealing, respectively, with the notions of hierarchy, concurrency and communication. These transform the language of state diagrams into a highly structured and economical description language. Statecharts are thus compact and expressive - small diagrams can express complex behavior - as well as compositional and modular. When coupled with the capabilities of computerized graphics, statecharts enable viewing the description at different levels of detail, and make even very large specifications manageable and comprehensible. In fact, we intend to demonstrate here that statecharts counter many of the objections raised against conventional state diagrams, and thus appear to render specification by diagrams an attractive and plausible approach. Statecharts can be used either as a stand-alone behavioral description or as part of a more general design methodology that deals also with the system's other aspects, such as functional decomposition and data-flow specification. We also discuss some practical experience that was gained over the last three years in applying the statechart formalism to the specification of a particularly complex system.
The symbol grounding problem There has been much discussion recently about the scope and limits of purely symbolic models of the mind and about the proper role of connectionism in cognitive modeling. This paper describes the “symbol grounding problem”: How can the semantic interpretation of a formal symbol system be made intrinsic to the system, rather than just parasitic on the meanings in our heads? How can the meanings of the meaningless symbol tokens, manipulated solely on the basis of their (arbitrary) shapes, be grounded in anything but other meaningless symbols? The problem is analogous to trying to learn Chinese from a Chinese/Chinese dictionary alone. A candidate solution is sketched: Symbolic representations must be grounded bottom-up in nonsymbolic representations of two kinds: (1) iconic representations , which are analogs of the proximal sensory projections of distal objects and events, and (2) categorical representations , which are learned and innate feature detectors that pick out the invariant features of object and event categories from their sensory projections. Elementary symbols are the names of these object and event categories, assigned on the basis of their (nonsymbolic) categorical representations. Higher-order (3) symbolic representations , grounded in these elementary symbols, consist of symbol strings describing category membership relations (e.g. “An X is a Y that is Z ”). Connectionism is one natural candidate for the mechanism that learns the invariant features underlying categorical representations, thereby connecting names to the proximal projections of the distal objects they stand for. In this way connectionism can be seen as a complementary component in a hybrid nonsymbolic/symbolic model of the mind, rather than a rival to purely symbolic modeling. Such a hybrid model would not have an autonomous symbolic “module,” however; the symbolic functions would emerge as an intrinsically “dedicated” symbol system as a consequence of the bottom-up grounding of categories' names in their sensory representations. Symbol manipulation would be governed not just by the arbitrary shapes of the symbol tokens, but by the nonarbitrary shapes of the icons and category invariants in which they are grounded.
Requirements engineering with viewpoints. The requirements engineering process involves a clear understanding of the requirements of the intended system. This includes the services required of the system, the system users, its environment and associated constraints. This process involves the capture, analysis and resolution of many ideas, perspectives and relationships at varying levels of detail. Requirements methods based on global reasoning appear to lack the expressive framework to adequately articulate this distributed requirements knowledge structure. The paper describes the problems in trying to establish an adequate and stable set of requirements and proposes a viewpoint-oriented requirements definition (VORD) method as a means of tackling some of these problems. This method structures the requirements engineering process using viewpoints associated with sources of requirements. The paper describes VORD in the light of current viewpoint-oriented requirements approaches and shows how it improves on them. A simple example of a bank auto-teller system is used to demonstrate the method.
Parallel Programming in Linda
Developing Object-based Distributed Systems
Universal Sparse Modeling
Matching pedagogical intent with engineering design process models for precollege education Public perception of engineering recognizes its importance to national and international competitiveness, economy, quality of life, security, and other fundamental areas of impact; but uncertainty about engineering among the general public remains. Federal funding trends for education underscore many of the concerns regarding teaching and learning in science, technology, engineering, and mathematics subjects in primary through grade 12 (P-12) education. Conflicting perspectives on the essential attributes that comprise the engineering design process result in a lack of coherent criteria against which teachers and administrators can measure the validity of a resource, or assess its strengths and weaknesses, or grasp incongruities among competing process models. The literature suggests two basic approaches for representing engineering design: a phase-based, life cycle-oriented approach; and an activity-based, cognitive approach. Although these approaches serve various teaching and functional goals in undergraduate and graduate engineering education, as well as in practice, they tend to exacerbate the gaps in P-12 engineering efforts, where appropriate learning objectives that connect meaningfully to engineering are poorly articulated or understood. In this article, we examine some fundamental problems that must be resolved if preengineering is to enter the P-12 curriculum with meaningful standards and is to be connected through learning outcomes, shared understanding of engineering design, and other vestiges to vertically link P-12 engineering with higher education and the practice of engineering. We also examine historical aspects, various pedagogies, and current issues pertaining to undergraduate and graduate engineering programs. As a case study, we hope to shed light on various kinds of interventions and outreach efforts to inform these efforts or at least provide some insight into major factors that shape and define the environment and cultures of the two institutions (including epistemic perspectives, institutional objectives, and political constraints) that are very different and can compromise collaborative efforts between the institutions of P-12 and higher education.
A Probabilistic Calculus for Probabilistic Real-Time Systems Challenges within real-time research are mostly in terms of modeling and analyzing the complexity of actual real-time embedded systems. Probabilities are effective in both modeling and analyzing embedded systems by increasing the amount of information available for describing the elements composing the system. Elements are tasks and applications that need resources, schedulers that execute tasks, and resource provisioning that satisfies the resource demand. In this work, we present a model that considers component-based real-time systems with component interfaces able to abstract both the functional and nonfunctional requirements of components and the system. Our model addresses probabilities and probabilistic real-time systems, unifying in the same framework probabilistic scheduling techniques and compositional guarantees varying from soft to hard real time. We provide an algebra to work with the probabilistic notation developed and formulate an analysis in terms of sufficient probabilistic schedulability conditions for task systems with either preemptive fixed-priority or earliest deadline first scheduling paradigms.
1.066667
0.066667
0.066667
0.033333
0.016667
0.000952
0
0
0
0
0
0
0
0
Refinement of Structured Interactive Systems. The refinement concept provides a formal tool for addressing the complexity of software-intensive systems, by verified stepwise development from an abstract specification towards an implementation. In this paper we propose a novel notion of refinement for a structured formalism dedicated to interactive systems, that combines a data-flow with a control-oriented approach. Our notion is based on scenarios, extending to two dimensions the trace-based definition for the refinement of classical sequential systems. We illustrate our refinement notion with a simple example and outline several extensions to include more sophisticated distributed techniques.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system, resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Lossy to lossless image compression based on reversible integer DCT A progressive image compression scheme is investigated using the reversible integer discrete cosine transform (RDCT), which is derived from matrix factorization theory. Previous DCT-based techniques suffer from poor performance in lossy image compression compared with wavelet image codecs, and lossless compression methods such as IntDCT, I2I-DCT and so on cannot compete with JPEG-LS or integer discrete wavelet transform (DWT) based codecs. In this paper, lossy to lossless image compression is implemented by our proposed scheme, which consists of the RDCT, coefficient reorganization, bit plane encoding, and reversible integer pre- and post-filters. Simulation results show that our method is competitive against JPEG-LS and JPEG2000 in lossless compression. Moreover, our method outperforms JPEG2000 (reversible 5/3 filter) for lossy compression, and the performance is even comparable with JPEG2000 using the irreversible 9/7 floating-point filter (9/7F filter).
Lossless Hyperspectral Compression Using KLT In this paper we propose an algorithm for the construction of a nearly optimal integer-to-integer approximation of the Karhunen-Loève Transform. The algorithm is based on the method of P. Hao and Q. Shi as described in [1], but unlike the method described in that paper, we vary the pivoting in order to obtain a better approximation of the linear transform. We have then developed an algorithm for hyperspectral image lossless compression that uses first an Integer Wavelet Transform for the spatial decorrelation, then our Integer-KLT for the spectral decorrelation, and finally a 3D context-based adaptive arithmetic coding (3D-CBAC), which exploits the dependencies among symbols. The results on the AVIRIS images are better than those reported in the literature.
Matrix factorizations for reversible integer mapping Reversible integer mapping is essential for lossless source coding by transformation. A general matrix factorization theory for reversible integer mapping of invertible linear transforms is developed. Concepts of the integer factor and the elementary reversible matrix (ERM) for integer mapping are introduced, and two forms of ERM-triangular ERM (TERM) and single-row ERM (SERM)-are studied. We prove that there exist some approaches to factorize a matrix into TERMs or SERMs if the transform is invertible and in a finite-dimensional space. The advantages of the integer implementations of an invertible linear transform are (i) mapping integers to integers, (ii) perfect reconstruction, and (iii) in-place calculation. We find that besides a possible permutation matrix, the TERM factorization of an N-by-N nonsingular matrix has at most three TERMs, and its SERM factorization has at most N+1 SERMs. The elementary structure of ERM transforms is the ladder structure. An executable factorization algorithm is also presented. Then, the computational complexity is compared, and some optimization approaches are proposed. The error bounds of the integer implementations are estimated as well. Finally, three ERM factorization examples of DFT, DCT, and DWT are given
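A tiny worked instance of the ladder/ERM idea described above: the classic three-shear factorization of a 2-D rotation, with rounding inside each triangular step. The angle and test values are arbitrary illustrations, and the factorization assumes sin(theta) is nonzero; this is a sketch of the general technique, not the paper's specific TERM/SERM constructions.

```python
import math

def fwd(x, y, theta):
    """Reversible integer approximation of a 2-D rotation: the rotation
    matrix is factored into three triangular 'shear' (ladder) steps, and
    rounding after each step keeps the map integer-to-integer and
    exactly invertible (in-place calculation, perfect reconstruction)."""
    c, s = math.cos(theta), math.sin(theta)
    p = (c - 1.0) / s          # shear coefficient; requires s != 0
    x = x + round(p * y)       # upper-triangular step
    y = y + round(s * x)       # lower-triangular step
    x = x + round(p * y)       # upper-triangular step
    return x, y

def inv(x, y, theta):
    """Exact inverse: undo the shears in reverse order with the
    same roundings, so no information is lost."""
    c, s = math.cos(theta), math.sin(theta)
    p = (c - 1.0) / s
    x = x - round(p * y)
    y = y - round(s * x)
    x = x - round(p * y)
    return x, y

assert inv(*fwd(7, -3, 0.6), 0.6) == (7, -3)
```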
Integer KLT design space exploration for hyperspectral satellite image compression The Integer KLT algorithm is an approximation of the Karhunen-Loève Transform that can be used as a lossless spectral decorrelator. This paper addresses the application of the Integer KLT to lossless compression of hyperspectral satellite imagery. Design space exploration is carried out to investigate the impact of tiling and clustering techniques on the compression ratio and execution time of Integer KLT. AVIRIS hyperspectral images are used as test image data and the spatial compression is carried out with JPEG2000. The results show that clustering can speed up the execution process and can increase the compression performance.
The JPEG still picture compression standard A joint ISO/CCITT committee known as JPEG (Joint Photographic Experts Group) has been working to establish the first international compression standard for continuous-tone still images, both grayscale and color. JPEG's proposed standard aims to be generic, to support a wide variety of applications for continuous-tone images. To meet the differing needs of many applications, the JPEG standard includes two basic compression methods, each with various modes of operation. A DCT (discrete cosine transform)-based method is specified for `lossy' compression, and a predictive method for `lossless' compression. JPEG features a simple lossy technique known as the Baseline method, a subset of the other DCT-based modes of operation. The Baseline method has been by far the most widely implemented JPEG method to date, and is sufficient in its own right for a large number of applications. The author provides an overview of the JPEG standard, and focuses in detail on the Baseline method
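For concreteness, a minimal sketch of the DCT-plus-quantization core of the Baseline method mentioned above. The uniform quantization step is chosen arbitrarily here; real JPEG uses per-coefficient quantization tables, zig-zag ordering and entropy coding, none of which are shown.

```python
import numpy as np

def dct2_8x8(block):
    """2-D type-II DCT of an 8x8 block via the orthonormal DCT matrix,
    the transform at the core of JPEG's Baseline method."""
    k, n = np.meshgrid(np.arange(8), np.arange(8), indexing="ij")
    C = np.sqrt(2.0 / 8) * np.cos((2 * n + 1) * k * np.pi / 16)
    C[0, :] = np.sqrt(1.0 / 8)           # DC row of the orthonormal basis
    return C @ block @ C.T

# toy quantization step: coarser quantizers discard more detail
block = np.arange(64, dtype=float).reshape(8, 8) - 128   # level-shifted sample
coeffs = dct2_8x8(block)
q = 16                                                    # uniform step, illustration only
quantized = np.round(coeffs / q).astype(int)
print(quantized)   # energy concentrates in low frequencies; most entries are zero
```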
Improved low-complexity intraband lossless compression of hyperspectral images by means of Slepian-Wolf coding In remote sensing systems, on-board data compression is a crucial task that has to be carried out with limited computational resources. In this paper we propose a novel lossless compression scheme for multispectral and hyperspectral images, which combines low encoding complexity and high-performance. The encoder is based on distributed source coding concepts, and employs Slepian-Wolf coding of the bitplanes of the CALIC prediction errors to achieve improved performance. Experimental results on AVIRIS data show that the proposed scheme exhibits performance similar to CALIC, and significantly better than JPEG 2000.
Crisp and Fuzzy Adaptive Spectral Predictions for Lossless and Near-Lossless Compression of Hyperspectral Imagery This letter presents an original approach that exploits classified spectral prediction for lossless/near-lossless hyperspectral-image compression. Minimum-mean-square-error spectral predictors are calculated, one for each small spatial block of each band, and are classified (clustered) to yield a user-defined number of prototype predictors that are capable of matching the spectral features of diff...
The lossless compression of AVIRIS images by vector quantization The structure of hyperspectral images reveals spectral responses that would seem ideal candidates for compression by vector quantization. This paper outlines the results of an investigation of lossless vector quantization of 224-band Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) images. Various vector formation techniques are identified and suitable quantization parameters are investigat...
Universal coding, information, prediction, and estimation A connection between universal codes and the problems of prediction and statistical estimation is established. A known lower bound for the mean length of universal codes is sharpened and generalized, and optimum universal codes are constructed. The bound is defined to give the information in strings relative to the considered class of processes. The earlier derived minimum description length criterion for estimation of parameters, including their number, is given a fundamental information-theoretic justification by showing that its estimators achieve the information in the strings. It is also shown that one cannot do prediction in Gaussian autoregressive moving average (ARMA) processes below a bound, which is determined by the information in the data.
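A minimal illustration of the minimum description length idea referred to above, under simplifying assumptions: a one-parameter Bernoulli model and the common (k/2) log n parameter cost. The function name is hypothetical.

```python
import math

def mdl_bernoulli(bits):
    # two-part code length: data cost under the fitted Bernoulli model
    # plus (k/2) * log2(n) for the k = 1 estimated parameter
    n, ones = len(bits), sum(bits)
    p = min(max(ones / n, 1e-12), 1 - 1e-12)   # avoid log(0)
    data_cost = -sum(math.log2(p if b else 1 - p) for b in bits)
    param_cost = 0.5 * math.log2(n)
    return data_cost + param_cost

# a biased string gets a shorter total code than a balanced one
print(mdl_bernoulli([1, 1, 1, 1, 0, 1, 1, 1]))  # ~5.9 bits
print(mdl_bernoulli([1, 0, 1, 0, 1, 0, 1, 0]))  # ~9.5 bits
```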
Hierarchical correctness proofs for distributed algorithms This thesis introduces a new model for distributed computation in asynchronous networks, the input-output automaton. This simple, powerful model captures in a novel way the game-theoretical interaction between a system and its environment, and allows fundamental properties of distributed computation such as fair computation to be naturally expressed. Furthermore, this model can be used to construct modular, hierarchical correctness proofs of distributed algorithms. This thesis defines the input-output automaton model, and presents an interesting example of how this model can be used to construct such proofs.
Object-oriented and conventional analysis and design methodologies Three object-oriented analysis methodologies and three object-oriented design methodologies are reviewed and compared to one another. The authors' intent is to answer the question of whether emerging object-oriented analysis and design methodologies require incremental or radical changes on the part of prospective adopters. The evolution of conventional development methodologies is discussed, and three areas-system partitioning, end-to-end process modeling, and harvesting reuse-that appear to be strong candidates for further development work are presented.
Automatic synthesis of SARA design models from system requirements In this research in design automation, two views are employed as the requirements of a system, namely, the functional requirements and the operations concept. A requirement analyst uses data flow diagrams and system verification diagrams (SVDs) to represent the functional requirements and the operations concept, respectively. System Architect's Apprentice (SARA) is an environment-supported method for designing hardware and software systems. A knowledge-based system, called the design assistant, was built to help the system designer transform requirements stated in these languages into SARA design models. The SVD requirement specification features and the SARA design models are reviewed. The knowledge-based tool for synthesizing a particular domain of SARA design from the requirements is described, and an example is given to illustrate this synthesis process. This example shows the rules used and how they are applied. An evaluation of the approach is given.
A framework to support alignment of secure software engineering with legal regulations Regulation compliance is getting more and more important for software systems that process and manage sensitive information. Therefore, identifying and analysing relevant legal regulations and aligning them with security requirements become necessary for the effective development of secure software systems. Nevertheless, Secure Software Engineering Modelling Languages (SSEML) use different concepts and terminology from those used in the legal domain for the description of legal regulations. This situation, together with the lack of appropriate background and knowledge of laws and regulations, introduces a challenge for software developers. In particular, it makes it difficult to perform (i) the elicitation of appropriate security requirements from the relevant laws and regulations; and (ii) the correct tracing of the security requirements throughout the development stages. This paper presents a framework to support the consideration of laws and regulations during the development of secure software systems. In particular, the framework enables software developers (i) to correctly elicit security requirements from the appropriate laws and regulations; and (ii) to trace these requirements throughout the development stages in order to ensure that the design indeed supports the required laws and regulations. Our framework is based on existing work from the area of secure software engineering, and it complements this work with a novel and structured process and a well-defined method. A practical case study is employed to demonstrate the applicability of our work.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.069941
0.006934
0.004473
0.003333
0.001778
0.00075
0.000284
0.000106
0.000005
0
0
0
0
0
Computer-Aided Computing Formal program design methods are most useful when supported with suitable mechanization. This need for mechanization has long been apparent, but there have been doubts whether verification technology could cope with the problems of scale and complexity. Though there is very little compelling evidence either way at this point, several powerful mechanical verification systems are now available for experimentation. Using SRI's PVS as one representative example, we argue that the technology of...
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096-processor system using 1990s technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
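The fetch-and-add primitive central to this design can be sketched as follows. The lock-based simulation is only illustrative: the Ultracomputer's point is that combining in the network lets many concurrent fetch-and-adds proceed without this serialization.

```python
import threading

class FetchAndAddCell:
    # simulated fetch-and-add: atomically return the old value and
    # add delta; the Ultracomputer realizes this in hardware so that
    # concurrent requests combine instead of serializing on a lock
    def __init__(self, value=0):
        self._value = value
        self._lock = threading.Lock()

    def fetch_and_add(self, delta):
        with self._lock:
            old = self._value
            self._value += delta
            return old

counter = FetchAndAddCell()
slot = counter.fetch_and_add(1)   # e.g. claim a unique queue slot
```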
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
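For readers unfamiliar with the predicate-transformer view assumed here, a one-line worked example of Dijkstra's weakest precondition over an integer variable x:

```latex
\[
wp(x := x + 1,\; x > 0) \;\equiv\; (x + 1 > 0) \;\equiv\; (x \geq 0)
\]
```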
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Towards an Automatic Integration of Statecharts The integration of statecharts is part of an integration methodology for object oriented views. Statecharts are the most important language for the representation of the behaviour of objects and are used in many object oriented modeling techniques, e.g. in UML ([23]). In this paper we focus on the situation where the behaviour of an object type is represented in several statecharts, which have to be integrated into a single statechart. The presented approach allows an automatic integration process but gives the designer the possibility to make their own decisions to guide the integration process and to achieve qualitative design goals.
A Conceptual Graph Model for W3C Resource Description Framework With the aim of building a "Semantic Web", the content of the documents must be explicitly represented through metadata in order to enable contents-guided search. Our approach is to exploit a standard language (RDF, recommended by W3C) for expressing such metadata and to interpret these metadata in conceptual graphs (CG) in order to exploit querying and inferencing capabilities enabled by CG formalism. The paper presents our mapping of RDF into CG and its interest in the context of the semantic Web.
Procedures and atomicity refinement The introduction of an early return from a (remote) procedure call can increase the degree of parallelism in a parallel or distributed algorithm modeled by an action system. We define a return statement for procedures in an action systems framework and show that it corresponds to carrying out an atomicity refinement.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
A semantic network representation of personal construct systems A method is presented for transforming and combining heuristic knowledge gathered from multiple domain experts into a common semantic network representation. Domain expert knowledge is gathered with an interviewing tool based on personal construct theory. The problem of expressing and using a large body of knowledge is fundamental to artificial intelligence and its application to knowledge-based or expert systems. The semantic network is a powerful, general representation that has been used as a tool for the definition of other knowledge representations. Combining multiple approaches to a domain of knowledge may reinforce mutual experiences, information, facts, and heuristics, yet still retain unique, specialist knowledge gained from different experiences. An example application of the algorithm is presented in two separate expert domains
Yoda: a framework for the conceptual design of VLSI systems As the complexity of the VLSI design process grows, it becomes increasingly more costly to conduct design in a trial-and-error fashion because the number of possible design alternatives, as well as the cost of a complete synthesis and fabrication cycle, increase dramatically. A conceptual design addresses this problem by allowing the designer to conduct initial feasibility studies, giving guidance on the most promising design alternatives with a preliminary indication of estimated performance. The authors describe a general framework that supports this conceptual design and a particular instance of such a framework, called Yoda, that supports the conceptual design phase for digital signal processing filters.
A conceptual framework for ASIC design An attempt is made to gain a better understanding of the nature of ASIC (application-specific integrated circuit) design. This is done from a decision-making perspective, in terms of three knowledge frames: the design process, the design hyperspace, and the design repertoire. The design process frame emphasizes the hierarchical design approach and presents the methodology as a formalization of the design process. The design hyperspace concept relates to the recognition of design alternatives. Analysis techniques for evaluating algorithmic and architectural alternatives are collected and classified to form the design repertoire. This conceptual framework is an effective instrument for bridging the widening gap between system designers and VLSI technology. It also provides a conceptual platform for the development of tools for high-level architectural designs.
An Overview of KRL, a Knowledge Representation Language
Implementing Remote procedure calls Remote procedure calls (RPC) are a useful paradigm for providing communication across a network between programs written in a high level language. This paper describes a package, written as part of the Cedar project, providing a remote procedure call facility. The paper describes the options that face a designer of such a package, and the decisions we made. We describe the overall structure of our RPC mechanism, our facilities for binding RPC clients, the transport level communication protocol, and some performance measurements. We include descriptions of some optimisations we used to achieve high performance and to minimize the load on server machines that have many clients. Our primary aim in building an RPC package was to make the building of distributed systems easier. Previous protocols were sufficiently hard to use that only members of a select group of communication experts were willing to undertake the construction of distributed systems. We hoped to overcome this by providing a communication paradigm as close as possible to the familiar facilities of our high level languages. To achieve this aim, we concentrated on making remote calls efficient, and on making the semantics of remote calls as close as possible to those of local calls.
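The stub idea the paper describes (making a remote call look like a local one) can be sketched with Python's standard xmlrpc modules; the procedure name and port here are illustrative assumptions, not the Cedar package's interface. Binding, transport, and the efficiency concerns the paper discusses are hidden behind the two stub objects.

```python
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy
import threading

def add(a, b):                      # the remote procedure
    return a + b

# server side: register the procedure and serve in the background
server = SimpleXMLRPCServer(("localhost", 8000), logRequests=False)
server.register_function(add, "add")
threading.Thread(target=server.serve_forever, daemon=True).start()

# client side: the proxy marshals arguments, sends them over the
# network, and unmarshals the result, so the call reads like a local one
proxy = ServerProxy("http://localhost:8000")
assert proxy.add(2, 3) == 5
```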
Alloy: a lightweight object modelling notation Alloy is a little language for describing structural properties. It offers a declaration syntax compatible with graphical object models, and a set-based formula syntax powerful enough to express complex constraints and yet amenable to a fully automatic semantic analysis. Its meaning is given by translation to an even smaller (formally defined) kernel. This paper presents the language in its entirety, and explains its motivation, contributions and deficiencies.
Semantic grammar: an engineering technique for constructing natural language understanding systems One of the major stumbling blocks to more effective use of computers by naive users is the lack of natural means of communication between the user and the computer system. This report discusses a paradigm for constructing efficient and friendly man-machine interface systems involving subsets of natural language for limited domains of discourse. As such this work falls somewhere between highly constrained formal language query systems and unrestricted natural language understanding systems. The primary purpose of this research is not to advance our theoretical understanding of natural language but rather to put forth a set of techniques for embedding both semantic/conceptual and pragmatic information into a useful natural language interface module. Our intent has been to produce a front-end system which enables the user to concentrate on his problem or task rather than making him worry about how to communicate his ideas or questions to the machine.
Recursive functions of symbolic expressions and their computation by machine, Part I This paper, in LaTeX, was partly supported by ARPA (ONR) grant N00014-94-1-0775 to Stanford University, where John McCarthy has been since 1962. Copied with minor notational changes from CACM, April 1960. If you want the exact typography, look there. Current address: John McCarthy, Computer Science Department, Stanford, CA 94305 (email: jmc@cs.stanford.edu, URL: http://www-formal.stanford.edu/jmc/). The paper proceeds by starting with the class of expressions called S-expressions and the functions called...
A study of cross-validation and bootstrap for accuracy estimation and model selection We review accuracy estimation methods and compare the two most common methods: cross-validation and bootstrap. Recent experimental results on artificial data and theoretical results in restricted settings have shown that for selecting a good classifier from a set of classifiers (model selection), ten-fold cross-validation may be better than the more expensive leave-one-out cross-validation. We report on a large-scale experiment, over half a million runs of C4.5 and a Naive-Bayes algorithm, to estimate the effects of different parameters on these algorithms on real-world datasets. For cross-validation we vary the number of folds and whether the folds are stratified or not; for bootstrap, we vary the number of bootstrap samples. Our results indicate that for real-world datasets similar to ours, the best method to use for model selection is ten-fold stratified cross-validation, even if computation power allows using more folds.
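The recommended procedure, ten-fold stratified cross-validation, looks like this as a short sketch; scikit-learn and its iris dataset are assumed only for illustration, and any classifier with fit/predict would do.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.naive_bayes import GaussianNB

X, y = load_iris(return_X_y=True)
# stratification keeps each fold's class proportions close to the full set's
folds = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(GaussianNB(), X, y, cv=folds)
print(scores.mean())   # averaged fold accuracy estimates true accuracy
```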
A Theory of Prioritizing Composition An operator for the composition of two processes, where one process has priority over the other process, is studied. Processes are described by action systems, and data refinement is used for transforming processes. The operator is shown to be compositional, i.e. monotonic with respect to refinement. It is argued that this operator is adequate for modelling priorities as found in programming languages and operating systems. Rules for introducing priorities and for raising and lowering priorities of processes are given. Dynamic priorities are modelled with special priority variables which can be freely mixed with other variables and the prioritising operator in program development. A number of applications show the use of prioritising composition for modelling and specification in general.
An ontological model of an information system An ontological model of an information system that provides precise definitions of fundamental concepts like system, subsystem, and coupling is proposed. This model is used to analyze some static and dynamic properties of an information system and to examine the question of what constitutes a good decomposition of an information system. Some of the major types of information system formalisms that bear on the authors' goals and their respective strengths and weaknesses relative to the model are briefly reviewed. Also articulated are some of the fundamental notions that underlie the model. Those basic notions are then used to examine the nature and some dynamics of system decomposition. The model's predictive power is discussed.
DOODLE: a visual language for object-oriented databases In this paper we introduce DOODLE, a new visual and declarative language for object-oriented databases. The main principle behind the language is that it is possible to display and query the database with arbitrary pictures. We allow the user to tailor the display of the data to suit the application at hand or her preferences. We want the user-defined visualizations to be stored in the database, and the language to express all kinds of visual manipulations. For extendibility reasons, the language is object-oriented. The semantics of the language is given by a well-known deductive query language for object-oriented databases. We hope that the formal basis of our language will contribute to the theoretical study of database visualizations and visual query languages, a subject that we believe is of great interest, but largely left unexplored.
Developing Mode-Rich Satellite Software by Refinement in Event B To ensure dependability of on-board satellite systems, the designers should, in particular, guarantee correct implementation of the mode transition scheme, i.e., ensure that the states of the system components are consistent with the global system mode. However, there is still a lack of scalable approaches to formal verification of correctness of complex mode transitions. In this paper we present a formal development of an Attitude and Orbit Control System (AOCS) undertaken within the ICT DEPLOY project. AOCS is a complex mode-rich system, which has an intricate mode-transition scheme. We show that refinement in Event B provides the engineers with a scalable formal technique that enables both development of mode-rich systems and proof-based verification of their mode consistency.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.2
0.2
0.2
0
0
0
0
0
0
0
0
0
0
0
Motion compensation for block-based lossless video coding using lattice-based binning A block-based lossless video coding scheme using the notion of binning has been proposed in prior work. To further improve the compression and reduce the complexity, in this paper we investigate the impact of two sub-optimal motion search algorithms on the performance of this lattice-based scheme. While one of the algorithms tries to avoid motion vectors, the other tries to reduce complexity. Our experimental results have demonstrated that the loss due to sub-optimal motion search outweighs the gain when motion vectors are avoided. However, experimental results have shown that there is negligible performance loss when the low-complexity sub-optimal three-step search is used.
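A sketch of the classic three-step search used here as the low-complexity sub-optimal method: probe the 8 neighbours of the current best vector with a step size that halves each round, minimizing the sum of absolute differences. The SAD cost and 8x8 block size are conventional assumptions, not details from the paper, and the block is assumed to lie far enough from the frame borders that all candidate displacements stay inside the reference frame.

```python
import numpy as np

def sad(ref, cur, bx, by, dx, dy, n=8):
    # sum of absolute differences between the current n x n block
    # and the block displaced by (dx, dy) in the reference frame
    patch = ref[by + dy:by + dy + n, bx + dx:bx + dx + n]
    return np.abs(patch - cur[by:by + n, bx:bx + n]).sum()

def three_step_search(ref, cur, bx, by, n=8):
    best, step = (0, 0), 4
    while step >= 1:
        candidates = [(best[0] + i * step, best[1] + j * step)
                      for i in (-1, 0, 1) for j in (-1, 0, 1)]
        best = min(candidates, key=lambda mv: sad(ref, cur, bx, by, *mv, n))
        step //= 2
    return best   # estimated motion vector (dx, dy)
```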
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096-processor system using 1990s technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system, resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object-oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
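A stripped-down sketch of the tabu-search skeleton for 0/1 problems described above: one-variable flips, a tabu tenure on recently flipped variables, and an objective penalized by an integer-infeasibility measure. The penalty weight and tenure are illustrative assumptions, and the advanced strategies (aspiration criteria, learning, target analysis) are omitted.

```python
def tabu_knapsack(values, weights, capacities, iters=1000, tenure=7):
    # weights: one row of per-item weights per constraint
    n = len(values)
    x = [0] * n
    tabu = [0] * n          # iteration until which flipping i is tabu

    def score(sol):
        value = sum(v for v, s in zip(values, sol) if s)
        excess = sum(max(0, sum(w[i] for i in range(n) if sol[i]) - c)
                     for w, c in zip(weights, capacities))
        return value - 1000 * excess   # heavy infeasibility penalty

    best, best_score = x[:], score(x)
    for it in range(iters):
        moves = [i for i in range(n) if tabu[i] <= it]
        if not moves:
            continue
        # greedily pick the best non-tabu single-variable flip
        i = max(moves, key=lambda j: score(x[:j] + [1 - x[j]] + x[j + 1:]))
        x[i] = 1 - x[i]
        tabu[i] = it + tenure
        if score(x) > best_score:
            best, best_score = x[:], score(x)
    return best

best = tabu_knapsack(values=[10, 7, 3, 9],
                     weights=[[4, 2, 1, 3], [2, 3, 2, 1]],
                     capacities=[6, 5])
```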
Power Aware System Refinement We propose a formal, power-aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the timewise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In refining an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
On the Definition of Visual Languages and Their Editors Different diagrammatic languages are concrete variants of a core metamodel which specifies the way in which to express relations, and which is the basis for a semantic interpretation. In this paper, we identify families of diagrammatic languages exploiting the notion of metamodel as introduced in UML, i.e. through an abstract syntax, given as a class diagram, and a set of constraints in a logical language. The abstract syntax constrains the types of expressable relations and the types and multiplicities of the participating entities. The constraints express contextual and global properties of the relations and their participants. We propose a set of metamodels describing common types of diagrammatic languages. The advantages of this proposal are manifold: the analysis of constraints in the metamodel can be used to assess the adequacy of a type of language to a domain semantics and it is possible to check whether a concrete notation or syntax complies with the metamodel or introduces unforeseen constraints. Finally, we discuss how this characterisation allows the definition of flexible editors for concrete diagrammatic languages, where a specific editor results from the specialisation of some high-level construction primitives for the relevant family of languages.
An Algebraic Foundation for Higraphs Higraphs, which are structures extending graphs by permitting a hierarchy of nodes, underlie a number of diagrammatic formalisms popular in computing. We provide an algebraic account of higraphs (and of a mild extension), with our main focus being on the mathematical structures underlying common operations, such as those required for understanding the semantics of higraphs and Statecharts, and for implementing sound software tools which support them.
Statecharts: A visual formalism for complex systems Abstract. We present a broad extension of the conventional formalism of state machines and state diagrams, that is relevant to the specification and design of complex discrete-event systems, such as multi-computer real-time systems, communication protocols and digital control units. Our diagrams, which we call statecharts, extend conventional state-transition diagrams with essentially three elements, dealing, respectively, with the notions of hierarchy, concurrency and communication. These transform the language of state diagrams into a highly structured and economical description language. Statecharts are thus compact and expressive (small diagrams can express complex behavior) as well as compositional and modular. When coupled with the capabilities of computerized graphics, statecharts enable viewing the description at different levels of detail, and make even very large specifications manageable and comprehensible. In fact, we intend to demonstrate here that statecharts counter many of the objections raised against conventional state diagrams, and thus appear to render specification by diagrams an attractive and plausible approach. Statecharts can be used either as a stand-alone behavioral description or as part of a more general design methodology that deals also with the system's other aspects, such as functional decomposition and data-flow specification. We also discuss some practical experience that was gained over the last three years in applying the statechart formalism to the specification of a particularly complex system.
An Overview of KRL, a Knowledge Representation Language
Implementing Remote procedure calls Remote procedure calls (RPC) are a useful paradigm for providing communication across a network between programs written in a high level language. This paper describes a package, written as part of the Cedar project, providing a remote procedure call facility. The paper describes the options that face a designer of such a package, and the decisions we made. We describe the overall structure of our RPC mechanism, our facilities for binding RPC clients, the transport level communication protocol, and some performance measurements. We include descriptions of some optimisations we used to achieve high performance and to minimize the load on server machines that have many clients. Our primary aim in building an RPC package was to make the building of distributed systems easier. Previous protocols were sufficiently hard to use that only members of a select group of communication experts were willing to undertake the construction of distributed systems. We hoped to overcome this by providing a communication paradigm as close as possible to the familiar facilities of our high level languages. To achieve this aim, we concentrated on making remote calls efficient, and on making the semantics of remote calls as close as possible to those of local calls.
Alloy: a lightweight object modelling notation Alloy is a little language for describing structural properties. It offers a declaration syntax compatible with graphical object models, and a set-based formula syntax powerful enough to express complex constraints and yet amenable to a fully automatic semantic analysis. Its meaning is given by translation to an even smaller (formally defined) kernel. This paper presents the language in its entirety, and explains its motivation, contributions and deficiencies.
A calculus of refinements for program derivations A calculus of program refinements is described, to be used as a tool for the step-by-step derivation of correct programs. A derivation step is considered correct if the new program preserves the total correctness of the old program. This requirement is expressed as a relation of (correct) refinement between nondeterministic program statements. The properties of this relation are studied in detail. The usual sequential statement constructors are shown to be monotone with respect to this relation and it is shown how refinement between statements can be reduced to a proof of total correctness of the refining statement. A special emphasis is put on the correctness of replacement steps, where some component of a program is replaced by another component. A method by which assertions can be added to statements to justify replacements in specific contexts is developed. The paper extends the weakest precondition technique of Dijkstra to proving correctness of larger program derivation steps, thus providing a unified framework for the axiomatic, the stepwise refinement and the transformational approach to program construction and verification.
Symbolic Model Checking Symbolic model checking is a powerful formal specification and verification method that has been applied successfully in several industrial designs. Using symbolic model checking techniques it is possible to verify industrial-size finite state systems. State spaces with up to 10^30 states can be exhaustively searched in minutes. Models with more than 10^120 states have been verified using special techniques.
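The core fixpoint iteration behind such verification can be sketched as follows; real tools represent the state sets and the transition relation with BDDs, whereas the explicit Python sets here are only for illustration.

```python
def reachable(initial, transitions):
    # transitions: set of (state, successor) pairs
    states = set(initial)
    while True:
        # image of the current state set under the transition relation
        image = {t for (s, t) in transitions if s in states}
        new = states | image
        if new == states:      # fixpoint reached: no new states
            return states
        states = new

# toy 3-state system: 0 -> 1 -> 2, 2 -> 2
print(reachable({0}, {(0, 1), (1, 2), (2, 2)}))   # {0, 1, 2}
```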
2009 Data Compression Conference (DCC 2009), 16-18 March 2009, Snowbird, UT, USA
Hex-splines: a novel spline family for hexagonal lattices This paper proposes a new family of bivariate, nonseparable splines, called hex-splines, especially designed for hexagonal lattices. The starting point of the construction is the indicator function of the Voronoi cell, which is used to define in a natural way the first-order hex-spline. Higher order hex-splines are obtained by successive convolutions. A mathematical analysis of this new bivariate spline family is presented. In particular, we derive a closed form for a hex-spline of arbitrary order. We also discuss important properties, such as their Fourier transform and the fact they form a Riesz basis. We also highlight the approximation order. For conventional rectangular lattices, hex-splines revert to classical separable tensor-product B-splines. Finally, some prototypical applications and experimental results demonstrate the usefulness of hex-splines for handling hexagonally sampled data.
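Since the first-order hex-spline is the indicator function of the lattice's Voronoi cell, evaluating it amounts to a point-in-hexagon test; the orientation and unit inradius below are illustrative assumptions, and higher orders would follow by successive convolution.

```python
def hex_spline_1(x, y):
    # indicator of a regular hexagon with flat top and bottom edges
    # and inradius 1/2 (the Voronoi cell of the origin)
    ax, ay = abs(x), abs(y)
    if ay > 0.5:                        # above/below the horizontal edges
        return 0.0
    if (3 ** 0.5) * ax + ay > 1.0:      # outside the four slanted edges
        return 0.0
    return 1.0

assert hex_spline_1(0.0, 0.0) == 1.0
assert hex_spline_1(1.0, 0.0) == 0.0    # beyond the vertex at x = 1/sqrt(3)
```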
Repository support for multi-perspective requirements engineering Relationships among different modeling perspectives have been systematically investigated focusing either on given notations (e.g. UML) or on domain reference models (e.g. ARIS/SAP). In contrast, many successful informal methods for business analysis and requirements engineering (e.g. JAD) emphasize team negotiation, goal orientation and flexibility of modeling notations. This paper addresses the question how much formal and computerized support can be provided in such settings without destroying their creative tenor. Our solution is based on a novel modeling language, M-Telos, that integrates the adaptability and analysis advantages of the logic-based meta modeling language Telos with a module concept covering the structuring mechanisms of scalable software architectures. It comprises four components: (1) A modular conceptual modeling formalism organizes individual perspectives and their interrelationships. (2) Perspective schemata are linked to a conceptual meta meta model of shared domain terms, thus giving the architecture a semantic meaning and enabling adaptability and extensibility of the network of perspectives. (3) Inconsistency management across perspectives is handled in a goal-oriented manner, by formalizing analysis goals as meta rules which are automatically customized to perspective schemata. (4) Continuous incremental maintenance of inconsistency information is provided by exploiting recent view maintenance techniques from deductive databases. The approach has been implemented as an extension to the ConceptBase meta database management system (available at http://www-i5.Informatik.RWTH-Aachen.de/Cbdor/index.html) and has been applied in a number of real-world requirements engineering projects.
Characterizing plans as a set of constraints—the model—a framework for comparative analysis This paper presents an approach to representing and manipulating plans based on a model of plans as a set of constraints. The <I-N-OVA> model is used to characterise the plan representation used within O-Plan and to relate this work to emerging formal analyses of plans and planning. This synergy of practical and formal approaches can stretch the formal methods to cover realistic plan representations as needed for real problem solving, and can improve the analysis that is possible for production planning systems. <I-N-OVA> is intended to act as a bridge to improve dialogue between a number of communities working on formal planning theories, practical planning systems and systems engineering process management methodologies. It is intended to support new work on automatic manipulation of plans, human communication about plans, principled and reliable acquisition of plan information, and formal reasoning about plans.
Maintaining a legacy: towards support at the architectural level An organization that develops large, software intensive systems with a long lifetime will encounter major changes in the market requirements, the software development environment, including its platform, and the target platform. In order to meet the challenges associated with these changes, software development has to undergo major changes as well. Especially when these systems are successful, and hence become an asset, particular care shall be taken to maintain this legacy; large systems with a long lifetime tend to become very complex and difficult to understand. Software architecture plays a vital role in the development of large software systems. For the purpose of maintenance, an up-to-date explicit description of the software architecture of a system supports understanding and comprehension of it, amongst other things. However, many large, complex systems do not have an up-to-date documented software architecture. Particularly in cases where these systems have a long lifetime, the (natural) turnover of personnel will make it very likely that many employees contributing to previous generations of the system are no longer available. A need to 'recover' the software architecture of the system may become prevalent, facilitating the understanding of the system, providing ways to improve its maintainability and quality and to control architectural changes. This paper gives an overview of an on-going effort to improve the maintainability and quality of a legacy system, and describes the recent introduction of support at the architectural level for program understanding and complexity control.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.2
0.04
0.00081
0
0
0
0
0
0
0
0
0
0
0
On the analysis needs when verifying state-based software requirements: an experience report In a previous investigation we formally defined procedures for analyzing hierarchical state-based requirements specifications for two properties: (1) completeness with respect to a set of criteria related to robustness (a response is specified for every possible input and input sequence) and (2) consistency (the specification is free from conflicting requirements and undesired nondeterminism). Informally, the analysis involves determining if large Boolean expressions are tautologies. We implemented the analysis procedures in a prototype tool and evaluated their effectiveness and efficiency on a large real world requirements specification expressed in an hierarchical state-based language called Requirements State Machine Language. Although our initial approach was largely successful, there were some drawbacks with the original tools. In our initial implementation we abstracted all formulas to propositional logic. Unfortunately, since we are manipulating the formulas without interpreting any of the functions in the individual predicates, the abstraction can lead to large numbers of spurious (or false) error reports. To increase the accuracy of our analysis we have continually refined our tool with decision procedures and, finally, come to the conclusion that theorem proving is often needed to avoid large numbers of spurious error reports. This paper discusses the problems with spurious error reports and describes our experiences analyzing a large commercial avionics system for completeness and consistency.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096-processor system using 1990s technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
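A minimal sketch of the tabu search ingredients named above (a recency-based tabu list and an aspiration criterion), applied to a toy single-constraint knapsack; the instance data, tenure, and penalty are invented for illustration, and the paper's choice rules, multiconstraint handling, and learning strategies are not modeled.

```python
# Tabu search over bit-flip neighborhoods for a toy 0/1 knapsack.
values  = [10, 13, 7, 8, 12]
weights = [ 5,  6, 3, 4,  6]
capacity = 14
TENURE, ITERS = 3, 200  # assumed parameters

def evaluate(x):
    # Objective with a linear penalty as the integer-infeasibility measure.
    w = sum(wi for wi, xi in zip(weights, x) if xi)
    v = sum(vi for vi, xi in zip(values, x) if xi)
    return v if w <= capacity else v - 100 * (w - capacity)

x = [0] * len(values)
best, best_val = x[:], evaluate(x)
tabu = {}  # variable index -> first iteration at which flipping it is allowed again

for it in range(ITERS):
    candidates = []
    for i in range(len(x)):
        y = x[:]
        y[i] ^= 1
        val = evaluate(y)
        # Aspiration criterion: a tabu move is admissible if it beats the incumbent.
        if tabu.get(i, 0) <= it or val > best_val:
            candidates.append((val, i, y))
    val, i, x = max(candidates)  # best admissible move, even if non-improving
    tabu[i] = it + TENURE
    if val > best_val:
        best, best_val = x[:], val

print(best, best_val)
```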
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time-wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
CREWS: Towards Systematic Usage of Scenarios, Use Cases and Scenes In the wake of object-oriented software engineering, use cases have gained enormous popularity as tools for bridging the gap between electronic business management and information systems engineering. A wide variety of practices has emerged but their relationships to each other, and with respect to the traditional change management process, are poorly understood. The ESPRIT Long Term Research Project CREWS (Cooperative Requirements Engineering With Scenarios) has conducted surveys of the...
Distributed Intelligent Agents In Retsina, the authors have developed a distributed collection of software agents that cooperate asynchronously to perform goal-directed information retrieval and integration for supporting a variety of decision-making tasks. Examples for everyday organizational decision making and financial portfolio management demonstrate its effectiveness.
Validating Requirements for Fault Tolerant Systems using Model Checking Model checking is shown to be an effective tool in validating the behavior of a fault tolerant embedded spacecraft controller. The case study presented here shows that by judiciously abstracting away extraneous complexity, the state space of the model could be exhaustively searched allowing critical functional requirements to be validated down to the design level. Abstracting away detail not germane to the problem of interest leaves by definition a partial specification behind. The success of this procedure shows that it is feasible to effectively validate a partial specification with this technique. Three anomalies were found in the system. One was an error in the detailed requirements, and the other two were missing/ambiguous requirements. Because the method allows validation of partial specifications, it is also an effective approach for maintaining fidelity between a co-evolving specification and an implementation.
Repository support for multi-perspective requirements engineering Relationships among different modeling perspectives have been systematically investigated focusing either on given notations (e.g. UML) or on domain reference models (e.g. ARIS/SAP). In contrast, many successful informal methods for business analysis and requirements engineering (e.g. JAD) emphasize team negotiation, goal orientation and flexibility of modeling notations. This paper addresses the question of how much formal and computerized support can be provided in such settings without destroying their creative tenor. Our solution is based on a novel modeling language, M-Telos, that integrates the adaptability and analysis advantages of the logic-based meta modeling language Telos with a module concept covering the structuring mechanisms of scalable software architectures. It comprises four components: (1) A modular conceptual modeling formalism organizes individual perspectives and their interrelationships. (2) Perspective schemata are linked to a conceptual meta meta model of shared domain terms, thus giving the architecture a semantic meaning and enabling adaptability and extensibility of the network of perspectives. (3) Inconsistency management across perspectives is handled in a goal-oriented manner, by formalizing analysis goals as meta rules which are automatically customized to perspective schemata. (4) Continuous incremental maintenance of inconsistency information is provided by exploiting recent view maintenance techniques from deductive databases. The approach has been implemented as an extension to the ConceptBase meta database management system (available through the web site http://www-i5.Informatik.RWTH-Aachen.de/Cbdor/index.html) and has been applied in a number of real-world requirements engineering projects.
Designing And Building A Negotiating Automated Agent Negotiations are very important in a multiagent environment, particularly, in an environment where there are conflicts between the agents, and cooperation would be beneficial. We have developed a general structure for a Negotiating Automated Agent that consists of five modules: a Prime Minister, a Ministry of Defense, a Foreign Office, a Headquarters and Intelligence. These modules are implemented using a dynamic set of local agents belonging to the different modules. We used this structure to develop a Diplomacy player, Diplomat. Playing Diplomacy involves a certain amount of technical skills as in other board games, but the capacity to negotiate, explain, convince, promise, keep promises or break them, is an essential ingredient in good play. Diplomat was evaluated and consistently played better than human players.
Patterns of large software systems: failure and success Software management consultants have something in common with physicians: both are much more likely to be called in when there are serious problems rather than when everything is fine. Examining large software systems (those in excess of 5000 function points, which is roughly 500000 source code statements in a procedural programming language such as Cobol or Fortran) that are in trouble is very common for management consultants. Unfortunately, the systems are usually already late, over budget, and showing other signs of acute distress before the study begins. The consultant engagements, therefore, serve to correct the problems and salvage the system, if indeed salvaging is possible. The failure or cancellation rate of large software systems is over 20 percent. Of those that are completed, about two thirds experience schedule delays and cost overruns that may approach 100 percent. Roughly the same number are plagued by low reliability and quality problems in the first year of deployment. Yet some large systems finish early, meet their budgets, and have few, if any, quality problems. How do these projects succeed, when so many fail?
On formal aspects of electronic (or digital) commerce: examples of research issues and challenges The notion of electronic or digital commerce is gaining widespread popularity. By and large, these developments are being led by industry and government, with academic research following these trends in the form of empirical and economic research. Much more fundamental improvements to (global) commerce are possible, but are presently being overlooked for lack of adequate formal theories, representations and tools. This paper attempts to incite research in these directions.
Using the WinWin Spiral Model: A Case Study At the 1996 and 1997 International Conferences on Software Engineering, three of the six keynote addresses identified negotiation techniques as the most critical success factor in improving the outcome of software projects. The USC Center for Software Engineering has been developing a negotiation-based approach to software system requirements engineering, architecture, development, and management. This approach has three primary elements: Theory W, a management theory and approach, which says that making winners of the system's key stakeholders is a necessary and sufficient condition for project success. The WinWin spiral model, which extends the spiral software development model by adding Theory W activities to the front of each cycle. WinWin, a groupware tool that makes it easier for distributed stakeholders to negotiate mutually satisfactory (win-win) system specifications. This article describes an experimental validation of this approach, focusing on the application of the WinWin spiral model. The case study involved extending USC's Integrated Library System to access multimedia archives, including films, maps, and videos. The study showed that the WinWin spiral model is a good match for multimedia applications and is likely to be useful for other applications with similar characteristics--rapidly moving technology, many candidate approaches, little user or developer experience with similar systems, and the need for rapid completion.
STeP: Deductive-Algorithmic Verification of Reactive and Real-Time Systems. The Stanford Temporal Prover, STeP, combines deductive methods with algorithmic techniques to verify linear-time temporal logic specifications of reactive and real-time systems. STeP uses verification rules, verification diagrams, automatically generated invariants, model checking, and a collection of decision procedures to verify finite- and infinite-state systems. System Description: The Stanford Temporal Prover, STeP, supports the computer-aided formal verification of reactive, real-time...
Quantitative evaluation of software quality The study reported in this paper establishes a conceptual framework and some key initial results in the analysis of the characteristics of software quality. Its main results and conclusions are: • Explicit attention to characteristics of software quality can lead to significant savings in software life-cycle costs. • The current software state-of-the-art imposes specific limitations on our ability to automatically and quantitatively evaluate the quality of software. • A definitive hierarchy of well-defined, well-differentiated characteristics of software quality is developed. Its higher-level structure reflects the actual uses to which software quality evaluation would be put; its lower-level characteristics are closely correlated with actual software metric evaluations which can be performed. • A large number of software quality-evaluation metrics have been defined, classified, and evaluated with respect to their potential benefits, quantifiability, and ease of automation. • Particular software life-cycle activities have been identified which have significant leverage on software quality. Most importantly, we believe that the study reported in this paper provides for the first time a clear, well-defined framework for assessing the often slippery issues associated with software quality, via the consistent and mutually supportive sets of definitions, distinctions, guidelines, and experiences cited. This framework is certainly not complete, but it has been brought to a point sufficient to serve as a viable basis for future refinements and extensions.
An Approach to Fair Applicative Multiprogramming This paper presents a brief formal semantics of constructors for ordered sequences (cons) and for unordered multisets (frons) followed by a detailed operational semantics for both. A multiset is a generalization of a list structure which lacks order a priori; its order is determined by the a posteriori migration of computationally convergent elements to the front. The introductory material includes an example which demonstrates that a multiset of yet-unconverged values and a timing primitive may be used to implement the scheduler for an operating system in an applicative style. The operational semantics, given in PASCAL-like code, is described in two detailed steps: first a uniprocessor implementation of the cons/frons constructors and the first/rest probes, followed by an extension to a multiprocessor implementation. The center of either implementation is the EUREKA structure transformation, which brings convergent elements to the fore while preserving order of shared structures. The multiprocessor version is designed to run on an arbitrary number of processors with only one semaphore but makes heavy use of the sting memory store primitive. Stinging is a conditional store operation which is carried out independently of its dispatching processor so that shared nodes may be somewhat altered without interfering with other processors. An appendix presents the extension of this code to a fair implementation of multisets.
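The a-posteriori ordering idea described above can be sketched with ordinary futures: `first` over a multiset yields whichever element converges earliest. This emulation with threads is only an assumed analogy, not the paper's PASCAL-like cons/frons semantics or its EUREKA transformation.

```python
# Elements of a "multiset" ordered a posteriori by convergence time.
from concurrent.futures import ThreadPoolExecutor, as_completed
import time

def slow(n, delay):
    time.sleep(delay)  # stand-in for a computation that converges after `delay`
    return n

with ThreadPoolExecutor() as pool:
    multiset = [pool.submit(slow, n, d) for n, d in [(1, 0.3), (2, 0.1), (3, 0.2)]]
    ordered = [f.result() for f in as_completed(multiset)]

print(ordered)  # [2, 3, 1]: convergent elements migrate to the front
```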
Some properties of sequential predictors for binary Markov sources Universal prediction of the next outcome of a binary sequence drawn from a Markov source with unknown parameters is considered. For a given source, the predictability is defined as the least attainable expected fraction of prediction errors. A lower bound is derived on the maximum rate at which the predictability is asymptotically approached uniformly over all sources in the Markov class. This bound is achieved by a simple majority predictor. For Bernoulli sources, bounds on the large deviations performance are investigated. A lower bound is derived for the probability that the fraction of errors will exceed the predictability by a prescribed amount Δ>0. This bound is achieved by the same predictor if Δ is sufficiently small.
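A small sketch of the majority predictor mentioned above, specialized to a first-order binary Markov source: for each context (the previous bit), predict the outcome seen most often so far in that context. The tie-breaking rule and the test sequence are assumptions.

```python
# Context-wise majority prediction for a binary sequence.
def majority_error_rate(bits):
    counts = {0: [0, 0], 1: [0, 0]}  # context -> [#zeros seen, #ones seen]
    errors = 0
    for prev, cur in zip(bits, bits[1:]):
        guess = 1 if counts[prev][1] >= counts[prev][0] else 0  # majority, ties -> 1
        errors += (guess != cur)
        counts[prev][cur] += 1
    return errors / max(1, len(bits) - 1)

seq = [0, 1] * 5  # deterministic alternating source
print(majority_error_rate(seq))  # low: the per-context majority learns the pattern
```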
Program Construction by Parts. Given a specification that includes a number of user requirements, we wish to focus on the requirements in turn, and derive a partly defined program for each; then combine all the partly defined programs into a single program that satisfies all the requirements simultaneously. In this paper we introduce a mathematical basis for solving this problem, and we illustrate it by means of a simple example. We propose a program construction method whereby, given a...
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
1.20067
0.20067
0.20067
0.20067
0.20067
0.20067
0.100339
0.050301
0.020181
0.0001
0
0
0
0
A Low-Complexity Bit-Plane Entropy Coding and Rate Control for 3-D DWT Based Video Coding This paper is dedicated to fast video coding based on the three-dimensional discrete wavelet transform. First, we propose a novel low-complexity bit-plane entropy coding of wavelet subbands based on a Levenstein zero-run coder for low-entropy contexts and an adaptive binary range coder for other contexts. Second, we propose a rate-distortion efficient criterion for skipping 2-D wavelet transforms and entropy encoding based on the parent-child subband tree. Finally, we propose a one-pass rate control which uses a virtual buffer concept for adaptive Lagrange multiplier selection. Simulation results show that the proposed video codec has a much lower computational complexity (from 2 to 6 times lower) for the same quality level compared to the H.264/AVC standard in the low-complexity mode.
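For orientation, bit-plane coding operates on the binary planes of the subband coefficients; a minimal decomposition sketch follows, with invented coefficient values standing in for a wavelet subband (the Levenstein and range coders themselves are not reproduced here).

```python
# Decompose a stand-in subband into bit planes, most significant plane first.
import numpy as np

coeffs = np.array([[5, 3], [12, 7]], dtype=np.uint8)  # assumed magnitudes
planes = [(coeffs >> b) & 1 for b in range(7, -1, -1)]

for b, plane in zip(range(7, -1, -1), planes):
    print(f"bit {b}:")
    print(plane)
```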
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
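The fetch-and-add primitive highlighted above atomically returns a counter's old value while incrementing it; the sketch below emulates its semantics with a lock so the classic use (threads claiming distinct indices) can be run. It deliberately omits the combining-network hardware that makes the Ultracomputer version scalable.

```python
# Lock-based emulation of fetch-and-add semantics.
import threading

class FetchAndAdd:
    def __init__(self):
        self._value = 0
        self._lock = threading.Lock()

    def fetch_and_add(self, increment):
        with self._lock:
            old = self._value
            self._value += increment
            return old  # the pre-increment value, as fetch-and-add requires

counter = FetchAndAdd()
tickets = []

def worker():
    tickets.append(counter.fetch_and_add(1))  # each thread claims a unique index

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(tickets))  # [0, 1, 2, 3]
```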
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news also carries over when we consider the program complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Procedures and atomicity refinement The introduction of an early return from a (remote) procedure call can increase the degree of parallelism in a parallel or distributed algorithm modeled by an action system. We define a return statement for procedures in an action systems framework and show that it corresponds to carrying out an atomicity refinement.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
An Interval-Based Approach to Modelling Time in Event-B Our work was inspired by our modelling and verification of a cardiac pacemaker, which includes concurrent aspects and a set of interdependent and cyclic timing constraints. To model timing constraints in such systems, we present an approach based on the concept of timing interval. We provide a template-based timing constraint modelling scheme that could potentially be applicable to a wide range of modelling scenarios. We give a notation and Event-B semantics for the interval. The Event-B coding of the interval is decoupled from the application logic of the model, therefore a generative design of the approach is possible. We demonstrate our interval approach and its refinement through a small example. The example is verified, model-checked and animated (manually validated) with the ProB animator.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news also carries over when we consider the program complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time-wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Online and off-line handwriting recognition: a comprehensive survey Handwriting has continued to persist as a means of communication and recording information in day-to-day life even with the introduction of new technologies. Given its ubiquity in human transactions, machine recognition of handwriting has practical significance, as in reading handwritten notes in a PDA, in postal addresses on envelopes, in amounts in bank checks, in handwritten fields in forms, etc. This overview describes the nature of handwritten language, how it is transduced into electronic data, and the basic concepts behind written language recognition algorithms. Both the on-line case (which pertains to the availability of trajectory data during writing) and the off-line case (which pertains to scanned images) are considered. Algorithms for preprocessing, character and word recognition, and performance with practical systems are indicated. Other fields of application, like signature verification, writer authentication, and handwriting learning tools, are also considered.
Handwritten Digit Recognition by Multi-objective Optimization of Zoning Methods This paper addresses the use of multi-objective optimization techniques for optimal zoning design in the context of handwritten digit recognition. More precisely, the Non-dominated Sorting Genetic Algorithm II (NSGA-II) has been considered for the optimization of Voronoi-based zoning methods. In this case both the number of zones and the zone position and shape are optimized in a unique genetic procedure. The experimental results point out the usefulness of multi-objective genetic algorithms for achieving effective zoning topologies for handwritten digit recognition.
New Advancements in Zoning-Based Recognition of Handwritten Characters In handwritten character recognition, zoning is one of the most effective approaches for feature extraction. When a zoning method is considered, the pattern image is subdivided into zones, each one providing regional information related to a specific part of the pattern. The design of a zoning method concerns the definition of zoning topology and membership function. Both aspects have been recently investigated and new solutions have been proposed, able to increase the adaptability of the zoning method to different application requirements. In this paper some of the most recent results in the field of zoning method design are presented and some valuable directions of research are highlighted.
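A minimal sketch of the zoning idea just described: subdivide the pattern image into a uniform grid and take per-zone pixel density as regional features. The grid shape, the random stand-in image, and the density feature are illustrative assumptions, not a particular method from the work above.

```python
# Static grid zoning with per-zone density features.
import numpy as np

def zoning_features(img, rows=4, cols=4):
    h, w = img.shape
    feats = []
    for i in range(rows):
        for j in range(cols):
            zone = img[i * h // rows:(i + 1) * h // rows,
                       j * w // cols:(j + 1) * w // cols]
            feats.append(zone.mean())  # fraction of foreground pixels in the zone
    return np.array(feats)

img = (np.random.rand(32, 32) > 0.8).astype(float)  # stand-in character image
print(zoning_features(img).shape)  # (16,): one density per zone
```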
Numeral Recognition by Weighting Local Decisions This paper presents a new technique to improve the combination of classification decisions obtained from local analysis of patterns. Specifically, a genetic algorithm is used to determine the optimal weight vector to balance the local decisions in the combination process. The experimental results, carried out in the field of hand-written numeral recognition, demonstrate the effectiveness of the new technique.
Tuning between Exponential Functions and Zones for Membership Functions Selection in Voronoi-Based Zoning for Handwritten Character Recognition In Handwritten Character Recognition, zoning is rightly considered as one of the most effective feature extraction techniques. In the past, many zoning methods have been proposed, based on static and dynamic zoning design strategies. Notwithstanding, little attention has been paid so far to the role of zone membership functions, which define the way in which a feature influences different zones of the pattern. In this paper the effectiveness of membership functions for zoning-based classification is investigated. For the purpose, a useful representation of zoning methods based on Voronoi Diagrams is adopted and several membership functions are considered, according to abstract-, ranked-, and measurement-level strategies. Furthermore, a new class of membership functions with adaptive capabilities is introduced and a real-coded genetic algorithm is proposed to determine both the optimal zoning and the adaptive membership functions most profitable for a given classification problem. The experimental tests, carried out in the field of handwritten digit recognition, show the superiority of adaptive membership functions compared to traditional functions, whatever zoning method is used.
Analysis and recognition of alphanumeric handprints by parts An advanced hierarchical model has been proposed to produce a more effective character recognizer based on the probability of occurrence of the patterns. New definitions such as crucial parts, efficiency ratios, degree of confusion, similar character pairs, etc. have also been given to facilitate pattern analysis and character recognition. Using these definitions, computer algorithms have been developed to recognize the characters by parts, including halves, quarters, and sixths. The recognition rates have been analyzed and compared with those obtained from subjective experiments. Based on the results of both computer and human experiments, a detailed analysis of the crucial parts and the Canadian standard alphanumeric character set has been made revealing some interesting fundamental characteristics of these handprint models. The results should be useful for pattern analysis and recognition, character understanding, handwriting education, and human-computer communication
An Iterative Growing and Pruning Algorithm for Classification Tree Design A critical issue in classification tree design, obtaining right-sized trees (trees which neither underfit nor overfit the data), is addressed. Instead of stopping rules to halt partitioning, the approach of growing a large tree with pure terminal nodes and selectively pruning it back is used. A new efficient iterative method is proposed to grow and prune classification trees. This method divides the data sample into two subsets and iteratively grows a tree with one subset and prunes it with the other subset, successively interchanging the roles of the two subsets. The convergence and other properties of the algorithm are established. Theoretical and practical considerations suggest that the iterative tree growing and pruning algorithm should perform better and require less computation than other widely used tree growing and pruning algorithms. Numerical results on a waveform recognition problem are presented to support this view.
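In the same spirit as the grow-then-prune idea above (though without the paper's iterative role-swapping), the following sketch grows a large tree on one half of the data and selects a pruned subtree by scoring candidates on the other half; the dataset and pruning mechanism (scikit-learn's cost-complexity path) are assumptions, not the paper's algorithm.

```python
# Grow a large tree on one subset, choose a pruning level with the other.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=400, random_state=0)
X_grow, X_prune, y_grow, y_prune = train_test_split(
    X, y, test_size=0.5, random_state=0)

full = DecisionTreeClassifier(random_state=0).fit(X_grow, y_grow)
alphas = full.cost_complexity_pruning_path(X_grow, y_grow).ccp_alphas

# Refit at each candidate pruning level; keep the subtree that generalizes
# best to the held-out pruning subset.
best = max(
    (DecisionTreeClassifier(random_state=0, ccp_alpha=a).fit(X_grow, y_grow)
     for a in alphas),
    key=lambda t: t.score(X_prune, y_prune),
)
print(best.get_n_leaves(), best.score(X_prune, y_prune))
```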
An Overview of KRL, a Knowledge Representation Language
Software Engineering This paper provides a definition of the term "software engineering" and a survey of the current state of the art and likely future trends in the field. The survey covers the technology available in the various phases of the software life cycle (requirements engineering, design, coding, test, and maintenance) and in the overall area of software management and integrated technology-management approaches. It is oriented primarily toward discussing the domain of applicability of techniques (where and when they work), rather than how they work in detail. To cover the latter, an extensive set of 104 references is provided.
The Depth And Width Of Local Minima In Discrete Solution Spaces Heuristic search techniques such as simulated annealing and tabu search require "tuning" of parameters (i.e., the cooling schedule in simulated annealing, and the tabu list length in tabu search) to achieve optimum performance. In order for a user to anticipate the best choice of parameters, thus avoiding extensive experimentation, a better understanding of the solution space of the problem to be solved is needed. Two functions of the solution space, the maximum depth and the maximum width of local minima, are discussed here, and sharp bounds on the value of these functions are given for the 0-1 knapsack problem and the cardinality set covering problem.
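To make the objects of study concrete, the sketch below enumerates the local minima of a tiny penalized 0-1 knapsack instance under the single-bit-flip neighborhood; the instance and penalty are invented, and the paper's depth/width bounds themselves are not computed.

```python
# Enumerate local minima of a toy 0/1 knapsack under bit-flip moves.
from itertools import product

values, weights, cap = [4, 3, 5], [3, 2, 4], 5

def cost(x):
    # Minimize negative value; overweight solutions get a large linear penalty.
    w = sum(wi * xi for wi, xi in zip(weights, x))
    v = sum(vi * xi for vi, xi in zip(values, x))
    return -v + (100 * (w - cap) if w > cap else 0)

def neighbors(x):
    for i in range(len(x)):
        y = list(x)
        y[i] ^= 1
        yield tuple(y)

local_minima = [x for x in product((0, 1), repeat=3)
                if all(cost(y) >= cost(x) for y in neighbors(x))]
print(local_minima)  # includes a non-global local minimum alongside the optimum
```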
Meaningful Modeling: What's the Semantics of "Semantics"? Researchers differ on what constitutes semantics for UML subsets and adaptations. Worse, implicit assumptions often influence these definitions and results, which makes comparing published research on UML semantics difficult. The authors have thus set out to clarify some of the notions involved in defining modeling languages, with an eye toward the particular difficulties arising in defining UML. They are primarily interested in distinguishing a language's notation, or syntax, from its meaning, or semantics, as well as recognizing the differences between variants of syntax and semantics in their nature, purpose, style, and use.
Making Distortions Comprehensible This paper discusses visual information representation from the perspective of human comprehension. The distortion viewing paradigm is an appropriate focus for this discussion as its motivation has always been to create more understandable displays. While these techniques are becoming increasingly popular for exploring images that are larger than the available screen space, in fact users sometimes report confusion and disorientation. We provide an overview of structural changes made in response to this phenomenon and examine methods for incorporating visual cues based on human perceptual skills.
Ontology, Metadata, and Semiotics The Internet is a giant semiotic system. It is a massive collection of Peirce's three kinds of signs: icons, which show the form of something; indices, which point to something; and symbols, which represent something according to some convention. But current proposals for ontologies and metadata have overlooked some of the most important features of signs. A sign has three aspects: it is (1) an entity that represents (2) another entity to (3) an agent. By looking only at the signs themselves, some metadata proposals have lost sight of the entities they represent and the agents (human, animal, or robot) which interpret them. With its three branches of syntax, semantics, and pragmatics, semiotics provides guidelines for organizing and using signs to represent something to someone for some purpose. Besides representation, semiotics also supports methods for translating patterns of signs intended for one purpose to other patterns intended for different but related purposes. This article shows how the fundamental semiotic primitives are represented in semantically equivalent notations for logic, including controlled natural languages and various computer languages.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.02802
0.03
0.03
0.017339
0.014499
0.006
0.000471
0
0
0
0
0
0
0
Applying Integrated Domain-Specific Modeling for Multi-concerns Development of Complex Systems. Current systems engineering efforts are increasingly driven by trade-offs and limitations imposed by multiple factors: Growing product complexity as well as stricter regulatory requirements in domains such as automotive or aviation necessitate advanced design and development methods. At the core of these influencing factors lies a consideration of competing non-functional concerns, such as safety and reliability, performance, and the fulfillment of quality requirements. In an attempt to cope with these aspects, incremental evolution of model-based engineering practice has produced heterogeneous tool environments without proper integration and exchange of design artifacts. In order to overcome these shortcomings of current engineering practice, we propose a holistic, model-based architecture and analysis framework for seamless design, analysis, and evolution of integrated system models. We describe how heterogeneous domain-specific modeling languages can be embedded into a common general-purpose model in order to facilitate the integration between previously disjoint design artifacts. A case study demonstrates the suitability of this methodology for the design of a safety-critical embedded system, a hypothetical gas heating system, with respect to reliability engineering and further quality assurance activities.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news also carries over when we consider the program complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time-wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Extending statecharts to model system interactions Statecharts are diagrams composed of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and of interactions and parallelism (among systems).
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Formal Methods for Software Engineers: Tradeoffs in Curriculum Design While formal methods are becoming increasingly important to software engineering, currently there is little consensus on how they should be taught. In this paper I outline some of the important dimensions of curriculum design for formal methods and illustrate the tradeoffs through a brief examination of four common course formats. I summarize what I have learned from teaching courses in each of these formats and outline an agenda of educational research that will enable us to teach formal methods more effectively.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news also carries over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
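To make the enabled-set idea concrete, here is a minimal sequential Python sketch of a bounded buffer. The real mechanism targets concurrent actor systems such as Rosette, where a call outside the current enabled set would be queued rather than rejected; the class and method names here are ours, not the paper's.

```python
class BoundedBuffer:
    """After each operation the object recomputes the set of method
    names it is currently willing to accept (its enabled set)."""

    def __init__(self, capacity):
        self.capacity, self.items = capacity, []
        self.enabled = {"put"}            # empty buffer: only put is enabled

    def put(self, x):
        assert "put" in self.enabled, "put not currently enabled"
        self.items.append(x)
        self._update()

    def get(self):
        assert "get" in self.enabled, "get not currently enabled"
        x = self.items.pop(0)
        self._update()
        return x

    def _update(self):
        self.enabled = set()
        if len(self.items) < self.capacity:
            self.enabled.add("put")
        if self.items:
            self.enabled.add("get")

b = BoundedBuffer(2)
b.put(1); b.put(2)         # buffer full: enabled set is now {"get"}
print(b.enabled, b.get())  # {'get'} 1
```

Because the synchronization constraint is a first-class set of names rather than code scattered through the methods, a subclass can refine it (say, by adding a "peek" method to the enabled sets) without rewriting the inherited operations.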
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
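For orientation, a minimal sketch of the shape such a definition takes (our notation, not quoted from the paper): with $\rho$ the predicate transformer taking abstract predicates to concrete ones, a concrete program $C$ data-refines an abstract program $A$ when, for every postcondition $q$,

\[
\rho\big(\mathit{wp}(A, q)\big) \;\Rightarrow\; \mathit{wp}(C, \rho(q)),
\]

so that every abstract correctness argument for $A$ carries across to $C$ through $\rho$, which is how the approach avoids a separate proof obligation per refinement step.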
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve general 0/1 MIP problems and thus contains no problem-domain-specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special-purpose methods created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power-aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the timewise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, the storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Proof rules and transformations dealing with fairness We provide proof rules enabling the treatment of two fairness assumptions in the context of Dijkstra's do-od-programs. These proof rules are derived by considering a transformed version of the original program which uses random assignments z ≔? and admits only fair computations. Various, increasingly complicated, examples are discussed. In all cases reasonably simple proofs can be given. The proof rules use well-founded structures corresponding to infinite ordinals and deal with the original programs and not their translated versions.
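A rough way to see why random assignments enforce fairness is to execute the transformed scheduler. The sketch below is illustrative only: the credit-counter scheme is a simplification in the spirit of the transformation described above (every guard receives a credit via z := ?, and a guard whose credit is exhausted while enabled must be scheduled next, so no continuously enabled guard is neglected forever); all names are invented.

```python
import random

def fair_do_od(guards, bodies, state, max_steps=10_000):
    """Simulate a fair execution of do B1 -> S1 [] ... [] Bn -> Sn od."""
    z = [random.randint(1, 10) for _ in guards]   # z := ? for every guard
    for _ in range(max_steps):
        enabled = [i for i, g in enumerate(guards) if g(state)]
        if not enabled:
            return state                 # loop exits when no guard is enabled
        due = [i for i in enabled if z[i] <= 0]
        i = random.choice(due if due else enabled)  # overdue guards go first
        state = bodies[i](state)
        z[i] = random.randint(1, 10)     # fresh random assignment z := ?
        for j in enabled:                # passed-over enabled guards lose credit
            if j != i:
                z[j] -= 1
    return state

# example: two always-enabled guards; the credit scheme forces both to run
state = {"a": 0, "b": 0}
guards = [lambda s: s["a"] + s["b"] < 20, lambda s: s["a"] + s["b"] < 20]
bodies = [lambda s: {**s, "a": s["a"] + 1}, lambda s: {**s, "b": s["b"] + 1}]
print(fair_do_od(guards, bodies, state))   # both counters make progress
```

The proof rules in the abstract exploit exactly this structure: the finite but unbounded credits correspond to well-founded (ordinal) ranking arguments over the original program, not over its translation.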
Proof Rules Dealing with Fairness We provide proof rules that allow us to deal with two fairness assumptions in the context of Dijkstra's do-od programs. These proof rules are obtained by considering a translated version of the original program which uses the random assignment x := ? and admits only fair runs. The proof rules use infinite ordinals and deal with the original programs and not their translated versions.
Specifications of Concurrently Accessed Data Our specification of the buffer illustrates how some of the requirements described in the introduction are met. The specification is concise, and it can be manipulated easily. This allowed us to derive several properties of the buffer (Appendix A) and construct a proof of buffer concatenation (Section 4). Also refinement of the specification with the eventual goal of implementation seems feasible with this scheme.
Stepwise Refinement of Distributed Systems, Models, Formalisms, Correctness, REX Workshop, Mook, The Netherlands, May 29 - June 2, 1989, Proceedings
On the design of reactive systems The notion of joint actions provides a framework in which the granularity of atomic actions can be refined in the design of concurrent systems. An example of a telephone exchange is elaborated to demonstrate the feasibility of this approach for reactive systems and to illustrate transformations that are justifiable in such a process. Particular problems arise when a refinement would allow new interleavings of semantically relevant events. The meaning of a reactive computation is specified in a way that makes this possible.
Safety and Progress of Recursive Procedures Temporal weakest preconditions are introduced for calculational reasoning about the states encountered during execution of not-necessarily terminating recursive procedures. The formalism can distinguish error from useful nontermination. The precondition functions are constructed in a new and more elegant way. Healthiness laws are discussed briefly. Proof rules are introduced that enable calculational proofs of various safety and progress properties. The construction of the precondition functions is justified in an Appendix that provides the operational semantics.
A Case Study in Transformational Design of Concurrent Systems. We explain a transformational approach to the design and verification of communicating concurrent systems. The transformations start from specifications that combine trace-based with state-based assertional reasoning about the desired communication behaviour, and yield concurrent implementations. We illustrate our approach by a case study proving correctness of implementations of safe and regular registers allowing concurrent writing and reading phases, originally due to Lamport...
Towards programming with knowledge expressions Explicit use of knowledge expressions in the design of distributed algorithms is explored. A non-trivial case study is carried through, illustrating the facilities that a design language could have for setting and deleting the knowledge that the processes possess about the global state and about the knowledge of other processes. No implicit capabilities for logical reasoning are assumed. A language basis is used that allows common knowledge not only by an eager protocol but also in the true sense. The observation is made that the distinction between these two kinds of common knowledge can be associated with the level of abstraction: true common knowledge of higher levels can be implemented as eager common knowledge on lower levels. A knowledge-motivated abstraction tool is therefore suggested to be useful in supporting stepwise refinement of distributed algorithms.
Data Refinement of Mixed Specifications. Using predicate transformers as a basis, we give semantics and refinement rules for mixed specifications that allow UNITY style specifications to be written as a combination of abstract program and temporal properties. From the point of view of the programmer, mixed specifications may be considered a generalization of the UNITY specification notation to allow safety properties to be specified by abstract programs in addition to temporal properties. Alternatively, mixed specifications may be viewed as a generalization of the UNITY programming notation to allow arbitrary safety and progress properties in a generalized ‘always section’. The UNITY substitution axiom is handled in a novel way by replacing it with a refinement rule. The predicate transformers foundation allows known techniques for algorithmic and data-refinement for weakest precondition based programming to be applied to both safety and progress properties. In this paper, we define the predicate transformer based specifications, specialize the refinement techniques to them, demonstrate soundness, and illustrate the approach with a substantial example.
Abstracto 84: The next generation Programming languages are not an ideal vehicle for expressing algorithms. This paper sketches how a language Abstracto might be developed for “algorithmic expressions” that may be manipulated by the rules of “algorithmics”, quite similar to the manipulation of mathematical expressions in mathematics. Two examples are given of “abstract” algorithmic expressions that are not executable in the ordinary sense, but may be used in the derivation of programs. It appears that the notion of “refinement” may be replaced by a weaker notion for abstract algorithmic expressions, corresponding also to a weaker notion of “weakest precondition”.
State-Based Model Checking of Event-Driven System Requirements It is demonstrated how model checking can be used to verify safety properties for event-driven systems. SCR tabular requirements describe required system behavior in a format that is intuitive, easy to read, and scalable to large systems (e.g. the software requirements for the A-7 military aircraft). Model checking of temporal logics has been established as a sound technique for verifying properties of hardware systems. An automated technique for formalizing the semiformal SCR requirements and for transforming the resultant formal specification onto a finite structure that a model checker can analyze has been developed. This technique was effective in uncovering violations of system invariants in both an automobile cruise control system and a water-level monitoring system.
A novel approach for coding color quantized images An approach to the lossy compression of color images with limited palette that does not require color quantization of the decoded image is presented. The algorithm is particularly suited for coding images using an image-dependent palette. The technique restricts the pixels of the decoded image to take values only in the original palette. Thus, the decoded image can be readily displayed without having to be quantized. For comparable quality and bit rates, the technique significantly reduces the decoder computational complexity.
A Refinement Theory that Supports Reasoning About Knowledge and Time An expressive semantic framework for program refinement that supports both temporal reasoning and reasoning about the knowledge of multiple agents is developed. The refinement calculus owes the cleanliness of its decomposition rules for all programming language constructs and the relative simplicity of its semantic model to a rigid synchrony assumption which requires all agents and the environment to proceed in lockstep. The new features of the calculus are illustrated in a derivation of the two-phase-commit protocol.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, the storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.020809
0.030869
0.030869
0.015448
0.010687
0.005653
0.002312
0.0006
0.000056
0.000007
0
0
0
0
Relaxed Stability Criteria for Neural Networks With Time-Varying Delay Using Extended Secondary Delay Partitioning and Equivalent Reciprocal Convex Combination Techniques This article investigates global asymptotic stability for neural networks (NNs) with time-varying delay, which is differentiable and uniformly bounded, and the delay derivative exists and is upper-bounded. First, we propose the extended secondary delay partitioning technique to construct the novel Lyapunov–Krasovskii functional, where both single-integral and double-integral state variables are considered, while the single-integral ones are only solved by the traditional secondary delay partitioning. Second, a novel free-weight matrix equality (FWME) is presented to resolve the reciprocal convex combination problem equivalently and directly without Schur complement, which eliminates the need of positive definite matrices, and is less conservative and restrictive compared with various improved reciprocal convex inequalities. Furthermore, by the present extended secondary delay partitioning, equivalent reciprocal convex combination technique, and Bessel–Legendre inequality, two different relaxed sufficient conditions ensuring global asymptotic stability for NNs are obtained, for time-varying delays, respectively, with unknown and known lower bounds of the delay derivative. Finally, two examples are given to illustrate the superiority and effectiveness of the presented method.
Non-fragile finite-time H∞ state estimation of neural networks with distributed time-varying delay. In this article, the non-fragile finite-time H∞ state estimation problem of neural networks is discussed with distributed time-delays. Based on a modified Lyapunov–Krasovskii functional and the linear matrix inequality (LMI) technique, a novel delay-dependent criterion is presented such that the error system achieves finite-time boundedness with guaranteed H∞ performance. In order to obtain less conservative results, Wirtinger's integral inequality and the reciprocally convex approach are employed. The estimator gain matrix can be achieved by solving the LMIs. Finally, numerical examples are given to demonstrate the effectiveness of the proposed approach.
Stability Analysis for Neural Networks With Time-Varying Delay via Improved Techniques. This paper is concerned with the stability problem for neural networks with a time-varying delay. First, an improved generalized free-weighting-matrix integral inequality is proposed, which encompasses the conventional one as a special case. Second, an improved Lyapunov-Krasovskii functional is constructed that contains two complement triple-integral functionals. Third, based on the improved techniques, a new stability condition is derived for neural networks with a time-varying delay. Finally, two widely used numerical examples are given to demonstrate that the proposed stability condition is very competitive in both conservatism and complexity.
Exponential stability and extended dissipativity criteria for generalized neural networks with interval time-varying delay signals. This paper discusses the problems of exponential stability and extended dissipativity analysis of generalized neural networks (GNNs) with time delays. A new criterion for the exponential stability and extended dissipativity of GNNs is established based on suitable Lyapunov–Krasovskii functionals (LKFs) together with the Wirtinger single integral inequality (WSII) and Wirtinger double integral inequality (WDII) technique, mixed with the reciprocally convex combination (RCC) technique. An improved exponential stability and extended dissipativity criterion for GNNs is expressed in terms of linear matrix inequalities (LMIs). The major contribution of this study is that an exponential stability and extended dissipativity concept is developed to analyze simultaneously the exponential H∞, L2−L∞, passivity, and dissipativity performance of GNNs by selecting the weighting matrices. Finally, several interesting numerical examples are developed to verify the usefulness of the proposed results; among them, one example is supported by a real-life benchmark application associated with the extended dissipativity performance.
Wirtinger-based integral inequality: Application to time-delay systems In the last decade, the Jensen inequality has been intensively used in the context of time-delay or sampled-data systems since it is an appropriate tool to derive tractable stability conditions expressed in terms of linear matrix inequalities (LMIs). However, it is also well-known that this inequality introduces an undesirable conservatism in the stability conditions and looking at the literature, reducing this gap is a relevant issue and always an open problem. In this paper, we propose an alternative inequality based on the Fourier Theory, more precisely on the Wirtinger inequalities. It is shown that this resulting inequality encompasses the Jensen one and also leads to tractable LMI conditions. In order to illustrate the potential gain of employing this new inequality with respect to the Jensen one, two applications on time-delay and sampled-data stability analysis are provided.
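For reference, the comparison the abstract makes can be written out (standard statements of both bounds; the ω shorthand is ours). For differentiable $x : [a,b] \to \mathbb{R}^n$ and $R \succ 0$, Jensen's inequality gives

\[
\int_a^b \dot{x}^{\top}\!(s)\, R\, \dot{x}(s)\, \mathrm{d}s \;\ge\; \frac{1}{b-a}\,\omega_0^{\top} R\, \omega_0,
\qquad \omega_0 = x(b) - x(a),
\]

while the Wirtinger-based inequality adds a second term:

\[
\int_a^b \dot{x}^{\top}\!(s)\, R\, \dot{x}(s)\, \mathrm{d}s \;\ge\; \frac{1}{b-a}\,\omega_0^{\top} R\, \omega_0 \;+\; \frac{3}{b-a}\,\omega_1^{\top} R\, \omega_1,
\qquad \omega_1 = x(b) + x(a) - \frac{2}{b-a}\int_a^b x(s)\,\mathrm{d}s.
\]

Since the added term is nonnegative, any LMI condition built from the second bound is no more conservative than the corresponding Jensen-based condition.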
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news also carries over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
The use of goals to surface requirements for evolving systems This paper addresses the use of goals to surface requirements for the redesign of existing or legacy systems. Goals are widely recognized as important precursors to system requirements, but the process of identifying and abstracting them has not been researched thoroughly. We present a summary of a goal-based method (GBRAM) for uncovering hidden issues, goals, and requirements and illustrate its application to a commercial system, an Intranet-based electronic commerce application, evaluating the method in the process. The core techniques comprising GBRAM are the systematic application of heuristics and inquiry questions for the analysis of goals, scenarios and obstacles. We conclude by discussing the lessons learned through applying goal refinement in the field and the implications for future research.
Petri nets: Properties, analysis and applications Starts with a brief review of the history and the application areas considered in the literature. The author then proceeds with introductory modeling examples, behavioral and structural properties, three methods of analysis, subclasses of Petri nets and their analysis. In particular, one section is devoted to marked graphs, the concurrent system model most amenable to analysis. Introductory discussions on stochastic nets with their application to performance modeling, and on high-level nets with their application to logic programming, are provided. Also included are recent results on reachability criteria. Suggestions are provided for further reading on many subject areas of Petri nets
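The token game underlying the survey fits in a few lines. The sketch below (a toy two-place, two-transition cycle; the matrices are ours) encodes the firing rule: transition t is enabled at marking m when m >= pre[:, t] componentwise, and firing yields m - pre[:, t] + post[:, t].

```python
import numpy as np

# pre[p][t]:  tokens consumed from place p when transition t fires
# post[p][t]: tokens produced in place p when transition t fires
pre  = np.array([[1, 0],
                 [0, 1]])
post = np.array([[0, 1],
                 [1, 0]])
m = np.array([1, 0])          # initial marking: one token in place p0

def enabled(m, t):
    return bool(np.all(m >= pre[:, t]))

def fire(m, t):
    assert enabled(m, t), "transition not enabled"
    return m - pre[:, t] + post[:, t]

m = fire(m, 0)                # t0 moves the token p0 -> p1
m = fire(m, 1)                # t1 moves it back    p1 -> p0
print(m)                      # [1 0]
```

The incidence matrix post - pre is also the object the survey's structural analysis methods (invariants, reachability criteria) operate on.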
Performance evaluation in content-based image retrieval: overview and proposals Evaluation of retrieval performance is a crucial problem in content-based image retrieval (CBIR). Many different methods for measuring the performance of a system have been created and used by researchers. This article discusses the advantages and shortcomings of the performance measures currently used. Problems such as defining a common image database for performance comparisons and a means of getting relevance judgments (or ground truth) for queries are explained. The relationship between CBIR and information retrieval (IR) is made clear, since IR researchers have decades of experience with the evaluation problem. Many of their solutions can be used for CBIR, despite the differences between the fields. Several methods used in text retrieval are explained. Proposals for performance measures and means of developing a standard test suite for CBIR, similar to that used in IR at the annual Text REtrieval Conference (TREC), are presented.
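One of the IR measures discussed, average precision over a ranked result list, takes one line of arithmetic per relevant hit. A minimal sketch (the image ids and relevance judgments are invented for illustration):

```python
def average_precision(ranked, relevant):
    """ranked: retrieved ids in rank order; relevant: set of relevant ids."""
    hits, precisions = 0, []
    for k, doc in enumerate(ranked, start=1):
        if doc in relevant:
            hits += 1
            precisions.append(hits / k)   # precision at each relevant hit
    return sum(precisions) / len(relevant) if relevant else 0.0

ranked = ["img3", "img7", "img1", "img9", "img4"]
relevant = {"img3", "img1", "img8"}
print(average_precision(ranked, relevant))  # (1/1 + 2/3) / 3 ≈ 0.556
```

Dividing by the total number of relevant images, including those never retrieved, is what makes the measure penalize poor recall as well as poor early precision.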
Workflow Modeling A discussion of workflow models and process description languages is presented. The relationship between data, function and coordination aspects of the process is discussed, and a claim is made that more than one model view (or representation) is needed in order to grasp the complexity of process modeling. The basis of a new model is proposed, showing that more expressive models can be built by supporting asynchronous events and batch activities, matched by powerful run-time support...
Better knowledge management through knowledge engineering In recent years the term knowledge management has been used to describe the efforts of organizations to capture, store, and deploy knowledge. Most current knowledge management activities rely on database and Web technology; currently, few organizations have a systematic process for capturing knowledge, as distinct from data. The authors present a case study where knowledge engineering practices support knowledge management by a drilling optimization group in a large service company. The case study illustrates three facets of the knowledge management task: First, knowledge is captured by a knowledge acquisition process that uses a conceptual model of aspects of the company's business domain to guide the capture of cases. Second, knowledge is stored using a knowledge representation language to codify the structured knowledge in a number of knowledge bases, which together constitute a knowledge repository. Third, knowledge is deployed by running the knowledge bases in a knowledge server, accessible on the company intranet.
The Jikes research virtual machine project: building an open-source research community This paper describes the evolution of the Jikes™ Research Virtual Machine project from an IBM internal research project, called Jalapeño, into an open-source project. After summarizing the original goals of the project, we discuss the motivation for releasing it as an open-source project and the activities performed to ensure the success of the project. Throughout, we highlight the unique challenges of developing and maintaining an open-source project designed specifically to support a research community.
Task Structures As A Basis For Modeling Knowledge-Based Systems Recently, there has been an increasing interest in improving the reliability and quality of AI systems. As a result, a number of approaches to knowledge-based systems modeling have been proposed. However, these approaches are limited in formally verifying the intended functionality and behavior of a knowledge-based system. In this article, we propose a formal treatment of task structures to formally specify and verify knowledge-based systems modeled using these structures. The specification of a knowledge-based system modeled using task structures has two components: a model specification that describes static properties of the system, and a process specification that characterizes dynamic properties of the system. The static properties of a system are described by two models: a model about domain objects (domain model), and a model about the problem-solving states (state model). The dynamic properties of the system are characterized by (1) using the notion of state transition to explicitly describe what the functionality of a task is, and (2) specifying the sequence of tasks and interactions between tasks (i.e., behavior of a system) using task state expressions (TSE). The task structure extended with the proposed formalism not only provides a basis for detailed functional decomposition with procedure abstraction embedded in, but also facilitates the verification of the intended functionality and behavior of a knowledge-based system. © 1997 John Wiley & Sons, Inc.
Reversible Denoising and Lifting Based Color Component Transformation for Lossless Image Compression An undesirable side effect of reversible color space transformation, which consists of lifting steps (LSs), is that while removing correlation it contaminates transformed components with noise from other components. Noise affects particularly adversely the compression ratios of lossless compression algorithms. To remove correlation without increasing noise, a reversible denoising and lifting step (RDLS) was proposed that integrates denoising filters into LS. Applying RDLS to color space transformation results in a new image component transformation that is perfectly reversible despite involving the inherently irreversible denoising; the first application of such a transformation is presented in this paper. For the JPEG-LS, JPEG 2000, and JPEG XR standard algorithms in lossless mode, the application of RDLS to the RDgDb color space transformation with simple denoising filters is especially effective for images in the native optical resolution of acquisition devices. It results in improving compression ratios of all those images in cases when unmodified color space transformation either improves or worsens ratios compared with the untransformed image. The average improvement is 5.0–6.0% for two out of the three sets of such images, whereas average ratios of images from standard test-sets are improved by up to 2.2%. For the efficient image-adaptive determination of filters for RDLS, a couple of fast entropy-based estimators of compression effects that may be used independently of the actual compression algorithm are investigated and an immediate filter selection method based on the detector precision characteristic model driven by image acquisition parameters is introduced.
1.2
0.2
0.066667
0.033333
0.000429
0
0
0
0
0
0
0
0
0
Using diversity in classifier set selection for Arabic handwritten recognition A first look at Arabic manuscripts reveals the complexity of the recognition task, especially for the classifier ensembles used. One of the most important steps in the design of a multi-classifier system (MCS) is the choice of its component classifiers. This step is very important to the overall MCS performance, since a combination of identical classifiers will not outperform the individual members. To select the best classifier set from a pool of classifiers, classifier diversity is the most important property to be considered. The aim of this paper is to study Arabic handwriting recognition using MCS optimization based on diversity measures. The first approach selects the best classifier subset from a large classifier set, taking into account different diversity measures. The second chooses from the classifier set the one with the best performance and adds it to the selected classifier subset. Performance in our approach is calculated using three diversity measures based on the correlation between errors. On two database sets using 9 different classifiers, we then test the effect of the criterion to be optimized (diversity measures) and of the fusion methods (voting, weighted voting and Behavior Knowledge Space). The experimental results presented are encouraging and open other perspectives in the classifier selection field, especially for Arabic handwritten word recognition.
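As one concrete instance of an error-correlation diversity measure of the kind the abstract mentions, Yule's Q-statistic for a classifier pair can be computed from per-sample 0/1 correctness vectors. This is a minimal sketch with invented data, not a reproduction of the paper's three measures; Q near 0 indicates weakly correlated errors, which is what a diverse ensemble wants.

```python
def q_statistic(correct_a, correct_b):
    """Yule's Q for two classifiers from 0/1 correctness vectors."""
    n11 = n10 = n01 = n00 = 0
    for a, b in zip(correct_a, correct_b):
        if a and b:          n11 += 1   # both correct
        elif a and not b:    n10 += 1   # only A correct
        elif b:              n01 += 1   # only B correct
        else:                n00 += 1   # both wrong
    den = n11 * n00 + n01 * n10
    return (n11 * n00 - n01 * n10) / den if den else 0.0

# correctness of two classifiers on six test words (1 = correct)
print(q_statistic([1, 1, 0, 1, 0, 1], [1, 0, 1, 1, 0, 1]))  # 0.5
```

A greedy selection loop would score each candidate classifier against the ensemble built so far using such a pairwise measure and keep the candidate that best trades accuracy against error correlation.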
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news also carries over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve general 0/1 MIP problems and thus contains no problem-domain-specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special-purpose methods created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power-aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the timewise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, the storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
A Review of Physical Layer Security Techniques for Internet of Things: Challenges and Solutions. With the uninterrupted revolution of communications technologies and the great-leap-forward development of emerging applications, the ubiquitous deployment of Internet of Things (IoT) is imperative to accommodate constantly growing user demands and market scales. Communication security is critically important for the operations of IoT. Among the communication security provisioning techniques, physical layer security (PLS), which can provide unbreakable, provable, and quantifiable secrecy from an information-theoretical point of view, has drawn considerable attention from both academia and industry. However, the unique features of IoT, such as low cost, wide-range coverage, massive connection, and diversified services, impose great challenges for PLS protocol design in IoT. In this article, we present a comprehensive review of PLS techniques for IoT applications. The basic principle of PLS is first briefly introduced, followed by a survey of the existing PLS techniques. Afterwards, the characteristics of IoT are identified, based on which the challenges faced by PLS protocol design are summarized. Then, three newly-proposed PLS solutions are highlighted, which match the features of IoT well and are expected to be applied in the near future. Finally, we conclude the paper and point out some further research directions.
Cooperative wireless communications: a cross-layer approach This article outlines one way to address these problems by using the notion of cooperation between wireless nodes. In cooperative communications, multiple nodes in a wireless network work together to form a virtual antenna array. Using cooperation, it is possible to exploit the spatial diversity of the traditional MIMO techniques without each node necessarily having multiple antennas. Multihop networks use some form of cooperation by enabling intermediate nodes to forward the message from source to destination. However, cooperative communication techniques described in this article are fundamentally different in that the relaying nodes can forward the information fully or in part. Also the destination receives multiple versions of the message from the source, and one or more relays and combines these to obtain a more reliable estimate of the transmitted signal as well as higher data rates. The main advantages of cooperative communications are presented
On the Performance of Cognitive Underlay Multihop Networks with Imperfect Channel State Information. This paper proposes and analyzes cognitive multihop decode-and-forward networks in the presence of interference due to channel estimation errors. To reduce interference on the primary network, a simple yet effective back-off control power method is applied for secondary multihop networks. For a given threshold of interference probability at the primary network, we derive the maximum back-off control power coefficient, which provides the best performance for secondary multihop networks. Moreover, it is shown that the number of hops for secondary network is upper-bounded under the fixed settings of the primary network. For secondary multihop networks, new exact and asymptotic expressions for outage probability (OP), bit error rate (BER) and ergodic capacity over Rayleigh fading channels are derived. Based on the asymptotic OP and BEP, a pivotal conclusion is reached that the secondary multihop network offers the same diversity order as compared with the network without back off. Finally, we verify the performance analysis through various numerical examples which confirm the correctness of our analysis for many channel and system settings and provide new insight into the design and optimization of cognitive multihop networks.
Robust Secure Beamforming in MISO Full-Duplex Two-Way Secure Communications Considering worst-case channel uncertainties, we investigate the robust secure beamforming design problem in multiple-input-single-output full-duplex two-way secure communications. Our objective is to maximize worst-case sum secrecy rate under weak secrecy conditions and individual transmit power constraints. Since the objective function of the optimization problem includes both convex and concave terms, we propose to transform convex terms into linear terms. We decouple the problem into four optimization problems and employ alternating optimization algorithm to obtain the locally optimal solution. Simulation results demonstrate that our proposed robust secure beamforming scheme outperforms the non-robust one. It is also found that when the regions of channel uncertainties and the individual transmit power constraints are sufficiently large, because of self-interference, the proposed two-way robust secure communication is proactively degraded to one-way communication.
Secure Relaying in Multihop Communication Systems. This letter considers improving end-to-end secrecy capacity of a multihop decode-and-forward relaying system. First, a secrecy rate maximization problem without transmitting artificial noise (AN) is considered, following which the AN-aided secrecy schemes are proposed. Assuming that global channel state information (CSI) is available, an optimal power splitting solution is proposed. Furthermore, an iterative joint optimization of transmit power and power splitting coefficient has also been considered. For the scenario of no eavesdropper's CSI, we provide a suboptimal solution. The simulation results demonstrate that the AN-aided optimal scheme outperforms other schemes.
Performance Analysis of Two-Way Multi-Antenna Multi-Relay Networks With Hardware Impairments. In this paper, a two-way multi-antenna and multi-relay amplify-and-forward (AF) network with hardware impairments is analyzed. An opportunistic relay selection scheme is used for relay selection. Maximum ratio transmission and maximum ratio combining are used by the multi-antenna relay in the transmit and receive slots, respectively. We consider two AF protocols: one is the variable gain protocol and the other is the fixed gain protocol. In particular, closed-form expressions for the outage probability and the throughput of the system are derived. Since system performance at high signal-to-noise ratio (SNR) is very important in real scenes, an asymptotic analysis of the impact of hardware impairments on the system at high SNRs is also derived. In order to analyze the power efficiency, a closed-form expression for the energy-efficiency performance is derived, and a brief analysis is given, which provides a powerful reference for engineering practice. In addition, simulation results are provided to show the correctness of our analysis. From the results, we know that the system performs better as the number of relays grows and the impairment level decreases. Moreover, the results reveal that an outage floor and a throughput bound appear when hardware impairments exist.
A New Look at Dual-Hop Relaying: Performance Limits with Hardware Impairments. Physical transceivers have hardware impairments that create distortions which degrade the performance of communication systems. The vast majority of technical contributions in the area of relaying neglect hardware impairments and, thus, assume ideal hardware. Such approximations make sense in low-rate systems, but can lead to very misleading results when analyzing future high-rate systems. This paper quantifies the impact of hardware impairments on dual-hop relaying, for both amplify-and-forward and decode-and-forward protocols. The outage probability (OP) in these practical scenarios is a function of the effective end-to-end signal-to-noise-and-distortion ratio (SNDR). This paper derives new closed-form expressions for the exact and asymptotic OPs, accounting for hardware impairments at the source, relay, and destination. A similar analysis for the ergodic capacity is also pursued, resulting in new upper bounds. We assume that both hops are subject to independent but non-identically distributed Nakagami-m fading. This paper validates that the performance loss is small at low rates, but otherwise can be very substantial. In particular, it is proved that for high signal-to-noise ratio (SNR), the end-to-end SNDR converges to a deterministic constant, coined the SNDR ceiling, which is inversely proportional to the level of impairments. This stands in contrast to the ideal hardware case in which the end-to-end SNDR grows without bound in the high-SNR regime. Finally, we provide fundamental design guidelines for selecting hardware that satisfies the requirements of a practical relaying system.
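The SNDR-ceiling effect is easy to reproduce numerically. The sketch below uses a stylized variable-gain AF model derived under our own simplifying assumptions (each hop k adds distortion noise with power kappa_k^2 times its signal power), not the paper's exact expressions; under this model the end-to-end SNDR saturates at 1/(k1^2 + k2^2 + k1^2*k2^2) as the per-hop SNR grows.

```python
def af_sndr(gamma, k1, k2):
    """End-to-end SNDR of a symmetric dual-hop variable-gain AF link
    (gamma: per-hop SNR with ideal hardware; k1, k2: impairment levels)."""
    g = gamma * gamma
    d = k1**2 + k2**2 + (k1 * k2)**2     # aggregate distortion factor
    return g / (g * d + gamma * (1 + k1**2) + gamma * (1 + k2**2) + 1)

k1 = k2 = 0.1
for snr_db in (10, 20, 30, 40, 50):
    print(snr_db, round(af_sndr(10 ** (snr_db / 10), k1, k2), 2))
print("ceiling:", round(1 / (k1**2 + k2**2 + (k1 * k2)**2), 2))  # ~49.75
```

Raising the transmit power past roughly 30 dB buys almost nothing here, which is the paper's point: at high SNR the distortion, not the thermal noise, limits the link.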
The Manchester prototype dataflow computer The Manchester project has developed a powerful dataflow processor based on dynamic tagging. This processor is large enough to tackle realistic applications and exhibits impressive speedup for programs with sufficient parallelism.
Constraint logic programming for reasoning about discrete event processes The purpose of this paper is to show that constraint logic programming is a useful computational logic for modeling, simulating, and verifying real-time discrete event processes. The designer's knowledge about discrete event processes can be represented by a constraint logic program in a fashion that stays close to the mathematical definition of the processes, and can be used to semiautomate verification of possibly infinite-state systems. The constraint language CLP(R) is used to illustrate verification techniques.
Software process modeling: principles of entity process models
Animation of Object-Z Specifications with a Set-Oriented Prototyping Language
3rd international workshop on software evolution through transformations: embracing change Transformation-based techniques such as refactoring, model transformation and model-driven development, architectural reconfiguration, etc. are at the heart of many software engineering activities, making it possible to cope with an ever changing environment. This workshop provides a forum for discussing these techniques, their formal foundations and applications.
One VM to rule them all Building high-performance virtual machines is a complex and expensive undertaking; many popular languages still have low-performance implementations. We describe a new approach to virtual machine (VM) construction that amortizes much of the effort in initial construction by allowing new languages to be implemented with modest additional effort. The approach relies on abstract syntax tree (AST) interpretation where a node can rewrite itself to a more specialized or more general node, together with an optimizing compiler that exploits the structure of the interpreter. The compiler uses speculative assumptions and deoptimization in order to produce efficient machine code. Our initial experience suggests that high performance is attainable while preserving a modular and layered architecture, and that new high-performance language implementations can be obtained by writing little more than a stylized interpreter.
New results on stability analysis for systems with discrete distributed delay The integral inequality technique is widely used to derive delay-dependent conditions, and various integral inequalities have been developed to reduce the conservatism of the conditions derived. In this study, a new integral inequality was devised that is tighter than existing ones. It was used to investigate the stability of linear systems with a discrete distributed delay, and a new stability condition was established. The results can be applied to systems with a delay belonging to an interval, which may be unstable when the delay is small or nonexistent. Three numerical examples demonstrate the effectiveness and the smaller conservatism of the method.
1.24
0.24
0.24
0.24
0.24
0.24
0.08
0
0
0
0
0
0
0
Inconsistency Handling in Multiperspective Specifications The development of most large and complex systems necessarily involves many people - each with their own perspectives on the system defined by their knowledge, responsibilities, and commitments. To address this we have advocated distributed development of specifications from multiple perspectives. However, this leads to problems of identifying and handling inconsistencies between such perspectives. Maintaining absolute consistency is not always possible. Often this is not even desirable since this can unnecessarily constrain the development process, and can lead to the loss of important information. Indeed since the real world forces us to work with inconsistencies, we should formalise some of the usually informal or extra-logical ways of responding to them. This is not necessarily done by eradicating inconsistencies but rather by supplying logical rules specifying how we should act on them. To achieve this, we combine two lines of existing research: the ViewPoints framework for perspective development, interaction and organisation, and a logic-based approach to inconsistency handling. This paper presents our technique for inconsistency handling in the ViewPoints framework by using simple examples.
Approaches to interface design The current literature on interface design is reviewed. Four major approaches to interface design are identified: craft, cognitive engineering, enhanced software engineering and technologist. The aim of this classification framework is not to split semantic hairs, but to provide a comprehensive overview of a complex field and to clarify some of the issues involved. The paper goes on to discuss the source of quality in interface design and concludes with some recommendations on how to improve HCI methods.
Merging individual conceptual models of requirements While it is acknowledged that system requirements will never be complete, incompleteness is often due to an inadequate process and methods for acquiring and tracking a representative set of requirements. Viewpoint development has been proposed to address these problems. We offer a viewpoint development approach that fits easily into the current practice of capturing requirements as use case descriptions. However, current practice does not support visualization of use case descriptions, the capture of multiple use case descriptions, the modeling of conflicts and the reconciliation of viewpoints. In our approach we apply techniques from natural language processing, term subsumption and set-theory to automatically convert the use case descriptions into a line diagram. The visualisation of use case descriptions is a natural addition to the object-oriented design of systems using the Unified Modelling Language where diagrams act as communication and validation devices. RECOCASE is a comprehensive methodology that includes use case description guidelines, a controlled language to support natural language translation, a requirements engineering process model and a tool to assist the specification and reconciliation of requirements. Our approach combines group and individual processes to minimise contradictions and missing information and maximise ownership of the requirements models. In this paper we describe each of the parts of our methodology following an example through each section.
A situated classification solution of a resource allocation task represented in a visual language The Sisyphus room allocation problem solving example has been solved using a situated classification approach. A solution was developed from the protocol provided in terms of three heuristic classification systems, one classifying people, another rooms, and another tasks on an agenda of recommended room allocations. The domain ontology, problem data, problem-solving method, and domain-specific classification rules, have each been represented in a visual language. These knowledge structures compile to statements in a term subsumption knowledge representation language, and are loaded and run in a knowledge representation server to solve the problem. The user interface has been designed to provide support for human intervention in under-determined and over-determined situations, allowing advantage to be taken of the additional choices available in the first case, and a compromise solution to be developed in the second.
Validating Requirements for Fault Tolerant Systems using Model Checking Model checking is shown to be an effective tool in validating the behavior of a fault tolerant embedded spacecraft controller. The case study presented here shows that by judiciously abstracting away extraneous complexity, the state space of the model could be exhaustively searched allowing critical functional requirements to be validated down to the design level. Abstracting away detail not germane to the problem of interest leaves by definition a partial specification behind. The success of this procedure shows that it is feasible to effectively validate a partial specification with this technique. Three anomalies were found in the system. One was an error in the detailed requirements, and the other two were missing/ambiguous requirements. Because the method allows validation of partial specifications, it is also an effective approach for maintaining fidelity between a co-evolving specification and an implementation.
A Blackboard-Based Cooperative System for Schema Integration We describe a four-level blackboard architecture that supports schema integration and provide a detailed description of the communication among human and computational agents that this system allows. Today's corporate information system environments are heterogeneous, consisting of multiple and independently managed databases. Many applications that assist decision making call for access to data from multiple heterogeneous databases. To facilitate this, there needs to be an integrated representation of the underlying databases that allows users to query multiple databases simultaneously. The process of deriving this integrated representation is called schema integration. Schema integration is time consuming and complex, as it requires a thorough understanding of the underlying database semantics. Since no data model can capture the entire real world semantics of each database's objects, this process requires human agent assistance. Although certain aspects of schema integration can be automated, interaction with designers and users is still necessary. In this article, we describe how blackboard architectures can facilitate the communication among human and computational agents for schema integration.
Dependence Directed Reasoning and Learning in Systems Maintenance Support The maintenance of large information systems involves continuous modifications in response to evolving business conditions or changing user requirements. Based on evidence from a case study, it is shown that the system maintenance activity would benefit greatly if the process knowledge reflecting the teleology of a design could be captured and used in order to reason about the consequences of changing conditions or requirements. A formalism called REMAP (representation and maintenance of process knowledge) that accumulates design process knowledge to manage systems evolution is described. To accomplish this, REMAP acquires and maintains dependencies among the design decisions made during a prototyping process, and is able to learn general domain-specific design rules on which such dependencies are based. This knowledge can not only be applied to prototype refinement and systems maintenance, but can also support the reuse of existing design or software fragments to construct similar ones using analogical reasoning techniques.
Requirements Dynamics in Large Software Projects: A Perspective on New Directions in Software Engineering Process
Subsumption between queries to object-oriented databases Most work on query optimization in relational and object-oriented databases has concentrated on tuning algebraic expressions and the physical access to the database contents. The attention to semantic query optimization, however, has been restricted due to its inherent complexity. We take a second look at semantic query optimization in object-oriented databases and find that reasoning techniques for concept languages developed in Artificial Intelligence apply to this problem because concept...
Functional documents for computer systems Although software documentation standards often go into great detail about the format of documents, describing such details as paragraph numbering and section headings, they fail to give precise descriptions of the information to be contained in the documents. This paper does the opposite; it defines the contents of documents without specifying their format or the notation to be used in them. We describe documents such as the “System Requirements Document”, the “System Design Document”, the “Software Requirements Document”, the “Software Behaviour Specification”, the “Module Interface Specification”, and the “Module Internal Design Document” as representations of one or more mathematical relations. By describing those relations, we specify what information should be contained in each document.
A Software Engineering View of Data Base Management This paper examines the field of data base management from the perspective of software engineering. Key topics in software engineering are related to specific activities in data base design and implementation. An attempt is made to show the similarities between steps in the creation of systems involving data bases and other kinds of software systems. It is argued that there is a need to unify thinking about data base systems with other kinds of software systems and tools in order to build high quality systems. The programming language PLAIN and its programming environment is introduced as a tool for integrating notions of programming languages, data base management, and software engineering.
A process model for interactive systems Designing user interfaces and designing computational software are very different processes. The differences lead to late discovery of design conflicts, which drives up development costs. A unifying methodology that could provide early discovery and resolution of design conflicts must account for the governing principles of both processes. Disciplined long-term investigation of candidate methodologies requires that these governing principles be fixed and that evolving development methods comprising each process be accommodated. This article describes an application of general systems theory to integrate these principles, proposes a process model that fixes them as explicit elements of a process program, argues the feasibility of the model and its worthiness for further study, and describes its initial implementation.
OWLPath: An OWL Ontology-Guided Query Editor Most Semantic Web technology-based applications require users to have a deep background in the formal underpinnings of ontology languages and some basic skills in these technologies. Generally, only experts in the field meet these requirements. In this paper, we present OWLPath, a natural-language query editor guided by multilanguage OWL-formatted ontologies. This application allows nonexpert users to easily create SPARQL queries that can be issued over most existing ontology storage systems. Our approach is a fully fledged solution backed with a proof-of-concept implementation and the empirical results of two challenging use cases: one in the domain of e-finance and the other in e-tourism.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
1.007453
0.007843
0.007411
0.007059
0.006003
0.006003
0.004109
0.003006
0.00183
0.000583
0.000005
0
0
0
Rapid prototyping in human-computer interface development Some conventional approaches to interactive system development tend to force commitment to design detail without a means for visualizing the result until it is too late to make significant changes. Rapid prototyping and iterative system refinement, especially for the human interface, allow early observation of system behaviour and opportunities for refinement in response to user feedback. The role of rapid prototyping for evaluation of interface designs is set in the system development life-cycle. Advantages and pitfalls are weighed, and detailed examples are used to show the application of rapid prototyping in a real development project. Kinds of prototypes are classified according to how they can be used in the development process, and system development issues are presented. The future of rapid prototyping depends on solutions to technical problems that presently limit effectiveness of the technique in the context of present day software development environments.
CASE: reliability engineering for information systems Classical and formal methods of information and software systems development are reviewed. The use of computer-aided software engineering (CASE) is discussed. These automated environments and tools make it practical and economical to use formal system-development methods. Their features, tools, and adaptability are discussed. The opportunities that CASE environments provide to use analysis techniques to assess the reliability of information systems before they are implemented and to audit a completed system against its design and maintain the system description as accurate documentation are examined.
A Taxonomy of Current Issues in Requirements Engineering The purpose of this article is to increase awareness of several requirements specifications issues: (1) the role they play in the full system development life cycle, (2) the diversity of forms they assume, and (3) the problems we continue to face. The article concentrates on ways of expressing requirements rather than ways of generating them. A discussion of various classification criteria for existing requirements specification techniques follows a brief review of requirements specification contents and concerns.
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The mystery of the tower revealed: a non-reflective description of the reflective tower In an important series of papers [8, 9], Brian Smith has discussed the nature of programs that know about their text and the context in which they are executed. He called this kind of knowledge reflection. Smith proposed a programming language, called 3-LISP, which embodied such self-knowledge in the domain of metacircular interpreters. Every 3-LISP program is interpreted by a metacircular interpreter, also written in 3-LISP. This gives rise to a picture of an infinite tower of metacircular interpreters, each being interpreted by the one above it. Such a metaphor poses a serious challenge for conventional modes of understanding of programming languages. In our earlier work on reflection [4], we showed how a useful species of reflection could be modeled without the use of towers. In this paper, we give a semantic account of the reflective tower. This account is self-contained in the sense that it does not employ reflection to explain reflection.
Statecharts: A visual formalism for complex systems We present a broad extension of the conventional formalism of state machines and state diagrams, that is relevant to the specification and design of complex discrete-event systems, such as multi-computer real-time systems, communication protocols and digital control units. Our diagrams, which we call statecharts, extend conventional state-transition diagrams with essentially three elements, dealing, respectively, with the notions of hierarchy, concurrency and communication. These transform the language of state diagrams into a highly structured and economical description language. Statecharts are thus compact and expressive - small diagrams can express complex behavior - as well as compositional and modular. When coupled with the capabilities of computerized graphics, statecharts enable viewing the description at different levels of detail, and make even very large specifications manageable and comprehensible. In fact, we intend to demonstrate here that statecharts counter many of the objections raised against conventional state diagrams, and thus appear to render specification by diagrams an attractive and plausible approach. Statecharts can be used either as a stand-alone behavioral description or as part of a more general design methodology that deals also with the system's other aspects, such as functional decomposition and data-flow specification. We also discuss some practical experience that was gained over the last three years in applying the statechart formalism to the specification of a particularly complex system.
A calculus of refinements for program derivations A calculus of program refinements is described, to be used as a tool for the step-by-step derivation of correct programs. A derivation step is considered correct if the new program preserves the total correctness of the old program. This requirement is expressed as a relation of (correct) refinement between nondeterministic program statements. The properties of this relation are studied in detail. The usual sequential statement constructors are shown to be monotone with respect to this relation and it is shown how refinement between statements can be reduced to a proof of total correctness of the refining statement. A special emphasis is put on the correctness of replacement steps, where some component of a program is replaced by another component. A method by which assertions can be added to statements to justify replacements in specific contexts is developed. The paper extends the weakest precondition technique of Dijkstra to proving correctness of larger program derivation steps, thus providing a unified framework for the axiomatic, the stepwise refinement and the transformational approach to program construction and verification.
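The refinement relation described in this abstract is standardly expressed in weakest-precondition form; the following is a sketch of that convention, not a quotation from the paper:

```latex
% S is correctly refined by T iff T establishes every postcondition S does.
\[
  S \sqsubseteq T \;\iff\; \forall q.\; \mathit{wp}(S, q) \Rightarrow \mathit{wp}(T, q)
\]
```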
Symbolic Model Checking Symbolic model checking is a powerful formal specification and verification method that has been applied successfully in several industrial designs. Using symbolic model checking techniques it is possible to verify industrial-size finite state systems. State spaces with up to 10^30 states can be exhaustively searched in minutes. Models with more than 10^120 states have been verified using special techniques.
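At the core of such tools is an image-computation fixpoint. A minimal Python sketch follows, with explicit sets standing in for the BDDs an industrial symbolic model checker would use; the four-state transition system is hypothetical.

```python
def reachable(initial, transitions):
    """Least-fixpoint reachability: iterate the image until no new states."""
    frontier = set(initial)
    reached = set(initial)
    while frontier:
        # Image computation: successors of the current frontier.
        image = {t for s in frontier for t in transitions.get(s, ())}
        frontier = image - reached
        reached |= frontier
    return reached

# Toy 4-state system (hypothetical): 0 -> 1 -> 2, 2 -> 1, 3 unreachable.
trans = {0: [1], 1: [2], 2: [1], 3: [0]}
print(reachable({0}, trans))   # {0, 1, 2}
```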
2009 Data Compression Conference (DCC 2009), 16-18 March 2009, Snowbird, UT, USA
Voice as sound: using non-verbal voice input for interactive control We describe the use of non-verbal features in voice for direct control of interactive applications. Traditional speech recognition interfaces are based on an indirect, conversational model. First the user gives a direction and then the system performs a certain operation. Our goal is to achieve more direct, immediate interaction like using a button or joystick by using lower-level features of voice such as pitch and volume. We are developing several prototype interaction techniques based on this idea, such as "control by continuous voice", "rate-based parameter control by pitch," and "discrete parameter control by tonguing." We have implemented several prototype systems, and they suggest that voice-as-sound techniques can enhance the traditional voice recognition approach.
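A minimal numpy sketch of the two low-level voice features the abstract relies on: volume as frame RMS and pitch via autocorrelation. The sampling rate, frequency range, and synthetic test tone are illustrative assumptions, not details from the paper.

```python
import numpy as np

def volume(frame):
    """Root-mean-square energy of one audio frame."""
    return float(np.sqrt(np.mean(frame ** 2)))

def pitch_hz(frame, rate=16000, fmin=60.0, fmax=500.0):
    """Crude pitch estimate: peak of the autocorrelation in the voice range."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(rate / fmax), int(rate / fmin)
    lag = lo + int(np.argmax(ac[lo:hi]))
    return rate / lag

# 100 ms synthetic 220 Hz tone as a stand-in for a microphone frame.
t = np.arange(1600) / 16000
frame = 0.5 * np.sin(2 * np.pi * 220 * t)
print(round(volume(frame), 3), round(pitch_hz(frame)))   # ~0.354, ~219
```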
An ontological model of an information system An ontological model of an information system that provides precise definitions of fundamental concepts like system, subsystem, and coupling is proposed. This model is used to analyze some static and dynamic properties of an information system and to examine the question of what constitutes a good decomposition of an information system. Some of the major types of information system formalisms that bear on the authors' goals and their respective strengths and weaknesses relative to the model are briefly reviewed. Also articulated are some of the fundamental notions that underlie the model. Those basic notions are then used to examine the nature and some dynamics of system decomposition. The model's predictive power is discussed.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
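A minimal sketch of the first-level TS mechanism the abstract describes, applied to a single-constraint 0/1 knapsack: flip moves, a short-term tabu list, and an aspiration rule that overrides tabu status when a move improves on the best value found so far. The instance data and tabu tenure are illustrative, and the advanced-level strategies (learning, target analysis) are omitted.

```python
def tabu_knapsack(values, weights, capacity, iters=200, tenure=5):
    n = len(values)
    x = [0] * n                          # start from the empty solution
    best_x, best_val = x[:], 0
    tabu_until = [0] * n                 # iteration until which a flip is tabu

    def evaluate(sol):
        w = sum(wi for wi, xi in zip(weights, sol) if xi)
        v = sum(vi for vi, xi in zip(values, sol) if xi)
        return v if w <= capacity else -1    # infeasible moves score poorly

    for it in range(1, iters + 1):
        candidate, move = None, None
        for i in range(n):                   # examine all flip moves
            y = x[:]
            y[i] ^= 1
            val = evaluate(y)
            aspiration = val > best_val      # override tabu if new best
            if tabu_until[i] > it and not aspiration:
                continue
            if candidate is None or val > evaluate(candidate):
                candidate, move = y, i
        if candidate is None:
            break
        x = candidate
        tabu_until[move] = it + tenure       # forbid reversing this flip
        if evaluate(x) > best_val:
            best_x, best_val = x[:], evaluate(x)
    return best_x, best_val

print(tabu_knapsack([10, 13, 7, 8], [5, 6, 4, 3], capacity=10))
```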
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.2
0.011111
0.001754
0
0
0
0
0
0
0
0
0
0
0
A Picture from the Model-Based Testing Area: Concepts, Techniques, and Challenges Model-Based Testing (MBT) represents a feasible and interesting testing strategy where test cases are generated from formal models describing the software behavior/structure. The MBT field is continuously evolving, as can be observed in the increasing number of MBT techniques published in the technical literature. However, there is still a gap between research on MBT and its application in the software industry, mainly caused by the lack of information regarding the concepts, available techniques, and challenges in using this testing strategy in real software projects. This chapter presents information intended to support researchers and practitioners in reducing this gap, consequently contributing to the transfer of this technology from academia to industry. It includes information regarding the concepts of MBT, a characterization of 219 available MBT techniques, approaches supporting the selection of MBT techniques for software projects, risk factors that may influence the use of these techniques in industry together with some mechanisms to mitigate their impact, and future perspectives regarding the MBT field.
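As a concrete illustration of the MBT idea (not a technique from the surveyed set), the Python sketch below derives one transition-covering test sequence per transition from a toy finite-state model of a login dialog; the model and event names are invented.

```python
from collections import deque

def transition_tours(model, start):
    """One test path per transition: shortest event sequence to the source
    state (found by BFS) followed by the transition's own event."""
    prefix = {start: []}
    queue = deque([start])
    while queue:
        s = queue.popleft()
        for event, target in model.get(s, []):
            if target not in prefix:
                prefix[target] = prefix[s] + [event]
                queue.append(target)
    tests = []
    for s, edges in model.items():
        for event, _target in edges:
            if s in prefix:
                tests.append(prefix[s] + [event])
    return tests

# Toy behavioral model: state -> [(event, next_state), ...]
login = {
    "idle":     [("enter_pin", "checking")],
    "checking": [("pin_ok", "active"), ("pin_bad", "idle")],
    "active":   [("logout", "idle")],
}
for test in transition_tours(login, "idle"):
    print(" -> ".join(test))
```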
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Procedures and atomicity refinement The introduction of an early return from a (remote) procedure call can increase the degree of parallelism in a parallel or distributed algorithm modeled by an action system. We define a return statement for procedures in an action systems framework and show that it corresponds to carrying out an atomicity refinement.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
New delay-dependent stabilization conditions of T-S fuzzy systems with constant delay This paper focuses on the problem of robust control for Takagi-Sugeno (T-S) fuzzy systems with time-delay. The delay-dependent stability analysis and controller synthesis have been addressed. The free weighting matrix method has been used for stability analysis and controller synthesis. New and less conservative delay-dependent stability conditions are proposed in terms of linear matrix inequalities (LMI). Finally, some examples are given to illustrate the effectiveness of the proposed approaches.
Robust sliding-mode control for uncertain time-delay systems: an LMI approach This note is devoted to robust sliding-mode control for time-delay systems with mismatched parametric uncertainties. A delay-independent sufficient condition for the existence of linear sliding surfaces is given in terms of linear matrix inequalities, based on which the corresponding reaching motion controller is also developed. The results are illustrated by an example.
Delay-Dependent Robust H-Infinity Control For T-S Fuzzy Systems With Time Delay This paper focuses on the problem of delay-dependent robust fuzzy control for a class of nonlinear delay systems via state feedback. The Takagi-Sugeno (T-S) fuzzy model is adopted for representing a nonlinear system with time delayed state. A delay-dependent stabilization criterion is first presented. Then, the methods of robust stabilization and robust H∞ control are developed, which are dependent on the size of the delay and are based on the solutions of linear matrix inequalities (LMIs). Finally, a design example of a robust H∞ controller for uncertain nonlinear systems is given to illustrate the effectiveness of the approaches proposed in this paper.
Nonsynchronized-State estimation of multichannel networked nonlinear systems with multiple packet dropouts via T-S Fuzzy-Affine dynamic models This paper investigates the problem of robust ℋ∞ state estimation for a class of multichannel networked nonlinear systems with multiple packet dropouts. The nonlinear plant is represented by Takagi-Sugeno (T-S) fuzzy-affine dynamic models with norm-bounded uncertainties, and stochastic variables with general probability distributions are adopted to characterize the data-missing phenomenon in output channels. The objective is to design an admissible state estimator guaranteeing the stochastic stability of the resulting estimation-error system with a prescribed ℋ∞ disturbance attenuation level. It is assumed that the plant premise variables, which are often the state variables or their functions, are not measurable, so that the estimator implementation with state-space partition may not be synchronized with the state trajectories of the plant. Based on a piecewise-quadratic Lyapunov function combined with the S-procedure and some matrix-inequality-convexifying techniques, two different approaches are developed for robust filtering design for the underlying T-S fuzzy-affine systems with unreliable communication links. All the solutions to the problem are formulated in the form of linear matrix inequalities (LMIs). Finally, simulation examples are provided to illustrate the effectiveness of the proposed approaches.
Stability and stabilization of T-S fuzzy systems with time delay via Wirtinger-based double integral inequality. This paper concerns the issue of stabilization and stability analysis for Takagi-Sugeno (T-S) fuzzy systems with time delay. A new type of Lyapunov-Krasovskii functional (LKF), including a non-quadratic Lyapunov functional and a triple integral term, is introduced to obtain stability conditions for fuzzy time-delay systems. A Wirtinger-based double integral inequality is used to estimate the integral term, and the free weighting variable technique is also employed for controller synthesis and stability analysis. Additionally, new and less conservative delay-dependent stability conditions are proposed in the form of linear matrix inequalities (LMIs). Furthermore, several examples are provided to illustrate how effective the suggested approaches are.
Delay-dependent stability analysis and synthesis of uncertain T-S fuzzy systems with time-varying delay This paper considers the delay-dependent stability analysis and controller design for uncertain T-S fuzzy systems with time-varying delay. A new method is provided by introducing some free-weighting matrices and employing the lower bound of the time-varying delay. Based on the Lyapunov-Krasovskii functional method, a sufficient condition for the asymptotical stability of the system is obtained. By constructing the Lyapunov-Krasovskii functional appropriately, we can avoid the supplementary requirement that the time-derivative of the time-varying delay must be smaller than one. The fuzzy state feedback gain is derived through the numerical solution of a set of linear matrix inequalities (LMIs). The upper bound of the time-delay can be obtained by using convex optimization such that the system can be stabilized for all time-delays. The efficiency of our method is demonstrated by two numerical examples.
New stability and stabilization conditions for T-S fuzzy systems with time delay This paper is concerned with the problem of the stability analysis and stabilization for Takagi-Sugeno (T-S) fuzzy systems with time delay. A new Lyapunov-Krasovskii functional containing the fuzzy line-integral Lyapunov function and the simple functional is chosen. By using a recently developed Wirtinger-based integral inequality and introducing slack variables, less conservative conditions in terms of linear matrix inequalities (LMIs) are derived. Several examples are given to show the advantages of the proposed results.
Robust control of uncertain distributed delay systems with application to the stabilization of combustion in rocket motor chambers The problems of robust stability and robust stabilization of uncertain linear systems with distributed delay occurring in the state variables are studied in this paper. The essential requirement for the uncertainties is that they are norm-bounded with known bounds. Conditions for the robust stability of distributed time delay systems are given and a design method for the robust stabilizing control law of the uncertain systems is presented. The proposed method is applied to the stabilization of combustion in the chamber of a liquid monopropellant rocket motor. It is found that the combustion can be robustly stabilized when the two parameters, pressure exponent γ and maximal time lag r, vary in specified intervals, respectively.
A note on equivalence between two integral inequalities for time-delay systems. Jensen's inequality and extended Jensen's inequality are two important integral inequalities when problems of stability analysis and controller synthesis for time-delay systems are considered. The extended Jensen's inequality introduces two additional free matrices and is generally regarded to be less conservative than Jensen's inequality. The equivalence between Jensen's inequality and extended Jensen's inequality in bounding the quadratic term $-h\int_{t-h}^{t}\dot{x}^{T}(s)Z\dot{x}(s)\,ds$ in the Lyapunov functional of time-delay systems is presented and theoretically proved. It is shown that the extended Jensen's inequality does not decrease the lower bound of this quadratic term obtained using Jensen's inequality, and then it does not reduce the conservativeness though two additional free matrices $M_1$ and $M_2$ are involved.
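For reference, the Jensen bound under discussion can be stated as follows (a standard formulation, with Z any positive-definite matrix):

```latex
\[
  -h \int_{t-h}^{t} \dot{x}^{T}(s)\, Z\, \dot{x}(s)\, ds
  \;\le\;
  -\left( \int_{t-h}^{t} \dot{x}(s)\, ds \right)^{\!T} Z
   \left( \int_{t-h}^{t} \dot{x}(s)\, ds \right)
\]
```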
Digital watermarking robust to geometric distortions. In this paper, we present two watermarking approaches that are robust to geometric distortions. The first approach is based on image normalization, in which both watermark embedding and extraction are carried out with respect to an image normalized to meet a set of predefined moment criteria. We propose a new normalization procedure, which is invariant to affine transform attacks. The resulting watermarking scheme is suitable for public watermarking applications, where the original image is not available for watermark extraction. The second approach is based on a watermark resynchronization scheme aimed to alleviate the effects of random bending attacks. In this scheme, a deformable mesh is used to correct the distortion caused by the attack. The watermark is then extracted from the corrected image. In contrast to the first scheme, the latter is suitable for private watermarking applications, where the original image is necessary for watermark detection. In both schemes, we employ a direct-sequence code division multiple access approach to embed a multibit watermark in the discrete cosine transform domain of the image. Numerical experiments demonstrate that the proposed watermarking schemes are robust to a wide range of geometric attacks.
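A minimal sketch of the direct-sequence spread-spectrum embedding step mentioned in the abstract, carried out in the global 2-D DCT domain via scipy. The coefficient band, embedding strength, and pseudorandom chip sequence are illustrative choices, and detection is shown only for the undistorted case; the paper's normalization and resynchronization defenses are not reproduced here.

```python
import numpy as np
from scipy.fft import dctn, idctn

def embed(img, bit, alpha=5.0, seed=7, band=slice(8, 40)):
    """Add a +/-1 chip sequence to a mid-band block of DCT coefficients."""
    rng = np.random.default_rng(seed)
    C = dctn(img.astype(float), norm="ortho")
    chips = rng.choice([-1.0, 1.0], size=C[band, band].shape)
    C[band, band] += alpha * (1 if bit else -1) * chips
    return idctn(C, norm="ortho"), chips

def detect(img, chips, band=slice(8, 40)):
    """Correlate the received mid-band with the known chip sequence."""
    C = dctn(img.astype(float), norm="ortho")
    return float(np.sum(C[band, band] * chips)) > 0

img = np.full((64, 64), 128.0)        # flat test image
marked, chips = embed(img, bit=True)
print(detect(marked, chips))          # True
```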
WebWork: METEOR2's Web-Based Workflow Management System. METEOR workflow management systems consist of both (1) design/build-time and (2) run-time/enactment components for implementing workflow applications. An enactment system provides the command, communication and control for the individual tasks in the workflow. Tasks are the run-time instances of intra- or inter-enterprise applications. We are developing three implementations of the METEOR model: WebWork, OrbWork and NeoWork. This paper discusses WebWork, an implementation relying solely on Web technology as the infrastructure for the enactment system. WebWork supports a distributed implementation with participation of multiple Web servers. It also supports automatic code generation of workflow applications from design specifications produced by a comprehensive graphical designer. WebWork has been developed as a complement of its more heavyweight counterparts (OrbWork and NeoWork), with the goal of providing ease of workflow application development, installation, use and maintenance. At the time of this writing, WebWork has been installed by several of the LSDIS Lab's industrial partners for testing, evaluation and building workflow applications.
Plan Abstraction Based on Operator Generalization We describe a planning system which automatically creates abstract operators while organizing a given set of primitive operators into a taxonomic hierarchy. At the same time, the system creates categories of abstract object types which allow abstract operators to apply to broad classes of functionally similar objects. After the system has found a plan to achieve a particular goal, it replaces each primitive operator in the plan with one of its ancestors from the operator taxonomy. The resulting abstract plan is incorporated into the operator hierarchy as a new abstract operator, an abstract-macro. The next time the planner is faced with a similar task, it can specialize the abstract-macro into a suitable plan by again using the operator taxonomy, this time replacing the abstract operators with appropriate descendants.
Status report: requirements engineering It is argued that, in general, requirements engineering produces one large document, written in a natural language, that few people bother to read. Projects that do read and follow the document often build systems that do not satisfy needs. The reasons for the current state of the practice are listed. Research areas that have significant payoff potential, including improving natural-language specifications, rapid prototyping and requirements animation, requirements clustering, requirements-based testing, computer-aided requirements engineering, requirements reuse, research into methods, knowledge engineering, formal methods, and a unified framework, are outlined.
A brief review of modeling approaches based on fuzzy time series Recently, there seems to be increased interest in time series forecasting using soft computing (SC) techniques, such as fuzzy sets, artificial neural networks (ANNs), rough set (RS) and evolutionary computing (EC). Among them, fuzzy set is a widely used technique in this domain, where it is referred to as "Fuzzy Time Series (FTS)". In this survey, extensive information and knowledge are provided for the FTS concepts and their applications in time series forecasting. This article reviews and summarizes previous research works in the FTS modeling approach from the period 1993–2013 (June). Here, we also provide a brief introduction to SC techniques, because in many cases problems can be solved most effectively by integrating these techniques into different phases of the FTS modeling approach. Hence, several techniques that are hybridized with the FTS modeling approach are discussed briefly. We also identify various domain-specific problems and research trends, and try to categorize them. The article ends with implications for future work. This review may serve as a stepping stone for amateur and advanced researchers in this domain.
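A minimal Python sketch of the classic Chen-style FTS forecast that many of the surveyed models refine: partition the universe of discourse into intervals, fuzzify each observation, collect fuzzy logical relationships, and forecast with interval midpoints. The series and partition count are illustrative.

```python
import numpy as np

def chen_fts(series, n_intervals=5):
    lo, hi = min(series) - 1, max(series) + 1
    edges = np.linspace(lo, hi, n_intervals + 1)
    mids = (edges[:-1] + edges[1:]) / 2

    def fuzzify(v):
        # Index of the interval containing v.
        return min(int(np.searchsorted(edges, v, side="right")) - 1,
                   n_intervals - 1)

    labels = [fuzzify(v) for v in series]
    # Fuzzy logical relationships: A_i -> {A_j observed immediately after A_i}.
    flr = {}
    for a, b in zip(labels, labels[1:]):
        flr.setdefault(a, set()).add(b)
    # Forecast from the last observation: mean midpoint of its successors.
    last = labels[-1]
    successors = flr.get(last, {last})
    return float(np.mean([mids[j] for j in successors]))

enrollments = [13055, 13563, 13867, 14696, 15460, 15311, 15603, 15861]
print(round(chen_fts(enrollments)))
```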
1.019154
0.027266
0.018949
0.018177
0.015961
0.00944
0.003188
0.000159
0.000021
0
0
0
0
0
A PaaS to Support Collaborations through Service Composition The French project OpenPaaS aims at providing a social platform to help companies initiating and managing their collaborations. Nowadays, data exchange is not sufficient and collaborations need to be supported for every interaction between the actors of the collaboration. This paper concerns a PaaS in charge of supporting the deduction of collaborative business processes that involve subscribing organizations of the PaaS. In order to be applicable to industrial needs, the process deduction should require minimal knowledge from users: collaborative objectives and a repository of all the capabilities made available by the subscribing organizations. Functional and non-functional gaps are filled simultaneously when building the process: (i) a collaborative ontology allows finding sets of capabilities able to achieve the collaborative objectives and (ii) a non-functional assessment builds the optimal process, i.e., the sequence of activities and also the set of partners with their corresponding capabilities. This article focuses on the first point and presents a methodology based on semantics to deduce a collaborative process.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Fuzzy Event-Triggered Control for PDE Systems With Pointwise Measurements Based on Relaxed Lyapunov–Krasovskii Functionals In this article, an event-triggered control problem for partial differential equation systems with pointwise measurements is investigated via relaxed Lyapunov–Krasovskii functionals. First, the Takagi–Sugeno fuzzy model is introduced to describe the nonlinear systems and a fuzzy event-triggered pointwise controller is proposed with pointwise measurements, which can make a tradeoff between the system’s performance and implementation complexity subject to limited transmission bandwidth. Second, some relaxed conditions are established to ensure the closed-loop system’s stability by using the Lyapunov method and inequality techniques. Finally, two simulation examples are provided to demonstrate the effectiveness and practicability of the designed controller.
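A minimal sketch of the generic event-triggering rule that underlies such schemes, shown in a plain discrete-time state-feedback setting rather than the paper's PDE setting: the sensor transmits a fresh sample only when the deviation from the last transmitted state exceeds a relative threshold. The system matrices, gain, and threshold are illustrative assumptions.

```python
import numpy as np

def simulate(A, B, K, x0, steps=50, sigma=0.3):
    """Discrete-time event-triggered state feedback u = K @ x_hat,
    where x_hat is the last *transmitted* state sample."""
    x, x_hat = np.array(x0, float), np.array(x0, float)
    events = 0
    for _ in range(steps):
        # Trigger rule: transmit when the sampling error grows too large.
        if np.linalg.norm(x - x_hat) > sigma * np.linalg.norm(x):
            x_hat = x.copy()
            events += 1
        u = K @ x_hat
        x = A @ x + B @ u
    return x, events

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # illustrative double integrator
B = np.array([[0.0], [0.1]])
K = np.array([[-5.0, -3.0]])             # stabilizing gain (assumed)
x_final, n_events = simulate(A, B, K, [1.0, 0.0])
print(n_events, np.round(x_final, 3))    # few events, state near the origin
```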
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
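The data refinement abstract above centres on a predicate transformer that maps abstract predicates to concrete ones. As a sketch (my notation and formulation, not necessarily the paper's), the resulting proof obligation is often stated as:

```latex
% A (abstract) is data-refined by C (concrete) through a predicate
% transformer \rho taking abstract predicates to concrete ones:
\[
  A \sqsubseteq_{\rho} C
  \iff
  \forall \varphi.\;
  \rho\bigl(\mathrm{wp}(A, \varphi)\bigr) \;\Rightarrow\; \mathrm{wp}\bigl(C, \rho(\varphi)\bigr)
\]
```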
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the timewise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
1
0
0
0
0
0
0
0
0
0
0
0
0
0
A relation algebraic model of robust correctness We propose a new and uniform abstract relational approach to demonic nondeterminism and robust correctness similar to Hoare's chaos semantics. It is based on a specific set of relations on flat lattices. This set forms a complete lattice. Furthermore, we deal with the refinement of programs. Among other things, we show the correctness of the unfold/fold method for demonic nondeterminism and robust correctness as refinement relation and investigate relationships to Dijkstra's wp-calculus and Morgan's specification statement.—Authors' Abstract
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
A Conceptual Graph Model for W3C Resource Description Framework With the aim of building a "Semantic Web", the content of the documents must be explicitly represented through metadata in order to enable contents-guided search. Our approach is to exploit a standard language (RDF, recommended by W3C) for expressing such metadata and to interpret these metadata in conceptual graphs (CG) in order to exploit querying and inferencing capabilities enabled by CG formalism. The paper presents our mapping of RDF into CG and its interest in the context of the semantic Web.
Procedures and atomicity refinement The introduction of an early return from a (remote) procedure call can increase the degree of parallelism in a parallel or distributed algorithm modeled by an action system. We define a return statement for procedures in an action systems framework and show that it corresponds to carrying out an atomicity refinement.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
The college admissions problem with a continuum of students In many two-sided matching markets, agents on one side are matched to a large number of agents on the other side (e.g. college admissions). Yet little is known about the structure of stable matchings when there are many agents on one side. We propose a variation of the Gale and Shapley [3] college admissions model where a finite number of colleges is matched to a continuum of students. It is shown that, generically (though not always) (i) there is a unique stable matching, (ii) this stable matching varies continuously with the underlying economy, and (iii) it is the limit of the set of stable matchings of approximating large discrete economies.
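The abstract above studies the continuum limit of the Gale and Shapley college admissions model; as background, here is a standard sketch of student-proposing deferred acceptance in the finite model. The preferences and capacities are made up for illustration.

```python
def deferred_acceptance(student_prefs, college_prefs, capacities):
    """Student-proposing deferred acceptance (finite Gale-Shapley model).
    student_prefs[s]: list of colleges in order of preference.
    college_prefs[c]: list of students in order of preference."""
    rank = {c: {s: i for i, s in enumerate(prefs)}
            for c, prefs in college_prefs.items()}
    next_choice = {s: 0 for s in student_prefs}  # next college to propose to
    held = {c: [] for c in college_prefs}        # tentatively admitted
    free = list(student_prefs)
    while free:
        s = free.pop()
        if next_choice[s] >= len(student_prefs[s]):
            continue                             # s has exhausted their list
        c = student_prefs[s][next_choice[s]]
        next_choice[s] += 1
        held[c].append(s)
        held[c].sort(key=lambda t: rank[c][t])   # keep best-ranked first
        if len(held[c]) > capacities[c]:
            rejected = held[c].pop()             # worst-held student leaves
            free.append(rejected)
    return held

matching = deferred_acceptance(
    {"ann": ["X", "Y"], "bob": ["X", "Y"], "cal": ["Y", "X"]},
    {"X": ["bob", "ann", "cal"], "Y": ["ann", "cal", "bob"]},
    {"X": 1, "Y": 2})
print(matching)  # a stable matching of students to colleges
```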
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the timewise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Active Learning for Entity Filtering in Microblog Streams Monitoring the reputation of entities such as companies or brands in microblog streams (e.g., Twitter) starts by selecting mentions that are related to the entity of interest. Entities are often ambiguous (e.g., "Jaguar" or "Ford") and effective methods for selectively removing non-relevant mentions often use background knowledge obtained from domain experts. Manual annotations by experts, however, are costly. We therefore approach the problem of entity filtering with active learning, thereby reducing the annotation load for experts. To this end, we use a strong passive baseline and analyze different sampling methods for selecting samples for annotation. We find that margin sampling--an informative type of sampling that considers the distance to the hyperplane used for class separation--can effectively be used for entity filtering and can significantly reduce the cost of annotating initial training data.
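Margin sampling, the method this abstract finds effective, is short enough to sketch. The classifier, feature dimensions, batch size and random data below are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def margin_sample(X_labeled, y_labeled, X_pool, batch_size=10):
    """Pick the pool examples closest to the separating hyperplane:
    the ones the current classifier is least sure about."""
    clf = LogisticRegression(max_iter=1000).fit(X_labeled, y_labeled)
    margins = np.abs(clf.decision_function(X_pool))  # distance-like score
    return np.argsort(margins)[:batch_size]          # smallest margins first

# Hypothetical data: 40 labeled mentions, 500 unlabeled, 20 features each.
rng = np.random.default_rng(0)
X_lab, y_lab = rng.normal(size=(40, 20)), rng.integers(0, 2, 40)
X_pool = rng.normal(size=(500, 20))
to_annotate = margin_sample(X_lab, y_lab, X_pool)
print(to_annotate)  # indices an expert would be asked to label next
```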
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the timewise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Still-image watermarking robust to local geometric distortions. Geometrical distortions are the Achilles heel for many watermarking schemes. Most countermeasures proposed in the literature only address the problem of global affine transforms (e.g., rotation, scaling, and translation). In this paper, we propose an original blind watermarking algorithm robust to local geometrical distortions such as the deformations induced by Stirmark. Our method consists in adding a predefined additional information to the useful message bits at the insertion step. These additional bits are labeled as resynchronization bits or reference bits and they are modulated in the same way as the information bits. During the extraction step, the reference bits are used as anchor points to estimate and compensate for small local and global geometrical distortions. The deformations are approximated using a modified basic optical flow algorithm.
Improved seam carving for video retargeting Video, like images, should support content aware resizing. We present video retargeting using an improved seam carving operator. Instead of removing 1D seams from 2D images we remove 2D seam manifolds from 3D space-time volumes. To achieve this we replace the dynamic programming method of seam carving with graph cuts that are suitable for 3D volumes. In the new formulation, a seam is given by a minimal cut in the graph and we show how to construct a graph such that the resulting cut is a valid seam. That is, the cut is monotonic and connected. In addition, we present a novel energy criterion that improves the visual quality of the retargeted images and videos. The original seam carving operator is focused on removing seams with the least amount of energy, ignoring energy that is introduced into the images and video by applying the operator. To counter this, the new criterion is looking forward in time - removing seams that introduce the least amount of energy into the retargeted result. We show how to encode the improved criterion into graph cuts (for images and video) as well as dynamic programming (for images). We apply our technique to images and videos and present results of various applications.
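The paper above replaces dynamic programming with graph cuts for video; the baseline it generalises, single-image seam removal by dynamic programming, can be sketched as follows. The simple gradient-magnitude energy is an assumption for illustration, not the paper's forward-energy criterion.

```python
import numpy as np

def remove_vertical_seam(gray):
    """Remove one minimal-energy vertical seam from a 2-D grayscale image
    using the classic dynamic-programming formulation."""
    h, w = gray.shape
    energy = np.abs(np.gradient(gray, axis=0)) + np.abs(np.gradient(gray, axis=1))
    cost = energy.copy()
    for i in range(1, h):                      # cumulative minimum-cost table
        left = np.roll(cost[i - 1], 1);  left[0] = np.inf
        right = np.roll(cost[i - 1], -1); right[-1] = np.inf
        cost[i] += np.minimum(np.minimum(left, cost[i - 1]), right)
    # backtrack the cheapest 8-connected path from bottom to top
    seam = np.empty(h, dtype=int)
    seam[-1] = int(np.argmin(cost[-1]))
    for i in range(h - 2, -1, -1):
        j = seam[i + 1]
        lo, hi = max(0, j - 1), min(w, j + 2)
        seam[i] = lo + int(np.argmin(cost[i, lo:hi]))
    # delete the seam pixel from every row
    mask = np.ones((h, w), dtype=bool)
    mask[np.arange(h), seam] = False
    return gray[mask].reshape(h, w - 1)

img = np.random.rand(6, 8)
print(remove_vertical_seam(img).shape)  # (6, 7)
```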
DAISY: an efficient dense descriptor applied to wide-baseline stereo. In this paper, we introduce a local image descriptor, DAISY, which is very efficient to compute densely. We also present an EM-based algorithm to compute dense depth and occlusion maps from wide-baseline image pairs using this descriptor. This yields much better results in wide-baseline situations than the pixel and correlation-based algorithms that are commonly used in narrow-baseline stereo. Also, using a descriptor makes our algorithm robust against many photometric and geometric transformations. Our descriptor is inspired from earlier ones such as SIFT and GLOH but can be computed much faster for our purposes. Unlike SURF, which can also be computed efficiently at every pixel, it does not introduce artifacts that degrade the matching performance when used densely. It is important to note that our approach is the first algorithm that attempts to estimate dense depth maps from wide-baseline image pairs, and we show that it is a good one at that with many experiments for depth estimation accuracy, occlusion detection, and comparing it against other descriptors on laser-scanned ground truth scenes. We also tested our approach on a variety of indoor and outdoor scenes with different photometric and geometric transformations and our experiments support our claim to being robust against these.
Robust video watermarking based on affine invariant regions in the compressed domain This paper proposes a novel robust video watermarking scheme based on local affine invariant features in the compressed domain. This scheme is resilient to geometric distortions and quite suitable for DCT-encoded compressed video data because it performs directly in the block DCT domain. In order to synchronize the watermark, we use local invariant feature points obtained through the Harris-Affine detector, which is invariant to affine distortions. To decode the frames from the DCT domain to the spatial domain as fast as possible, a fast inter-transformation between block DCTs and sub-block DCTs is employed, and down-sampled frames in the spatial domain are obtained by replacing each sub-block's DCT of 2x2 pixels with half of the corresponding DC coefficient. The above-mentioned strategy can significantly save computational cost in comparison with the conventional method, which accomplishes the same task via the inverse DCT (IDCT). The watermark detection is performed in the spatial domain along with the decoded video playing, so it is not sensitive to video format conversion. Experimental results demonstrate that the proposed scheme is transparent and robust to signal-processing attacks, geometric distortions including rotation, scaling, aspect ratio changes, linear geometric transforms, cropping and combinations of several attacks, frame dropping, and frame rate conversion.
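The fast down-sampling trick in this abstract rests on a small identity: for an orthonormal 2x2 DCT, the DC coefficient equals half the sum of the four pixels, so half the DC is exactly the block average. A minimal numeric check (not the paper's code):

```python
import numpy as np

def downsample_from_dc(dc_plane):
    """For an orthonormal 2x2 DCT, DC = (sum of the 4 pixels) / 2, so
    DC / 2 is the 2x2 block mean: a half-resolution frame can be read
    straight off the DC plane without any inverse DCT."""
    return dc_plane / 2.0

block = np.array([[100.0, 102.0], [98.0, 100.0]])
dc = block.sum() / 2.0                        # orthonormal 2x2 DCT-II DC term
print(downsample_from_dc(np.array([[dc]])))   # [[100.]] == block.mean()
```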
Real-Time Compressed-Domain Video Watermarking Resistance to Geometric Distortions A proposed real-time video watermarking scheme is transparent and robust to geometric distortions, including rotation with cropping, scaling, aspect ratio change, frame dropping, and swapping.
Evaluation of feature extraction techniques for robust watermarking This paper addresses feature extraction techniques for robust watermarking. Geometric distortion attacks desynchronize the location of the inserted watermark and hence prevent watermark detection. Watermark synchronization, which is a process of finding the location for watermark insertion and detection, is crucial to the design of robust watermarking. One solution is to use image features. This paper reviews feature extraction techniques that have been used in feature-based watermarking: the Harris corner detector and the Mexican Hat wavelet scale interaction method. We also evaluate the scale-invariant keypoint extractor in comparison with other techniques from the perspective of watermarking. After feature extraction, a set of triangles is generated by Delaunay tessellation. These triangles are the locations for watermark insertion and detection. The redetection ratio of triangles is evaluated against geometric distortion attacks as well as signal processing attacks. Experimental results show that the scale-invariant keypoint extractor is appropriate for robust watermarking.
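The Delaunay tessellation step this abstract describes is directly available in standard libraries; a minimal sketch with made-up feature points:

```python
import numpy as np
from scipy.spatial import Delaunay

# Hypothetical feature points (e.g., Harris corners) in a 512x512 image;
# Delaunay tessellation turns them into the triangles used as watermark
# insertion/detection regions in feature-based schemes like the one above.
points = np.array([[50, 60], [400, 80], [230, 300], [90, 450], [470, 420]])
tri = Delaunay(points)
print(tri.simplices)  # each row: indices of one triangle's three vertices
```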
Quantization index modulation: a class of provably good methods for digital watermarking and information embedding We consider the problem of embedding one signal (e.g., a digital watermark) within another “host” signal to form a third, “composite” signal. The embedding is designed to achieve efficient tradeoffs among the three conflicting goals of maximizing the information-embedding rate, minimizing the distortion between the host signal and composite signal, and maximizing the robustness of the embedding. We introduce new classes of embedding methods, termed quantization index modulation (QIM) and distortion-compensated QIM (DC-QIM), and develop convenient realizations in the form of what we refer to as dither modulation. Using deterministic models to evaluate digital watermarking methods, we show that QIM is “provably good” against arbitrary bounded and fully informed attacks, which arise in several copyright applications, and in particular it achieves provably better rate-distortion-robustness tradeoffs than currently popular spread-spectrum and low-bit(s) modulation methods. Furthermore, we show that for some important classes of probabilistic models, DC-QIM is optimal (capacity-achieving) and regular QIM is near-optimal. These include both additive white Gaussian noise (AWGN) channels, which may be good models for hybrid transmission applications such as digital audio broadcasting, and mean-square-error-constrained attack channels that model private-key watermarking applications.
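Scalar QIM with two interleaved quantisation lattices can be sketched compactly. The step size and noise level below are hypothetical, and this toy omits dither sequences and the distortion-compensated variant.

```python
import numpy as np

def qim_embed(host, bits, delta=1.0):
    """Scalar quantization index modulation: quantise each host sample
    with one of two interleaved lattices (offset 0 or delta/2) chosen by
    the message bit."""
    dither = np.asarray(bits) * (delta / 2.0)
    return np.round((host - dither) / delta) * delta + dither

def qim_decode(received, delta=1.0):
    """Decode by choosing, per sample, the lattice whose nearest point
    is closest to the received value (minimum-distance decoding)."""
    d0 = np.abs(received - np.round(received / delta) * delta)
    shifted = received - delta / 2.0
    d1 = np.abs(shifted - np.round(shifted / delta) * delta)
    return (d1 < d0).astype(int)

host = np.random.randn(8)
bits = np.array([1, 0, 1, 1, 0, 0, 1, 0])
marked = qim_embed(host, bits, delta=0.8)
noisy = marked + np.random.uniform(-0.15, 0.15, size=8)  # bounded attack
print(qim_decode(noisy, delta=0.8), bits)  # match while |noise| < delta/4
```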
Stability analysis of some classes of input-affine nonlinear systems with aperiodic sampled-data control. In this paper we investigate the stability analysis of nonlinear sampled-data systems which are affine in the input. We assume that a stabilizing controller is designed using the emulation technique. We intend to provide sufficient stability conditions for the resulting sampled-data system. This allows us to find an estimate of the upper bound on the asynchronous sampling intervals for which stability is ensured. The main idea of the paper is to address the stability problem in a new framework inspired by the dissipativity theory. Furthermore, the result is shown to be constructive. Numerically tractable criteria are derived using linear matrix inequalities for polytopic systems and using the sum of squares technique for the class of polynomial systems.
On the JPEG Model for Lossless Image Compression
Maris: map recognition input system A map recognition input system called MARIS is developed to digitize large-reduced-scale maps into a layered data form. This paper presents an experimental workstation, a vector-based recognition method, and an intelligent interaction function which are devised in order to enhance input speed. The recognition method is capable of extracting building lines, contour lines, and lines representing railways, roads and water areas. The recognition and the interaction utilize new efficient line tracing/tracking techniques. Experimental results show that the input time using MARIS can be reduced to about 25% of that of a system using a conventional interactive digitizer.
Image warping by radial basis functions: applications to facial expressions The human face is an elastic object. A natural paradigm for representing facial expressions is to form a complete 3D model of facial muscles and tissues. However, determining the actual parameter values for synthesizing and animating facial expressions is tedious; evaluating these parameters for facial expression analysis out of grey-level images is ahead of the state of the art in computer vision. Using only 2D face images and a small number of anchor points, we show that the method of radial...
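Radial basis interpolation of anchor-point displacements, the mechanism this abstract relies on, can be sketched as follows. The Gaussian kernel, its width, and the anchor points are my assumptions for illustration, not necessarily the paper's choices.

```python
import numpy as np

def rbf_warp_field(src_pts, dst_pts, grid_shape, sigma=30.0):
    """Dense displacement field that moves each anchor point src_pts[i]
    to dst_pts[i], interpolated everywhere with Gaussian radial basis
    functions."""
    src = np.asarray(src_pts, float)
    disp = np.asarray(dst_pts, float) - src            # anchor displacements
    # solve for RBF weights: K w = disp, K[i, j] = phi(|p_i - p_j|)
    d2 = ((src[:, None, :] - src[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma**2))
    w = np.linalg.solve(K + 1e-8 * np.eye(len(src)), disp)
    # evaluate the interpolated field on every pixel of the grid
    ys, xs = np.mgrid[0:grid_shape[0], 0:grid_shape[1]]
    pix = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)
    d2p = ((pix[:, None, :] - src[None, :, :]) ** 2).sum(-1)
    field = np.exp(-d2p / (2 * sigma**2)) @ w
    return field.reshape(*grid_shape, 2)               # (H, W, 2) offsets

# Three hypothetical anchors around a mouth being pulled into a smile.
src = [(30, 60), (50, 62), (70, 60)]
dst = [(30, 58), (50, 64), (70, 58)]
print(rbf_warp_field(src, dst, (100, 100)).shape)      # (100, 100, 2)
```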
The architecture and design of a collaborative environment for systems definition Defining systems requirements and specifications is a collaborative effort among managers, users, and systems developers. The difficulty of systems definition is caused by humans' limited cognitive capabilities, which is compounded by the complexity of group communication and coordination processes. Current system analysis methodologies are first evaluated regarding the level of support they provide to users. Since systems definition is a knowledge-intensive activity, the knowledge contents and structures employed in systems definition are discussed. For any large-scale system, no one person possesses all the knowledge that is needed; therefore, the authors propose a collaborative approach to systems definition. The use of a group decision support system (GDSS) for systems definition is first described and limitations of current GDSS are identified. The architecture and design of a collaborative computer-aided software engineering (CASE) environment, called C-CASE, is then discussed. C-CASE can be used to assist users in defining the requirements of their organization and information systems as well as to analyze the consistency and completeness of the requirements. C-CASE integrates GDSS and CASE such that users can actively participate in the requirements elicitation process. Users can use the metasystem capability of C-CASE to define domain-specific systems definition languages, which are adaptable to different systems development settings. An example of using C-CASE in a collaborative environment is given. The implications of C-CASE and the authors' ongoing research are also discussed.
Evaluation of JPEG-LS, the new lossless and controlled-lossy still image compression standard, for compression of high-resolution elevation data The compression of elevation data is studied. The performance of JPEG-LS, the new international ISO/ITU standard for lossless and near-lossless (controlled-lossy) still-image compression, is investigated both for data from the USGS digital elevation model (DEM) database and the navy-provided digital terrain model (DTM) data. Using JPEG-LS has the advantage of working with a standard algorithm. Moreover, in contrast with algorithms like the popular JPEG-lossy standard, this algorithm permits the completely lossless compression of the data as well as a controlled lossy mode where a sharp upper bound on the elevation error is selected by the user. All these are achieved at a very low computational complexity. In addition to these algorithmic advantages, they show that JPEG-LS achieves significantly better compression results than those obtained with other (nonstandard) algorithms previously investigated for the compression of elevation data. The results here reported suggest that JPEG-LS can immediately be adopted for the compression of elevation data for a number of applications
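The predictor at the heart of JPEG-LS (LOCO-I), the standard this abstract evaluates, is the median edge detector, which is small enough to show in full; the surrounding context modelling and Golomb coding of residuals are omitted here.

```python
def med_predict(a, b, c):
    """JPEG-LS (LOCO-I) median edge detector: predict the current pixel x
    from its left (a), upper (b) and upper-left (c) neighbours.
        c b
        a x
    Picks min(a, b) or max(a, b) at an edge, otherwise a + b - c."""
    if c >= max(a, b):
        return min(a, b)
    if c <= min(a, b):
        return max(a, b)
    return a + b - c

# Flat region: neighbours agree, prediction is exact.
print(med_predict(100, 100, 100))  # 100
# Vertical edge above the pixel: predictor follows the left neighbour.
print(med_predict(40, 200, 200))   # 40
```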
Dual-Clustering-Based Hyperspectral Band Selection by Contextual Analysis. Hyperspectral image (HSI) involves vast quantities of information that can help with the image analysis. However, this information has sometimes been proved to be redundant, considering specific applications such as HSI classification and anomaly detection. To address this problem, hyperspectral band selection is viewed as an effective dimensionality reduction method that can remove the redundant components of HSI. Various HSI band selection methods have been proposed recently, and the clustering-based method is a traditional one. This agglomerative method has been considered simple and straightforward, while its performance is generally inferior to the state of the art. To tackle the inherent drawbacks of the clustering-based band selection method, a new framework based on dual clustering is proposed in this paper. The main contributions can be summarized as follows: 1) a novel descriptor that reveals the context of HSI efficiently; 2) a dual clustering method that includes the contextual information in the clustering process; 3) a new strategy that selects the cluster representatives jointly, considering the mutual effects of each cluster. Experimental results on three real-world HSIs verify the noticeable accuracy of the proposed method with regard to the HSI classification application. The main comparison has been conducted among several recent clustering-based band selection methods and constraint-based band selection methods, demonstrating the superiority of the technique that we present.
1.104674
0.101904
0.101904
0.101904
0.101904
0.052267
0.017029
0.000002
0
0
0
0
0
0
Mechanising some Advanced Refinement Concepts We describe how proof rules for three advanced refinement features are mechanically verified using the HOL theorem prover. These features are data refinement, backwards data refinement and superposition refinement of initialised loops. We also show how applications of these proof rules to actual program refinement can be checked using the HOL system, with the HOL system generating the verification conditions. 1 Introduction Stepwise refinement is a methodology for developing programs from...
Window Inference in the HOL System
A Window Inference Tool for Refinement
Statement inversion and strongest postcondition A notion of inverse commands is defined for a language which permits both demonic and angelic nondeterminism, as well as miracles and nontermination. Every conjunctive and terminating command is invertible, the inverse being non-miraculous and disjunctive. A simulation relation between commands is described using inverse commands. A generalised form of inverse is defined for arbitrary conjunctive commands. The generalised inverses are shown to be closely related to strongest postconditions.
A Program Refinement Tool The refinement calculus for the development of programs from specifications is well suited to mechanised support. We review the requirements for tool support of refinement as gleaned from our experience with existing refinement tools, and report on the design and implementation of a new tool to support refinement based on these requirements. The main features of the new tool are close integration of refinement and proof in a single tool (the same mechanism is used for both), good management of the refinement context, an extensible theory base that allows the tool to be adapted to new application domains, and a flexible user interface.
Informal Strategies in Design by Refinement To become more widely accepted, formal development methods must come to be seen to complement existing systems design techniques, rather than to replace them. This paper proposes one way in which this can take place—in a formal development framework, closely based on the refinement calculus but simultaneously accommodating some important informal design strategies.
A Single Complete Rule for Data Refinement One module is said to be refined by a second if no program using the second module can detect that it is not using the first; in that case the second module can replace the first in any program. Data refinement transforms the interior pieces of a module — its state and consequentially its operations — in order to refine the module overall.
Refining Specifications to Logic Programs The refinement calculus provides a framework for the stepwise development of imperative programs from specifications. In this paper we study a refinement calculus for deriving logic programs. Dealing with logic programs rather than imperative programs has the dual advantages that, due to the expressive power of logic programs, the final program is closer to the original specification, and each refinement step can achieve more. Together these reduce the overall number of derivation steps....
Time-Dependent Distributed Systems: Proving Safety, Liveness and Real-Time Properties Most communication protocol systems utilize timers to implement real-time constraints between event occurrences. Such systems are said to be time-dependent if the real-time constraints are crucial to their correct operation. We present a model for specifying and verifying time-dependent distributed systems. We consider networks of processes that communicate with one another by message-passing. Each process has a set of state variables and a set of events. An event is described by a predicate that relates the values of the network's state variables immediately before to their values immediately after the event occurrence. The predicate embodies specifications of both the event's enabling condition and action. Inference rules for both safety and liveness properties are presented. Real-time progress properties can be verified as safety properties. We illustrate with three sliding window data transfer protocols that use modulo-2 sequence numbers. The first protocol operates over channels that only lose messages. It is a time-independent protocol. The second and third protocols operate over channels that lose, reorder, and duplicate messages. For their correct operation, it is necessary that messages in the channels have bounded lifetimes. They are time-dependent protocols.
A comparative analysis of methodologies for database schema integration One of the fundamental principles of the database approach is that a database allows a nonredundant, unified representation of all data managed in an organization. This is achieved only when methodologies are available to support integration across organizational and application boundaries. Methodologies for database design usually perform the design activity by separately producing several schemas, representing parts of the application, which are subsequently merged. Database schema integration is the activity of integrating the schemas of existing or proposed databases into a global, unified schema. The aim of the paper is to provide first a unifying framework for the problem of schema integration, then a comparative review of the work done thus far in this area. Such a framework, with the associated analysis of the existing approaches, provides a basis for identifying strengths and weaknesses of individual methodologies, as well as general guidelines for future improvements and extensions
An Algebraic Foundation for Graph-based Diagrams in Computing We develop an algebraic foundation for some of the graph-based structures underlying a variety of popular diagrammatic notations for the specification, modelling and programming of computing systems. Using hypergraphs and higraphs as leading examples, a locally ordered category Graph(C) of graphs in a locally ordered category C is defined and endowed with symmetric monoidal closed structure. Two other operations on higraphs and variants, selected for relevance to computing applications, are generalised in this setting.
Understanding quality in conceptual modeling With the increasing focus on early development as a major factor in determining overall quality, many researchers are trying to define what makes a good conceptual model. However, existing frameworks often do little more than list desirable properties. The authors examine attempts to define quality as it relates to conceptual models and propose their own framework, which includes a systematic approach to identifying quality-improvement goals and the means to achieve them. The framework has two unique features: it distinguishes between goals and means by separating what you are trying to achieve in conceptual modeling from how to achieve it (the goals are made more realistic by introducing the notion of feasibility); and it is closely linked to linguistic concepts, because modeling is essentially making statements in some language.
A Picture from the Model-Based Testing Area: Concepts, Techniques, and Challenges Model-Based Testing (MBT) represents a feasible and interesting testing strategy where test cases are generated from formal models describing the software behavior/structure. The MBT field is continuously evolving, as can be observed in the increasing number of MBT techniques published in the technical literature. However, there is still a gap between research on MBT and its application in the software industry, mainly caused by the lack of information regarding the concepts, available techniques, and challenges in using this testing strategy in real software projects. This chapter presents information intended to support researchers and practitioners in reducing this gap, consequently contributing to the transfer of this technology from academia to industry. It includes information regarding the concepts of MBT, a characterization of 219 available MBT techniques, approaches supporting the selection of MBT techniques for software projects, risk factors that may influence the use of these techniques in industry together with some mechanisms to mitigate their impact, and future perspectives regarding the MBT field.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, the storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.058987
0.036895
0.035937
0.017296
0.011111
0.002222
0.000171
0.000053
0.000015
0
0
0
0
0
Bessel inequality for robust stability analysis of time-delay system This paper addresses the problem of stability analysis for linear time-delay systems via a robust analysis approach, and especially the quadratic separation framework. To this end, we use the Bessel inequality to build operators that depend on the delay. They not only allow us to model the system as an uncertain feedback system but also to control the accuracy of the approximations made. Then, a set of LMI conditions is proposed which, on examples, tends to the analytical bounds for both delay-dependent stability and delay-range stability.
New stability conditions for systems with distributed delays In the present paper, sufficient conditions for the exponential stability of linear systems with infinite distributed delays are presented. Such systems arise in population dynamics, in traffic flow models, in networked control systems, in PID controller design and in other engineering problems. In the early Lyapunov-based analysis of systems with distributed delays (Kolmanovskii & Myshkis, 1999), the delayed terms were treated as perturbations, where it was assumed that the system without the delayed term is asymptotically stable. Later, for the case of constant kernels and finite delays, less conservative conditions were derived under the assumption that the corresponding system with the zero-delay is stable (Chen & Zheng, 2007). We will generalize these results to the infinite delay case by extending the corresponding Jensen's integral inequalities and Lyapunov-Krasovskii constructions. Our main challenge is the stability conditions for systems with gamma-distributed delays, where the delay is stabilizing, i.e. the corresponding system with the zero-delay as well as the system without the delayed term are not asymptotically stable. Here the results are derived by using augmented Lyapunov functionals. Polytopic uncertainties in the system matrices can be easily included in the analysis. Numerical examples illustrate the efficiency of the method. Thus, for the traffic flow model on the ring, where the delay is stabilizing, the resulting stability region is close to the theoretical one found in Michiels, Morarescu, and Niculescu (2009) via the frequency domain analysis.
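For reference, the finite-delay Jensen integral inequality that these constructions extend is standardly stated as follows (a textbook form, not quoted from the paper): for any matrix R = R^T ≻ 0 and integrable x : [a,b] → R^n,

\[
(b-a)\int_a^b x^\top(s)\,R\,x(s)\,ds \;\ge\; \left(\int_a^b x(s)\,ds\right)^{\!\top} R \left(\int_a^b x(s)\,ds\right).
\]

Roughly, the infinite-delay extension described above replaces the bounded interval by an unbounded domain weighted by the (e.g., gamma-distributed) delay kernel.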
Quadratic separation for feedback connection of an uncertain matrix and an implicit linear transformation Topological separation is investigated in the case of an uncertain time-invariant matrix interconnected with an implicit linear transformation. A quadratic separator independent of the uncertainty is shown to prove losslessly the closed-loop well-posedness. Several applications for LTI descriptor system analysis are then given. First, some known results for stability and pole location of descriptor systems are demonstrated in a new way. Second, contributions to robust stability analysis and time-delay systems stability analysis are exposed. These prove to be new even when compared to results for usual LTI systems (not in descriptor form). All results are formulated as linear matrix inequalities (LMIs).
Complete Quadratic Lyapunov functionals using Bessel-Legendre inequality The article is concerned with the stability analysis of time-delay systems using complete Lyapunov functionals. This class of functionals has been employed in the literature because of their nice properties. Indeed, such a functional can be built if a system with a constant time delay is asymptotically stable. Hence, several articles aim at approximating their parameters thanks to a discretization method or polynomial modeling. The interest of such an approximation is the design of tractable sufficient stability conditions expressed in the Linear Matrix Inequality or the Sum of Squares setups. In the present article, we provide an alternative method based on polynomial approximation which takes advantage of the Legendre polynomials and their properties. The resulting stability conditions are scalable with respect to the degree of the Legendre polynomials and are expressed in terms of a tractable LMI.
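The Bessel-Legendre inequality underlying these conditions is usually stated as follows (the standard form from the literature; the article's notation may differ): for R ≻ 0 and any degree N,

\[
\int_a^b x^\top(s)\,R\,x(s)\,ds \;\ge\; \frac{1}{b-a}\sum_{k=0}^{N} (2k+1)\,\Omega_k^\top R\,\Omega_k,
\qquad
\Omega_k = \int_a^b L_k\!\left(\tfrac{2s-a-b}{b-a}\right) x(s)\,ds,
\]

where L_k is the k-th Legendre polynomial. Taking N = 0 recovers Jensen's inequality, and increasing N tightens the bound, which is what makes the resulting LMI conditions scalable in the polynomial degree.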
Improved delay-range-dependent stability criteria for linear systems with time-varying delays This paper is concerned with the stability analysis of linear systems with time-varying delays in a given range. A new type of augmented Lyapunov functional is proposed which contains some triple-integral terms. In the proposed Lyapunov functional, the information on the lower bound of the delay is fully exploited. Some new stability criteria are derived in terms of linear matrix inequalities without introducing any free-weighting matrices. Numerical examples are given to illustrate the effectiveness of the proposed method.
Fixed-Order Piecewise-Affine Output Feedback Controller for Fuzzy-Affine-Model-Based Nonlinear Systems With Time-Varying Delay. This paper studies the problem of delay-dependent fixed-order memory piecewise-affine H∞ output feedback control for a class of nonlinear systems with time-varying delay via a descriptor system approach. The nonlinear plant is expressed by a continuous-time Takagi-Sugeno (T-S) fuzzy-affine model. Specifically, by utilizing a descriptor model transformation, the original closed-loop system is first...
Novel Lyapunov-Krasovskii functional with delay-dependent matrix for stability of time-varying delay systems. This paper investigates stability criteria for time-varying delay systems with known bounds on the delay and its derivative. To obtain a tighter bound on the integral term, a quadratic generalized free-weighting matrix inequality (QGFMI) is proposed. Furthermore, a novel augmented Lyapunov-Krasovskii functional (LKF) is constructed with a delay-dependent matrix, which incorporates the information on the bound of the delay derivative. A relaxed stability condition using the QGFMI and LKF provides a larger delay bound with low computational burden. The superiority of the proposed stability condition is verified by comparison with recent results.
On stability criteria for neural networks with time-varying delay using Wirtinger-based multiple integral inequality This paper investigates the problem of delay-dependent stability analysis of neural networks with time-varying delay. Based on the Wirtinger-based integral inequality, which provides a much tighter lower bound than Jensen's inequality, a new Wirtinger-based multiple integral inequality is presented and applied to time-varying delayed neural networks by using a reciprocally convex combination approach for high-order cases. Three numerical examples are given to demonstrate the reduced conservatism of the proposed methods.
Further results on delay-dependent stability criteria of discrete systems with an interval time-varying delay. This paper deals with stability of discrete-time systems with an interval-like time-varying delay. By constructing a novel augmented Lyapunov functional and using an improved finite-sum inequality to deal with some sum-terms appearing in the forward difference of the Lyapunov functional, a less conservative stability criterion is obtained for the system under study if compared with some existing methods. Moreover, as a special case, the stability of discrete-time systems with a constant time delay is also investigated. Three numerical examples show that the derived stability criteria are less conservative and require relatively small number of decision variables.
Unscented filtering and nonlinear estimation The extended Kalman filter (EKF) is probably the most widely used estimation algorithm for nonlinear systems. However, more than 35 years of experience in the estimation community has shown that it is difficult to implement, difficult to tune, and only reliable for systems that are almost linear on the time scale of the updates. Many of these difficulties arise from its use of linearization. To overcome this limitation, the unscented transformation (UT) was developed as a method to propagate mean and covariance information through nonlinear transformations. It is more accurate, easier to implement, and uses the same order of calculations as linearization. This paper reviews the motivation, development, use, and implications of the UT.
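To make the construction concrete, here is a minimal Python sketch of the unscented transformation (the basic sigma-point scheme with a single scaling parameter kappa; the function name and the example are illustrative, not from the paper):

import numpy as np

def unscented_transform(f, mean, cov, kappa=1.0):
    """Propagate (mean, cov) through a nonlinearity f via sigma points.

    A minimal sketch of the UT described above, using the basic
    Julier-Uhlmann weighting; f, mean, cov and kappa are illustrative
    inputs, not an API from the paper.
    """
    n = mean.shape[0]
    # Matrix square root of (n + kappa) * cov via Cholesky factorisation.
    S = np.linalg.cholesky((n + kappa) * cov)
    # 2n + 1 sigma points: the mean plus symmetric spreads along each column of S.
    sigma = np.vstack([mean, mean + S.T, mean - S.T])
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    # Propagate each sigma point through the nonlinearity and recombine.
    y = np.array([f(p) for p in sigma])
    y_mean = w @ y
    d = y - y_mean
    y_cov = (w[:, None] * d).T @ d
    return y_mean, y_cov

# Example: polar-to-Cartesian conversion, a common UT illustration.
m = np.array([1.0, np.pi / 4])               # range, bearing
P = np.diag([0.01, 0.01])
f = lambda x: np.array([x[0] * np.cos(x[1]), x[0] * np.sin(x[1])])
ym, yP = unscented_transform(f, m, P)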
Degrees of acyclicity for hypergraphs and relational database schemes Database schemes (which, intuitively, are collections of table skeletons) can be viewed as hypergraphs. (A hypergraph is a generalization of an ordinary undirected graph, such that an edge need not contain exactly two nodes, but can instead contain an arbitrary nonzero number of nodes.) A class of "acyclic" database schemes was recently introduced. A number of basic desirable properties of database schemes have been shown to be equivalent to acyclicity. This shows the naturalness of the concept. However, unlike the situation for ordinary, undirected graphs, there are several natural, nonequivalent notions of acyclicity for hypergraphs (and hence for database schemes). Various desirable properties of database schemes are considered and it is shown that they fall into several equivalence classes, each completely characterized by the degree of acyclicity of the scheme. The results are also of interest from a purely graph-theoretic viewpoint. The original notion of acyclicity has the counterintuitive property that a subhypergraph of an acyclic hypergraph can be cyclic. This strange behavior does not occur for the new degrees of acyclicity that are considered.
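The weakest of these notions (alpha-acyclicity) can be tested with the classical GYO ear-removal procedure; the following Python sketch illustrates that standard algorithm and is not code from the paper:

def is_alpha_acyclic(edges):
    """Test alpha-acyclicity of a hypergraph via GYO reduction.

    edges: iterable of vertex sets. The hypergraph is alpha-acyclic iff
    repeatedly (a) deleting vertices that occur in only one edge and
    (b) deleting edges contained in another edge empties it.
    """
    es = [set(e) for e in edges if e]
    changed = True
    while changed and es:
        changed = False
        # (a) Remove vertices occurring in exactly one edge.
        for e in es:
            for v in list(e):
                if sum(v in f for f in es) == 1:
                    e.discard(v)
                    changed = True
        es = [e for e in es if e]          # drop edges that became empty
        # (b) Remove edges contained in some other edge.
        kept = []
        for i, e in enumerate(es):
            if any(i != j and e <= f for j, f in enumerate(es)):
                changed = True
            else:
                kept.append(e)
        es = kept
    return not es

# The 3-cycle {AB, BC, CA} is cyclic; adding the edge ABC makes it acyclic.
assert not is_alpha_acyclic([{"A", "B"}, {"B", "C"}, {"C", "A"}])
assert is_alpha_acyclic([{"A", "B"}, {"B", "C"}, {"C", "A"}, {"A", "B", "C"}])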
Designing And Building A Negotiating Automated Agent Negotiations are very important in a multiagent environment, particularly, in an environment where there are conflicts between the agents, and cooperation would be beneficial. We have developed a general structure for a Negotiating Automated Agent that consists of five modules: a Prime Minister, a Ministry of Defense, a Foreign Office, a Headquarters and Intelligence. These modules are implemented using a dynamic set of local agents belonging to the different modules. We used this structure to develop a Diplomacy player, Diplomat. Playing Diplomacy involves a certain amount of technical skills as in other board games, but the capacity to negotiate, explain, convince, promise, keep promises or break them, is an essential ingredient in good play. Diplomat was evaluated and consistently played better than human players.
Anomaly-Based JPEG2000 Compression of Hyperspectral Imagery Lossy compression of hyperspectral imagery is considered, with special emphasis on the preservation of anomalous pixels. In the proposed scheme, anomalous pixels are extracted before compression and replaced with interpolation from surrounding nonanomalous pixels. The image is then coded using principal component analysis for spectral decorrelation followed by JPEG2000. The anomalous pixels do not participate in this lossy compression and are rather transmitted separately in a lossless fashion. Upon decoding, the anomalous pixels are inserted back into the image. Experimental results demonstrate that the proposed scheme improves not only anomaly detection performed subsequent to decoding but also the rate-distortion performance of the lossy-compression process.
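The spectral-decorrelation step of this pipeline can be sketched in Python as follows (PCA across bands only; the JPEG2000 coding stage, the anomaly detector, and the lossless side channel are assumed to be provided by external components):

import numpy as np

def spectral_pca(cube):
    """Decorrelate a hyperspectral cube (rows, cols, bands) across bands.

    A sketch of the PCA spectral-decorrelation step described above;
    the subsequent JPEG2000 coding of the principal-component planes
    is out of scope here.
    """
    r, c, b = cube.shape
    X = cube.reshape(-1, b).astype(np.float64)   # one row per pixel
    mean = X.mean(axis=0)
    Xc = X - mean
    # Eigendecomposition of the band covariance; columns of V are components.
    cov = Xc.T @ Xc / (X.shape[0] - 1)
    vals, V = np.linalg.eigh(cov)
    order = np.argsort(vals)[::-1]               # strongest components first
    V = V[:, order]
    pcs = (Xc @ V).reshape(r, c, b)              # planes fed to JPEG2000
    return pcs, V, mean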
Novel stability conditions for discrete-time T-S fuzzy systems: A Kronecker-product approach. This paper is concerned with the issue of developing a novel strategy to reduce the conservatism of stability conditions for discrete-time Takagi–Sugeno (T–S) fuzzy systems. Unlike the previous ones, which are almost all quadratic with respect to the state vector, a new class of Lyapunov functions is proposed which is quadratic with respect to the Kronecker products of the state vector, thus including almost all the existing ones found in the literature as special cases. By combining the characterizations of homogeneous matrix polynomials and the properties of membership functions, relaxed stability conditions are derived in the form of linear matrix inequalities which can be efficiently solved by convex optimization techniques. Finally, a numerical example is provided to illustrate the effectiveness of the proposed approach.
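One natural instance of such a Lyapunov function (an illustration of the idea, not necessarily the paper's exact parameterization) is

\[
V(x) \;=\; \xi^\top P\,\xi, \qquad \xi = \begin{bmatrix} x \\ x \otimes x \end{bmatrix}, \qquad P \succ 0,
\]

which is quadratic in the Kronecker products of the state and contains the usual quadratic form V(x) = x^T P x as the special case in which the blocks of P involving x ⊗ x vanish.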
1.100205
0.014426
0.014354
0.008868
0.003302
0.00037
0.000134
0.000084
0.000037
0
0
0
0
0
Developing business object models with patterns and ontologies We propose a new approach for developing Business Object Models (BOMs). The approach uses ontologies to unify the representation and integration of knowledge from analysis patterns with different structures.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
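For orientation, the shape of such a definition can be sketched as follows (illustrative notation; composition-order conventions differ between presentations, and this is not quoted from the paper). With α a predicate transformer taking abstract predicates to concrete ones, an abstract operation A is data-refined by a concrete operation C when

\[
\alpha \circ A \;\sqsubseteq\; C \circ \alpha,
\]

where ⊑ is the usual refinement order on predicate transformers, i.e. pointwise implication of the predicates they produce.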
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced-level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve completely general 0/1 MIP problems and thus contains no problem-domain-specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special-purpose methods that have been created to exploit the special structure of these problems.
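As an illustration of the basic mechanism (a single-flip neighborhood, recency-based tabu tenure, and an aspiration criterion that overrides tabu status for new best solutions), here is a minimal Python sketch; it omits the advanced-level strategies, learning, and Target Analysis described in the paper:

import random

def tabu_knapsack(values, weights, caps, iters=2000, tenure=7, seed=0):
    """Tabu search for the 0/1 multiconstraint knapsack problem.

    weights[j][i] is item i's weight in constraint j; caps[j] is the
    j-th capacity. A sketch in the spirit of the approach above.
    """
    rng = random.Random(seed)
    n = len(values)
    x = [0] * n

    def feasible(sol):
        return all(sum(w[i] for i in range(n) if sol[i]) <= c
                   for w, c in zip(weights, caps))

    def value(sol):
        return sum(values[i] for i in range(n) if sol[i])

    best, best_val = x[:], 0
    tabu = {}                      # item -> iteration until which it is tabu
    for t in range(iters):
        move, move_val = None, None
        for i in rng.sample(range(n), n):   # scan flips in random order
            y = x[:]
            y[i] ^= 1
            if not feasible(y):
                continue
            v = value(y)
            # Aspiration: a tabu move is allowed if it beats the best value.
            if tabu.get(i, -1) >= t and v <= best_val:
                continue
            if move is None or v > move_val:
                move, move_val = i, v
        if move is None:
            break                           # every move tabu or infeasible
        x[move] ^= 1                        # take the best admissible move
        tabu[move] = t + tenure
        if move_val > best_val:
            best, best_val = x[:], move_val
    return best, best_val

# Tiny example: one constraint, capacity 4.
print(tabu_knapsack([10, 7, 4], [[3, 2, 1]], [4]))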
Power Aware System Refinement We propose a formal, power-aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the timewise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, the storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Qualitative Action Systems An extension to action systems is presented facilitating the modeling of continuous behavior in the discrete domain. The original action system formalism has been developed by Back et al. in order to describe parallel and distributed computations of discrete systems, i.e. systems with discrete state space and discrete control. In order to cope with hybrid systems, i.e. systems with continuous evolution and discrete control, two extensions have been proposed: hybrid action systems and continuous action systems. Both use differential equations (relations) to describe continuous evolution. Our version of action systems takes an alternative approach by adding a level of abstraction: continuous behavior is modeled by Qualitative Differential Equations that are the preferred choice when it comes to specifying abstract and possibly non-deterministic requirements of continuous behavior. Because their solutions are transition systems, all evolutions in our qualitative action systems are discrete. Based on hybrid action systems, we develop a new theory of qualitative action systems and discuss how we have applied such models in the context of automated test-case generation for hybrid systems.
Alternating simulation and IOCO We propose a symbolic framework called guarded labeled assignment systems or GLASs and show how GLASs can be used as a foundation for symbolic analysis of various aspects of formal specification languages. We define a notion of i/o-refinement over GLASs as an alternating simulation relation and provide formal proofs that relate i/o-refinement to ioco. We show that non-i/o-refinement reduces to a reachability problem and provide a translation from bounded non-i/o-refinement or bounded non-ioco to checking first-order assertions. We define i/o-refinement through alternating simulation and show that it is a generalization of ioco for all GLASs, generalizing an earlier result (29) for the deterministic case. The notion of i/o-refinement is essentially a compositional version of ioco. We provide a rigorous account for formally dealing with quiescence in GLASs in a way that supports symbolic analysis with or without the presence of quiescence. We also define the notion of a symbolic composition of GLASs that generalizes the composition of model programs (31) and respects the standard parallel synchronous composition of LTSs (21, 23) with the interleaving semantics of unshared labels. Composition of GLASs is used to show that the i/o-refinement relation between two GLASs can be formulated as a condition on the composite GLAS. This leads to a mapping of the non-i/o-refinement checking problem into a reachability checking problem for a pair of GLASs. For a class of GLASs that we call robust we can furthermore use established methods developed for verifying safety properties of reactive systems. We show that the non-i/o-refinement checking problem can be reduced to first-order assertion checking by using proof rules similar to those that have been formulated for checking invariants of reactive systems. It can also be approximated as a bounded model program checking problem or BMPC (30). The practical implications regarding symbolic analysis are not studied in this paper, but lead to a way of applying state-of-the-art satisfiability modulo theories (SMT) technology, as outlined in (30, 29). However, the concrete examples used in the paper are tailored to such analysis and illustrate the use of background theories that are supported by state-of-the-art SMT solvers such as Z3 (14).
Hybrid action systems In this paper we investigate the use of action systems with differential actions in the specification of hybrid systems. As the main contribution we generalize the definition of a differential action, allowing the use of arbitrary relations over model variables and their time derivatives in modelling continuous-time dynamics. The generalized differential action has an intuitively appealing predicate transformer semantics, which we show to be both conjunctive and monotonic. In addition, we show that differential actions blend smoothly with conventional actions in action systems even under parallel composition. Moreover, as the strength of the action system formalism is the support for stepwise development by refinement, we investigate refinement involving a differential action. We show that, due to the predicate transformer semantics, standard action refinement techniques apply also to the differential action, thus allowing stepwise development of hybrid systems.
The Composition of Event-B Models The transition from classical B [2] to the Event-B language and method [3] has seen the removal of some forms of model structuring and composition, with the intention of reinventing them in future. This work contributes to that reinvention. Inspired by a proposed method for state-based decomposition and refinement [5] of an Event-B model, we propose a familiar parallel event composition (over disjoint state variable lists), and the less familiar event fusion (over intersecting state variable lists). A brief motivation is provided for these and other forms of composition of models, in terms of feature-based modelling. We show that model consistency is preserved under such compositions. More significantly we show that model composition preserves refinement.
Mapping UML to labeled transition systems for test-case generation: a translation via object-oriented action systems The Unified Modeling Language (UML) is a well known and widely used standard for building software models. While it is familiar to many software engineers, it lacks standardized formal semantics. In this paper, we extend on the formalism of object-oriented action systems (OOAS) and describe a mapping of a selected UML-subset to OOAS by choosing one of the several possible semantics of UML. This mapping, together with the introduction of a trace semantics for OOAS, paves the way for applying tools for and theory of labeled transition systems to UML-models. As a running example, we use a car alarm system in the context of model-based test-case generation and show how the UML mapping is done.
Towards Symbolic Model-Based Mutation Testing: Combining Reachability And Refinement Checking Model-based mutation testing uses altered test models to derive test cases that are able to reveal whether a modelled fault has been implemented. This requires conformance checking between the original and the mutated model. This paper presents an approach for symbolic conformance checking of action systems, which are well-suited to specify reactive systems. We also consider non-determinism in our models. Hence, we do not check for equivalence, but for refinement. We encode the transition relation as well as the conformance relation as a constraint satisfaction problem and use a constraint solver in our reachability and refinement checking algorithms. Explicit conformance checking techniques often face state space explosion. First experimental evaluations show that our approach has potential to outperform explicit conformance checkers.
Linear hybrid action systems Action Systems is a predicate transformer based formalism. It supports the development of provably correct reactive and distributed systems by refinement. Recently, Action Systems were extended with a differential action. It is used for modelling continuous behaviour, thus allowing the use of refinement in the development of provably correct hybrid systems, i.e., a discrete controller interacting with some continuously evolving environment. However, refinement as a method is concerned with correctness issues only. It offers very little guidance in what details one should consider during the refinement steps to make the system more robust. That information is revealed by robustness analysis. Other formalisms not supporting refinement do have tool support for automating the robustness analysis, e.g., HyTech for linear hybrid automata. Consequently, we study in this paper the non-trivial translation problem between Action Systems and linear hybrid automata. As the main contribution, we give and prove correct an algorithm that translates a linear hybrid action system to a linear hybrid automaton. With this algorithm we combine the strengths of the two formalisms: we may use HyTech for the robustness analysis to guide the development by refinement.
Generalizing Action Systems to Hybrid Systems Action systems have been used successfully to describe discrete systems, i.e., systems with discrete control acting upon a discrete state space. In this paper we extend the action system approach to hybrid systems by defining continuous action systems. These are systems with discrete control over a continuously evolving state, whose semantics is defined in terms of traditional action systems. We show that continuous action systems are very general and can be used to describe a diverse range of hybrid systems. Moreover, the properties of continuous action systems are proved using standard action systems proof techniques.
The multiway rendezvous The multiway rendezvous is a natural generalization of the rendezvous in which more than two processes may participate. The utility of the multiway rendezvous is illustrated by solutions to a variety of problems. To make their simplicity apparent, these solutions are written using a construct tailor-made to support the multiway rendezvous. The degree of support for multiway rendezvous applications by several well-known languages that support the two-way rendezvous is examined. Since such support for the multiway rendezvous is found to be inadequate, well-integrated extensions to these languages are considered that would help provide such support.
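The flavor of the construct can be conveyed with a barrier, which generalizes the two-way rendezvous to N participants; this Python sketch uses the standard threading library as a stand-in, not any construct from the paper:

import threading

N = 3
barrier = threading.Barrier(N)       # an N-way rendezvous point

def worker(name):
    print(f"{name}: before the rendezvous")
    barrier.wait()                   # blocks until all N participants arrive
    print(f"{name}: all {N} parties have met")

threads = [threading.Thread(target=worker, args=(f"p{i}",)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()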
Structured Analysis (SA): A Language for Communicating Ideas Structured analysis (SA) combines blueprint-like graphic language with the nouns and verbs of any other language to provide a hierarchic, top-down, gradual exposition of detail in the form of an SA model. The things and happenings of a subject are expressed in a data decomposition and an activity decomposition, both of which employ the same graphic building block, the SA box, to represent a part of a whole. SA arrows, representing input, output, control, and mechanism, express the relation of each part to the whole. The paper describes the rationalization behind some 40 features of the SA language, and shows how they enable rigorous communication which results from disciplined, recursive application of the SA maxim: "Everything worth saying about anything worth saying something about must be expressed in six or fewer pieces."
PARIS: a system for reusing partially interpreted schemas This paper describes PARIS, an implemented system that facilitates the reuse of partially interpreted schemas. A schema is a program and specification with abstract, or uninterpreted, entities. Different interpretations of those entities will produce different programs. The PARIS System maintains a library of such schemas and provides an interactive mechanism to interpret a schema into a useful program by means of partially automated matching and verification procedures.
A Formal Approach To Large Software Construction In this short synthesis, we have shown that the theory of software construction exists and is beginning to be applied.
Application and benefits of formal methods in software development Formal methods for software development receive much attention in research centres, but are rarely used in industry for the development of (large) software systems. One of the reasons is that little is known about the integration of formal methods in the software process, and the exact role of formal methods in the software life-cycle is still unclear. In this paper, a detailed examination is made of the application of, and the benefits resulting from, a generally applicable formal method (VDM) in a standard model for software development (DoD-STD-2167A). Currently, there is no general agreement on how formal methods should be used, but in order to analyse the use of formal methods in the software process, a clear view of such use is essential. Therefore, we show what is meant by 'using a formal method'. The different activities of DoD-STD-2167A are analysed with regard to their suitability for applying VDM and the benefits that may result from applying VDM for that activity. Based on this analysis, an overall view on the usage of formal methods in the software process is formulated.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, the storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.056215
0.0624
0.029159
0.0156
0.011986
0.002389
0.000233
0.000021
0.000001
0
0
0
0
0
Stable fuzzy control and observer via LMIs in a fermentation process. •A fuzzy model is presented for the fermentation process.•Based in the fuzzy model a state feedback gain is computed for each fuzzy rule based on Lyapunov's theory.•As it is not guaranteed to have the entire state, a fuzzy observer is developed using LMIs.•The simulations show the control performance and the result of using the estimated state to compute the feedback.•The schemes show the way to apply the control action and the resultant gains are also presented.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced-level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve completely general 0/1 MIP problems and thus contains no problem-domain-specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special-purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power-aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the timewise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
1
0
0
0
0
0
0
0
0
0
0
0
0
0