Dataset schema (dataset-viewer export; for string columns Min/Max are string lengths, for float columns they are values):

Column       Type      Min    Max
Query Text   string    9      8.71k
Ranking 1    string    14     5.31k
Ranking 2    string    11     5.31k
Ranking 3    string    11     8.42k
Ranking 4    string    17     8.71k
Ranking 5    string    14     4.95k
Ranking 6    string    14     8.42k
Ranking 7    string    17     8.42k
Ranking 8    string    10     5.31k
Ranking 9    string    9      8.42k
Ranking 10   string    9      8.42k
Ranking 11   string    10     4.11k
Ranking 12   string    14     8.33k
Ranking 13   string    17     3.82k
score_0      float64   1      1.25
score_1      float64   0      0.25
score_2      float64   0      0.25
score_3      float64   0      0.24
score_4      float64   0      0.24
score_5      float64   0      0.24
score_6      float64   0      0.21
score_7      float64   0      0.1
score_8      float64   0      0.02
score_9      float64   0      0
score_10     float64   0      0
score_11     float64   0      0
score_12     float64   0      0
score_13     float64   0      0
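The schema above describes a retrieval-style dataset: one query column, 13 ranked candidate columns, and per-candidate relevance scores. Assuming score_i grades the candidate at rank i (the export does not document this mapping, so it is an assumption of this sketch), ranking quality can be summarized with a standard metric such as NDCG. A minimal sketch using the first row's score vector:

```python
import math

# Relevance scores from the first data row below; we assume each score
# grades the candidate at the same rank position (undocumented mapping).
scores = [1.0525, 0.05, 0.04, 0.025, 0.016667, 0.012346,
          0.003708, 0.000556, 0, 0, 0, 0, 0, 0]

def dcg(rels):
    # discounted cumulative gain with the usual log2 position discount
    return sum(r / math.log2(i + 2) for i, r in enumerate(rels))

def ndcg(rels, k=10):
    # normalized DCG@k: 1.0 means the list is already ideally ordered
    ideal = dcg(sorted(rels, reverse=True)[:k])
    return dcg(rels[:k]) / ideal if ideal > 0 else 0.0

print(ndcg(scores))  # 1.0 -- the stored ranking is already score-sorted
```

Because the stored scores are already in descending order, NDCG@10 evaluates to exactly 1.0 for this row.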
Query: Improved delay-dependent stability criteria for generalized neural networks with time-varying delays. This paper is concerned with the problem of stability analysis for generalized neural networks with time-varying delays. A novel integral inequality which includes several existing inequalities as special cases is presented. By employing a suitable Lyapunov–Krasovskii functional (LKF) and using the proposed integral inequality to estimate the derivative of the LKF, improved delay-dependent stability criteria expressed in terms of linear matrix inequalities are derived. Finally, four numerical examples are provided to demonstrate the effectiveness and the improvement of the proposed method.
Generalized integral inequality: Application to time-delay systems. This paper investigates a stability problem for linear systems with time-delay. By constructing simple Lyapunov–Krasovskii functional (LKF), and utilizing a new generalized integral inequality (GII) proposed in this paper, a sufficient stability condition for the systems will be derived in terms of linear matrix inequalities (LMIs). Two illustrative examples are given to show the superiorities of the proposed criterion.
New stability results for delayed neural networks. This paper is concerned with the stability for delayed neural networks. By more fully making use of the information of the activation function, a new Lyapunov–Krasovskii functional (LKF) is constructed. Then a new integral inequality is developed, and more information of the activation function is taken into account when the derivative of the LKF is estimated. By Lyapunov stability theory, a new stability result is obtained. Finally, three examples are given to illustrate the stability result is less conservative than some recently reported ones.
Improved integral inequalities for stability analysis of delayed neural networks. This paper is concerned with the exponential stability of delayed neural networks. Combining the Legendre polynomials with the free-weighting matrices technique, an improved free-matrix-based single integral inequality is given, which includes the general single integral inequality and the free-matrix-based single integral inequality as special cases. Furthermore, a free-matrix-based double integral inequality which improves the existing results is derived. As applications of these novel free-matrix-based integral inequalities, several exponential stability criteria with less conservatism for the delayed neural networks are obtained. The effectiveness of our main results is illustrated by three numerical examples from the literature.
Stability Analysis for Neural Networks With Time-Varying Delay via Improved Techniques. This paper is concerned with the stability problem for neural networks with a time-varying delay. First, an improved generalized free-weighting-matrix integral inequality is proposed, which encompasses the conventional one as a special case. Second, an improved Lyapunov-Krasovskii functional is constructed that contains two complement triple-integral functionals. Third, based on the improved techniques, a new stability condition is derived for neural networks with a time-varying delay. Finally, two widely used numerical examples are given to demonstrate that the proposed stability condition is very competitive in both conservatism and complexity.
Stability criterion for delayed neural networks via Wirtinger-based multiple integral inequality. This brief provides an alternative way to reduce the conservativeness of the stability criterion for neural networks (NNs) with time-varying delays. The core is that a series of multiple integral terms are considered as a part of the Lyapunov-Krasovskii functional (LKF). In order to estimate the multiple integral terms in the derivative of the LKF, a multiple integral inequality, named Wirtinger-based multiple integral inequality (WMII), is proposed. This inequality includes some recent related results as its special cases. Based on the multiple integral forms of LKF and the WMII, a novel delay dependent stability criterion for NNs with time-varying delays is derived. The effectiveness of the established stability criterion is verified by an open example.
An extended reciprocally convex matrix inequality for stability analysis of systems with time-varying delay. The reciprocally convex combination lemma (RCCL) is an important technique to develop stability criteria for the systems with a time-varying delay. This note develops an extended reciprocally convex matrix inequality, which reduces the estimation gap of the RCCL-based matrix inequality and reduces the number of decision variables of the recently proposed delay-dependent RCCL. A stability criterion of a linear time-delay system is established through the proposed matrix inequality. Finally, a numerical example is given to demonstrate the advantage of the proposed method.
Delay-Dependent Global Exponential Stability for Delayed Recurrent Neural Networks. This paper deals with the global exponential stability for delayed recurrent neural networks (DRNNs). By constructing an augmented Lyapunov-Krasovskii functional and adopting the reciprocally convex combination approach and Wirtinger-based integral inequality, delay-dependent global exponential stability criteria are derived in terms of linear matrix inequalities. Meanwhile, a general and effectiv...
Algebraic tools for the performance evaluation of discrete event systems In this paper, it is shown that a certain class of Petri nets called event graphs can be represented as linear "time-invariant" finite-dimensional systems using some particular algebras. This sets the ground on which a theory of these systems can be developed in a manner which is very analogous to that of conventional linear system theory. Part 2 of the paper is devoted to showing some preliminary basic developments in that direction. Indeed, there are several ways in which one can consider event graphs as linear systems: these ways correspond to approaches in the time domain, in the event domain and in a two-dimensional domain. In each of these approaches, a different algebra has to be used for models to remain linear. However, the common feature of these algebras is that they all fall into the axiomatic definition of "dioids". Therefore, Part 1 of the paper is devoted to a unified presentation of basic algebraic results on dioids.
Engineering and theoretical underpinnings of retrenchment Refinement is reviewed, highlighting in particular the distinction between its use as a specification constructor at a high level, and its use as an implementation mechanism at a low level. Some of its shortcomings as a specification constructor at high levels of abstraction are pointed out, and these are used to motivate the adoption of retrenchment for certain high level development steps. Basic properties of retrenchment are described, including a justification of the operation proof obligation, simple examples, its use in requirements engineering and model evolution, and simulation properties. The interaction of retrenchment with refinement notions of correctness is overviewed, as is a range of other technical issues. Two case study scenarios are presented. One is a simple digital redesign control theory problem, and the other is an overview of the application of retrenchment to the Mondex Purse development.
Consistency of the static and dynamic components of object-oriented specifications Object-oriented (OO) modeling and design methodologies have been receiving significant attention since they allow a quick and easy-to-grasp overview of a complex model. However, in the literature there are no formal frameworks that allow designers to verify the consistency (absence of contradictions) of both the static and dynamic components of the specified models, which are often assumed to be consistent. In this paper, a unifying formal framework is proposed that allows the consistency checking of both the static and dynamic components of a simplified OO model.
PPM: One Step to Practicality A new mechanism for the PPM data compression scheme is presented. Simple procedures for adaptive escape estimation are proposed. A practical implementation of these methods is described, and it is shown that this implementation gives the best results to date at a complexity comparable with widespread LZ77- and BWT-based algorithms.
Parallel image normalization on a mesh connected array processor Image normalization is a basic operation in various image processing tasks. A parallel algorithm for fast binary image normalization is proposed for a mesh connected array processor. The principal operation in this algorithm is pixel mapping. The basic idea of parallel pixel mapping is to utilize a store and forward mechanism which routes pixels from their source locations to destinations in parallel along the paths of minimum length. The routing is based on a simple yet powerful concept of flow control patterns. This can form the basis for designing other parallel algorithms for low level image processing. The normalization process is decomposed into three procedures: translation, rotation and scaling. In each procedure, a mapping algorithm is employed to route the object pixels from source locations to destinations. Simulation results for the parallel image normalization on generated images are provided.
An Algorithm For Key-Dependent S-Box Generation In Block Cipher System A nonlinear substitution operation of bytes is the main strength factor of the Advanced Encryption Standard (AES) and other modern cipher systems. In this paper we have presented a new simple algorithm to generate key-dependent S-boxes and inverse S-boxes for block cipher systems. The quality of this algorithm was tested by using NIST tests, and changing only one bit of the secret key to generate new key-dependent S-boxes. The fact that the S-boxes are key-dependent and unknown is the main strength of the algorithm, since the linear and differential cryptanalysis require known S-boxes. In the second section of the paper, we analyze S-boxes. In the third section we describe the key-dependent S-boxes and inverse S-boxes generation algorithm. Afterwards, we experimentally investigate the quality of the generated key-dependent S-boxes. Comparison results suggest that the key-dependent S-boxes have good performance and can be applied to AES.
Scores (score_0 to score_13): 1.0525, 0.05, 0.04, 0.025, 0.016667, 0.012346, 0.003708, 0.000556, 0, 0, 0, 0, 0, 0
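Several candidates for the first query build on the Wirtinger-based integral inequality. Its scalar form (with R = 1) can be checked numerically; this is an illustrative sketch using x(t) = sin t as an arbitrary test function:

```python
import numpy as np

def trapz(y, t):
    # trapezoidal rule (avoids np.trapz, which NumPy 2.0 removed)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(t)) / 2.0)

# Wirtinger-based integral inequality (scalar case, R = 1):
#   int_a^b xdot(s)^2 ds >= om0^2/(b-a) + 3*om1^2/(b-a)
# with om0 = x(b) - x(a), om1 = x(b) + x(a) - (2/(b-a)) * int_a^b x(s) ds.
a, b = 0.0, 1.0
t = np.linspace(a, b, 20001)
x = np.sin(t)          # arbitrary smooth test function
xdot = np.cos(t)

lhs = trapz(xdot**2, t)                      # left-hand side integral
om0 = x[-1] - x[0]
om1 = x[-1] + x[0] - 2.0 / (b - a) * trapz(x, t)
jensen = om0**2 / (b - a)                    # classical Jensen bound
wirtinger = jensen + 3.0 * om1**2 / (b - a)  # Wirtinger-based bound

print(jensen <= wirtinger <= lhs)            # True: tighter, still valid
```

The first term alone is Jensen's inequality; the extra om1 term is what makes the Wirtinger-based bound strictly tighter here while remaining a valid lower bound, which is exactly the kind of refinement the surveyed stability criteria exploit.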
Query: Reduced-order observer design for a class of generalized Lipschitz nonlinear systems with time-varying delay. This paper investigates the H∞ reduced-order observer design problem for a class of nonlinear systems with interval time-varying delay which satisfies the quadratically inner-bounded condition and encompasses the family of Lipschitz systems. A novel reduced-order observer design methodology for nonlinear systems is proposed. By utilizing a newly extended reciprocal convexity inequality, free-weighting matrix technique, and quadratically inner-bounded condition, the less conservative existence conditions of the proposed nonlinear H∞ observer are derived. The new sufficient conditions in terms of linear matrix inequalities (LMIs) guarantee asymptotic stability of the estimation error dynamics with a prescribed performance γ. Two numerical examples are given to illustrate the effectiveness of the proposed approach.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system, resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the timewise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In refining an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
Scores (score_0 to score_13): 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
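The query above builds on an extended reciprocal convexity inequality; the underlying reciprocally convex combination lemma (as in the RCCL entry listed for the first query) can be sanity-checked in its scalar case by random sampling. This is an illustrative sketch, not the matrix version used in the papers:

```python
import random

# Scalar reciprocally convex combination lemma: with R1 = R2 = 1 the
# coupling matrix [[1, s], [s, 1]] is PSD iff |s| <= 1, and then for
# every alpha in (0, 1):
#     x1^2/alpha + x2^2/(1 - alpha) >= x1^2 + 2*s*x1*x2 + x2^2

random.seed(0)
ok = True
for _ in range(10000):
    alpha = random.uniform(1e-3, 1.0 - 1e-3)
    s = random.uniform(-1.0, 1.0)       # keeps the coupling matrix PSD
    x1 = random.uniform(-5.0, 5.0)
    x2 = random.uniform(-5.0, 5.0)
    lhs = x1**2 / alpha + x2**2 / (1.0 - alpha)
    rhs = x1**2 + 2.0 * s * x1 * x2 + x2**2
    ok = ok and (lhs >= rhs - 1e-9)
print(ok)  # True: the convex-split bound dominates the coupled form
```

The point of the lemma is that the alpha-dependent left side, which appears when a delay interval is split, can be bounded below by a single alpha-free quadratic form, which is what makes the resulting stability conditions LMIs.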
Query: New results on stability analysis of delayed systems derived from extended Wirtinger's integral inequality. This work is concerned with the stability analysis of continuous systems with interval time-varying delays. Novel delay-dependent and delay-rate-dependent stability criteria in terms of linear matrix inequalities (LMIs) are established, which is made possible by: (i) an extended Wirtinger's integral inequality which includes the celebrated Wirtinger-based integral inequality as a special case and delivers more accurate lower bounds than the latter does; (ii) a type of new augmented Lyapunov–Krasovskii functional (LKF) where all possible information of the delay such as its lower, upper bounds, upper bound of its derivative and the relationship among a current state, an exactly delayed state, marginally delayed states are fully exploited; and (iii) transforming the upper bounds of the derivative of the LKF into an affine function concerning the delay. The developed stability conditions for systems with time-varying delays are less conservative as compared with most existing ones. Numerical examples authenticate the effectiveness and improvement of the proposed method over existing results.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system, resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the timewise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In refining an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
Scores (score_0 to score_13): 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Query: A New Look at Dual-Hop Relaying: Performance Limits with Hardware Impairments. Physical transceivers have hardware impairments that create distortions which degrade the performance of communication systems. The vast majority of technical contributions in the area of relaying neglect hardware impairments and, thus, assume ideal hardware. Such approximations make sense in low-rate systems, but can lead to very misleading results when analyzing future high-rate systems. This paper quantifies the impact of hardware impairments on dual-hop relaying, for both amplify-and-forward and decode-and-forward protocols. The outage probability (OP) in these practical scenarios is a function of the effective end-to-end signal-to-noise-and-distortion ratio (SNDR). This paper derives new closed-form expressions for the exact and asymptotic OPs, accounting for hardware impairments at the source, relay, and destination. A similar analysis for the ergodic capacity is also pursued, resulting in new upper bounds. We assume that both hops are subject to independent but non-identically distributed Nakagami-m fading. This paper validates that the performance loss is small at low rates, but otherwise can be very substantial. In particular, it is proved that for high signal-to-noise ratio (SNR), the end-to-end SNDR converges to a deterministic constant, coined the SNDR ceiling, which is inversely proportional to the level of impairments. This stands in contrast to the ideal hardware case in which the end-to-end SNDR grows without bound in the high-SNR regime. Finally, we provide fundamental design guidelines for selecting hardware that satisfies the requirements of a practical relaying system.
Opportunistic relay selection for cooperative networks with secrecy constraints This study deals with opportunistic relay selection in cooperative networks with secrecy constraints, where an eavesdropper node tries to overhear the source message. Previously reported relay selection techniques are optimised for non-eavesdropper environments and cannot ensure security. Two new opportunistic relay selection techniques, which incorporate the quality of the relay-eavesdropper links and take into account secrecy rate issues, are investigated. The first scheme assumes an instantaneous knowledge of the eavesdropper channels and maximises the achievable secrecy rate. The second one assumes an average knowledge of the eavesdropper channels and is a suboptimal selection solution appropriate for practical applications. Both schemes are analysed in terms of secrecy outage probability and their enhancements against conventional opportunistic selection policies are validated via numerical and theoretical results.
Cooperative wireless communications: a cross-layer approach This article outlines one way to address these problems by using the notion of cooperation between wireless nodes. In cooperative communications, multiple nodes in a wireless network work together to form a virtual antenna array. Using cooperation, it is possible to exploit the spatial diversity of the traditional MIMO techniques without each node necessarily having multiple antennas. Multihop networks use some form of cooperation by enabling intermediate nodes to forward the message from source to destination. However, cooperative communication techniques described in this article are fundamentally different in that the relaying nodes can forward the information fully or in part. Also the destination receives multiple versions of the message from the source, and one or more relays and combines these to obtain a more reliable estimate of the transmitted signal as well as higher data rates. The main advantages of cooperative communications are presented
Secure Connectivity Using Randomize-and-Forward Strategy in Cooperative Wireless Networks. In this letter, we study the problem of secure connectivity for colluding eavesdroppers using the randomize-and-forward (RF) strategy in cooperative wireless networks, where the distribution of the eavesdroppers is a homogeneous Poisson point process (PPP). Considering the case of a fixed relay, the exact expression for the secure connectivity probability is obtained. Then we obtain the lower bound and find that the lower bound gives an accurate approximation of the exact secure connectivity probability when the eavesdropper density is small. Based on the lower bound expression, we obtain the optimal area of relay location and the approximate farthest secure distance between the source and destination for a given secure connectivity probability in the small eavesdropper density regime. Furthermore, we extend the model of fixed relay to random relay, and get the lower bound expression for the secure connectivity probability.
How Much Does I/Q Imbalance Affect Secrecy Capacity? Radio frequency front ends constitute a fundamental part of both conventional and emerging wireless systems. However, in spite of their importance, they are often assumed ideal, although they are practically subject to certain detrimental impairments, such as amplifier nonlinearities, phase noise, and in-phase and quadrature (I/Q) imbalance (IQI). This letter is devoted to the quantification and e...
Cognitive radio network with secrecy and interference constraints. In this paper, we investigate the physical-layer security of a secure communication in single-input multiple-output (SIMO) cognitive radio networks (CRNs) in the presence of two eavesdroppers. In particular, both the primary user (PU) and the secondary user (SU) share the same spectrum, but they face different eavesdroppers who are equipped with multiple antennas. In order to protect the PU communication from the interference of the SU and the risks of eavesdropping, the SU must have a reasonable adaptive transmission power which is set on the basis of channel state information, interference and security constraints of the PU. Accordingly, an upper bound and lower bound for the SU transmission power are derived. Furthermore, a power allocation policy, which is calculated on the convex combination of the upper and lower bound of the SU transmission power, is proposed. On this basis, we investigate the impact of the PU transmission power and channel mean gains on the security and system performance of the SU. Closed-form expressions for the outage probability, probability of non-zero secrecy capacity, and secrecy outage probability are obtained. Interestingly, our results show that the strong channel mean gain of the PU transmitter to the PU's eavesdropper in the primary network can enhance the SU performance.
Massive MIMO Systems With Non-Ideal Hardware: Energy Efficiency, Estimation, and Capacity Limits The use of large-scale antenna arrays can bring substantial improvements in energy and/or spectral efficiency to wireless systems due to the greatly improved spatial resolution and array gain. Recent works in the field of massive multiple-input multiple-output (MIMO) show that the user channels decorrelate when the number of antennas at the base stations (BSs) increases, thus strong signal gains are achievable with little interuser interference. Since these results rely on asymptotics, it is important to investigate whether the conventional system models are reasonable in this asymptotic regime. This paper considers a new system model that incorporates general transceiver hardware impairments at both the BSs (equipped with large antenna arrays) and the single-antenna user equipments (UEs). As opposed to the conventional case of ideal hardware, we show that hardware impairments create finite ceilings on the channel estimation accuracy and on the downlink/uplink capacity of each UE. Surprisingly, the capacity is mainly limited by the hardware at the UE, while the impact of impairments in the large-scale arrays vanishes asymptotically and interuser interference (in particular, pilot contamination) becomes negligible. Furthermore, we prove that the huge degrees of freedom offered by massive MIMO can be used to reduce the transmit power and/or to tolerate larger hardware impairments, which allows for the use of inexpensive and energy-efficient antenna elements.
List processing in real time on a serial computer A real-time list processing system is one in which the time required by the elementary list operations (e.g. CONS, CAR, CDR, RPLACA, RPLACD, EQ, and ATOM in LISP) is bounded by a (small) constant. Classical implementations of list processing systems lack this property because allocating a list cell from the heap may cause a garbage collection, which process requires time proportional to the heap size to finish. A real-time list processing system is presented which continuously reclaims garbage, including directed cycles, while linearizing and compacting the accessible cells into contiguous locations to avoid fragmenting the free storage pool. The program is small and requires no time-sharing interrupts, making it suitable for microcode. Finally, the system requires the same average time, and not more than twice the space, of a classical implementation, and those space requirements can be reduced to approximately classical proportions by compact list representation. Arrays of different sizes, a program stack, and hash linking are simple extensions to our system, and reference counting is found to be inferior for many applications.
Scikit-learn: Machine Learning in Python Scikit-learn is a Python module integrating a wide range of state-of-the-art machine learning algorithms for medium-scale supervised and unsupervised problems. This package focuses on bringing machine learning to non-specialists using a general-purpose high-level language. Emphasis is put on ease of use, performance, documentation, and API consistency. It has minimal dependencies and is distributed under the simplified BSD license, encouraging its use in both academic and commercial settings. Source code, binaries, and documentation can be downloaded from http://scikit-learn.sourceforge.net.
Language constructs for managing change in process-centered environments Change is pervasive during software development, affecting objects, processes, and environments. In process-centered environments, change management can be facilitated by software-process programming, which formalizes the representation of software products and processes using software-process programming languages (SPPLs). To fully realize this goal, SPPLs should include constructs that specifically address the problems of change management. These problems include lack of representation of inter-object relationships, weak semantics for inter-object relationships, visibility of implementations, lack of formal representation of software processes, and reliance on programmers to manage change manually. APPL/A is a prototype SPPL that addresses these problems. APPL/A is an extension to Ada. The principal extensions include abstract, persistent relations with programmable implementations, relation attributes that may be composite and derived, triggers that react to relation operations, optionally-enforceable predicates on relations, and five composite statements with transaction-like capabilities. APPL/A relations and triggers are especially important for the problems raised here. Relations enable inter-object relationships to be represented explicitly and derivation dependencies to be maintained automatically. Relation bodies can be programmed to implement alternative storage and computation strategies without affecting users of relation specifications. Triggers can react to changes in relations, automatically propagating data, invoking tools, and performing other change management tasks. Predicates and the transaction-like statements support change management in the face of evolving standards of consistency. Together, these features mitigate many of the problems that complicate change management in software processes and process-centered environments.
Animation of Object-Z Specifications with a Set-Oriented Prototyping Language
3rd international workshop on software evolution through transformations: embracing change Transformation-based techniques such as refactoring, model transformation and model-driven development, architectural reconfiguration, etc. are at the heart of many software engineering activities, making it possible to cope with an ever changing environment. This workshop provides a forum for discussing these techniques, their formal foundations and applications.
One VM to rule them all Building high-performance virtual machines is a complex and expensive undertaking; many popular languages still have low-performance implementations. We describe a new approach to virtual machine (VM) construction that amortizes much of the effort in initial construction by allowing new languages to be implemented with modest additional effort. The approach relies on abstract syntax tree (AST) interpretation where a node can rewrite itself to a more specialized or more general node, together with an optimizing compiler that exploits the structure of the interpreter. The compiler uses speculative assumptions and deoptimization in order to produce efficient machine code. Our initial experience suggests that high performance is attainable while preserving a modular and layered architecture, and that new high-performance language implementations can be obtained by writing little more than a stylized interpreter.
New results on stability analysis for systems with discrete distributed delay The integral inequality technique is widely used to derive delay-dependent conditions, and various integral inequalities have been developed to reduce the conservatism of the conditions derived. In this study, a new integral inequality was devised that is tighter than existing ones. It was used to investigate the stability of linear systems with a discrete distributed delay, and a new stability condition was established. The results can be applied to systems with a delay belonging to an interval, which may be unstable when the delay is small or nonexistent. Three numerical examples demonstrate the effectiveness and the smaller conservatism of the method.
1.092444
0.089333
0.08
0.08
0.08
0.08
0.056
0
0
0
0
0
0
0
A Circus Semantics for Ravenscar Protected Objects The Ravenscar profile is a subset of the Ada 95 tasking model: it is certifiable, deterministic, supports schedulability analysis, and meets tight memory constraints and performance requirements. A central feature of Ravenscar is the use of protected objects to ensure mutually exclusive access to shared data. We give a semantics to protected objects using Circus, a combination of Z and CSP, and prove several important properties; this is the first time that these properties have been verified. Interestingly, all the proofs are conducted in Z, even the ones concerning reactive behaviour.
Refinement of Actions in Circus This paper presents refinement laws to support the development of actions in Circus, a combination of Z and CSP adequate to specify the data structures and behavioural aspects of concurrent systems. In this language, systems are characterised as a set of processes; each process is a unit that encapsulates state and reactive behaviour defined by actions. Previously, we have addressed the issue of refining processes. Here, we are concerned with the actions that compose the behaviour of such processes, and that may involve both Z and CSP constructs. We present a number of useful laws, and a case study that illustrates their application.
Refinement in Circus We describe refinement in Circus, a concurrent specification language that integrates imperative CSP, Z, and the refinement calculus. Each Circus process has a state and accompanying actions that define both the internal state transitions and the changes in control flow that occur during execution. We define the meaning of refinement of processes and their actions, and propose a sound data refinement technique for process refinement. Refinement laws for CSP and Z are directly relevant and applicable to Circus, but our focus here is on new laws for processes that integrate state and control. We give some new results about the distribution of data refinement through the combinators of CSP. We illustrate our ideas with the development of a distributed system of cooperating processes from a centralised specification.
The specification statement Dijkstra's programming language is extended by specification statements, which specify parts of a program “yet to be developed.” A weakest precondition semantics is given for these statements so that the extended language has a meaning as precise as the original.The goal is to improve the development of programs, making it closer to manipulations within a single calculus. The extension does this by providing one semantic framework for specifications and programs alike: Developments begin with a program (a single specification statement) and end with a program (in the executable language). And the notion of refinement or satisfaction, which normally relates a specification to its possible implementations, is automatically generalized to act between specifications and between programs as well.A surprising consequence of the extension is the appearance of miracles: program fragments that do not satisfy Dijkstra's Law of the Excluded Miracle. Uses for them are suggested.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
A semantics of multiple inheritance The aim of this paper is to present a clean semantics of multiple inheritance and to show that, in the context of strongly-typed, statically-scoped languages, a sound typechecking algorithm exists. Multiple inheritance is also interpreted in a broad sense: instead of being limited to objects, it is extended in a natural way to union types and to higher-order functional types. This constitutes a semantic basis for the unification of functional and object-oriented programming.
The Manchester prototype dataflow computer The Manchester project has developed a powerful dataflow processor based on dynamic tagging. This processor is large enough to tackle realistic applications and exhibits impressive speedup for programs with sufficient parallelism.
A lazy evaluator A different way to execute pure LISP programs is presented. It delays the evaluation of parameters and list structures without ever having to perform more evaluation steps than the usual method. Although the central idea can be found in earlier work this paper is of interest since it treats a rather well-known language and works out an algorithm which avoids full substitution. A partial correctness proof using Scott-Strachey semantics is sketched in a later section.
Modelling information flow for organisations: A review of approaches and future challenges. Modelling is a classic approach to understanding complex problems that can be achieved diagrammatically to visualise concepts, and mathematically to analyse attributes of concepts. An organisation as a communicating entity is made up of constructs in which people can have access to information and speak to each other. Modelling information flow for organisations is a challenging task that enables analysts and managers to better understand how to: organise and coordinate processes, eliminate redundant information flows and processes, minimise the duplication of information and manage the sharing of intra- and inter-organisational information.
A Theory of Prioritizing Composition An operator for the composition of two processes, where one process has priority over the other process, is studied. Processes are described by action systems, and data refinement is used for transforming processes. The operator is shown to be compositional, i.e. monotonic with respect to refinement. It is argued that this operator is adequate for modelling priorities as found in programming languages and operating systems. Rules for introducing priorities and for raising and lowering priorities of processes are given. Dynamic priorities are modelled with special priority variables which can be freely mixed with other variables and the prioritising operator in program development. A number of applications show the use of prioritising composition for modelling and specification in general.
Repository support for multi-perspective requirements engineering Relationships among different modeling perspectives have been systematically investigated focusing either on given notations (e.g. UML) or on domain reference models (e.g. ARIS/SAP). In contrast, many successful informal methods for business analysis and requirements engineering (e.g. JAD) emphasize team negotiation, goal orientation and flexibility of modeling notations. This paper addresses the question of how much formal and computerized support can be provided in such settings without destroying their creative tenor. Our solution is based on a novel modeling language, M-Telos, that integrates the adaptability and analysis advantages of the logic-based meta modeling language Telos with a module concept covering the structuring mechanisms of scalable software architectures. It comprises four components: (1) A modular conceptual modeling formalism organizes individual perspectives and their interrelationships. (2) Perspective schemata are linked to a conceptual meta meta model of shared domain terms, thus giving the architecture a semantic meaning and enabling adaptability and extensibility of the network of perspectives. (3) Inconsistency management across perspectives is handled in a goal-oriented manner, by formalizing analysis goals as meta rules which are automatically customized to perspective schemata. (4) Continuous incremental maintenance of inconsistency information is provided by exploiting recent view maintenance techniques from deductive databases. The approach has been implemented as an extension to the ConceptBase meta database management system (ConceptBase is available through the web site http://www-i5.Informatik.RWTH-Aachen.de/Cbdor/index.html) and has been applied in a number of real-world requirements engineering projects.
Characterizing plans as a set of constraints—the model—a framework for comparative analysis This paper presents an approach to representing and manipulating plans based on a model of plans as a set of constraints. The <I-N-OVA> model is used to characterise the plan representation used within O-Plan and to relate this work to emerging formal analyses of plans and planning. This synergy of practical and formal approaches can stretch the formal methods to cover realistic plan representations as needed for real problem solving, and can improve the analysis that is possible for production planning systems. <I-N-OVA> is intended to act as a bridge to improve dialogue between a number of communities working on formal planning theories, practical planning systems and systems engineering process management methodologies. It is intended to support new work on automatic manipulation of plans, human communication about plans, principled and reliable acquisition of plan information, and formal reasoning about plans.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time wise refinement method. The adaptation provides a well-founded mathematical basis for the systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions of both functional and temporal properties, and furthermore, power-related issues are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.2
0.05
0.028571
0.001124
0
0
0
0
0
0
0
0
0
0
Method of Lossless and Near-Lossless Color Image Compression In this paper, a detailed description of a proposed effective image encoder suitable for hardware realizations is presented. A modelling block based on the blending prediction method and an encoding block performing a contextual, adaptive arithmetic encoding are proposed. The encoding operations in both lossless and near-lossless modes are described. It is analyzed how to losslessly transform colours to obtain the best solution. An effectiveness analysis is also done by comparing the compression rate of the proposed method with another hardware realization and a number of software implementations known from the literature. Some suggestions are made regarding the implementation of the algorithm as a video sequence encoding system working in real time utilizing an efficient Network on Chip architecture.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema.In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time-wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions of both functional and temporal properties, and furthermore power-related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
On the benefits of industrial network: a new approach with market survey and fuzzy statistical analysis Making the most of the industrial network to reduce management costs and increase competitiveness has gained a lot of attention among enterprises lately. In traditional market research methods, the approach always puts its emphasis on the data of a single value without considering the complexity of human thoughts. Thus, it is the purpose of this paper to first define the fuzzy mode, fuzzy expected value and fuzzy χ2 test. Then, a new market survey is proposed to develop a more efficient survey analysis to examine the characteristics of fuzzy questionnaires and sample material. A comparison between the fuzzy χ2 test and the traditional χ2 test was carried out at the end.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time-wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions of both functional and temporal properties, and furthermore power-related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
CORE : A Method for Controlled Requirement Expression
CASE productivity perceptions of software engineering professionals Computer-aided software engineering (CASE) is moving into the problem-solving domain of the systems analyst. The authors undertook a study to investigate the various functional and behavioral aspects of CASE and determine the impact it has over manual methods of software engineering productivity.
Managing standards compliance Software engineering standards determine practices that “compliant” software processes shall follow. Standards generally define practices in terms of constraints that must hold for documents. The document types identified by standards include typical development products, such as user requirements, and also process-oriented documents, such as progress reviews and management reports. The degree of standards compliance can be established by checking these documents against the constraints. It is neither practical nor desirable to enforce compliance at all points in the development process. Thus, compliance must be managed rather than imposed. We outline a model of standards and compliance and illustrate it with some examples. We give a brief account of the notations and method we have developed to support the use of the model and describe a support environment we have constructed. The principal contributions of our work are: the identification of the issue of standards compliance; the development of a model of standards and support for compliance management; the development of a formal model of product state with associated notation; a powerful policy scheme that triggers checks; and a flexible and scalable compliance management view
Using group support systems and joint application development for requirements specification The increasing use of integrated computer-aided software engineering (CASE) environments is shifting the bottleneck of systems development from physical design and coding to upstream activities, particularly requirements specification. This paper explores the use of group support systems (GSS) and joint application development (JAD) in the context of CASE environments to facilitate the requirements specification process. We first examine the relevance of GSS, JAD, and CASE to requirements specification. We then propose an integrated framework that is augmented by a domain-analysis methodology. Results of a pilot study that evaluated two process models using tools of a GSS, GroupSystemsTM, for requirements specification are discussed.
Generating, integrating, and activating thesauri for concept-based document retrieval A blackboard-based document management system that uses a neural network spreading-activation algorithm which lets users traverse multiple thesauri is discussed. Guided by heuristics, the algorithm activates related terms in the thesauri and converges on the most pertinent concepts. The system provides two control modes: a browsing module and an activation module that determine the sequence of operations. With the browsing module, users have full control over which knowledge sources to browse and what terms to select. The system's query formation; the retrieving, ranking and selection of documents; and thesaurus activation are described.
CREWS : Towards Systematic Usage of Scenarios, Use Cases and Scenes In the wake of object-oriented software engineering, use cases have gained enormous popularity as tools for bridging the gap between electronic business management and information systems engineering. A wide variety of practices has emerged but their relationships to each other, and with respect to the traditional change management process, are poorly understood. The ESPRIT Long Term Research Project CREWS (Cooperative Requirements Engineering With Scenarios) has conducted surveys of the...
A program integration algorithm that accommodates semantics-preserving transformations Given a program Base and two variants, A and B, each created by modifying separate copies of Base, the goal of program integration is to determine whether the modifications interfere, and if they do not, to create an integrated program that includes both sets of changes as well as the portions of Base preserved in both variants. Text-based integration techniques, such as the one used by the UNIX diff3 utility, are obviously unsatisfactory because one has no guarantees about how the execution behavior of the integrated program relates to the behaviors of Base, A, and B. The first program-integration algorithm to provide such guarantees was developed by Horwitz, Prins, and Reps. However, a limitation of that algorithm is that it incorporates no notion of semantics-preserving transformations. This limitation causes the algorithm to be overly conservative in its definition of interference. For example, if one variant changes the way a computation is performed (without changing the values computed) while the other variant adds code that uses the result of the computation, the algorithm would classify those changes as interfering. This paper describes a new integration algorithm that is able to accommodate semantics-preserving transformations.
Generating test cases for real-time systems from logic specifications We address the problem of automated derivation of functional test cases for real-time systems, by introducing techniques for generating test cases from formal specifications written in TRIO, a language that extends classical temporal logic to deal explicitly with time measures. We describe an interactive tool that has been built to implement these techniques, based on interpretation algorithms of the TRIO language. Several heuristic criteria are suggested to reduce drastically the size of the test cases that are generated. Experience in the use of the tool on real-life cases is reported.
Distributed artificial intelligence for runtime feature-interaction resolution The feature-interaction problem has many different instances. It is argued that some instances lend themselves to a distributed artificial intelligence (DAI) approach. The use of DAI techniques in current telecommunications systems appears quite natural in light of two trends in the way these systems are designed: the distribution of functionality and the incorporation of intelligence. The author illustrates the relevance of DAI techniques to the feature-interaction problem by discussing existing work (lodes, team-CPS, multistage negotiation, and negotiating agents) that address one or more instances of the problem. He further identifies the kind of cooperation and coordination that the feature-interaction problem requires and the interesting research problems it poses to distributed artificial intelligence.
Construction of network protocols by stepwise refinement We present a heuristic to derive specifications of distributed systems by stepwise refinement. The heuristic is based upon a conditional refinement relation between specifications. It is applied to construct four sliding window protocols that provide reliable data transfer over unreliable communication channels. The protocols use modulo-N sequence numbers. They are less restrictive and easier to implement than sliding window protocols previously studied in the protocol verification literature.
Design of an Adaptive, Parallel Finite-Element System
A survey on palette reordering methods for improving the compression of color-indexed images Palette reordering is a well-known and very effective approach for improving the compression of color-indexed images. In this paper, we provide a survey of palette reordering methods, and we give experimental results comparing the ability of seven of them in improving the compression efficiency of JPEG-LS and lossless JPEG 2000. We concluded that the pairwise merging heuristic proposed by Memon et al. is the most effective, but also the most computationally demanding. Moreover, we found that the second most effective method is a modified version of Zeng's reordering technique, which was 3%-5% worse than pairwise merging, but much faster.
A Formal Privacy Management Framework Privacy is a complex issue which cannot be handled by exclusively technical means. The work described in this paper results from a multidisciplinary project involving lawyers and computer scientists with the double goal to (1) reconsider the fundamental values motivating privacy protection and (2) study the conditions for a better protection of these values by a combination of legal and technical means. One of these conditions is to provide to the individuals effective ways to convey their consent to the disclosure of their personal data. This paper focuses on the formal framework proposed in the project to deliver this consent through software agents.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
1.026624
0.026653
0.026645
0.025886
0.025128
0.025128
0.013523
0.00915
0.004115
0.000197
0.000003
0
0
0
Comparison of airborne hyperspectral data and EO-1 Hyperion for mineral mapping Airborne hyperspectral data have been available to researchers since the early 1980s and their use for geologic applications is well documented. The launch of the National Aeronautics and Space Administration Earth Observing 1 Hyperion sensor in November 2000 marked the establishment of a test bed for spaceborne hyperspectral capabilities. Hyperion covers the 0.4-2.5-μm range with 242 spectral bands at approximately 10-nm spectral resolution and 30-m spatial resolution. Analytical Imaging and Geophysics LLC and the Commonwealth Scientific and Industrial Research Organisation have been involved in efforts to evaluate, validate, and demonstrate Hyperion's utility for geologic mapping in a variety of sites in the United States and around the world. Initial results over several sites with established ground truth and years of airborne hyperspectral data show that Hyperion data from the shortwave infrared spectrometer can be used to produce useful geologic (mineralogic) information. Minerals mapped include carbonates, chlorite, epidote, kaolinite, alunite, buddingtonite, muscovite, hydrothermal silica, and zeolite. Hyperion data collected under optimum conditions (summer season, bright targets, well-exposed geology) indicate that Hyperion data meet prelaunch specifications and allow subtle distinctions such as determining the difference between calcite and dolomite and mapping solid solution differences in micas caused by substitution in octahedral molecular sites. Comparison of airborne hyperspectral data [from the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS)] to the Hyperion data establishes that Hyperion provides similar basic mineralogic information, with the principal limitation being limited mapping of fine spectral detail under less-than-optimum acquisition conditions (winter season, dark targets) based on lower signal-to-noise ratios.
Case histories demonstrate the analysis methodologies and level of information available from the Hyperion data. They also show the viability of Hyperion as a means of extending hyperspectral mineral mapping to areas not accessible to aircraft sensors. The analysis results demonstrate that spaceborne hyperspectral sensors can produce useful mineralogic information, but also indicate that SNR improvements are required for future spaceborne sensors to allow the same level of mapping that is currently possible from airborne sensors such as AVIRIS.
3D medical image compression based on multiplierless low-complexity RKLT and shape-adaptive wavelet transform A multiplierless low-complexity reversible integer Karhunen-Loève transform (Low-RKLT) is proposed based on matrix factorization. Conventional methods based on KLT suffer from high computational complexity and inability to be applied in lossless medical image compression. To solve these two problems, multiplierless Low-RKLT is investigated using multi-lifting in this paper. Combined with an ROI coding method, we have proposed a progressive lossy-to-lossless ROI compression method for three-dimensional (3D) medical images with high performance. In our proposed method Low-RKLT is used for the inter-frame decorrelation after SA-DWT in the spatial domain. Simulation results show that the proposed method performs much better in both lossless and lossy compression than the 3D-DWT-based method.
Fast Piecewise Linear Predictors For Lossless Compression Of Hyperspectral Imagery The work presented here deals with the design of predictors for the lossless compression of hyperspectral imagery. The large number of spectral bands that characterize hyperspectral imagery give it properties that can be exploited when performing compression. Specifically, in addition to the spatial correlation which is similar to all images, the large number of spectral bands means a high spectral correlation also. Lossless compression algorithms are typically divided into two stages, a decorrelation stage and a coding stage. This work deals with the design of predictors for the decorrelation stage which are both fast and good. Fast implies low complexity, which was achieved by having predictors with no multiplications, only comparisons and additions. Good means predictors that have performance close to the state of the art. To achieve this, both spectral and spatial correlations are used for the predictor. The performance of the developed predictors are compared to those in the most widely known algorithms, LOCO-I, used in JPEG-Lossless, and CALIC-Extended, the original version of which had the best compression performance of all the algorithms submitted to the JPEG-LS committee. The developed algorithms are shown to be much less complex than CALIC-Extended with better compression performance.
A Block-Based Inter-Band Lossless Hyperspectral Image Compressor We propose a hyperspectral image compressor called BH which considers its input image as being partitioned into square blocks, each lying entirely within a particular band, and compresses one such block at a time by using the following steps: first predict the block from the corresponding block in the previous band, then select a predesigned code based on the prediction errors, and finally encode the predictor coefficient and errors. Apart from giving good compression rates and being fast, BH can provide random access to spatial locations in the image. We hypothesize that BH works well because it accommodates the rapidly changing image brightness that often occurs in hyperspectral images. We also propose an intraband compressor called LM which is worse than BH, but whose performance helps explain BH's performance.
Extending the CCSDS Recommendation for Image Data Compression for Remote Sensing Scenarios This paper presents prominent extensions that have been proposed for the Consultative Committee for Space Data Systems Recommendation for Image Data Compression (CCSDS-122-B-1). Thanks to the proposed extensions, the Recommendation gains several important featured advantages: It allows any number of spatial wavelet decomposition levels; it provides scalability by quality, position, resolution, and...
Clustered DPCM for the lossless compression of hyperspectral images A clustered differential pulse code modulation lossless compression method for hyperspectral images is presented. The spectra of a hyperspectral image are clustered, and an optimized predictor is calculated for each cluster. Prediction is performed using a linear predictor. After prediction, the difference between the predicted and original values is computed. The difference is entropy-coded using ...
Lossless compression of hyperspectral images: Look-up tables with varying degrees of confidence State-of-the-art algorithms LUT and LAIS-LUT, proposed for lossless compression of hyperspectral images, exploit high spectral correlations in these images, and use look-up tables to perform predictions. However, there are cases where their predictions are not accurate. In this work we also use look-up tables, but give these tables different degrees of confidence, based on the local variations of the scaling factor. Our results are highly satisfactory and outperform both LUT and LAIS-LUT methods.
Compression of hyperspectral imagery using hybrid DPCM/DCT and entropy-constrained trellis coded quantization A system is presented for compression of hyperspectral imagery which utilizes trellis coded quantization (TCQ). Specifically, DPCM is used to spectrally decorrelate the hyperspectral data, while a 2-D discrete cosine transform (DCT) coding scheme is used for spatial decorrelation. Entropy-constrained codebooks are designed using a modified version of the generalized Lloyd algorithm. This coder achieves compression ratios of greater than 70:1 with average PSNR of the coded hyperspectral sequence exceeding 40.0 dB.
Effect of lossy vector quantization hyperspectral data compression on retrieval of red-edge indices This paper evaluates lossy vector quantization-based hyperspectral data compression algorithms, using red-edge indices as end-products. Three compact airborne spectrographic imager (CASI) data sets and one airborne visible/infrared imaging spectrometer (AVIRIS) data set from vegetated areas were tested. A basic compression system for hyperspectral data called the “reference” system, and three speed-improved compression systems called systems 1, 2, and 3, respectively, were examined. Five red-edge products representing the near infrared (NIR) reflectance shoulder (Vog 1), the NIR reflectance maximum (Red rs), the difference between the reflectance maximum and the minimum (Red rd), the wavelength of the reflectance maximum (Red lo), and the wavelength of the point of inflection of the NIR vegetation reflectance curve (Red lp) were retrieved from each original data set and from their decompressed data sets. The experiments show that the reference system induces the smallest product errors of the four compression systems. Systems 1 and 2 perform fairly closely to the reference system. They are the recommended compression systems since they compress a data set hundreds of times faster than the reference system. System 3 performs similarly to the reference system at high compression ratios. Product errors increase with the increase of compression ratio. The overall product errors are dominated by Vog 1, Red rs, and Red rd, since the amplitude of product error for these products is over one order of magnitude greater than those for the Red lo and Red lp products. The difference between the overall error from the reference and that from system 1 or 2 is below 0.5% at all compression ratios. The overall product error induced by system 1 or 2 is below 3.0% and 2.0% for CASI and AVIRIS data sets, respectively, when the compression ratio is 100 and below.
Spatial patterns of the product errors were examined for the AVIRIS data set. For all products, the errors are uniformly distributed in vegetated areas. Errors are relatively high in nonvegetated and mixed-pixel areas.
I-structures: data structures for parallel computing It is difficult to achieve elegance, efficiency, and parallelism simultaneously in functional programs that manipulate large data structures. We demonstrate this through careful analysis of program examples using three common functional data-structuring approaches: lists using Cons, arrays using Update (both fine-grained operators), and arrays using make-array (a “bulk” operator). We then present I-structures as an alternative and show elegant, efficient, and parallel solutions for the program examples in Id, a language with I-structures. The parallelism in Id is made precise by means of an operational semantics for Id as a parallel reduction system. I-structures make the language nonfunctional, but do not lose determinacy. Finally, we show that even in the context of purely functional languages, I-structures are invaluable for implementing functional data abstractions.
Visual support for reengineering work processes
Rank and relevance in novelty and diversity metrics for recommender systems The Recommender Systems community is paying increasing attention to novelty and diversity as key qualities beyond accuracy in real recommendation scenarios. Despite the rise of interest and work on the topic in recent years, we find that a clear common methodological and conceptual ground for the evaluation of these dimensions is still to be consolidated. Different evaluation metrics have been reported in the literature but the precise relation, distinction or equivalence between them has not been explicitly studied. Furthermore, the metrics reported so far miss important properties such as taking into consideration the ranking of recommended items, or whether items are relevant or not, when assessing the novelty and diversity of recommendations. We present a formal framework for the definition of novelty and diversity metrics that unifies and generalizes several state of the art metrics. We identify three essential ground concepts at the roots of novelty and diversity: choice, discovery and relevance, upon which the framework is built. Item rank and relevance are introduced through a probabilistic recommendation browsing model, building upon the same three basic concepts. Based on the combination of ground elements, and the assumptions of the browsing model, different metrics and variants unfold. We report experimental observations which validate and illustrate the properties of the proposed metrics.
Parallel image normalization on a mesh connected array processor Image normalization is a basic operation in various image processing tasks. A parallel algorithm for fast binary image normalization is proposed for a mesh connected array processor. The principal operation in this algorithm is pixel mapping. The basic idea of parallel pixel mapping is to utilize a store and forward mechanism which routes pixels from their source locations to destinations in parallel along the paths of minimum length. The routing is based on a simple yet powerful concept of flow control patterns . This can form the basis for designing other parallel algorithms for low level image processing. The normalization process is decomposed into three procedures: translation, rotation and scaling. In each procedure, a mapping algorithm is employed to route the object pixels from source locations to destinations. Simulation results for the parallel image normalization on generated images are provided.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
scores (score_0–score_13): 1.050331, 0.025477, 0.010112, 0.007417, 0.005233, 0.002541, 0.000657, 0.000156, 0.000044, 0, 0, 0, 0, 0
A Scalable Algorithm for Event-Triggered State Estimation With Unknown Parameters and Switching Topologies Over Sensor Networks An event-triggered distributed state estimation problem is investigated for a class of discrete-time nonlinear stochastic systems with unknown parameters over sensor networks (SNs) subject to switched topologies. An event-triggered communication strategy is employed to govern the information broadcast and reduce the unnecessary resource consumption. Based on the adopted communication strategy, a distributed state estimator is designed to estimate the plant states and also identify the unknown parameters. In the framework of input-to-state stability, sufficient conditions with an average dwell time are established to ensure the boundedness of estimation errors in the mean-square sense. In addition, the gains of the designed estimators are dependent on the solution of a set of matrix inequalities whose dimensions are unrelated to the scale of the underlying SNs, thereby fulfilling the scalability requirement. Finally, an illustrative simulation is utilized to verify the feasibility of the proposed design scheme.
On Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time-wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
scores (score_0–score_13): 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Termination detection for diffusing computations
Distributed deadlock detection Distributed deadlock models are presented for resource and communication deadlocks. Simple distributed algorithms for detection of these deadlocks are given. We show that all true deadlocks are detected and that no false deadlocks are reported. In our algorithms, no process maintains global information; all messages have an identical short length. The algorithms can be applied in distributed database and other message communication systems.
A distributed algorithm for detecting resource deadlocks in distributed systems This paper presents a distributed algorithm to detect deadlocks in distributed data bases. Features of this paper are (1) a formal model of the problem is presented, (2) the correctness of the algorithm is proved, i.e. we show that all true deadlocks will be detected and deadlocks will not be reported falsely, (3) no assumptions are made other than that messages are received correctly and in order and (4) the algorithm is simple.
Hazard Analysis in Formal Specification Action systems have proven their worth in the design of safety-critical systems. The approach is based on a firm mathematical foundation within which the reasoning about the correctness and behaviour of the system under development is carried out. Hazard analysis is a vital part of the development of safety-critical systems. The results of the hazard analysis are semantically different from the specification terms of the controlling software. The purpose of this paper is to show how we can incorporate the results of hazard analysis into an action system specification by encoding this information via available composition operators for action systems in order to specify robust and safe controllers.
Operational specification with joint actions: serializable databases Joint actions are introduced as a language basis for operational specification of reactive systems. Joint action systems are closed systems with no communication primitives. Their nondeterministic execution model is based on multi-party actions without an explicit control flow, and they are amenable for stepwise derivation by superposition. The approach is demonstrated by deriving a specification for serializable databases in simple derivation steps. Two different implementation strategies are imposed on this as further derivations. One of the strategies is two-phase locking, for which a separate implementation is given and proved correct. The other is multiversion timestamp ordering, for which the derivation itself is an implementation.
Towards programming with knowledge expressions Explicit use of knowledge expressions in the design of distributed algorithms is explored. A non-trivial case study is carried through, illustrating the facilities that a design language could have for setting and deleting the knowledge that the processes possess about the global state and about the knowledge of other processes. No implicit capabilities for logical reasoning are assumed. A language basis is used that allows common knowledge not only by an eager protocol but also in the true sense. The observation is made that the distinction between these two kinds of common knowledge can be associated with the level of abstraction: true common knowledge of higher levels can be implemented as eager common knowledge on lower levels. A knowledge-motivated abstraction tool is therefore suggested to be useful in supporting stepwise refinement of distributed algorithms.
Action-Based Concurrency and Synchronization for Objects We extend the Action-Oberon language for executing action systems with type-bound actions. Type-bound actions combine the concepts of type-bound procedures (methods) and actions, bringing object orientation to action systems. Type-bound actions are created at runtime along with the objects of their bound types. They permit the encapsulation of data and code in objects. Allowing an action to have more than one participant gives us a mechanism for expressing n-ary communication between objects. By showing how type-bound actions can logically be reduced to plain actions, we give our extension a firm foundation in the Refinement Calculus.
An Approach to Object-Orientation in Action Systems We extend the action system formalism with a notion of objects that can be active and distributed. With this extension we can model class-based systems as action systems. Moreover, as the introduced constructs can be translated into ordinary action systems, we can use the theory developed for action systems, especially the refinement calculus, even for class-based systems. We show how inheritance can be modelled in different ways via class refinement. Refining a class with an other class within the refinement calculus ensures that the original behavior of the class is maintained throughout the refinements. Finally, we show how to reuse proofs and entire class modules in a refinement step.
Abstract interpretation of reactive systems The advent of ever more complex reactive systems in increasingly critical areas calls for the development of automated verification techniques. Model checking is one such technique, which has proven quite successful. However, the state-explosion problem remains a major stumbling block. Recent experience indicates that solutions are to be found in the application of techniques for property-preserving abstraction and successive approximation of models. Most such applications have so far been based solely on the property-preserving characteristics of simulation relations. A major drawback of all these results is that they do not offer a satisfactory formalization of the notion of precision of abstractions. The theory of Abstract Interpretation offers a framework for the definition and justification of property-preserving abstractions. Furthermore, it provides a method for the effective computation of abstract models directly from the text of a program, thereby avoiding the need for intermediate storage of a full-blown model. Finally, it formalizes the notion of optimality, while allowing to trade precision for speed by computing suboptimal approximations. For a long time, applications of Abstract Interpretation have mainly focused on the analysis of universal safety properties, i.e., properties that hold in all states along every possible execution path. In this article, we extend Abstract Interpretation to the analysis of both existential and universal reactive properties, as expressible in the modal μ-calculus. It is shown how abstract models may be constructed by symbolic execution of programs. A notion of approximation between abstract models is defined while conditions are given under which optimal models can be constructed. Examples are given to illustrate this. We indicate conditions under which also falsehood of formulae is preserved. Finally, we compare our approach to those based on simulation relations.
Being Suspicious: Critiquing Problem Specifications One should look closely at problem specifications before attempting solutions: we may find that the specifier has only a vague or even erroneous notion of what is required, that the solution of a more general or more specific problem may be of more use, or simply that the problem as given is misstated. Using software development as an example, we present a knowledge-based system for critiquing one form of problem specification, that of a formal software specification.
Developing Mode-Rich Satellite Software by Refinement in Event B To ensure dependability of on-board satellite systems, the designers should, in particular, guarantee correct implementation of the mode transition scheme, i.e., ensure that the states of the system components are consistent with the global system mode. However, there is still a lack of scalable approaches to formal verification of correctness of complex mode transitions. In this paper we present a formal development of an Attitude and Orbit Control System (AOCS) undertaken within the ICT DEPLOY project. AOCS is a complex mode-rich system, which has an intricate mode-transition scheme. We show that refinement in Event B provides the engineers with a scalable formal technique that enables both development of mode-rich systems and proof-based verification of their mode consistency.
An Algebraic Foundation for Graph-based Diagrams in Computing We develop an algebraic foundation for some of the graph-based structures underlying a variety of popular diagrammatic notations for the specification, modelling and programming of computing systems. Using hypergraphs and higraphs as leading examples, a locally ordered category Graph(C) of graphs in a locally ordered category C is defined and endowed with symmetric monoidal closed structure. Two other operations on higraphs and variants, selected for relevance to computing applications, are generalised in this setting.
Nonrepetitive colorings of trees A coloring of the vertices of a graph G is nonrepetitive if no path in G forms a sequence consisting of two identical blocks. The minimum number of colors needed is the Thue chromatic number, denoted by π(G). A famous theorem of Thue asserts that π(P)=3 for any path P with at least four vertices. In this paper we study the Thue chromatic number of trees. In view of the fact that π(T) is bounded by 4 in this class we aim to describe the 4-chromatic trees. In particular, we study the 4-critical trees which are minimal with respect to this property. Though there are many trees T with π(T)=4 we show that any of them has a sufficiently large subdivision H such that π(H)=3. The proof relies on Thue sequences with additional properties involving palindromic words. We also investigate nonrepetitive edge colorings of trees. By a similar argument we prove that any tree has a subdivision which can be edge-colored by at most Δ+1 colors without repetitions on paths.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
scores (score_0–score_13): 1.011371, 0.013483, 0.011782, 0.011429, 0.007835, 0.005771, 0.002859, 0.000638, 0.000085, 0.000027, 0.000004, 0, 0, 0
Affine Correction Based Image Watermarking Robust to Geometric Attacks How to resist combined geometric attacks effectively while maintaining a high embedding capacity is still a challenging task for digital watermarking research. An affine correction based algorithm is proposed in this paper, which can resist combined geometric attacks and keep a higher watermark embedding capacity. The SURF algorithm and the RANSAC algorithm are used to extract, match and select feature points from the attacked image and the original image. Then, the least square algorithm is used to estimate the affine matrix of the geometric attacks according to the relationship between the matched feature points. The attacks are corrected based on the estimated affine matrix. A fine correction step is included to improve the precision of the watermark detection. To resist the cropping attacks, the watermark information is encoded with LT-coding. The encoded watermark is embedded in the DWT-DCT composite domain of the image. Experimental results show that the proposed algorithm not only has a high embedding capacity, but also is robust to many kinds of geometric attacks.
On Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power related issues, are satisfied.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
1
0
0
0
0
0
0
0
0
0
0
0
0
0
A Framework to Support Software Quality Trade-Offs from a Process-Based Perspective.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system, resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve completely general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
A discrete event system model of business system-a systems theoretic foundation for information systems analysis. A variety of information systems methodologies for information systems analysis have been proposed. Though each methodology has its own effective concepts and tools, there still does not seem to exist a rigorous model of a business system. In this paper a general model of business systems is proposed. The model describes the whole mechanism of typical routine processing of business tasks with slips and business papers. Furthermore it can be used to examine the ability and limitation of popular tools. The model proposed is called a business transaction system. It consists of both static and dynamic structures. The former depicts the interconnection of the transactions and file system in a business system. The dynamic structure is constructed as a state space representation by introducing a state space, and then the resultant dynamic system is a discrete event system. The state space consists of the file system and schedule of transactions processing. The model provides answers to some questions about the nature of information systems methodologies.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system, resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve completely general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Addressing degraded service outcomes and exceptional modes of operation in behavioural models A dependable software system should attempt to at least partially satisfy user goals if full service provision is impossible due to an exceptional situation. In addition, a dependable system should evaluate the effects of the exceptional situation on future service provision and adjust the set of services it promises to deliver accordingly. In this paper we show how to express degraded service outcomes and exceptional modes of operation in behavioural models, i.e. use cases, activity diagrams and state charts. We also outline how to integrate the task of discovering and defining degraded outcomes and exceptional modes of operation into a requirements engineering process by presenting the relevant parts of our dependability-focused requirements engineering process DREP.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system, resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve completely general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Procedures and atomicity refinement The introduction of an early return from a (remote) procedure call can increase the degree of parallelism in a parallel or distributed algorithm modeled by an action system. We define a return statement for procedures in an action systems framework and show that it corresponds to carrying out an atomicity refinement.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
On the design of a graphical transition network editor This paper describes a graphical editor for the SYNICS user-interface development system. SYNICS is based on transition network diagrams and uses a grammar to specify the pattern matching of user input. Complex SYNICS programs can be difficult for the designer to comprehend, making development slow and prone to errors. The editor described here is intended to increase the designer's comprehension by allowing him to define the user interface on a computer in terms of diagrams. Transition network diagrams are used to describe the control structure of the interface, and railroad diagrams are used to describe associated syntax rules. These diagrams may be drawn and manipulated by the designer, and the information extracted from them is used to check the design for errors and automatically generate their corresponding SYNICS description. The design considerations for this system are discussed.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
score_0–score_13: 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Lossless compression of color map images by context tree modeling. Significant lossless compression results of color map images have been obtained by dividing the color maps into layers and by compressing the binary layers separately using an optimized context tree model that exploits interlayer dependencies. Even though the use of a binary alphabet simplifies the context tree construction and exploits spatial dependencies efficiently, it is expected that an equivalent or better result would be obtained by operating directly on the color image without layer separation. In this paper, we extend the previous context-tree-based method to operate on color values instead of binary layers. We first generate an n-ary context tree by constructing a complete tree up to a predefined depth, and then prune out nodes that do not provide compression improvements. Experiments show that the proposed method outperforms existing methods for a large set of different color map images.
A survey on palette reordering methods for improving the compression of color-indexed images Palette reordering is a well-known and very effective approach for improving the compression of color-indexed images. In this paper, we provide a survey of palette reordering methods, and we give experimental results comparing the ability of seven of them in improving the compression efficiency of JPEG-LS and lossless JPEG 2000. We concluded that the pairwise merging heuristic proposed by Memon et al. is the most effective, but also the most computationally demanding. Moreover, we found that the second most effective method is a modified version of Zeng's reordering technique, which was 3%-5% worse than pairwise merging, but much faster.
Lossy to Lossless Spatially Scalable Depth Map Coding with Cellular Automata Spatially scalable image coding algorithms are mostly based on linear filtering techniques that give a multi-resolution representation of the data. Reversible cellular automata can be instead used as simpler, non-linear filter banks that give similar performance. In this paper, we investigate the use of reversible cellular automata for lossy to lossless and spatially scalable coding of smooth multi-level images, such as depth maps. In a few cases, the compression performance of the proposed coding method is comparable to that of the JBIG standard, but, under most test conditions, we show better compression performances than those obtained with the JBIG or the JPEG2000 standards. The results stimulate further investigation into cellular automata-based methods for multi-level image compression.
Improved CABAC design in H.264/AVC for lossless depth map coding The depth map represents three-dimensional (3D) data and is used for depth image-based rendering (DIBR) to synthesize virtual views. Since the quality of synthesized virtual views highly depends on the quality of the depth map, we encode the depth map by lossless coding mode. However, context-based adaptive binary arithmetic coding (CABAC) for the H.264/AVC standard does not guarantee the best coding performance for lossless depth map coding because CABAC was originally designed for lossy coding. In this paper, we propose an improved coding method of CABAC for lossless depth map coding considering the statistical properties of residual data from lossless depth map coding. Experimental results show that the proposed CABAC method provides approximately 4.3% bit saving, compared to the original CABAC in H.264/AVC.
Universal modeling and coding The problems arising in the modeling and coding of strings for compression purposes are discussed. The notion of an information source that simplifies and sharpens the traditional one is axiomatized, and adaptive and nonadaptive models are defined. With a measure of complexity assigned to the models, a fundamental theorem is proved which states that models that use any kind of alphabet extension are inferior to the best models using no alphabet extensions at all. A general class of so-called first-in first-out (FIFO) arithmetic codes is described which require no alphabet extension devices and which therefore can be used in conjunction with the best models. Because the coding parameters are the probabilities that define the model, their design is easy, and the application of the code is straightforward even with adaptively changing source models.
Hierarchical Oriented Predictions for Resolution Scalable Lossless and Near-Lossless Compression of CT and MRI Biomedical Images We propose a new hierarchical approach to resolution scalable lossless and near-lossless (NLS) compression. It combines the adaptability of DPCM schemes with new hierarchical oriented predictors to provide resolution scalability with better compression performances than the usual hierarchical interpolation predictor or the wavelet transform. Because the proposed hierarchical oriented prediction (HOP) is not really efficient on smooth images, we also introduce new predictors, which are dynamically optimized using a least-square criterion. Lossless compression results, which are obtained on a large-scale medical image database, are more than 4% better on CTs and 9% better on MRIs than resolution scalable JPEG-2000 (J2K) and close to nonscalable CALIC. The HOP algorithm is also well suited for NLS compression, providing an interesting rate-distortion tradeoff compared with JPEG-LS and equivalent or a better PSNR than J2K for a high bit rate on noisy (native) medical images.
LOCO-I: a low complexity, context-based, lossless image compression algorithm LOCO-I (low complexity lossless compression for images) is a novel lossless compression algorithm for continuous-tone images which combines the simplicity of Huffman coding with the compression potential of context models, thus “enjoying the best of both worlds.” The algorithm is based on a simple fixed context model, which approaches the capability of the more complex universal context modeling techniques for capturing high-order dependencies. The model is tuned for efficient performance in conjunction with a collection of (context-conditioned) Huffman codes, which is realized with an adaptive, symbol-wise, Golomb-Rice code. LOCO-I attains, in one pass, and without recourse to the higher complexity arithmetic coders, compression ratios similar or superior to those obtained with state-of-the-art schemes based on arithmetic coding. In fact, LOCO-I is being considered by the ISO committee as a replacement for the current lossless standard in low-complexity applications
Lossless Interframe Image Compression via Context Modeling
Compression of multispectral images by three-dimensional SPIHT algorithm The authors carry out low bit-rate compression of multispectral images by means of Said and Pearlman's SPIHT algorithm, suitably modified to take into account the interband dependencies. Two techniques are proposed: in the first, a three-dimensional (3D) transform is taken (wavelet in the spatial domain, Karhunen-Loeve in the spectral domain) and a simple 3D SPIHT is used; in the second, after...
A practical theory of programming Programs are predicates, programming is proving, and termination is timing.
Object-oriented design of distributed systems (abstract and references only) Object-oriented approaches to software development have become popular since the early 1980s. They are believed to be revolutionary methods which may help to solve the existing software crisis. The object-oriented concepts have been successfully applied to many different applications, such as reusable software, database, and distributed systems. In addition, object-based methodologies are developing through the entire life-cycle of software. In this work, we introduce a new method for distributed object-oriented system design which spans the analysis phase and the design phase. A structured format is often used to obtain the domain information for object-oriented methods [1,2,3]. Our method uses physical data flow diagrams (dfds) to represent the initial problem. Initially, the information in the dfds is converted into an internal knowledge base representation form. The knowledge base is built for saving facts and inferring new facts with heuristic rules, which are required to generate the object model. There are five primary steps in our method. First, objects and their actions (operations) are extracted from the dfd internal form. Objects are selected from the source, sink, data store, process name, and function name of the process in the dfd. Main actions of an object are selected from the function name in the process. Other actions are identified from the data flow between a process and a data store. Second, selected objects are classified into two types: active and passive. An active object is an object which performs some action(s) on another object(s) or receives actions from another object. The passive object is an object which merely receives some action(s) from another object or objects. With the exception of the process name, all other objects in the dfd represent passive objects. Third, the environments for each object module are defined. Class and instance objects are identified from the different levels of dfds. The visibility of an object is identified by indicating the other objects which are related to the current object. The related actions of the current object are collected and saved in the knowledge base. Fourth, a frame object model is generated from the information in the knowledge base. The object module is treated as an abstract data type which encapsulates data and operations from other modules so that only internal operations manipulate the defined data. There are two types of object modules as defined above. Both the active and passive object modules are needed to identify the concurrent processing explicitly. An active object
WorkFlow systems: a few definitions and a few suggestions This paper hopes to make a contribution on three aspects of workflow systems: we stress the fact that there is a broken symmetry between the level of the specification of the procedures and the level of their enactment; we propose some ways of classifying activities and exceptions; and we propose some run-time functionalities to help users deal with exceptions.
A component-based framework for modeling and analyzing probabilistic real-time systems A challenging research issue of analyzing a real-time system is to model the tasks composing the system and the resource provided to the system. In this paper, we propose a probabilistic component-based model which abstracts in the interfaces both the functional and non-functional requirements of such systems. This approach allows designers to unify in the same framework probabilistic scheduling techniques and compositional guarantees that go from soft to hard real-time. We provide sufficient schedulability tests for task systems using such framework when the scheduler is either preemptive Fixed-Priority or Earliest Deadline First.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
score_0–score_13: 1.075891, 0.028571, 0.026126, 0.017418, 0.002717, 0.000286, 0.000037, 0.000011, 0, 0, 0, 0, 0, 0
Abstractions of non-interference security: probabilistic versus possibilistic. The Shadow Semantics (Morgan, Math Prog Construction, vol 4014, pp 359–378, 2006; Morgan, Sci Comput Program 74(8):629–653, 2009) is a possibilistic (qualitative) model for noninterference security. Subsequent work (McIver et al., Proceedings of the 37th international colloquium conference on Automata, languages and programming: Part II, 2010) presents a similar but more general quantitative model that treats probabilistic information flow. Whilst the latter provides a framework to reason about quantitative security risks, that extra detail entails a significant overhead in the verification effort needed to achieve it. Our first contribution in this paper is to study the relationship between those two models (qualitative and quantitative) in order to understand when qualitative Shadow proofs can be “promoted” to quantitative versions, i.e. in a probabilistic context. In particular we identify a subset of the Shadow’s refinement theorems that, when interpreted in the quantitative model, still remain valid even in a context where a passive adversary may perform probabilistic analysis. To illustrate our technique we show how a semantic analysis together with a syntactic restriction on the protocol description, can be used so that purely qualitative reasoning can nevertheless verify probabilistic refinements for an important class of security protocols. We demonstrate the semantic analysis by implementing the Shadow semantics in Rodin, using its special-purpose refinement provers to generate (and discharge) the required proof obligations (Abrial et al., STTT 12(6):447–466, 2010). We apply the technique to some small examples based on secure multi-party computations.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Procedures and atomicity refinement The introduction of an early return from a (remote) procedure call can increase the degree of parallelism in a parallel or distributed algorithm modeled by an action system. We define a return statement for procedures in an action systems framework and show that it corresponds to carrying out an atomicity refinement.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
score_0–score_13: 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Interpreting the B-Method in the Refinement Calculus In this paper, we study the B-Method in the light of the theory of refinement calculus. It allows us to explain the proof obligations for a refinement component in terms of standard data refinement. A second result is an improvement of the architectural condition of [PR98], ensuring global correctness of a B software system using the sees primitive.
A comparison of refinement orderings and their associated simulation rules In this paper we compare the refinement orderings, and their associated simulation rules, of state-based specification languages such as Z and Object-Z with the refinement orderings of event-based specification languages such as CSP. We prove with a simple counter-example that data refinement, established using the standard simulation rules for Z, does not imply failures refinement in CSP. This contradicts accepted results.
Refinement in Object-Z and CSP In this paper we explore the relationship between refinement in Object-Z and refinement in CSP. We prove with a simple counterexample that refinement within Object-Z, established using the standard simulation rules, does not imply failures-divergences refinement in CSP. This contradicts accepted results.Having established that data refinement in Object-Z and failures refinement in CSP are not equivalent we identify alternative refinement orderings that may be used to compare Object-Z classes and CSP processes. When reasoning about concurrent properties we need the strength of the failures-divergences refinement ordering and hence identify equivalent simulation rules for Object-Z. However, when reasoning about sequential properties it is sufficient to work within the simpler relational semantics of Object-Z. We discuss an alternative denotational semantics for CSP, the singleton failures semantic model, which has the same information content as the relational model of Object-Z.
A Theory of Generalised Substitutions We augment the usual wp semantics of substitutions with an explicit notion of frame, which allows us to develop a simple self-contained theory of generalised substitutions outside their usual context of the B Method. We formulate three fundamental healthiness conditions which semantically characterise all substitutions, and from which we are able to derive directly, without need of any explicit further appeal to syntax, a number of familiar properties of substitutions, as well as several new ones specifically concerning frames. In doing so we gain some useful insights about the nature of substitutions, which enables us to resolve some hitherto problematic issues concerning substitutions within the B Method.
Refinement of State-Based Concurrent Systems The traces, failures, and divergences of CSP can be expressed as weakest precondition formulæ over action systems. We show how such systems may be refined up to failures-divergences, by giving two proof methods which are sound and jointly complete: forwards and backwards simulations. The technical advantage of our weakest precondition approach over the usual relational approach is in our simple handling of divergence; the practical advantage is in the fact that the refinement calculus for sequential programs may be used to calculate forwards simulations. Our methods may be adapted to state-based development methods such as VDM or Z.
Refining Action Systems within B-Tool. Action systems is a formalism designed for the construction of parallel and distributed systems in a stepwise manner within the refinement calculus. In this paper we show how action systems can be derived and refined within a mechanical proof tool, the B-Tool. We describe how action systems are embedded in B-Tool. Due to this embedding we can now develop parallel and distributed systems within the B-Tool. We also show how a typical and nontrivial refinement rule, the superposition refinement...
A superimposition control construct for distributed systems A control structure called a superimposition is proposed. The structure contains schematic abstractions of processes called roletypes in its declaration. Each roletype may be bound to processes from a basic distributed algorithm, and the operations of the roletype will then execute interleaved with those of the basic processes, over the same state space. This structure captures a kind of modularity natural for distributed programming, which previously has been treated using a macro-like implantation of code. The elements of a superimposition are identified, a syntax is suggested, correctness criteria are defined, and examples are presented.
Hierarchical correctness proofs for distributed algorithms This thesis introduces a new model for distributed computation in asynchronous networks, the input-output automaton. This simple, powerful model captures in a novel way the game-theoretical interaction between a system and its environment, and allows fundamental properties of distributed computation such as fair computation to be naturally expressed. Furthermore, this model can be used to construct modular, hierarchical correctness proofs of distributed algorithms. This thesis defines the input-output automaton model, and presents an interesting example of how this model can be used to construct such proofs.
Machine Learning This exciting addition to the McGraw-Hill Series in Computer Science focuses on the concepts and techniques that contribute to the rapidly changing field of machine learning--including probability and statistics, artificial intelligence, and neural networks--unifying them all in a logical and coherent manner. Machine Learning serves as a useful reference tool for software developers and researchers, as well as an outstanding text for college students.Table of contentsChapter 1. IntroductionChapter 2. Concept Learning and the General-to-Specific OrderingChapter 3. Decision Tree LearningChapter 4. Artificial Neural NetworksChapter 5. Evaluating HypothesesChapter 6. Bayesian LearningChapter 7. Computational Learning TheoryChapter 8. Instance-Based LearningChapter 9. Inductive Logic ProgrammingChapter 10. Analytical LearningChapter 11. Combining Inductive and Analytical LearningChapter 12. Reinforcement Learning.
Strategies for information requirements determination Correct and complete information requirements are key ingredients in planning organizational information systems and in implementing information systems applications. Yet, there has been relatively little research on information requirements determination, and there are relatively few practical, well-formulated procedures for obtaining complete, correct information requirements. Methods for obtaining and documenting information requirements are proposed, but they tend to be presented as general solutions rather than alternative methods for implementing a chosen strategy of requirements determination. This paper identifies two major levels of requirements: the organizational information requirements reflected in a planned portfolio of applications and the detailed information requirements to be implemented in a specific application. The constraints on humans as information processors are described in order to explain why "asking" users for information requirements may not yield a complete, correct set. Various strategies for obtaining information requirements are explained. Examples are given of methods that fit each strategy. A contingency approach is then presented for selecting an information requirements determination strategy. The contingency approach is explained both for defining organizational information requirements and for defining specific, detailed requirements in the development of an application.
Conceptual Structures: Fulfilling Peirce's Dream, Fifth International Conference on Conceptual Structures, ICCS '97, Seattle, Washington, USA, August 3-8, 1997, Proceedings
Fuzzy logic as a basis for reusing task‐based specifications
Incremental planning using conceptual graphs
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.103556
0.1
0.053333
0.033333
0.004642
0.000649
0.000136
0.000008
0
0
0
0
0
0
Synchronization of Lur'e systems via stochastic reliable sampled-data controller. This paper deals with the problem of synchronization methods for Lur'e systems with a stochastic reliable sampled-data control scheme. By constructing the framework of linear matrix inequalities (LMIs) based on the Lyapunov method and utilizing the Wirtinger-based integral inequality (WBI), Jensen's inequality (JI), and other lemmas, synchronization criteria of sampled-data control for Lur'e systems are derived and compared with the existing works. The necessity and validity of the proposed results are illustrated by two numerical examples.
Betweenness Centrality-Based Consensus Protocol for Second-Order Multiagent Systems With Sampled-Data. This paper designs a new leader-following consensus protocol for second-order multiagent systems with time-varying sampling. For the first time in designing a leader-following protocol, the concept of betweenness centrality is adopted to analyze the information flow in the consensus problem for multiagent systems. By construction of a suitable Lyapunov-Krasovskii functional, some criteria for desi...
Improved stabilization criteria for fuzzy systems under variable sampling. This paper investigates the problem of stabilization for fuzzy sampled-data systems with variable sampling. A novel Lyapunov–Krasovskii functional (LKF) is introduced to the fuzzy systems. The benefit of the new approach is that the LKF develops more information about actual sampling pattern of the fuzzy sampled-data systems. In addition, some symmetric matrices involved in the LKF are not required to be positive definite. Based on a recently introduced Wirtinger-based integral inequality that has been shown to be less conservative than Jensen’s inequality, much less conservative stabilization conditions are obtained. Then, the corresponding sampled-data controller can be synthesized by solving a set of linear matrix inequalities (LMIs). Finally, an illustrative example is given to show the feasibility and effectiveness of the proposed method.
Stochastic switched sampled-data control for synchronization of delayed chaotic neural networks with packet dropout. This paper addresses the exponential synchronization issue of delayed chaotic neural networks (DCNNs) with control packet dropout. A novel stochastic switched sampled-data controller with time-varying sampling is developed in the frame of the zero-input strategy. First, by making full use of the available characteristics on the actual sampling pattern, a newly loop-delay-product-type Lyapunov–Krasovskii functional (LDPTLKF) is constructed via introducing a matrix-refined-function, which can reflect the information of delay variation. Second, based on the LDPTLKF and the relaxed Wirtinger-based integral inequality (RWBII), novel synchronization criteria are established to guarantee that DCNNs are synchronous exponentially when the control packet dropout occurs in a random way, which obeys certain Bernoulli distributed white noise sequences. Third, a desired sampled-data controller can be designed on account of the proposed optimization algorithm. Finally, the effectiveness and advantages of the obtained results are illustrated by two numerical examples with simulations.
Stability and Stabilization of Takagi-Sugeno Fuzzy Systems via Sampled-Data and State Quantized Controller. In this paper, we investigate the problem of stability and stabilization for sampled-data fuzzy systems with state quantization. By using an input delay approach, the sampled-data fuzzy systems with state quantization are transformed into a continuous-time system with a delay in the state. The transformed system contains nondifferentiable time-varying state delay. Based on some integral techniques...
Robust sampled-data stabilization of linear systems: an input delay approach A new approach to robust sampled-data control is introduced. The system is modelled as a continuous-time one, where the control input has a piecewise-continuous delay. Sufficient linear matrix inequalities (LMIs) conditions for sampled-data state-feedback stabilization of such systems are derived via a descriptor approach to time-delay systems. The only restriction on the sampling is that the distance between successive sampling times is not greater than some prechosen h>0 for which the LMIs are feasible. For h→0 the conditions coincide with the necessary and sufficient conditions for continuous-time state-feedback stabilization. Our approach is applied to two problems: sampled-data stabilization of systems with polytopic-type uncertainties and regional stabilization by sampled-data saturated state-feedback.
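The LMI-based conditions in the sampled-data abstracts above generalize the classical delay-free Lyapunov test, which is easy to check numerically. The sketch below is only that special case (assuming SciPy is available; it is not the sampled-data criteria of the cited papers): solve the Lyapunov equation AᵀP + PA = −Q and check that P is positive definite.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def lyapunov_certificate(A, Q=None):
    """Solve A^T P + P A = -Q and report whether P is positive definite.

    For the delay-free system x' = A x, a symmetric positive-definite
    solution P certifies asymptotic stability (V(x) = x^T P x is then a
    Lyapunov function).
    """
    A = np.asarray(A, dtype=float)
    if Q is None:
        Q = np.eye(A.shape[0])
    # SciPy solves a X + X a^H = q, so pass a = A^T, q = -Q
    # to obtain A^T P + P A = -Q.
    P = solve_continuous_lyapunov(A.T, -Q)
    P = (P + P.T) / 2.0  # symmetrize against round-off
    is_pd = np.all(np.linalg.eigvalsh(P) > 0)
    return P, bool(is_pd)

# A stable matrix (eigenvalues -1, -2) yields a positive-definite P;
# an unstable one (eigenvalues 2, -1) does not.
A_stable = np.array([[0.0, 1.0], [-2.0, -3.0]])
A_unstable = np.array([[0.0, 1.0], [2.0, 1.0]])
_, ok1 = lyapunov_certificate(A_stable)
_, ok2 = lyapunov_certificate(A_unstable)
```

The delay and sampled-data criteria in the abstracts replace this single matrix equation with LMIs over augmented functionals, but the certificate logic is the same.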
An Improved Input Delay Approach to Stabilization of Fuzzy Systems Under Variable Sampling In this paper, we investigate the problem of stabilization for sampled-data fuzzy systems under variable sampling. A novel Lyapunov–Krasovskii functional (LKF) is defined to capture the characteristic of sampled-data systems, and an improved input delay approach is proposed. By the use of an appropriate enlargement scheme, new stability and stabilization criteria are obtained in terms of linear matrix inequalities (LMIs). Compared with the existing results, the newly obtained ones contain less conservatism. Some illustrative examples are given to show the effectiveness of the proposed method and the significant improvement on the existing results.
Stability and stabilization of delayed T-S fuzzy systems: a delay partitioning approach This paper proposes a new approach, namely, the delay partitioning approach, to solving the problems of stability analysis and stabilization for continuous time-delay Takagi-Sugeno fuzzy systems. Based on the idea of delay fractioning, a new method is proposed for the delay-dependent stability analysis of fuzzy time-delay systems. Due to the instrumental idea of delay partitioning, the proposed stability condition is much less conservative than most of the existing results. The conservatism reduction becomes more obvious with the partitioning getting thinner. Based on this, the problem of stabilization via the so-called parallel distributed compensation scheme is also solved. Both the stability and stabilization results are further extended to time-delay fuzzy systems with time-varying parameter uncertainties. All the results are formulated in the form of linear matrix inequalities (LMIs), which can be readily solved via standard numerical software. The advantage of the results proposed in this paper lies in their reduced conservatism, as shown via detailed illustrative examples. The idea of delay partitioning is well demonstrated to be efficient for conservatism reduction and could be extended to solving other problems related to fuzzy delay systems.
An Augmented Model For Robust Stability Analysis Of Time-Varying Delay Systems Stability analysis of linear systems with time-varying delay is investigated. In order to highlight the relations between the variation of the delay and the states, redundant equations are introduced to construct a new modelling of the delay system. New types of Lyapunov-Krasovskii functionals are then proposed allowing to reduce the conservatism of the stability criterion. Delay-dependent stability conditions are then formulated in terms of linear matrix inequalities. Finally, several examples show the effectiveness of the proposed methodology.
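Several of the stability abstracts above contrast Jensen's inequality with the Wirtinger-based integral inequality. For reference, their standard forms (as given by Seuret and Gouaisbaut, not taken from the abstracts themselves) for any R = Rᵀ > 0 are:

```latex
% Jensen's inequality
\int_{a}^{b} \dot{x}^{\top}(s)\, R\, \dot{x}(s)\, \mathrm{d}s
  \;\ge\; \frac{1}{b-a}\, \omega_0^{\top} R\, \omega_0,
\qquad \omega_0 = x(b) - x(a).

% Wirtinger-based integral inequality (adds a nonnegative second term)
\int_{a}^{b} \dot{x}^{\top}(s)\, R\, \dot{x}(s)\, \mathrm{d}s
  \;\ge\; \frac{1}{b-a}\left( \omega_0^{\top} R\, \omega_0
        + 3\, \omega_1^{\top} R\, \omega_1 \right),
\qquad \omega_1 = x(b) + x(a) - \frac{2}{b-a}\int_{a}^{b} x(s)\, \mathrm{d}s.
```

Since the extra term is nonnegative, the Wirtinger-based bound is never weaker than Jensen's, which is why the abstracts report less conservative LMI conditions when it is used to estimate the derivative of the Lyapunov-Krasovskii functional.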
Joining specification statements The specification statement allows us to easily express what a program statement does. This paper shows how refinement of specification statements can be directly expressed using the predicate calculus. It also shows that the specification statements interpreted as predicate transformers form a complete lattice, and that this lattice is the lattice of conjunctive predicate transformers. The join operator of this lattice is constructed as a specification statement. The join operators of two interesting sublattices of the set of specification statements are also investigated.
Automatically generating test data from a Boolean specification This paper presents a family of strategies for automatically generating test data for any implementation intended to satisfy a given specification that is a Boolean formula. The fault detection effectiveness of these strategies is investigated both analytically and empirically, and the costs, assessed in terms of test set size, are compared.
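The test-generation strategies in the abstract above start from the truth assignments of the Boolean specification formula. A minimal sketch of that raw material, assuming the specification is given as a Python predicate (the function and names here are illustrative, not the paper's notation):

```python
from itertools import product

def truth_table_tests(formula, names):
    """Enumerate every assignment of the Boolean variables `names` and
    split them into true points and false points of `formula` -- the raw
    material from which specification-based test strategies select
    test cases (illustrative sketch only)."""
    true_pts, false_pts = [], []
    for values in product([False, True], repeat=len(names)):
        env = dict(zip(names, values))
        (true_pts if formula(**env) else false_pts).append(env)
    return true_pts, false_pts

# Specification (a and b) or c: 5 of the 8 assignments satisfy it.
true_pts, false_pts = truth_table_tests(
    lambda a, b, c: (a and b) or c, ["a", "b", "c"])
```

A real strategy would then sample from these sets (e.g. one true point per term, near-false points for fault detection) rather than use the exhaustive table, whose size grows as 2^n.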
Non-speech input and speech recognition for real-time control of computer games This paper reports a comparison of user performance (time and accuracy) when controlling a popular arcade game of Tetris using speech recognition or non-speech (humming) input techniques. The preliminary qualitative study with seven participants shows that users were able to control the game using both methods but required more training and feedback for the humming control. The revised interface, which implemented these requirements, was positively responded by users. The quantitative test with 12 other participants shows that humming excelled in both time and accuracy, especially over longer distances and advanced difficulty levels.
Performance Analysis of the JPEG 2000 Image Coding Standard Some of the major objectives of the JPEG 2000 still image coding standard were compression and memory efficiency, lossy to lossless coding, support for continuous-tone to bi-level images, error resilience, and random access to regions of interest. This paper will provide readers with some insight on various features and functionalities supported by a baseline JPEG 2000-compliant codec. Three JPEG 2000 software implementations (Kakadu, JasPer, JJ2000) are compared with several other codecs, including JPEG, JBIG, JPEG-LS, MPEG-4 VTC and H.264 intra coding. This study can serve as a guideline for users to estimate the effectiveness of JPEG 2000 for various applications, and to select optimal parameters according to specific application requirements.
A User-Centric Typology of Information System Requirements. Keeping in view the increasing importance of users in shaping and acceptance of Information Systems (IS) products, there is a need for deeper understanding of IS user requirements. However, currently there is a gap in IS literature. Virtually no theory-based Typological Scheme (TS) exists for IS user requirements. And a typology is widely acknowledged as the first step towards understanding a phenomenon. Using concepts from an inter-disciplinary review of research in the areas of requirements engineering, product quality, and customer satisfaction, this study explores the possibility of developing a TS suitable to the IS context. The proposed TS, iteratively constructed through literature review and experimentation, demonstrates promise. The requirement types identified in the suggested TS are found to have both theoretical and empirical support and have useful implications for future research as well as practice.
1.067778
0.066667
0.033333
0.025
0.01
0.001645
0.000545
0.000115
0.000005
0
0
0
0
0
A Framework of Syntactic Models for the Implementation of Visual Languages In this paper we present a framework of syntactic models for the definition and implementation of visual languages. We analyze a wide range of existing visual languages and, for each of them, we propose a characterization according to a syntactic model. The framework has been implemented in the Visual Language Compiler-Compiler (VLCC) system. VLCC is a practical, flexible and extensible tool for the automatic generation of visual programming environments which allows one to implement visual languages once they are modeled according to a syntactic model.
Frameworks for interactive, extensible, information-intensive applications We describe a set of application frameworks designed especially to support information-intensive applications in complex domains, where the visual organization of an application's information is critical. Our frameworks, called visual formalisms, provide the semantic structures and editing operations, as well as the visual layout algorithms, needed to create a complete application. Examples of visual formalisms include tables, panels, graphs, and outlines. They are designed to be extended both by programmers, through subclassing, and by end users, through an integrated extension language.
Abstract Syntax and Semantics of Visual Languages The effective use of visual languages requires a precise understanding of their meaning. Moreover, it is impossible to prove properties of visual languages like soundness of transformation rules or correctness results without having a formal language definition. Although this sounds obvious, it is surprising that only little work has been done about the semantics of visual languages, and even worse, there is no general framework available for the semantics specification of different visual languages. We present such a framework that is based on a rather general notion of abstract visual syntax. This framework allows a logical as well as a denotational approach to visual semantics, and it facilitates the formal reasoning about visual languages and their properties. We illustrate the concepts of the proposed approach by defining abstract syntax and semantics for the visual languages VEX, Show and Tell and Euler circles. We demonstrate the semantics in action by proving a rule for visual reasoning with Euler circles and by showing the correctness of a Show and Tell program.
An Algebraic Foundation for Graph-based Diagrams in Computing We develop an algebraic foundation for some of the graph-based structures underlying a variety of popular diagrammatic notations for the specification, modelling and programming of computing systems. Using hypergraphs and higraphs as leading examples, a locally ordered category Graph(C) of graphs in a locally ordered category C is defined and endowed with symmetric monoidal closed structure. Two other operations on higraphs and variants, selected for relevance to computing applications, are generalised in this setting.
Query Optimization Techniques Utilizing Path Indexes in Object-Oriented Database Systems We propose query optimization techniques that fully utilize the advantages of path indexes in object-oriented database systems. Although path indexes provide efficient access to complex objects, little research has been done on query optimization that fully utilizes path indexes. We first devise a generalized index intersection technique, adapted to the structure of the path index extended from conventional indexes, for utilizing multiple (path) indexes to access each class in a query. We...
Connections in acyclic hypergraphs We demonstrate a sense in which the equivalence between blocks (subgraphs without articulation points) and biconnected components (subgraphs in which there are two edge-disjoint paths between any pair of nodes) that holds in ordinary graph theory can be generalized to hypergraphs. The result has an interpretation for relational databases that the universal relations described by acyclic join dependencies are exactly those for which the connections among attributes are defined uniquely. We also exhibit a relationship between the process of Graham reduction (Graham, 1979) of hypergraphs and the process of tableau reduction (Aho, Sagiv and Ullman, 1979) that holds only for acyclic hypergraphs.
A distributed alternative to finite-state-machine specifications A specification technique, formally equivalent to finite-state machines, is offered as an alternative because it is inherently distributed and more comprehensible. When applied to modules whose complexity is dominated by control, the technique guides the analyst to an effective decomposition of complexity, encourages well-structured error handling, and offers an opportunity for parallel computation. When applied to distributed protocols, the technique provides a unique perspective and facilitates automatic detection of some classes of error. These applications are illustrated by a controller for a distributed telephone system and the full-duplex alternating-bit protocol for data communication. Several schemes are presented for executing the resulting specifications.
The use of goals to surface requirements for evolving systems This paper addresses the use of goals to surface requirements for the redesign of existing or legacy systems. Goals are widely recognized as important precursors to system requirements, but the process of identifying and abstracting them has not been researched thoroughly. We present a summary of a goal-based method (GBRAM) for uncovering hidden issues, goals, and requirements and illustrate its application to a commercial system, an Intranet-based electronic commerce application, evaluating the method in the process. The core techniques comprising GBRAM are the systematic application of heuristics and inquiry questions for the analysis of goals, scenarios and obstacles. We conclude by discussing the lessons learned through applying goal refinement in the field and the implications for future research.
Toward the next generation of recommender systems: a survey of the state-of-the-art and possible extensions This paper presents an overview of the field of recommender systems and describes the current generation of recommendation methods that are usually classified into the following three main categories: content-based, collaborative, and hybrid recommendation approaches. This paper also describes various limitations of current recommendation methods and discusses possible extensions that can improve recommendation capabilities and make recommender systems applicable to an even broader range of applications. These extensions include, among others, an improvement of understanding of users and items, incorporation of the contextual information into the recommendation process, support for multicriteria ratings, and a provision of more flexible and less intrusive types of recommendations.
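Of the three categories the survey above names, collaborative filtering is the easiest to sketch. Below is a minimal user-based variant, assuming a dense ratings matrix with 0 marking "unrated" (illustrative only; it is not any specific method from the survey):

```python
import numpy as np

def predict(ratings, user, item):
    """Predict a missing rating as the cosine-similarity-weighted
    average of other users' ratings for that item."""
    R = np.asarray(ratings, dtype=float)
    # Users other than `user` who have actually rated `item`.
    others = [u for u in range(R.shape[0]) if u != user and R[u, item] > 0]
    if not others:
        return 0.0

    def sim(u, v):
        # Cosine similarity over the items both users have rated.
        mask = (R[u] > 0) & (R[v] > 0)
        if not mask.any():
            return 0.0
        a, b = R[u, mask], R[v, mask]
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        return float(a @ b / denom) if denom else 0.0

    weights = np.array([sim(user, v) for v in others])
    if weights.sum() == 0:
        return 0.0
    return float(weights @ R[others, item] / weights.sum())

# Rows are users, columns are items; user 1 agrees with user 0 on the
# co-rated items, so user 1's rating dominates the estimate.
R = [[5, 3, 0],
     [5, 3, 4],
     [1, 1, 0]]
estimate = predict(R, user=0, item=2)
```

Content-based and hybrid approaches differ in what the similarity is computed over (item features, or a combination), but follow the same weighted-aggregation shape.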
Language constructs for managing change in process-centered environments Change is pervasive during software development, affecting objects, processes, and environments. In process-centered environments, change management can be facilitated by software-process programming, which formalizes the representation of software products and processes using software-process programming languages (SPPLs). To fully realize this goal, SPPLs should include constructs that specifically address the problems of change management. These problems include lack of representation of inter-object relationships, weak semantics for inter-object relationships, visibility of implementations, lack of formal representation of software processes, and reliance on programmers to manage change manually. APPL/A is a prototype SPPL that addresses these problems. APPL/A is an extension to Ada. The principal extensions include abstract, persistent relations with programmable implementations, relation attributes that may be composite and derived, triggers that react to relation operations, optionally-enforceable predicates on relations, and five composite statements with transaction-like capabilities. APPL/A relations and triggers are especially important for the problems raised here. Relations enable inter-object relationships to be represented explicitly and derivation dependencies to be maintained automatically. Relation bodies can be programmed to implement alternative storage and computation strategies without affecting users of relation specifications. Triggers can react to changes in relations, automatically propagating data, invoking tools, and performing other change management tasks. Predicates and the transaction-like statements support change management in the face of evolving standards of consistency. Together, these features mitigate many of the problems that complicate change management in software processes and process-centered environments.
Extending Statecharts For Representing Parts And Wholes As state-based formalisms and object-oriented development methods meet, Statecharts represent a natural choice for object behavioural modelling. This is essentially due to built-in features that enforce modularity and control complexity. The paper suggests to improve the effectiveness of the Statechart approach in achieving both modularity and reuse of behavioural abstractions by analysing the general problem of modelling parts and wholes. An extended Statechart construct is proposed, which improves the capability of separating global from local contexts in the early phases of the object development process, getting better global software quality factors.
DOODLE: a visual language for object-oriented databases In this paper we introduce DOODLE, a new visual and declarative language for object-oriented databases. The main principle behind the language is that it is possible to display and query the database with arbitrary pictures. We allow the user to tailor the display of the data to suit the application at hand or her preferences. We want the user-defined visualizations to be stored in the database, and the language to express all kinds of visual manipulations. For extendibility reasons, the language is object-oriented. The semantics of the language is given by a well-known deductive query language for object-oriented databases. We hope that the formal basis of our language will contribute to the theoretical study of database visualizations and visual query languages, a subject that we believe is of great interest, but largely left unexplored.
Maintaining a legacy: towards support at the architectural level An organization that develops large, software intensive systems with a long lifetime will encounter major changes in the market requirements, the software development environment, including its platform, and the target platform. In order to meet the challenges associated with these changes, software development has to undergo major changes as well, Especially when these systems are successful, and hence become an asset, particular care shall be taken to maintain this legacy; large systems with a long lifetime tend to become very complex and difficult to understand. Software architecture plays a vital role in the development of large software systems. For the purpose of maintenance, an up-to-date explicit description of the software architecture of a system supports understanding and comprehension of it, amongst other things. However, many large! complex systems do not have an up-to-date documented software architecture. Particularly in cases where these systems have a long lifetime, the (natural) turnover of personnel will make it very likely that many employees contributing to previous generations of the system are no longer available. A need to 'recover' the software architecture of the system may become prevalent, facilitating the understanding of the system, providing ways to improve its maintainability and quality and to control architectural changes. This paper gives an overview of an on-going effort to improve the maintainability and quality of a legacy system, and describes the recent introduction of support at the architectural level for program understanding and complexity control. Copyright (C) 2000 John Wiley & Sons, Ltd.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.115556
0.088889
0.08
0.001333
0.000022
0.000011
0.000005
0
0
0
0
0
0
0
A Quantitative Evaluation of the Feasibility of, and Suitable Hardware Architectures for, an Adaptive, Parallel Finite-Element System An experimental implementation of a design for an adaptive, parallel finite-element system is described. The implementation was used to simulate the performance of this design on several microprocessor-based multiprocessor architectures. The real-time speedup observed was architecture-independent, but limited. Nevertheless, the approach to data segmentation and management worked well and has interesting applications.
Design of an Adaptive, Parallel Finite-Element System
The operational versus the conventional approach to software development The conventional approach to software development is being challenged by new ideas, many of which can be organized into an alternative decision structure called the “operational” approach. The operational approach is explained and compared to the conventional one.
Decision Trees and Diagrams
Determining an organization's information requirements: a state of the art survey The issue of information requirements of an organization and their specifications span two isolated territories. One territory is that of organization and management and the other belongs to technicians. There is a considerable gap between these two territories. Research in requirements engineering (technician's side) has primarily concentrated on designing and developing formal languages to document and analyze user requirements, once they have been determined. This research has ignored the organizational issues involved in information requirements determination. Research in the field of organization and management has addressed the organizational issues which affect information requirements of an organization. Various frameworks reported in the literature provide insights, but they cannot be considered as methods of determining requirements. Little work has been done on the process of determining requirements. This process must start with the understanding of an organization and end with a formal specification of information requirements. Here, it is worth emphasizing the fact that the process of determining and specifying information requirements of an organization is very different from the process of specifying design requirements of an information system. Therefore, program design methodologies, which are helpful in designing a system are not suitable for the process of determining and specifying information requirements of an organization.This paper discusses the state of the art in information requirements determination methodologies. Excluded are those methodologies which emphasize system design and have little to offer for requirements determination of an organization.
A Modeling Foundation for a Second Generation System Engineering Tool
Embedded System Description Using Petri Nets Through a number of examples it is demonstrated how Petri nets can be applied to design and to analyze embedded systems. Principles of Petri nets as a design method are discussed. Various application areas are surveyed briefly, and some relationships to other methods are sketched.
A Look at Japan's Development of Software Engineering Technology
An object-oriented requirements specifications method The analysis of requirements for object-oriented software is examined through an alternative methodology to the more standard structured analysis approach. Through parallel processes of decomposing objects and allocating functions, the methodology is explained in detail.
Concept acquisition and analysis for requirements specifications
A framework for the development of object-oriented distributed systems A framework for the development of distributed object-oriented systems in heterogeneous environments is presented. This model attempts to incorporate concepts of distributed computing technology together with software engineering methods. A tool has been developed to enforce the model by means of a high-level specification language and an automatic code generator. However, it is emphasized that the model may be exploited as a methodologic foundation even outside an automatic generation approach
Requirements Traceability Abstract: Requirements traceability is central to the issue of requirements management, which is essential for producing quality software. Our focus on requirements traceability addresses aspects related to software development and evolution. Our survey deals both with technical and managerial aspects and aims at contextualizing requirements traceability as well as providing a basic reference on the subject. Two traceability reference models are used to help developers define a traceability policy for the system to be developed. These models also serve as a reference for the open problems in the area. Keywords: requirements traceability, requirements management, software quality, software development process.
Lossless compression of AVIRIS images. Adaptive DPCM methods using linear prediction are described for the lossless compression of hyperspectral (224-band) images recorded by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS). The methods have two stages: predictive decorrelation (which produces residuals) and residual encoding. Good predictors are described, whose performance closely approaches limits imposed by sensor noise. It is imperative that these predictors make use of the high spectral correlations between bands. The residuals are encoded using variable-length coding (VLC) methods, and compression is improved by using eight codebooks whose design depends on the sensor's noise characteristics. Rice (1979) coding has also been evaluated; it loses 0.02-0.05 b/pixel compression compared with better VLC methods but is much simpler and faster. Results for compressing ten AVIRIS images are reported.
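The two-stage scheme above (spectral predictive decorrelation, then residual encoding) can be sketched minimally. The least-squares band-to-band predictor below is a simplified stand-in for the paper's predictors, and all names are illustrative assumptions:

```python
# Minimal sketch of inter-band DPCM prediction for a hyperspectral cube.
# Each band is predicted from the previous band via a least-squares
# linear fit, exploiting the high spectral correlation between bands.
import numpy as np

def spectral_dpcm_residuals(cube):
    """cube: integer array of shape (bands, rows, cols).
    Returns integer prediction residuals of the same shape."""
    bands = cube.shape[0]
    residuals = np.empty(cube.shape, dtype=np.int64)
    residuals[0] = cube[0]  # first band has no spectral predictor
    for b in range(1, bands):
        x = cube[b - 1].ravel().astype(np.float64)
        y = cube[b].ravel().astype(np.float64)
        a, c = np.polyfit(x, y, 1)  # fit y ~ a*x + c
        pred = np.rint(a * cube[b - 1] + c).astype(np.int64)
        residuals[b] = cube[b].astype(np.int64) - pred
    return residuals
```

On synthetic data whose bands are exact affine functions of each other, the residuals beyond the first band are (near) zero, which is what a residual encoder then compresses.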
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
Understanding the role of negotiation in distributed search among heterogeneous agents In our research, we explore the role of negotiation for conflict resolution in distributed search among heterogeneous and reusable agents. We present negotiated search, an algorithm that explicitly recognizes and exploits conflict to direct search activity across a set of agents. In negotiated search, loosely coupled agents interleave the tasks of 1) local search for a solution to some subproblem; 2) integration of local subproblem solutions into a shared solution; 3) information exchange to define and refine the shared search space of the agents; and 4) assessment and reassessment of emerging solutions. Negotiated search is applicable to diverse application areas and problem-solving environments. It requires only basic search operators and allows maximum flexibility in the distribution of those operators. These qualities make the algorithm particularly appropriate for the integration of heterogeneous agents into application systems. The algorithm is implemented in a multi-agent framework, TEAM, that provides the infrastructure required for communication and cooperation.
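The interleaved cycle described above (local search, integration, information exchange, reassessment) can be sketched as a simple loop. The agent interface, the conflict test, and the policy of ruling out a conflicting value are illustrative assumptions, not the TEAM framework's actual API:

```python
# Highly simplified sketch of a negotiated-search cycle: agents propose
# local subproblem solutions, a shared solution is assembled, and each
# conflict is fed back as a constraint that narrows an agent's space.
def negotiated_search(agents, compatible, max_rounds=10):
    """agents: dict name -> function(excluded_values) -> value or None.
    compatible: function(proposal) -> None if conflict-free, else a
    (agent_name, value) pair identifying the conflicting choice."""
    constraints = {name: set() for name in agents}  # excluded values
    for _ in range(max_rounds):
        # 1) local search under the current shared constraints
        proposal = {name: agent(constraints[name])
                    for name, agent in agents.items()}
        if any(v is None for v in proposal.values()):
            return None  # some agent's local space is exhausted
        # 2) + 4) integrate and assess the emerging shared solution
        conflict = compatible(proposal)
        if conflict is None:
            return proposal
        # 3) information exchange: the conflicting value is ruled out
        name, value = conflict
        constraints[name].add(value)
    return None

def make_agent(domain):
    # propose the first domain value not yet excluded
    return lambda excluded: next((v for v in domain if v not in excluded), None)

agents = {"a": make_agent([1, 2, 3]), "b": make_agent([1, 3, 5])}

def compatible(proposal):
    # illustrative conflict test: agents must pick distinct values;
    # on conflict, blame agent "a" for its current choice
    if proposal["a"] == proposal["b"]:
        return ("a", proposal["a"])
    return None
```

Here the first round conflicts (both agents propose 1), the conflict is exchanged as a constraint, and the second round converges.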
Distributed Intelligent Agents In Retsina, the authors have developed a distributed collection of software agents that cooperate asynchronously to perform goal-directed information retrieval and integration for supporting a variety of decision-making tasks. Examples for everyday organizational decision making and financial portfolio management demonstrate its effectiveness.
Validating Requirements for Fault Tolerant Systems using Model Checking Model checking is shown to be an effective tool in validating the behavior of a fault tolerant embedded spacecraft controller. The case study presented here shows that by judiciously abstracting away extraneous complexity, the state space of the model could be exhaustively searched allowing critical functional requirements to be validated down to the design level. Abstracting away detail not germane to the problem of interest leaves by definition a partial specification behind. The success of this procedure shows that it is feasible to effectively validate a partial specification with this technique. Three anomalies were found in the system. One was an error in the detailed requirements, and the other two were missing/ambiguous requirements. Because the method allows validation of partial specifications, it is also an effective approach for maintaining fidelity between a co-evolving specification and an implementation.
Repository support for multi-perspective requirements engineering Relationships among different modeling perspectives have been systematically investigated focusing either on given notations (e.g. UML) or on domain reference models (e.g. ARIS/SAP). In contrast, many successful informal methods for business analysis and requirements engineering (e.g. JAD) emphasize team negotiation, goal orientation and flexibility of modeling notations. This paper addresses the question how much formal and computerized support can be provided in such settings without destroying their creative tenor. Our solution is based on a novel modeling language, M-Telos, that integrates the adaptability and analysis advantages of the logic-based meta modeling language Telos with a module concept covering the structuring mechanisms of scalable software architectures. It comprises four components: (1) A modular conceptual modeling formalism organizes individual perspectives and their interrelationships. (2) Perspective schemata are linked to a conceptual meta meta model of shared domain terms, thus giving the architecture a semantic meaning and enabling adaptability and extensibility of the network of perspectives. (3) Inconsistency management across perspectives is handled in a goal-oriented manner, by formalizing analysis goals as meta rules which are automatically customized to perspective schemata. (4) Continuous incremental maintenance of inconsistency information is provided by exploiting recent view maintenance techniques from deductive databases. The approach has been implemented as an extension to the ConceptBase meta database management system (ConceptBase is available at http://www-i5.Informatik.RWTH-Aachen.de/Cbdor/index.html) and has been applied in a number of real-world requirements engineering projects.
Designing And Building A Negotiating Automated Agent Negotiations are very important in a multiagent environment, particularly in an environment where there are conflicts between the agents, and cooperation would be beneficial. We have developed a general structure for a Negotiating Automated Agent that consists of five modules: a Prime Minister, a Ministry of Defense, a Foreign Office, a Headquarters and Intelligence. These modules are implemented using a dynamic set of local agents belonging to the different modules. We used this structure to develop a Diplomacy player, Diplomat. Playing Diplomacy involves a certain amount of technical skill as in other board games, but the capacity to negotiate, explain, convince, promise, keep promises or break them, is an essential ingredient in good play. Diplomat was evaluated and consistently played better than human players.
A metamodel approach for the management of multiple models and the translation of schemes A metamodel approach is proposed as a framework for the definition of different data models and the management of translations of schemes from one model to another. This notion is useful in an environment for the support of the design and development of information systems, since different data models can be used and schemes referring to different models need to be exchanged. The approach is based on the observation that the constructs used in the various models can be classified into a limited set of basic types, such as lexical type, abstract type, aggregation, function. It follows that the translations of schemes can be specified on the basis of translations of the involved types of constructs: this is effectively performed by means of a procedural language and a number of predefined modules that express the standard translations between the basic constructs.
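The idea of classifying model constructs into a small set of basic types and driving scheme translation by per-type rules can be illustrated with a small sketch; the type names and translation rules below are invented for illustration, not the paper's actual modules:

```python
# Sketch of type-driven scheme translation: every construct in a source
# scheme is tagged with a basic type, and a rule table maps each basic
# type to the construct(s) it becomes in the target model.
def translate_scheme(scheme, rules):
    """scheme: list of (construct_type, name) pairs.
    rules: dict mapping a construct type to a function that returns the
    corresponding construct(s) in the target model."""
    out = []
    for ctype, name in scheme:
        if ctype not in rules:
            raise ValueError(f"no translation rule for construct type {ctype!r}")
        out.extend(rules[ctype](name))
    return out

# Example rule table: translating an ER-like scheme into a relational-like one.
er_to_relational = {
    "abstract": lambda n: [("table", n)],                # entity -> table
    "lexical": lambda n: [("column", n)],                # attribute -> column
    "aggregation": lambda n: [("table", n), ("fk", n)],  # relationship -> table + key
}
```

Because the rules are keyed by basic construct type rather than by individual scheme, one rule table serves every scheme expressed in the source model.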
A theory of action for multi-agent planning A theory of action suitable for reasoning about events in multiagent or dynamically changing environments is presented. A device called a process model is used to represent the observable behavior of an agent in performing an action. This model is more general than previous models of action, allowing sequencing, selection, nondeterminism, iteration, and parallelism to be represented. It is shown how this model can be utilized in synthesizing plans and reasoning about concurrency. In particular, conditions are derived for determining whether or not concurrent actions are free from mutual interference. It is also indicated how this theory provides a basis for understanding and reasoning about action sentences in both natural and programming languages.
Agent-based support for communication between developers and users in software design Research in knowledge-based software engineering has led to advances in the ability to specify and automatically generate software. Advances in the support of upstream activities have focussed on assisting software developers. We examine the possibility of extending computer-based support in the software development process to allow end users to participate, providing feedback directly to developers. The approach uses the notion of "agents" developed in artificial intelligence research and concepts of participatory design. Namely, agents monitor end users working with prototype systems and report mismatches between developers' expectations and a system's actual usage. At the same time, the agents provide end users with an opportunity to communicate with developers, either synchronously or asynchronously. The use of agents is based on actual software development experiences.
A framework for formalizing inconsistencies and deviations in human-centered systems Most modern business activities are carried out by a combination of computerized tools and human agents. Typical examples are engineering design activities, office procedures, and banking systems. All these human-centered systems are characterized by the interaction among people, and between people and computerized tools. This interaction defines a process, whose effectiveness is essential to ensure the quality of the delivered products and/or services. To support these systems, process-centered environments and workflow management systems have been recently developed. They can be collectively identified with the term process technology. This technology is based on the explicit definition of the process to be followed (the process model). The model specifies the kind of support that has to be provided to human agents. An essential property that process technology must exhibit is the ability of tolerating, controlling, and supporting deviations and inconsistencies of the real-world behaviors with respect to the process model. This is necessary to provide consistent and effective support to the human-centered system, still maintaining a high degree of flexibility and adaptability to the evolving needs, preferences, and expertise of the human agents. This article presents a formal framework to characterize the interaction between a human-centered system and its automated support. It does not aim at introducing a new language or system to describe processes. Rather, it aims at identifying the basic properties and features that make it possible to formally define the concepts of inconsistency and deviation. This formal framework can then be used to compare existing solutions and guide future research work.
Goal-Oriented Requirements Engineering: A Roundtrip from Research to Practice The software industry is more than ever facing the challenge of delivering WYGIWYW software (What You Get Is What You Want). A well-structured document specifying adequate, complete, consistent, precise, and measurable requirements is a critical prerequisite for such software. Goals have been recognized to be among the driving forces for requirements elicitation, elaboration, organization, analysis, negotiation, documentation, and evolution. Growing experience with goal-oriented requirements engineering suggests synergistic links between research in this area and good practice. We discuss one journey along this road from influencing ideas and research results to tool developments to good practice in industrial projects. On the way, we discuss some lessons learnt, obstacles to technology transfer, and challenges for better requirements engineering research and practice.
Operational Requirements Accommodation in Distributed System Design Operational requirements are qualities which influence a software system's entire development cycle. The investigation reported here concentrated on three of the most important operational requirements: reliability via fault tolerance, growth, and availability. Accommodation of these requirements is based on an approach to functional decomposition involving representation in terms of potentially independent processors, called virtual machines. Functional requirements may be accommodated through hierarchical decomposition of virtual machines, while performance requirements may be associated with individual virtual machines. Virtual machines may then be mapped to a representation of a configuration of physical resources, so that performance requirements may be reconciled with available performance characteristics.
Design with Asynchronously Communicating Components Software oriented methods allow a higher level of abstraction than the often quite low-level hardware design methods used today. We propose a component-based method to organise a large system derivation within the B Method, using the facilities provided by its tools. The designer proceeds from an abstract high-level specification of the intended behaviour of the target system via correctness-preserving transformation steps towards an implementable architecture of library components which communicate asynchronously. At each step a pre-defined component is extracted and the correctness of the step is proved using the tool support of the B Method. We use Action Systems as our formal approach to system design.
Removing edge-node intersections in drawings of graphs
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
Research Directions in Requirements Engineering In this paper, we review current requirements engineering (RE) research and identify future research directions suggested by emerging software needs. First, we overview the state of the art in RE research. The research is considered with respect to technologies developed to address specific requirements tasks, such as elicitation, modeling, and analysis. Such a review enables us to identify mature areas of research, as well as areas that warrant further investigation. Next, we review several strategies for performing and extending RE research results, to help delineate the scope of future research directions. Finally, we highlight what we consider to be the "hot" current and future research topics, which aim to address RE needs for emerging systems of the future.
Requirements Classification and Reuse: Crossing Domain Boundaries A serious problem in the classification of software project artefacts for reuse is the natural partitioning of classification terms into many separate domains of discourse. This problem is particularly pronounced when dealing with requirements artefacts that need to be matched with design components in the refinement process. In such a case, requirements can be described with terms drawn from a problem domain (e.g. games), whereas designs with the use of terms characteristic for the solution domain (e.g. implementation). The two domains have not only distinct terminology, but also different semantics and use of their artefacts. This paper describes a method of cross-domain classification of requirements texts with a view to facilitating their reuse and their refinement into reusable design components.
The state of the art in automated requirements elicitation Context: In large software development projects a huge number of unstructured text documents from various stakeholders becomes available and needs to be analyzed and transformed into structured requirements. This elicitation process is known to be time-consuming and error-prone when performed manually by a requirements engineer. Consequently, substantial research has been done to automate the process through a plethora of tools and technologies. Objective: This paper aims to capture the current state of automated requirements elicitation and derive future research directions by identifying gaps in the existing body of knowledge and through relating existing works to each other. More specifically, we are investigating the following research question: What is the state of the art in research covering tool support for automated requirements elicitation from natural language documents? Method: A systematic review of the literature in automated requirements elicitation is performed. Identified works are categorized using an analysis framework comprising tool categories, technological concepts and evaluation approaches. Furthermore, the identified papers are related to each other through citation analysis to trace the development of the research field. Results: We identified, categorized and related 36 relevant publications. Summarizing the observations we made, we propose future research to (1) investigate alternative elicitation paradigms going beyond a pure automation approach (2) compare the effects of different types of knowledge on elicitation results (3) apply comparative evaluation methods and multi-dimensional evaluation measures and (4) strive for a closer integration of research activities across the sub-fields of automatic requirements elicitation. 
Conclusion: Through the results of our paper, we intend to contribute to the Requirements Engineering body of knowledge by (1) conceptualizing an analysis framework for works in the area of automated requirements elicitation, going beyond former classifications (2) providing an extensive overview and categorization of existing works in this area (3) formulating concise directions for future research.
Diagramming information structures using 3D perceptual primitives The class of diagrams known collectively as node-link diagrams are used extensively for many applications, including planning, communications networks, and computer software. The defining features of these diagrams are nodes, represented by a circle or rectangle connected by links usually represented by some form of line or arrow. We investigate the proposition that drawing three-dimensional shaded elements instead of using simple lines and outlines will result in diagrams that are easier to interpret. A set of guidelines for such diagrams is derived from perception theory and these collectively define the concept of the geon diagram. We also introduce a new substructure identification task for evaluating diagrams and use it to test the effectiveness of geon diagrams. The results from five experiments are reported. In the first three experiments geon diagrams are compared to Unified Modeling Language (UML) diagrams. The results show that substructures can be identified in geon diagrams with approximately half the errors and significantly faster. The results also show that geon diagrams can be recalled much more reliably than structurally equivalent UML diagrams. In the final two experiments geon diagrams are compared with diagrams having the same outline but not constructed with shaded solids. This is designed to specifically test the importance of using 3D shaded primitives. The results also show that substructures can be identified much more accurately with shaded components than with 2D outline equivalents and remembered more reliably. Implications for the design of diagrams are discussed.
The “Physics” of Notations: Toward a Scientific Basis for Constructing Visual Notations in Software Engineering Visual notations form an integral part of the language of software engineering (SE). Yet historically, SE researchers and notation designers have ignored or undervalued issues of visual representation. In evaluating and comparing notations, details of visual syntax are rarely discussed. In designing notations, the majority of effort is spent on semantics, with graphical conventions largely an afterthought. Typically, no design rationale, scientific or otherwise, is provided for visual representation choices. While SE has developed mature methods for evaluating and designing semantics, it lacks equivalent methods for visual syntax. This paper defines a set of principles for designing cognitively effective visual notations: ones that are optimized for human communication and problem solving. Together these form a design theory, called the Physics of Notations as it focuses on the physical (perceptual) properties of notations rather than their logical (semantic) properties. The principles were synthesized from theory and empirical evidence from a wide range of fields and rest on an explicit theory of how visual notations communicate. They can be used to evaluate, compare, and improve existing visual notations as well as to construct new ones. The paper identifies serious design flaws in some of the leading SE notations, together with practical suggestions for improving them. It also showcases some examples of visual notation design excellence from SE and other fields.
The three dimensions of requirements engineering: a framework and its applications There is an increasing number of contributions on how to solve the various problems within requirements engineering (RE). The purpose of this paper is to identify the main goals to be reached during the RE process in order to develop a framework for RE. This framework consists of three dimensions: the specification dimension, the representation dimension, and the agreement dimension. We show how this framework can be used to classify and clarify current RE research as well as RE support offered by methods and tools. In addition, the framework can be applied to the analysis of existing RE practice and the establishment of suitable process guidance. Last but not least, the framework offers a first step towards a common understanding of RE.
Telos: representing knowledge about information systems We describe Telos, a language intended to support the development of information systems. The design principles for the language are based on the premise that information system development is knowledge intensive and that the primary responsibility of any language intended for the task is to be able to formally represent the relevant knowledge. Accordingly, the proposed language is founded on concepts from knowledge representation. Indeed, the language is appropriate for representing knowledge about a variety of worlds related to a particular information system, such as the subject world (application domain), the usage world (user models, environments), the system world (software requirements, design), and the development world (teams, methodologies). We introduce the features of the language through examples, focusing on those provided for describing metaconcepts that can then be used to describe knowledge relevant to a particular information system. Telos' features include an object-centered framework which supports aggregation, generalization, and classification; a novel treatment of attributes; an explicit representation of time; and facilities for specifying integrity constraints and deductive rules. We review actual applications of the language through further examples, and we sketch a formalization of the language.
Cooperative negotiation in concurrent engineering design Design can be modeled as a cooperative multi-agent problem solving task where different agents possess different knowledge and evaluation criteria. These differences may result in inconsistent design decisions and conflicts that have to be resolved during design. The process by which resolution of inconsistencies is achieved in order to arrive at a coherent set of design decisions is negotiation. In this paper, we discuss some of the characteristics of design which make it a very challenging domain for investigating negotiation techniques. We propose a negotiation model that incorporates accessing information in existing designs, communication of design rationale and criticisms of design decisions, as well as design modifications based on constraint relaxation and comparison of utilities. The model captures the dynamic interactions of the cooperating agents during negotiations. We also present representational structures of the expertise of the various agents and a communication protocol that supports multi-agent negotiation.
Sorting out searching: a user-interface framework for text searches Current user interfaces for textual database searching leave much to be desired: individually, they are often confusing, and as a group, they are seriously inconsistent. We propose a four-phase framework for user-interface design. The framework provides common structure and terminology for searching while preserving the distinct features of individual collections and search mechanisms.
Toward reference models for requirements traceability Requirements traceability is intended to ensure continued alignment between stakeholder requirements and various outputs of the system development process. To be useful, traces must be organized according to some modeling framework. Indeed, several such frameworks have been proposed, mostly based on theoretical considerations or analysis of other literature. This paper, in contrast, follows an empirical approach. Focus groups and interviews conducted in 26 major software development organizations demonstrate a wide range of traceability practices with distinct low-end and high-end users of traceability. From these observations, reference models comprising the most important kinds of traceability links for various development tasks have been synthesized. The resulting models have been validated in case studies and are incorporated in a number of traceability tools. A detailed case study on the use of the models is presented. Four kinds of traceability link types are identified and critical issues that must be resolved for implementing each type and potential solutions are discussed. Implications for the design of next-generation traceability methods and tools are discussed and illustrated.
Strategies for incorporating formal specifications in software development
Navigating hierarchically clustered networks through fisheye and full-zoom methods Many information structures are represented as two-dimensional networks (connected graphs) of links and nodes. Because these networks tend to be large and quite complex, people often prefer to view part or all of the network at varying levels of detail. Hierarchical clustering provides a framework for viewing the network at different levels of detail by superimposing a hierarchy on it. Nodes are grouped into clusters, and clusters are themselves placed into other clusters. Users can then navigate these clusters until an appropriate level of detail is reached. This article describes an experiment comparing two methods for viewing hierarchically clustered networks. Traditional full-zoom techniques provide details of only the current level of the hierarchy. In contrast, fisheye views, generated by the “variable-zoom” algorithm described in this article, provide information about higher levels as well. Subjects using both viewing methods were given problem-solving tasks requiring them to navigate a network, in this case, a simulated telephone system, and to reroute links in it. Results suggest that the greater context provided by fisheye views significantly improved user performance. Users were quicker to complete their task and made fewer unnecessary navigational steps through the hierarchy. This validation of fisheye views is important for designers of interfaces to complicated monitoring systems, such as control rooms for supervisory control and data acquisition systems, where efficient human performance is often critical. However, control room operators remained concerned about the size and visibility tradeoffs between the fine detail provided by full-zoom techniques and the global context supplied by fisheye views. Specific interface features are required to reconcile the differences.
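The variable-zoom idea can be illustrated with a classic fisheye degree-of-interest computation (DOI = a-priori importance minus distance from focus, in Furnas' formulation). The tree encoding, importance function, and threshold below are illustrative assumptions, not the article's actual algorithm:

```python
# Minimal fisheye sketch over a cluster hierarchy: a node is shown when
# its degree of interest (a-priori importance minus distance from the
# focus) meets a threshold, so detail survives near the focus while
# distant clusters collapse.
def fisheye_visible(tree, focus_path, threshold=-2):
    """tree: nested dict mapping node name -> subtree dict.
    A node's a-priori importance is -depth; its distance from the focus
    is the path distance in the hierarchy. Returns the visible names."""
    visible = set()

    def walk(node, path):
        for name, child in node.items():
            node_path = path + [name]
            depth = len(node_path) - 1
            # tree distance = steps up to the common ancestor + steps down
            common = 0
            while (common < min(len(node_path), len(focus_path))
                   and node_path[common] == focus_path[common]):
                common += 1
            dist = (len(node_path) - common) + (len(focus_path) - common)
            if -depth - dist >= threshold:
                visible.add(name)
            walk(child, node_path)

    walk(tree, [])
    return visible
```

With the focus on one cluster, its children and siblings stay visible while deeper nodes in other branches drop out, which is exactly the "greater context" behavior the experiment measured.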
Verifying task-based specifications in conceptual graphs A conceptual model is a model of real world concepts and application domains as perceived by users and developers. It helps developers investigate and represent the semantics of the problem domain, as well as communicate among themselves and with users. In this paper, we propose the use of task-based specifications in conceptual graphs (TBCG) to construct and verify a conceptual model. Task-based specification methodology is used to serve as the mechanism to structure the knowledge captured in the conceptual model; whereas conceptual graphs are adopted as the formalism to express task-based specifications and to provide a reasoning capability for the purpose of verification. Verifying a conceptual model is performed on model specifications of a task through constraints satisfaction and relaxation techniques, and on process specifications of the task based on operators and rules of inference inherited in conceptual graphs.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.02321
0.026667
0.022222
0.013333
0.004444
0.001389
0.000408
0.000059
0.000015
0.000002
0
0
0
0
Improved low-complexity intraband lossless compression of hyperspectral images by means of Slepian-Wolf coding In remote sensing systems, on-board data compression is a crucial task that has to be carried out with limited computational resources. In this paper we propose a novel lossless compression scheme for multispectral and hyperspectral images, which combines low encoding complexity and high performance. The encoder is based on distributed source coding concepts, and employs Slepian-Wolf coding of the bitplanes of the CALIC prediction errors to achieve improved performance. Experimental results on AVIRIS data show that the proposed scheme exhibits performance similar to CALIC, and significantly better than JPEG 2000.
Distributed source coding of hyperspectral images A first attempt to exploit distributed source coding (DSC) principles for the lossless compression of hyperspectral images is presented. The DSC paradigm is exploited to design a very light coder which minimizes the exploitation of the correlation between the image bands. In this way we managed to move the computational complexity from the encoder to the decoder, thus matching the needs of classical acquisition systems where compression is achieved on board the aerial platform and decoding at the ground station. Though the encoder does not explicitly exploit inter-band correlation, the achieved bit rate is about 1 bit/pixel lower than classical 2D schemes such as JPEG-LS or CALIC 2D, and only about 1 b/p higher than the best performing, and much more complex, 3D schemes.
Context-based lossless interband compression--extending CALIC. This paper proposes an interband version of CALIC (context-based, adaptive, lossless image codec) which represents one of the best performing, practical and general purpose lossless image coding techniques known today. Interband coding techniques are needed for effective compression of multispectral images like color images and remotely sensed images. It is demonstrated that CALIC's techniques of context modeling of DPCM errors lend themselves easily to modeling of higher-order interband correlations that cannot be exploited by simple interband linear predictors alone. The proposed interband CALIC exploits both interband and intraband statistical redundancies, and obtains significant compression gains over its intrahand counterpart. On some types of multispectral images, interband CALIC can lead to a reduction in bit rate of more than 20% as compared to intraband CALIC. Interband CALIC only incurs a modest increase in computational cost as compared to intraband CALIC.
A new, fast, and efficient image codec based on set partitioning in hierarchical trees Embedded zerotree wavelet (EZW) coding, introduced by Shapiro (see IEEE Trans. Signal Processing, vol.41, no.12, p.3445, 1993), is a very effective and computationally simple technique for image compression. We offer an alternative explanation of the principles of its operation, so that the reasons for its excellent performance can be better understood. These principles are partial ordering by magnitude with a set partitioning sorting algorithm, ordered bit plane transmission, and exploitation of self-similarity across different scales of an image wavelet transform. Moreover, we present a new and different implementation based on set partitioning in hierarchical trees (SPIHT), which provides even better performance than our previously reported extension of EZW that surpassed the performance of the original EZW. The image coding results, calculated from actual file sizes and images reconstructed by the decoding algorithm, are either comparable to or surpass previous results obtained through much more sophisticated and computationally complex methods. In addition, the new coding and decoding procedures are extremely fast, and they can be made even faster, with only small loss in performance, by omitting entropy coding of the bit stream by the arithmetic code
Distributed source coding techniques for lossless compression of hyperspectral images This paper deals with the application of distributed source coding (DSC) theory to remote sensing image compression. Although DSC exhibits a significant potential in many application fields, up till now the results obtained on real signals fall short of the theoretical bounds, and often impose additional system-level constraints. The objective of this paper is to assess the potential of DSC for lossless image compression carried out onboard a remote platform. We first provide a brief overview of DSC of correlated information sources. We then focus on onboard lossless image compression, and apply DSC techniques in order to reduce the complexity of the onboard encoder, at the expense of the decoder's, by exploiting the correlation of different bands of a hyperspectral dataset. Specifically, we propose two different compression schemes, one based on powerful binary error-correcting codes employed as source codes, and one based on simpler multilevel coset codes. The performance of both schemes is evaluated on a few AVIRIS scenes, and is compared with other state-of-the-art 2D and 3D coders. Both schemes turn out to achieve competitive compression performance, and one of them also has reduced complexity. Based on these results, we highlight the main issues that are still to be solved to further improve the performance of DSC-based remote sensing systems.
Low Complexity, High Efficiency Probability Model for Hyper-spectral Image Coding This paper describes a low-complexity, high-efficiency lossy-to-lossless coding scheme for hyper-spectral images. Together with only a 2D wavelet transform on individual image components, the proposed scheme achieves coding performance similar to that achieved by a 3D transform strategy that adds one level of wavelet decomposition along the depth axis of the volume. The proposed scheme operates by means of a probability model for symbols emitted by the bit plane coding engine. This probability model captures the statistical behavior of hyper-spectral images with high precision. The proposed method is implemented in the core coding system of JPEG2000, reducing computational costs by 25%.
Near-lossless compression of 3-D optical data Near-lossless compression yielding strictly bounded reconstruction error is proposed for high-quality compression of remote sensing images. A classified causal differential pulse code modulation scheme is presented for optical data, either multi/hyperspectral three-dimensional (3-D) or panchromatic two-dimensional (2-D) observations. It is based on a classified linear-regression prediction, follow...
An online preprocessing technique for improving the lossless compression of images with sparse histograms This letter addresses the problem of improving the efficiency of lossless compression of images with sparse histograms. An online preprocessing technique is proposed, which, although very simple, is able to provide significant improvements in the compression ratio of the images that it targets and shows a good robustness on other images.
The JPEG still picture compression standard A joint ISO/CCITT committee known as JPEG (Joint Photographic Experts Group) has been working to establish the first international compression standard for continuous-tone still images, both grayscale and color. JPEG's proposed standard aims to be generic, to support a wide variety of applications for continuous-tone images. To meet the differing needs of many applications, the JPEG standard includes two basic compression methods, each with various modes of operation. A DCT (discrete cosine transform)-based method is specified for `lossy' compression, and a predictive method for `lossless' compression. JPEG features a simple lossy technique known as the Baseline method, a subset of the other DCT-based modes of operation. The Baseline method has been by far the most widely implemented JPEG method to date, and is sufficient in its own right for a large number of applications. The author provides an overview of the JPEG standard, and focuses in detail on the Baseline method
Working in the Garden Environment for Conceptual Programming Program designers use a variety of techniques when creating their systems. This automated design system conforms to the programmer.
A system development methodology for knowledge-based systems Phased linear system-development methodologies inadequately address the problems of knowledge acquisition and engineering. A different approach, detailed and systematic, to building knowledge-based systems is presented, namely a knowledge-based system development life cycle. This specially tailored prototyping methodology replaces traditional phases and stages with 'processes'. Processes are activated, deactivated, and reactivated dynamically as needed during system development, thereby allowing knowledge engineers to iteratively define, develop, refine, and test an evolutionary knowledge/data representation. A case study in which this method was used is presented: Blue Cross/Blue Shield of South Carolina and the Institute of Information Management, Technology, and Policy at the College of Business Administration, University of South Carolina, initiated a joint venture to automate the review process for medical insurance claims.
Striving For Correctness In developing information technology, you want assurance that systems are secure and reliable, but you cannot have assurance or security without correctness. We discuss methods used to achieve correctness, focusing on weaknesses and approaches that management might take to increase belief in correctness. Formal methods, simulation, testing, and process modeling are addressed in detail. Structured programming, life-cycle modeling like the spiral model, use of CASE tools, use of formal methods, object-oriented design, reuse of existing code are also mentioned. Reliance on these methods involves some element of belief since no validated metrics on the effectiveness of these methods exist. Suggestions for using these methods as the basis for managerial decisions conclude the paper.
Construction of Finite Labelled Transition Systems from B Abstract Systems In this paper, we investigate how to represent the behaviour of B abstract systems by finite labelled transition systems (LTS). We choose to decompose the state of an abstract system into several disjunctive predicates. These predicates provide the basis for defining a set of states which are the nodes of the LTS, while the events are the transitions. We have carried out a connection between the B environment (Atelier B) and the Cæsar/Aldebaran Development Package (CADP), which is able to deal with LTS. We illustrate the method by developing the SCSI-2 (Small Computer Systems Interface) input-output system. Finally, we discuss the outcomes of this method and its applicability.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.204899
0.204899
0.009769
0.004231
0.002273
0.001099
0.000479
0.000246
0.000022
0
0
0
0
0
An Ontology-Based Approach to Enable Knowledge Representation and Reasoning in Worker-Cobot Agile Manufacturing. There is no doubt that the rapid development in robotics technology has dramatically changed the interaction model between the Industrial Robot (IR) and the worker, as current robotic technology affords very reliable means to guarantee the physical safety of the worker during close-proximity interaction with the IR. Therefore, new forms of cooperation between the robot and the worker can now be achieved. Collaborative/cooperative robotics is the new branch of industrial robotics which empowers the idea of cooperative manufacturing. Cooperative manufacturing significantly depends on the existence of a collaborative/cooperative robot (cobot). A cobot is usually a Light-Weight Robot (LWR) which is capable of operating safely with the human co-worker in a shared work environment. This is in contrast with the conventional IR, which can only operate in isolation from the worker's workspace, because the conventional IR can manipulate very heavy objects, which makes it too dangerous to operate in direct contact with the worker. There is a slight difference between the definitions of collaboration and cooperation in robotics. In cooperative robotics, both the worker and the robot perform tasks over the same product in the same shared workspace, but not simultaneously. Collaborative robotics has a similar definition, except that the worker and the robot perform a simultaneous task. Gathering the worker and the cobot in the same manufacturing workcell can provide an easy and cheap method to flexibly customize production and to adapt to production demands in real time, without the need to stop or modify the production operations. There are many challenges and problems that can be addressed in the cooperative manufacturing field. However, one of the most important challenges in this field is the representation of the cooperative manufacturing environment and components. Thus, in order to accomplish the cooperative manufacturing concept, a proper approach is required to describe the shared environment between the worker and the cobot. The cooperative manufacturing shared environment includes the cobot, the co-worker, and other production components such as the product itself. Furthermore, the whole cooperative manufacturing system components need to communicate and share their knowledge, and to reason about and process the shared information, which eventually gives the control solution the capability of obtaining collective manufacturing decisions. The control solution should also provide a natural language which is human readable and at the same time can be understood by the machine (i.e., the cobot). Accordingly, a distributed control solution which combines an ontology-based Multi-Agent System (MAS) and a Business Rule Management System (BRMS) is proposed, in order to solve the mentioned challenges in cooperative manufacturing: manufacturing knowledge representation, sharing, and reasoning.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system, resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power related issues, are satisfied.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Requirements specification for Ada software under DoD-STD-2167A The design of Ada software is best initiated from a set of specifications that complements Ada software engineering. Under DoD-STD-2167A the specification writer may inadvertently impede Ada design methods through the over-specification of program structure before the problem domain is totally understood in the context of Ada implementation. This paper identifies some of the potential pitfalls in expressing requirements under the DoD-STD-2167A Software Requirements Specification format and gives recommendations on how to write compliant requirements that facilitate Ada design.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system, resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Coordination middleware for decentralized applications in dynamic networks The Ph.D. work presented in this paper describes novel middleware abstractions for the support of decentralized applications in dynamic networks. Decentralized applications are characterized by the absence of an application component that has global control; a network is dynamic if its composition changes frequently and unexpectedly over time. In such a domain, application components are necessarily spread over the network nodes and need to coordinate among each other to achieve the application's functionality. The goal of the Ph.D. research is to support this coordination by suitable middleware abstractions. We describe two prototypes that were built with this goal in mind. First, a middleware supporting views, abstractions for representing and maintaining context information in a mobile ad hoc network is presented. A second middleware, that enables protocol-based interaction in mobile networks by supporting roles as a first order abstraction, is described. The application of this second middleware in a real world case study of automatic guided vehicle control is presented, showing its usefulness. Ongoing and further research in this area is discussed.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system, resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
The disappearing boundary between development-time and run-time Modern software systems are increasingly embedded in an open world that is constantly evolving, because of changes in the requirements, in the surrounding environment, and in the way people interact with them. The platform itself on which software runs may change over time, as we move towards cloud computing. These changes are difficult to predict and anticipate, and their occurrence is out of control of the application developers. Because of these changes, the applications themselves need to change. Often, changes in the applications cannot be handled off-line, but require the software to self-react by adapting its behavior dynamically, to continue to ensure the desired quality of service. The big challenge in front of us is how to achieve the necessary degrees of flexibility and dynamism required by software without compromising the necessary dependability. This paper advocates that future software engineering research should focus on providing intelligent support to software at run-time, breaking today's rigid boundary between development-time and run-time. Models need to continue to live at run-time and evolve as changes occur while the software is running. To ensure dependability, analysis that the updated system models continue to satisfy the goals must be performed by continuous verification. If verification fails, suitable adjustment policies, supported by model-driven re-derivation of parts of the system, must be activated to keep the system aligned with its expected requirements. The paper presents the background that motivates this research focus, the main existing research directions, and an agenda for future work.
Object and reference immutability using Java generics A compiler-checked immutability guarantee provides useful documentation, facilitates reasoning, and enables optimizations. This paper presents Immutability Generic Java (IGJ), a novel language extension that expresses immutability without changing Java's syntax by building upon Java's generics and annotation mechanisms. In IGJ, each class has one additional type parameter that is Immutable, Mutable, or ReadOnly. IGJ guarantees both reference immutability (only mutable references can mutate an object) and object immutability (an immutable reference points to an immutable object). IGJ is the first proposal for enforcing object immutability within Java's syntax and type system, and its reference immutability is more expressive than previous work. IGJ also permits covariant changes of type parameters in a type-safe manner, e.g., a readonly list of integers is a subtype of a readonly list of numbers. IGJ extends Java's type system with a few simple rules. We formalize this type system and prove it sound. Our IGJ compiler works by type-erasure and generates byte-code that can be executed on any JVM without runtime penalty.
It's alive! continuous feedback in UI programming Live programming allows programmers to edit the code of a running program and immediately see the effect of the code changes. This tightening of the traditional edit-compile-run cycle reduces the cognitive gap between program code and execution, improving the learning experience of beginning programmers while boosting the productivity of seasoned ones. Unfortunately, live programming is difficult to realize in practice as imperative languages lack well-defined abstraction boundaries that make live programming responsive or its feedback comprehensible. This paper enables live programming for user interface programming by cleanly separating the rendering and non-rendering aspects of a UI program, allowing the display to be refreshed on a code change without restarting the program. A type and effect system formalizes this separation and provides an evaluation model that incorporates the code update step. By putting live programming on a more formal footing, we hope to enable critical and technical discussion of live programming systems.
Reflection in direct style A reflective language enables us to access, inspect, and/or modify the language semantics from within the same language framework. Although the degree of semantics exposure differs from one language to another, the most powerful approach, referred to as the behavioral reflection, exposes the entire language semantics (or the language interpreter) that defines behavior of user programs for user inspection/modification. In this paper, we deal with the behavioral reflection in the context of a functional language Scheme. In particular, we show how to construct a reflective interpreter where user programs are interpreted by the tower of metacircular interpreters and have the ability to change any parts of the interpreters during execution. Its distinctive feature compared to the previous work is that the metalevel interpreters observed by users are written in direct style. Based on the past attempt of the present author, the current work solves the level-shifting anomaly by defunctionalizing and inspecting the top of the continuation frames. The resulting system enables us to freely go up and down the levels and access/modify the direct-style metalevel interpreter. This is in contrast to the previous system where metalevel interpreters were written in continuation-passing style (CPS) and only CPS functions could be exposed to users for modification.
Bootstrapping a self-hosted research virtual machine for JavaScript: an experience report JavaScript is one of the most widely used dynamic languages. The performance of existing JavaScript VMs, however, is lower than that of VMs for static languages. There is a need for a research VM to easily explore new implementation approaches. This paper presents the Tachyon JavaScript VM which was designed to be flexible and to allow experimenting with new approaches for the execution of JavaScript. The Tachyon VM is itself implemented in JavaScript and currently supports a subset of the full language that is sufficient to bootstrap itself. The paper discusses the architecture of the system and in particular the bootstrapping of a self-hosted VM. Preliminary performance results indicate that our VM, with few optimizations, can already execute code faster than a commercial JavaScript interpreter on some benchmarks.
The mystery of the tower revealed: a non-reflective description of the reflective tower In an important series of papers [8, 9], Brian Smith has discussed the nature of programs that know about their text and the context in which they are executed. He called this kind of knowledge reflection. Smith proposed a programming language, called 3-LISP, which embodied such self-knowledge in the domain of metacircular interpreters. Every 3-LISP program is interpreted by a metacircular interpreter, also written in 3-LISP. This gives rise to a picture of an infinite tower of metacircular interpreters, each being interpreted by the one above it. Such a metaphor poses a serious challenge for conventional modes of understanding of programming languages. In our earlier work on reflection [4], we showed how a useful species of reflection could be modeled without the use of towers. In this paper, we give a semantic account of the reflective tower. This account is self-contained in the sense that it does not employ reflection to explain reflection.
An Overview of KRL, a Knowledge Representation Language
Facile: a symmetric integration of concurrent and functional programming Facile is a symmetric integration of concurrent and functional programming. The language supports both function and process abstraction. Functions may be defined and used within processes, and processes can be dynamically created during expression evaluation. In this work we present two different descriptions of the operational semantics of Facile. First, we develop a structural operational semantics for a small core subset of Facile using a labeled transition system. Such a semantics is useful for reasoning about the operational behavior of Facile programs. We then provide an abstract model of implementation for Facile: the Concurrent and Functional Abstract Machine (C-FAM). The C-FAM executes concurrent processes evaluating functional expressions. The implementation semantics includes compilation rules from Facile to C-FAM instructions and execution rules for the abstract machine. This level of semantic description is suitable for those interested in implementations.
Generating test cases for real-time systems from logic specifications We address the problem of automated derivation of functional test cases for real-time systems, by introducing techniques for generating test cases from formal specifications written in TRIO, a language that extends classical temporal logic to deal explicitly with time measures. We describe an interactive tool that has been built to implement these techniques, based on interpretation algorithms of the TRIO language. Several heuristic criteria are suggested to reduce drastically the size of the test cases that are generated. Experience in the use of the tool on real-life cases is reported.
MetaEdit+: A Fully Configurable Multi-User and Multi-Tool CASE and CAME Environment Computer Aided Software Engineering (CASE) environments have spread at a lower pace than expected. One reason for this is the immaturity of existing environments in supporting development in-the-large and by-many and their inability to address the varying needs of the software developers. In this paper we report on the development of a next generation CASE environment called MetaEdit+. The environment seeks to overcome all the above deficiencies, but in particular pays attention to catering for the varying needs of the software developers. MetaEdit+ is a multi-method, multi-tool platform for both CASE and Computer Aided Method Engineering (CAME). As a CASE tool it establishes a versatile and powerful multi-tool environment which enables flexible creation, maintenance, manipulation, retrieval and representation of design information among multiple developers. As a CAME environment it offers an easy-to-use yet powerful environment for method specification, integration, management and re-use. The paper explains the motivation for developing MetaEdit+, its design goals and philosophy and discusses the functionality of the CAME tools.
Design and analysis of high-throughput lossless image compression engine using VLSI-oriented FELICS algorithm In this paper, the VLSI-oriented fast, efficient, lossless image compression system (FELICS) algorithm, which consists of simplified adjusted binary code and Golomb-Rice code with storage-less k parameter selection, is proposed to provide the lossless compression method for high-throughput applications. The simplified adjusted binary code reduces the number of arithmetic operation and improves processing speed. According to theoretical analysis, the storage-less k parameter selection applies a fixed k value in Golomb-Rice code to remove data dependency and extra storage for cumulation table. Besides, the color difference preprocessing is also proposed to improve coding efficiency with simple arithmetic operation. Based on VLSI-oriented FELICS algorithm, the proposed hardware architecture features compactly regular data flow, and two-level parallelism with four-stage pipelining is adopted as the framework of the proposed architecture. The chip is fabricated in TSMC 0.13-µm 1P8M CMOS technology with Artisan cell library. Experiment results reveal that the proposed architecture presents superior performance in parallelism-efficiency and power-efficiency compared with other existing works, which characterize high-speed lossless compression. The maximum throughput can achieve 4.36 Gb/s. Regarding high definition (HD) display applications, our encoding capability can achieve a high-quality specification of full-HD 1080p at 60 Hz with complete red, green, blue color components. Furthermore, with the configuration as the multilevel parallelism, the proposed architecture can be applied to the advanced HD display specifications, which demand huge requirement of throughput.
Visualizing Argument Structure Constructing arguments and understanding them is not easy. Visualization of argument structure has been shown to help understanding and improve critical thinking. We describe a visualization tool for understanding arguments. It utilizes a novel hi-tree based representation of the argument’s structure and provides focus based interaction techniques for visualization. We give efficient algorithms for computing these layouts.
A Task-Based Methodology for Specifying Expert Systems A task-based specification methodology for expert system specification that is independent of the problem solving architecture, that can be applied to many expert system applications, that focuses on what the knowledge is, not how it is implemented, that introduces the major concepts involved gradually, and that supports verification and validation is discussed. To evaluate the methodology, a specification of R1/SOAR, an expert system that reimplements a major portion of the R1 expert system, was reverse engineered.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
1.208107
0.208107
0.208107
0.208107
0.104053
0.069476
0
0
0
0
0
0
0
0
Compression of map images by multilayer context tree modeling. We propose a method for compressing color map images by context tree modeling and arithmetic coding. We consider multicomponent map images with semantic layer separation and images that are divided into binary layers by color separation. The key issues in the compression method are the utilization of interlayer correlations and solving the optimal ordering of the layers. The interlayer dependencies are acquired by optimizing the context tree for every pair of image layers. The resulting cost matrix of the interlayer dependencies is considered as a directed spanning tree problem and solved by an algorithm based on Edmonds' algorithm for optimum branching and by the optimal selection and removal of the background color. The proposed method gives results 50% better than JBIG and 25% better than single-layer context tree modeling.
A universal finite memory source An irreducible parameterization for a finite memory source is constructed in the form of a tree machine. A universal information source for the set of finite memory sources is constructed by a predictive modification of an earlier studied algorithm, Context. It is shown that this universal source incorporates any minimal data-generating tree machine in an asymptotically optimal manner in the following sense: the negative logarithm of the probability it assigns to any long typical sequence, generated by any tree machine, approaches that assigned by the tree machine at the best possible rate.
On ordering color maps for lossless predictive coding Linear predictive techniques perform poorly when used with color-mapped images where pixel values represent indices that point to color values in a look-up table. Reordering the color table, however, can lead to a lower entropy of prediction errors. In this paper, we investigate the problem of ordering the color table such that the absolute sum of prediction errors is minimized. The problem turns out to be intractable, even for the simple case of one-dimensional (1-D) prediction schemes. We give two heuristic solutions for the problem and use them for ordering the color table prior to encoding the image by lossless predictive techniques. We demonstrate that significant improvements in actual bit rates can be achieved over dictionary-based coding schemes that are commonly employed for color-mapped images
A universal algorithm for sequential data compression A universal algorithm for sequential data compression is presented. Its performance is investigated with respect to a nonprobabilistic model of constrained sources. The compression ratio achieved by the proposed universal code uniformly approaches the lower bounds on the compression ratios attainable by block-to-variable codes and variable-to-block codes designed to match a completely specified source.
2009 Data Compression Conference (DCC 2009), 16-18 March 2009, Snowbird, UT, USA
On the JPEG Model for Lossless Image Compression
Benchmarking and hardware implementation of JPEG-LS The JPEG-LS algorithm is one of the recently designated standards for lossless compression of grayscale and color images. In this paper, simulation results for lossless and near-lossless compression of various image types are presented in order to explore the algorithm's effectiveness for a number of applications. A hardware implementation using VHDL is proposed and the schematic of a JPEG-LS codec, that is capable of standard-compliant lossless and near-lossless encoding and decoding, was generated using the Synopsys synthesis tool. Hardware implementation of the proposed solution on an FPGA allows for real-time processing of large image volumes.
Lossless compression of multispectral image data While spatial correlations are adequately exploited by standard lossless image compression techniques, little success has been attained in exploiting spectral correlations when dealing with multispectral image data. The authors present some new lossless image compression techniques that capture spectral correlations as well as spatial correlation in a simple and elegant manner. The schemes are based on the notion of a prediction tree, which defines a noncausal prediction model for an image. The authors present a backward adaptive technique and a forward adaptive technique. They then give a computationally efficient way of approximating the backward adaptive technique. The approximation gives good results and is extremely easy to compute. Simulation results show that for high spectral resolution images, significant savings can be made by using spectral correlations in addition to spatial correlations. Furthermore, the increase in complexity incurred in order to make these gains is minimal
Performance Evaluation of the H.264/AVC Video Coding Standard for Lossy Hyperspectral Image Compression In this paper, a performance evaluation of the state-of-the-art H.264/AVC video coding standard is carried out with the aim of determining its feasibility when applied to hyperspectral image compression. Results are obtained based on configuring diverse parameters in the encoder in order to achieve an optimal trade-off between compression ratio, accuracy of unmixing and computation time. In this s...
Supporting systems development by capturing deliberations during requirements engineering Support for various stakeholders involved in software projects (designers, maintenance personnel, project managers and executives, end users) can be provided by capturing the history about design decisions in the early stages of the system's development life cycle in a structured manner. Much of this knowledge, which is called the process knowledge, involving the deliberation on alternative requirements and design decisions, is lost in the course of designing and changing such systems. Using an empirical study of problem-solving behavior of individual and groups of information systems professionals, a conceptual model called REMAP (representation and maintenance of process knowledge) that relates process knowledge to the objects that are created during the requirements engineering process has been developed. A prototype environment that provides assistance to the various stakeholders involved in the design and management of large systems has been implemented.
On the relation between Memon's and the modified Zeng's palette reordering methods Palette reordering has been shown to be a very effective approach for improving the compression of color-indexed images by general purpose continuous-tone image coding techniques. In this paper, we provide a comparison, both theoretical and experimental, of two of these methods: the pairwise merging heuristic proposed by Memon et al. and the recently proposed modification of Zeng's method. This analysis shows how several parts of the algorithms relate and how their performance is affected by some modifications. Moreover, we show that Memon's method can be viewed as an extension of the modified version of Zeng's technique and, therefore, that the modified Zeng's method can be obtained through some simplifications of Memon's method.
On object state testing The importance of object state testing is illustrated through a simple example. We show that certain errors in the implementation of object state behavior cannot be readily detected by conventional structural testing, functional testing, and state testing. We describe an object state test model and outline a reverse engineering method for extracting object state behaviors from C++ source code. The object state test model is a hierarchical, concurrent, communicating state machine. It resembles the concepts of inheritance and aggregation in the object-oriented paradigm rather than the concept of state decomposition as in some existing models. The reverse engineering method is based on symbolic execution to extract the states and effects of the member functions. The symbolic execution results are used to construct the state machines. The usefulness of the model and of the method is discussed in the context of object state testing in the detection of a state behavior error
A Picture from the Model-Based Testing Area: Concepts, Techniques, and Challenges Model-Based Testing (MBT) represents a feasible and interesting testing strategy where test cases are generated from formal models describing the software behavior/structure. The MBT field is continuously evolving, as can be observed in the increasing number of MBT techniques published in the technical literature. However, there is still a gap between MBT research and its application in the software industry, mainly caused by the lack of information regarding the concepts, available techniques, and challenges of using this testing strategy in real software projects. This chapter presents information intended to support researchers and practitioners in reducing this gap, consequently contributing to the transfer of this technology from academia to industry. It includes information regarding the concepts of MBT, a characterization of 219 available MBT techniques, approaches supporting the selection of MBT techniques for software projects, risk factors that may influence the use of these techniques in industry together with some mechanisms to mitigate their impact, and future perspectives regarding the MBT field.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
Scores: 1.215178, 0.035983, 0.017384, 0.007974, 0.000871, 0.000101, 0.000013, 0.000001, 0, 0, 0, 0, 0, 0
From Domain to Requirements This is a discursive paper. That is, it shows some formulas (but only as examples so that the reader may be convinced that there is, perhaps, some substance to our claims), no theorems, no proofs. Instead it postulates. The postulates are, however, firmly rooted, we think, in Vol. 3 ('Domains, Requirements and Software Design') of the three-volume book 'Software Engineering' (Springer, March 2006) (6, 7, 8). First we present a summary of essentials of domain engineering, its motivation, and its modelling of abstractions of domains through the modelling of the intrinsics, support technologies, management and organisation, rules and regulations, scripts, and human behaviour of whichever domain is being described. Then we present the essence of two (of three) aspects of requirements: the domain requirements and the interface requirements prescriptions as they relate to domain descriptions, and we survey the basic operations that "turn" a domain description into a domain requirements prescription: projection, instantiation, determination, extension and fitting. An essence of interface requirements is also presented: the "merging" of shared entities, operations, events and behaviours of the domain with those of the machine (i.e., the hardware and software to be designed). An objective of the paper is to summarise my work in recent years. Another objective is to make a plea for what I consider a more proper approach to software development.
STATEMATE: a working environment for the development of complex reactive systems This paper provides a brief overview of the STATEMATE system, constructed over the past three years by i-Logix, Inc., and Ad Cad Ltd. STATEMATE is a graphical working environment, intended for the specification, analysis, design and documentation of large and complex reactive systems, such as real-time embedded systems, control and communication systems, and interactive software. It enables a user to prepare, analyze and debug diagrammatic, yet precise, descriptions of the system under development from three inter-related points of view, capturing structure, functionality and behavior. These views are represented by three graphical languages, the most intricate of which is the language of statecharts used to depict reactive behavior over time. In addition to the use of statecharts, the main novelty of STATEMATE is in the fact that it 'understands' the entire descriptions perfectly, to the point of being able to analyze them for crucial dynamic properties, to carry out rigorous animated executions and simulations of the described system, and to create running code automatically. These features are invaluable when it comes to the quality and reliability of the final outcome.
Statecharts: A visual formalism for complex systems We present a broad extension of the conventional formalism of state machines and state diagrams, that is relevant to the specification and design of complex discrete-event systems, such as multi-computer real-time systems, communication protocols and digital control units. Our diagrams, which we call statecharts, extend conventional state-transition diagrams with essentially three elements, dealing, respectively, with the notions of hierarchy, concurrency and communication. These transform the language of state diagrams into a highly structured and economical description language. Statecharts are thus compact and expressive (small diagrams can express complex behavior) as well as compositional and modular. When coupled with the capabilities of computerized graphics, statecharts enable viewing the description at different levels of detail, and make even very large specifications manageable and comprehensible. In fact, we intend to demonstrate here that statecharts counter many of the objections raised against conventional state diagrams, and thus appear to render specification by diagrams an attractive and plausible approach. Statecharts can be used either as a stand-alone behavioral description or as part of a more general design methodology that deals also with the system's other aspects, such as functional decomposition and data-flow specification. We also discuss some practical experience that was gained over the last three years in applying the statechart formalism to the specification of a particularly complex system.
An Overview of KRL, a Knowledge Representation Language
Implementing Remote procedure calls Remote procedure calls (RPC) are a useful paradigm for providing communication across a network between programs written in a high level language. This paper describes a package, written as part of the Cedar project, providing a remote procedure call facility. The paper describes the options that face a designer of such a package, and the decisions we made. We describe the overall structure of our RPC mechanism, our facilities for binding RPC clients, the transport level communication protocol, and some performance measurements. We include descriptions of some optimisations we used to achieve high performance and to minimize the load on server machines that have many clients. Our primary aim in building an RPC package was to make the building of distributed systems easier. Previous protocols were sufficiently hard to use that only members of a select group of communication experts were willing to undertake the construction of distributed systems. We hoped to overcome this by providing a communication paradigm as close as possible to the familiar facilities of our high level languages. To achieve this aim, we concentrated on making remote calls efficient, and on making the semantics of remote calls as close as possible to those of local calls.
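The stub-and-marshalling structure the paper describes can be sketched in miniature. The in-process "transport" below stands in for the network and is purely illustrative; this is a hypothetical sketch of the RPC pattern, not the Cedar package's API.

```python
import pickle

class Server:
    """Dispatches unmarshalled requests to registered procedures."""
    def __init__(self):
        self.procs = {}
    def register(self, name, fn):
        self.procs[name] = fn
    def handle(self, packet):
        name, args = pickle.loads(packet)    # unmarshal the request
        result = self.procs[name](*args)     # the actual procedure call
        return pickle.dumps(result)          # marshal the reply

class Stub:
    """Client-side stub: makes a remote call look like a local one."""
    def __init__(self, transport):
        self.transport = transport           # here: Server.handle directly
    def call(self, name, *args):
        packet = pickle.dumps((name, args))  # marshal the request
        return pickle.loads(self.transport(packet))

server = Server()
server.register("add", lambda a, b: a + b)
stub = Stub(server.handle)   # in-process stand-in for the network
```

The point the paper makes is exactly this transparency: `stub.call("add", 2, 3)` has the semantics of a local call, while the binding, marshalling and transport are hidden in the stub and server machinery.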
Alloy: a lightweight object modelling notation Alloy is a little language for describing structural properties. It offers a declaration syntax compatible with graphical object models, and a set-based formula syntax powerful enough to express complex constraints and yet amenable to a fully automatic semantic analysis. Its meaning is given by translation to an even smaller (formally defined) kernel. This paper presents the language in its entirety, and explains its motivation, contributions and deficiencies.
A calculus of refinements for program derivations A calculus of program refinements is described, to be used as a tool for the step-by-step derivation of correct programs. A derivation step is considered correct if the new program preserves the total correctness of the old program. This requirement is expressed as a relation of (correct) refinement between nondeterministic program statements. The properties of this relation are studied in detail. The usual sequential statement constructors are shown to be monotone with respect to this relation and it is shown how refinement between statements can be reduced to a proof of total correctness of the refining statement. A special emphasis is put on the correctness of replacement steps, where some component of a program is replaced by another component. A method by which assertions can be added to statements to justify replacements in specific contexts is developed. The paper extends the weakest precondition technique of Dijkstra to proving correctness of larger program derivation steps, thus providing a unified framework for the axiomatic, the stepwise refinement and the transformational approach to program construction and verification.
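The refinement relation sketched in this abstract can be stated compactly in weakest-precondition terms. This formulation follows standard refinement-calculus presentations and is given here as background, not quoted from the paper:

```latex
S \sqsubseteq S' \;\Longleftrightarrow\; \forall Q.\; wp(S, Q) \Rightarrow wp(S', Q)
```

Monotonicity of the sequential constructors then reads: if $S_1 \sqsubseteq S_1'$ then $S_1; S_2 \sqsubseteq S_1'; S_2$, which is what licenses the replacement steps the paper studies, where one component of a program is swapped for a refining one.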
Symbolic Model Checking Symbolic model checking is a powerful formal specification and verification method that has been applied successfully in several industrial designs. Using symbolic model checking techniques it is possible to verify industrial-size finite state systems. State spaces with up to 10^30 states can be exhaustively searched in minutes. Models with more than 10^120 states have been verified using special techniques.
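The fixed-point computation at the heart of model checking can be sketched with explicit state sets; a real symbolic checker runs the same loop with BDDs representing the sets, which is what makes 10^30-state spaces tractable.

```python
def reachable(init, transitions):
    """Least-fixed-point reachability: the core loop of model checking.

    `init` is a set of states; `transitions` maps a state to its
    successor states.  Symbolic model checkers run this same loop with
    BDDs standing in for the explicit sets.
    """
    reached = set(init)
    frontier = set(init)
    while frontier:                      # iterate until the fixed point
        nxt = set()
        for s in frontier:
            nxt |= set(transitions.get(s, ()))
        frontier = nxt - reached         # only genuinely new states
        reached |= frontier
    return reached

# A tiny 4-state system: state 3 is unreachable from state 0.
trans = {0: [1], 1: [2], 2: [0], 3: [1]}
```

Safety checking then reduces to asking whether any bad state intersects the returned fixed point.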
2009 Data Compression Conference (DCC 2009), 16-18 March 2009, Snowbird, UT, USA
Hex-splines: a novel spline family for hexagonal lattices This paper proposes a new family of bivariate, nonseparable splines, called hex-splines, especially designed for hexagonal lattices. The starting point of the construction is the indicator function of the Voronoi cell, which is used to define in a natural way the first-order hex-spline. Higher order hex-splines are obtained by successive convolutions. A mathematical analysis of this new bivariate spline family is presented. In particular, we derive a closed form for a hex-spline of arbitrary order. We also discuss important properties, such as their Fourier transform and the fact they form a Riesz basis. We also highlight the approximation order. For conventional rectangular lattices, hex-splines revert to classical separable tensor-product B-splines. Finally, some prototypical applications and experimental results demonstrate the usefulness of hex-splines for handling hexagonally sampled data.
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
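The move/tabu-list/aspiration machinery described here can be sketched for the multiconstraint knapsack case. This is a minimal illustration, not the authors' full TS with advanced-level strategies or target analysis; parameter names and defaults are illustrative choices.

```python
import random

def tabu_knapsack(values, weights, capacities, iters=200, tenure=5, seed=0):
    """Tabu-search sketch for the 0/1 multiconstraint knapsack problem.

    A move flips one item in or out; a flipped item is tabu for `tenure`
    iterations unless the move beats the best solution found so far
    (a simple aspiration criterion).
    """
    rng = random.Random(seed)
    n = len(values)

    def feasible(x):
        return all(sum(w[i] for i in range(n) if x[i]) <= c
                   for w, c in zip(weights, capacities))

    def value(x):
        return sum(values[i] for i in range(n) if x[i])

    x = [0] * n                      # start from the empty knapsack
    best, best_val = x[:], 0
    tabu = {}                        # item -> iteration until which it is tabu
    for it in range(iters):
        candidates = []
        for i in range(n):
            y = x[:]
            y[i] = 1 - y[i]          # flip move
            if not feasible(y):
                continue
            v = value(y)
            if tabu.get(i, -1) >= it and v <= best_val:
                continue             # tabu and not aspirated
            candidates.append((v, i, y))
        if not candidates:
            break
        v, i, y = max(candidates, key=lambda t: (t[0], rng.random()))
        x = y
        tabu[i] = it + tenure
        if v > best_val:
            best, best_val = x[:], v
    return best, best_val
```

The tabu list lets the search accept the best non-tabu move even when it worsens the current solution, which is how TS escapes the local optima that pure hill climbing gets stuck in.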
Procedures and atomicity refinement The introduction of an early return from a (remote) procedure call can increase the degree of parallelism in a parallel or distributed algorithm modeled by an action system. We define a return statement for procedures in an action systems framework and show that it corresponds to carrying out an atomicity refinement.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
Scores: 1.2, 0.003704, 0.00081, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Further Results on Stabilization of Chaotic Systems Based on Fuzzy Memory Sampled-Data Control. This note investigates sampled-data control for chaotic systems. A memory sampled-data control scheme that involves a constant signal transmission delay is employed for the first time to tackle the stabilization problem for Takagi-Sugeno fuzzy systems. The advantage of the constructed Lyapunov functional lies in the fact that it is neither necessarily positive on sampling intervals nor necessarily...
Delay-dependent H∞ synchronization for chaotic neural networks with network-induced delays and packet dropouts. This paper investigates the problem of H∞ synchronization for chaotic neural networks with network-induced delays and packet dropouts. A novel master-slave synchronization scheme is established where the network-induced delays and data packet dropouts are taken into consideration. By constructing the Lyapunov functional and employing the Wirtinger-based integral inequality, several delay-dependent conditions are obtained to guarantee that the error system is globally asymptotically stable and satisfies a prescribed H∞ performance constraint. Finally, two numerical examples are presented to validate the feasibility and effectiveness of the results derived.
Non-fragile H∞ filtering for delayed Takagi-Sugeno fuzzy systems with randomly occurring gain variations. In this study, we consider a non-fragile H∞ filter design for a class of continuous-time delayed Takagi-Sugeno (T-S) fuzzy systems. The filter design is assumed to include randomly occurring gain variations according to the filter's implementation. Two independent Bernoulli distributions are employed to describe these random phenomena. We focus mainly on the design of a non-fragile filter so that the filtering error system is asymptotically mean-square stable with the prescribed H∞ performance in the presence of both randomly occurring gain variations and a time delay. A sufficient condition is developed by employing the Lyapunov functional approach. Based on this condition, an improved H∞ filter for delayed T-S fuzzy systems is described, which is free of randomly occurring gain variations. The filter parameters are obtained by solving a set of linear matrix inequalities. The effectiveness and the advantages of the proposed techniques are demonstrated by two numerical examples.
Exponential synchronization of a class of neural networks with sampled-data control. This paper investigates the problem of the master-slave synchronization for a class of neural networks with discrete and distributed delays under sampled-data control. By introducing some new terms, a novel piecewise time-dependent Lyapunov-Krasovskii functional (LKF) is constructed to fully capture the available characteristics of real sampling information and nonlinear function vector of the system. Based on the LKF and Wirtinger-based inequality, less conservative synchronization criteria are obtained to guarantee the exponential stability of the error system, and then the slave system is synchronized with the master system. The designed sampled-data controller can be obtained by solving a set of linear matrix inequalities (LMIs), which depend on the maximum sampling period and the decay rate. The criteria are less conservative than the ones obtained in the existing works. A numerical example is presented to illustrate the effectiveness and merits of the proposed method.
Stochastic switched sampled-data control for synchronization of delayed chaotic neural networks with packet dropout. This paper addresses the exponential synchronization issue of delayed chaotic neural networks (DCNNs) with control packet dropout. A novel stochastic switched sampled-data controller with time-varying sampling is developed in the frame of the zero-input strategy. First, by making full use of the available characteristics on the actual sampling pattern, a newly loop-delay-product-type Lyapunov–Krasovskii functional (LDPTLKF) is constructed via introducing a matrix-refined-function, which can reflect the information of delay variation. Second, based on the LDPTLKF and the relaxed Wirtinger-based integral inequality (RWBII), novel synchronization criteria are established to guarantee that DCNNs are synchronous exponentially when the control packet dropout occurs in a random way, which obeys certain Bernoulli distributed white noise sequences. Third, a desired sampled-data controller can be designed on account of the proposed optimization algorithm. Finally, the effectiveness and advantages of the obtained results are illustrated by two numerical examples with simulations.
Reliable asynchronous sampled-data filtering of T–S fuzzy uncertain delayed neural networks with stochastic switched topologies This paper investigates the issue of the reliable asynchronous sampled-data filtering of Takagi–Sugeno (T–S) fuzzy delayed neural networks with stochastic intermittent faults, randomly occurring time-varying parameter uncertainties and controller gain fluctuation. The asynchronous phenomenon occurs between the system modes and controller modes. First, in order to reduce the utilization rate of communication bandwidth, a novel alterable sampled-data terminal method is considered via variable sampling rates. Second, based on the fuzzy-model-based control approach, improved reciprocally convex inequality and new parameter-time-dependent discontinuous Lyapunov approach, several relaxed conditions are derived and compared with the existing work. Third, the intermittent fault-tolerance scheme is also taken fully into account in designing a reliable asynchronous sampled-data controller, which ensures that the resultant neural network is asymptotically stable. Finally, two numerical examples are presented to illustrate the effectiveness and advantages of the theoretical results.
Relaxed conditions for stability of time-varying delay systems. In this paper, the problem of delay-dependent stability analysis of time-varying delay systems is investigated. Firstly, a new inequality which is the modified version of free-matrix-based integral inequality is derived, and then by aid of this new inequality, two novel lemmas which are relaxed conditions for some matrices in a Lyapunov function are proposed. Based on the lemmas, improved delay-dependent stability criteria which guarantee the asymptotic stability of the system are presented in the form of linear matrix inequality (LMI). Two numerical examples are given to describe the less conservatism of the proposed methods.
New results on stability analysis for systems with discrete distributed delay The integral inequality technique is widely used to derive delay-dependent conditions, and various integral inequalities have been developed to reduce the conservatism of the conditions derived. In this study, a new integral inequality was devised that is tighter than existing ones. It was used to investigate the stability of linear systems with a discrete distributed delay, and a new stability condition was established. The results can be applied to systems with a delay belonging to an interval, which may be unstable when the delay is small or nonexistent. Three numerical examples demonstrate the effectiveness and the smaller conservatism of the method.
A new double integral inequality and application to stability test for time-delay systems. This paper is concerned with stability analysis for linear systems with time delays. Firstly, a new double integral inequality is proposed. Then, it is used to derive a new delay-dependent stability criterion in terms of linear matrix inequalities (LMIs). Two numerical examples are given to demonstrate the effectiveness and merits of the present result.
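The abstracts above all refine integral inequalities of the Jensen/Wirtinger family used to bound the derivative of a Lyapunov-Krasovskii functional. For background, the Wirtinger-based integral inequality (Seuret and Gouaisbaut) that several of these papers tighten can be stated as follows, for any matrix $R = R^{\top} > 0$ and differentiable $x$ on $[a,b]$:

```latex
\int_a^b \dot{x}^{\top}(s)\, R\, \dot{x}(s)\, ds \;\ge\;
\frac{1}{b-a}\, \omega_1^{\top} R\, \omega_1
\;+\; \frac{3}{b-a}\, \omega_2^{\top} R\, \omega_2,
\qquad
\omega_1 = x(b) - x(a), \quad
\omega_2 = x(b) + x(a) - \frac{2}{b-a}\int_a^b x(s)\, ds .
```

Dropping the $\omega_2$ term recovers Jensen's inequality, which is why the Wirtinger-based bound, and the further refinements these papers propose, yield strictly less conservative stability conditions.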
Text Categorization with Support Vector Machines: Learning with Many Relevant Features This paper explores the use of Support Vector Machines (SVMs) for learning text classifiers from examples. It analyzes the particular properties of learning with text data and identifies why SVMs are appropriate for this task. Empirical results support the theoretical findings. SVMs achieve substantial improvements over the currently best performing methods and behave robustly over a variety of different learning tasks. Furthermore, they are fully automatic, eliminating the need for manual...
Strategies for incorporating formal specifications in software development
Negotiation in distributed artificial intelligence: drawing from human experience Distributed artificial intelligence and cooperative problem solving deal with agents who allocate resources, make joint decisions and develop plans. Negotiation may be important for interacting agents who make sequential decisions. We discuss the use of negotiation in conflict resolution in distributed AI and select those elements of human negotiation that can help artificial agents better resolve conflicts. Problem evolution, a basic aspect of negotiation, can be represented using the restructurable modelling method of developing decision support systems. Restructurable modelling is implemented in the knowledge-based generic decision analysis and simulation system Negoplan. Experiments show that Negoplan can effectively help resolve individual and group conflicts in negotiation.
Structuring and verifying distributed algorithms We present a structuring and verification method for distributed algorithms. The basic idea is that an algorithm to be verified is stepwise transformed into a high level specification through a number of steps, so-called coarsenings. At each step some mechanism of the algorithm is identified, verified and removed while the basic computation of the original algorithm is preserved. The method is based on a program development technique called superposition and it is formalized within the refinement calculus. We will show the usefulness of the method by verifying a complex distributed algorithm for minimum-hop route maintenance due to Chu.
Core of coalition formation games and fixed-point methods. In coalition formation games where agents have preferences over coalitions to which they belong, the set of fixed points of an operator and the core of coalition formation games coincide. An acyclicity condition on preference profiles guarantees the existence of a unique core. An algorithm using that operator finds all core partitions whenever there exists one.
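The core condition the paper characterizes via fixed points can be illustrated with a brute-force computation for a tiny hedonic game. This three-agent example is hypothetical and uses direct enumeration, not the paper's operator: a partition is in the core iff no coalition exists whose members all strictly prefer it to their current blocks.

```python
from itertools import combinations

def partitions(agents):
    """Yield every partition of `agents` as a list of frozensets."""
    agents = list(agents)
    if not agents:
        yield []
        return
    first, rest = agents[0], agents[1:]
    for r in range(len(rest) + 1):          # choose first's block
        for others in combinations(rest, r):
            block = frozenset((first,) + others)
            remaining = [a for a in rest if a not in block]
            for p in partitions(remaining):
                yield [block] + p

def blocks(coalition, partition, pref):
    """A coalition blocks if every member strictly prefers it."""
    current = {a: b for b in partition for a in b}
    return all(pref[a][coalition] > pref[a][current[a]] for a in coalition)

def core(agents, pref):
    """Return all partitions admitting no blocking coalition."""
    all_coalitions = [frozenset(c) for r in range(1, len(agents) + 1)
                      for c in combinations(agents, r)]
    return [frozenset(p) for p in partitions(agents)
            if not any(blocks(c, p, pref) for c in all_coalitions)]
```

With `pref[a][C] = len(C)` (everyone likes larger coalitions), the grand coalition blocks every other partition and is itself unblockable, so the core is the single grand-coalition partition, matching the paper's point that under suitable preference restrictions the core is unique.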
Scores: 1.020386, 0.020763, 0.018182, 0.012669, 0.010152, 0.004545, 0.001408, 0.000411, 0.000061, 0, 0, 0, 0, 0
Commitment analysis to operationalize software requirements from privacy policies Online privacy policies describe organizations’ privacy practices for collecting, storing, using, and protecting consumers’ personal information. Users need to understand these policies in order to know how their personal information is being collected, stored, used, and protected. Organizations need to ensure that the commitments they express in their privacy policies reflect their actual business practices, especially in the United States where the Federal Trade Commission regulates fair business practices. Requirements engineers need to understand the privacy policies to know the privacy practices with which the software must comply and to ensure that the commitments expressed in these privacy policies are incorporated into the software requirements. In this paper, we present a methodology for obtaining requirements from privacy policies based on our theory of commitments, privileges, and rights, which was developed through a grounded theory approach. This methodology was developed from a case study in which we derived software requirements from seventeen healthcare privacy policies. We found that legal-based approaches do not provide sufficient coverage of privacy requirements because privacy policies focus primarily on procedural practices rather than legal practices.
Formal analysis of privacy requirements specifications for multi-tier applications Companies require data from multiple sources to develop new information systems, such as social networking, e-commerce and location-based services. Systems rely on complex, multi-stakeholder data supply-chains to deliver value. These data supply-chains have complex privacy requirements: privacy policies affecting multiple stakeholders (e.g. user, developer, company, government) regulate the collection, use and sharing of data over multiple jurisdictions (e.g. California, United States, Europe). Increasingly, regulators expect companies to ensure consistency between company privacy policies and company data practices. To address this problem, we propose a methodology to map policy requirements in natural language to a formal representation in Description Logic. Using the formal representation, we reason about conflicting requirements within a single policy and among multiple policies in a data supply chain. Further, we enable tracing data flows within the supply-chain. We derive our methodology from an exploratory case study of Facebook platform policy. We demonstrate the feasibility of our approach in an evaluation involving Facebook, Zynga and AOL-Advertising policies. Our results identify three conflicts that exist between Facebook and Zynga policies, and one conflict within the AOL Advertising policy.
Legally "reasonable" security requirements: A 10-year FTC retrospective Growth in electronic commerce has enabled businesses to reduce costs and expand markets by deploying information technology through new and existing business practices. However, government laws and regulations require businesses to employ reasonable security measures to thwart risks associated with this technology. Because many security vulnerabilities are only discovered after attacker exploitation, regulators update their interpretation of reasonable security to stay current with emerging threats. With a focus on determining what businesses must do to comply with these changing interpretations of the law, we conducted an empirical, multi-case study to discover and measure the meaning and evolution of "reasonable" security by examining 19 regulatory enforcement actions by the U.S. Federal Trade Commission (FTC) over a 10 year period. The results reveal trends in FTC enforcement actions that are institutionalizing security knowledge as evidenced by 39 security requirements that mitigate 110 legal security vulnerabilities.
Semantic parameterization: A process for modeling domain descriptions Software engineers must systematically account for the broad scope of environmental behavior, including nonfunctional requirements, intended to coordinate the actions of stakeholders and software systems. The Inquiry Cycle Model (ICM) provides engineers with a strategy to acquire and refine these requirements by having domain experts answer six questions: who, what, where, when, how, and why. Goal-based requirements engineering has led to the formalization of requirements to answer the ICM questions about when, how, and why goals are achieved, maintained, or avoided. In this article, we present a systematic process called Semantic Parameterization for expressing natural language domain descriptions of goals as specifications in description logic. The formalization of goals in description logic allows engineers to automate inquiries using who, what, and where questions, completing the formalization of the ICM questions. The contributions of this approach include new theory to conceptually compare and disambiguate goal specifications that enables querying goals and organizing goals into specialization hierarchies. The artifacts in the process include a dictionary that aligns the domain lexicon with unique concepts, distinguishing between synonyms and polysemes, and several natural language patterns that aid engineers in mapping common domain descriptions to formal specifications. Semantic Parameterization has been empirically validated in three case studies on policy and regulatory descriptions that govern information systems in the finance and health-care domains.
Towards Regulatory Compliance: Extracting Rights and Obligations to Align Requirements with Regulations In the United States, federal and state regulations prescribe stakeholder rights and obligations that must be satisfied by the requirements for software systems. These regulations are typically wrought with ambiguities, making the process of deriving system requirements ad hoc and error prone. In highly regulated domains such as healthcare, there is a need for more comprehensive standards that can be used to assure that system requirements conform to regulations. To address this need, we expound upon a process called Semantic Parameterization previously used to derive rights and obligations from privacy goals. In this work, we apply the process to the Privacy Rule from the U.S. Health Insurance Portability and Accountability Act (HIPAA). We present our methodology for extracting and prioritizing rights and obligations from regulations and show how semantic models can be used to clarify ambiguities through focused elicitation and to balance rights with obligations. The results of our analysis can aid requirements engineers, standards organizations, compliance officers, and stakeholders in assuring systems conform to policy and satisfy requirements.
Simulation of hepatological models: a study in visual interactive exploration of scientific problems In many different fields of science and technology, visual expressions formed by diagrams, sketches, plots and even images are traditionally used to communicate not only data but also procedures. When these visual expressions are systematically used within a scientific community, bi-dimensional notations often develop which allow the construction of complex messages from sets of primitive icons. This paper discusses how these notations can be translated into visual languages and organized into an interactive environment designed to improve the user's ability to explore scientific problems. To facilitate this translation, the use of Conditional Attributed Rewriting Systems has been extended to visual language definition. The case of a visual language in the programming of a simulation of populations of hepatic cells is studied. A discussion is given of how such a visual language allows the construction of programs through the combination of graphical symbols which are familiar to the physician or which schematize shapes familiar to him in that they resemble structures he observes in real experiments. It is also shown how such a visual approach allows the user to focus on the solution of his problems, avoiding any request for unnecessary precision and most requests for house-keeping data during the interaction.
Formal verification for fault-tolerant architectures: prolegomena to the design of PVS PVS is the most recent in a series of verification systems developed at SRI. Its design was strongly influenced, and later refined, by our experiences in developing formal specifications and mechanically checked verifications for the fault-tolerant architecture, algorithms, and implementations of a model "reliable computing platform" (RCP) for life-critical digital flight-control applications, and by a collaborative project to formally verify the design of a commercial avionics processor called AAMP5. Several of the formal specifications and verifications performed in support of RCP and AAMP5 are individually of considerable complexity and difficulty. But in order to contribute to the overall goal, it has often been necessary to modify completed verifications to accommodate changed assumptions or requirements, and people other than the original developer have often needed to understand, review, build on, modify, or extract part of an intricate verification. In this paper, we outline the verifications performed, present the lessons learned, and describe some of the design decisions taken in PVS to better support these large, difficult, iterative, and collaborative verifications.
Formal methods: state of the art and future directions No abstract available.
Efficient Processing of Deep Neural Networks: A Tutorial and Survey. Deep neural networks (DNNs) are currently widely used for many artificial intelligence (AI) applications including computer vision, speech recognition, and robotics. While DNNs deliver state-of-the-art accuracy on many AI tasks, it comes at the cost of high computational complexity. Accordingly, techniques that enable efficient processing of DNNs to improve energy efficiency and throughput without...
On visual formalisms The higraph, a general kind of diagramming object, forms a visual formalism of topological nature. Higraphs are suited for a wide array of applications to databases, knowledge representation, and, most notably, the behavioral specification of complex concurrent systems using the higraph-based language of statecharts.
The Conical Methodology and the evolution of simulation model development Originating with ideas generated in the mid-1970s, the Conical Methodology (CM) is the oldest procedural approach to simulation model development. This evolutionary overview describes the principles underlying the CM, the environment structured according to these principles, and the capabilities for large complex simulation modeling tasks not provided in textbook descriptions. The CM is an object-oriented, hierarchical specification language that iteratively prescribes object attributes in a definitional phase that is top-down, followed by a specification phase that is bottom-up. The intent is to develop successive model representations at various levels of abstraction that can be diagnosed for correctness, completeness, consistency, and other characteristics prior to implementation as an executable program. Related or competitive approaches throughout the evolutionary period are categorized as emanating from: artificial intelligence, mathematical programming, software engineering, conceptual modeling, systems theory, logic-based theory, or graph theory. Work in each category is briefly described.
S/NET: A High-Speed Interconnect for Multiple Computers This paper describes S/NET (symmetric network), a high-speed small area interconnect that supports effective multiprocessing using message-based communication. This interconnect provides low latency, bounded contention time, and high throughput. It further provides hardware support for low level flow control and signaling. The interconnect is a star network with an active switch. The computers connect to the switch through full duplex fiber links. The S/NET provides a simple memory addressable interface to the processors and appears as a logical bus interconnect. The switch provides fast, fair, and deterministic contention resolution. It further supports high priority signals to be sent unimpeded in the presence of data traffic (this can be viewed as equivalent to interrupts on a conventional memory bus). The initial implementation supports a mix of VAX computers and Motorola 68000 based single board computers up to a maximum of 12. The switch throughput is 80 Mbits/s and the fiber links operate at a data rate of 10 Mbits/s. The kernel-to-kernel latency is only 100 µs. We present a description of the architecture and discuss the performance of current systems.
Developing Mode-Rich Satellite Software by Refinement in Event B To ensure dependability of on-board satellite systems, the designers should, in particular, guarantee correct implementation of the mode transition scheme, i.e., ensure that the states of the system components are consistent with the global system mode. However, there is still a lack of scalable approaches to formal verification of correctness of complex mode transitions. In this paper we present a formal development of an Attitude and Orbit Control System (AOCS) undertaken within the ICT DEPLOY project. AOCS is a complex mode-rich system, which has an intricate mode-transition scheme. We show that refinement in Event B provides the engineers with a scalable formal technique that enables both development of mode-rich systems and proof-based verification of their mode consistency.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.2
0.1
0.1
0.016667
0.005882
0
0
0
0
0
0
0
0
0
A soft computing approach for benign and malicious web robot detection. •We propose a method called SMART (Soft computing for MAlicious RoboT detection).•The method detects benign and malicious robots, and human visitors to a web server.•SMART selects its features on a particular web server by fuzzy rough set theory.•A graph-based clustering algorithm classifies sessions into the three agent types.•Analyses on web logs suggest state-of-the-art results to detect both robot types.
On Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power related issues, are satisfied.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Understanding and Controlling Software Costs A discussion is presented of the two primary ways of understanding software costs. The black-box or influence-function approach provides useful experimental and observational insights on the relative software productivity and quality leverage of various management, technical, environmental, and personnel options. The glass-box or cost distribution approach helps identify strategies for integrated software productivity and quality improvement programs using such structures as the value chain and the software productivity opportunity tree. The individual strategies for improving software productivity are identified. Issues related to software costs and controlling them are examined and discussed. It is pointed out that a good framework of techniques exists for controlling software budgets, schedules, and work completed, but that a great deal of further progress is needed to provide an overall set of planning and control techniques covering software product qualities and end-user system objectives.
Towards Ontologically Based Semantics for UML Constructs Conceptual models are formal descriptions of application domains that are used in early stages of system development to support requirements analysis. The Unified Modeling Language was formed by integrating several diagramming techniques for the purpose of software specification, design, construction and maintenance. It would be advantageous to use the same modeling method throughout the development process of an information system, namely, to extend the use of UML to conceptual modeling. This would require assigning well-defined, real-world meaning to UML constructs. In order to model the real world, we need to specify what might exist in the world, namely, an ontology. We suggest that by mapping UML constructs to well-defined ontological concepts, we can form clear semantics for UML diagrams. Furthermore, based on the mapping we can suggest ontologically-based intra- and inter-diagram integrity rules to guide the construction of conceptual models. In this paper we describe the results we obtained by mapping UML constructs to a specific well-formalized ontological model. In particular, we discuss the ontological meaning of objects, classes, and of interactions.
Software requirements as negotiated win conditions Current processes and support systems for software requirements determination and analysis often neglect the critical needs of important classes of stakeholders, and limit themselves to the concerns of the developers, users and customers. These stakeholders can include maintainers, interfacers, testers, product line managers, and sometimes members of the general public. This paper describes the results to date in researching and prototyping a next-generation process model (NGPM) and support system (NGPSS) which directly addresses these issues. The NGPM emphasizes collaborative processes, involving all of the significant constituents with a stake in the software product. Its conceptual basis is a set of "theory W" (win-win) extensions to the spiral model of software development.
On formal requirements modeling languages: RML revisited No abstract available.
On visual formalisms The higraph, a general kind of diagramming object, forms a visual formalism of topological nature. Higraphs are suited for a wide array of applications to databases, knowledge representation, and, most notably, the behavioral specification of complex concurrent systems using the higraph-based language of statecharts.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
The use of goals to surface requirements for evolving systems This paper addresses the use of goals to surface requirements for the redesign of existing or legacy systems. Goals are widely recognized as important precursors to system requirements, but the process of identifying and abstracting them has not been researched thoroughly. We present a summary of a goal-based method (GBRAM) for uncovering hidden issues, goals, and requirements and illustrate its application to a commercial system, an Intranet-based electronic commerce application, evaluating the method in the process. The core techniques comprising GBRAM are the systematic application of heuristics and inquiry questions for the analysis of goals, scenarios and obstacles. We conclude by discussing the lessons learned through applying goal refinement in the field and the implications for future research.
Petri nets: Properties, analysis and applications Starts with a brief review of the history and the application areas considered in the literature. The author then proceeds with introductory modeling examples, behavioral and structural properties, three methods of analysis, subclasses of Petri nets and their analysis. In particular, one section is devoted to marked graphs, the concurrent system model most amenable to analysis. Introductory discussions on stochastic nets with their application to performance modeling, and on high-level nets with their application to logic programming, are provided. Also included are recent results on reachability criteria. Suggestions are provided for further reading on many subject areas of Petri nets
Performance evaluation in content-based image retrieval: overview and proposals Evaluation of retrieval performance is a crucial problem in content-based image retrieval (CBIR). Many different methods for measuring the performance of a system have been created and used by researchers. This article discusses the advantages and shortcomings of the performance measures currently used. Problems such as defining a common image database for performance comparisons and a means of getting relevance judgments (or ground truth) for queries are explained. The relationship between CBIR and information retrieval (IR) is made clear, since IR researchers have decades of experience with the evaluation problem. Many of their solutions can be used for CBIR, despite the differences between the fields. Several methods used in text retrieval are explained. Proposals for performance measures and means of developing a standard test suite for CBIR, similar to that used in IR at the annual Text REtrieval Conference (TREC), are presented.
WebWork: METEOR2's Web-Based Workflow Management System. METEOR workflow management systems consist of both (1) design/build-time and (2) run-time/enactment components for implementing workflow applications. An enactment system provides the command, communication and control for the individual tasks in the workflow. Tasks are the run-time instances of intra- or inter-enterprise applications. We are developing three implementations of the METEOR model: WebWork, OrbWork and NeoWork. This paper discusses WebWork, an implementation relying solely on Web technology as the infrastructure for the enactment system. WebWork supports a distributed implementation with participation of multiple Web servers. It also supports automatic code generation of workflow applications from design specifications produced by a comprehensive graphical designer. WebWork has been developed as a complement of its more heavyweight counterparts (OrbWork and NeoWork), with the goal of providing ease of workflow application development, installation, use and maintenance. At the time of this writing, WebWork has been installed by several of the LSDIS Lab's industrial partners for testing, evaluation and building workflow applications.
An ontological model of an information system An ontological model of an information system that provides precise definitions of fundamental concepts like system, subsystem, and coupling is proposed. This model is used to analyze some static and dynamic properties of an information system and to examine the question of what constitutes a good decomposition of an information system. Some of the major types of information system formalisms that bear on the authors' goals and their respective strengths and weaknesses relative to the model are briefly reviewed. Also articulated are some of the fundamental notions that underlie the model. Those basic notions are then used to examine the nature and some dynamics of system decomposition. The model's predictive power is discussed.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.04
0.04
0.006154
0.003478
0.000065
0
0
0
0
0
0
0
0
0
Dynamic tree cross products Range searching over tree cross products – a variant of classic range searching – has recently been introduced by Buchsbaum et al. (Proc. 8th ESA, vol. 1879 of LNCS, pp. 120–131, 2000). A tree cross product consists of hyperedges connecting the nodes of trees T1,...,Td. In this context, range searching means to determine all hyperedges connecting a given set of tree nodes. Buchsbaum et al. describe a data structure which supports, besides queries, adding and removing of edges; the tree nodes remain fixed. In this paper we present a new data structure, which additionally provides insertion and deletion of leaves of T1,...,Td; it combines the former approach with a novel technique of using search trees superimposed over ordered list maintenance structures. The extra cost for this dynamization is roughly a factor of $\mathcal{O}(\log n/\log\log n)$. The trees being dynamic is especially important for maintaining hierarchical graph views, a problem that can be modeled as a tree cross product. Such views evolve from a large base graph by the contraction of subgraphs defined recursively by an associated hierarchy. The graph view maintenance problem is to provide methods for refining and coarsening a view. In previous solutions only the edges of the underlying graph were dynamic; with the application of our data structure, the node set becomes dynamic as well.
On Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system, resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Procedures and atomicity refinement The introduction of an early return from a (remote) procedure call can increase the degree of parallelism in a parallel or distributed algorithm modeled by an action system. We define a return statement for procedures in an action systems framework and show that it corresponds to carrying out an atomicity refinement.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
The basic entity model: a theoretical model of information processing, decision making and information systems The basic entity model was constructed to provide information processing with a better theoretical foundation. The human information processing systems are perceived as physical symbol systems. The artifacts or basic entities that these systems handle and that are unique to information (processing) theory have been observed. The four basic entities are data, information, knowledge and wisdom. The three postulates that are fundamental to the model are the law of boundary, the law of interaction, and the law of constructed (information) systems. The transformations of the basic entities taking place in the model create an information space that contains a set of information states in a particular knowledge domain. The information space then serves as the platform for decision making to be executed. The model is also used to analyze the structure of constructed information systems mathematically. The ontological, deep structure approach is adopted. A theoretical foundation of this nature is beneficial to unify all information-related disciplines.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system, resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Three dimensional discrete wavelet transform with reduced number of lifting steps This report reduces the total number of lifting steps in a three-dimensional (3D) double lifting discrete wavelet transform (DWT), which has been widely applied for analyzing volumetric medical images. The lifting steps are necessary components of a DWT. Since the calculation in a lifting step must wait for the result of the former step, cascading many lifting steps increases the delay from input to output. We decrease the total number of lifting steps by introducing 3D memory accessing for the implementation of a low-delay 3D DWT. We also maintain compatibility with the conventional 5/3 DWT defined by the JPEG 2000 international standard for utilization of its software and hardware resources. Finally, the total number of lifting steps and rounding operations were reduced to 67% and 33%, respectively. It was observed that the total amount of error due to rounding operations in the lifting steps was also reduced.
Avoidance of Singular Point in Integer Orthonormal Transform for Lossless Coding In this correspondence, the singular point (SP) problem peculiar to an integer orthonormal transform is discussed, and compatibility of the integer transform with the corresponding real-valued transform is improved. To avoid the SP problem, we introduce two-point permutations of order and sign of signals. Since it satisfies the commutative law, it becomes possible to reduce computational cost to find the best combination of the permutations which minimizes errors due to rounding of signals inside the integer transform.
Non separable 3D lifting structure compatible with separable quadruple lifting DWT This report reduces the total number of lifting steps in a 3D quadruple lifting DWT (discrete wavelet transform). In the JPEG 2000 international standard, the 9/7 quadruple lifting DWT has been widely utilized for image data compression. It has also been applied to volumetric medical image data analysis. However, it has a long delay from input to output due to cascading four (quadruple) lifting steps per dimension. We reduce the total number of lifting steps by introducing 3D direct memory accessing under the constraint that it has backward compatibility with the conventional DWT in JPEG 2000. As a result, the total number of lifting steps is reduced from 12 to 8 (67%) without significant degradation of data compression performance.
A low complexity and lossless frame memory compression for display devices In this paper, we introduce a new low complexity and lossless algorithm based on subband decomposition with the modified Hadamard transform (MHT) and adaptive Golomb-Rice (AGR) coding for display devices. The main goal of the proposed method is to reduce the memory requirement for display devices. The basic unit of the proposed method is a line of the image, so the method is processed line by line. Also, MHT and AGR coding are utilized to achieve low complexity. The major improvement of the method comes from the use of AGR coding and subband processing, compared with existing methods of similar complexity and application scope. Simulation results show that the algorithm achieves a superior compression performance to the existing methods. In addition, the proposed method is hardware-friendly and could be easily implemented in any display device.
Why does histogram packing improve lossless compression rates? The performance of state-of-the-art lossless image coding methods (such as JPEG-LS, lossless JPEG-2000, and context-based adaptive lossless image coding (CALIC)) can be considerably improved by a recently introduced preprocessing technique that can be applied whenever the images have sparse histograms. Bitrate savings of up to 50% have been reported, but so far no theoretical explanation of the fact has been advanced. This letter addresses this issue and analyzes the effect of the technique in terms of the interplay between histogram packing and the image total variation, emphasizing the lossless JPEG-2000 case. Index Terms—Context-based adaptive lossless image coding (CALIC), histogram packing, image variation, JPEG-2000, JPEG-LS, linear approximation, lossless image coding, nonlinear approximation.
Coding Techniques for CFA Data Images In this paper we present a comparison between different approaches to CFA (Colour Filter Array) image encoding. We show the different performances offered by a new algorithm based on a vector quantization technique, JPEG-LS (a low complexity encoding standard), and classical JPEG. We also show the effects of CFA image encoding on the colour images reconstructed by a typical image generation pipeline. A discussion of the different computational complexity and memory requirements of the different encoding approaches is also presented.
Optimal prefix codes for sources with two-sided geometric distributions A complete characterization of optimal prefix codes for off-centered, two-sided geometric distributions of the integers is presented. These distributions are often encountered in lossless image compression applications, as probabilistic models for image prediction residuals. The family of optimal codes described is an extension of the Golomb codes, which are optimal for one-sided geometric distributions. The new family of codes allows for encoding of prediction residuals at a complexity similar to that of Golomb codes, without recourse to the heuristic approximations frequently used when modifying a code designed for nonnegative integers so as to apply to the encoding of any integer. Optimal decision rules for choosing among a lower complexity subset of the optimal codes, given the distribution parameters, are also investigated, and the relative redundancy of the subset with respect to the full family of optimal codes is bounded
Universal coding, information, prediction, and estimation A connection between universal codes and the problems of prediction and statistical estimation is established. A known lower bound for the mean length of universal codes is sharpened and generalized, and optimum universal codes constructed. The bound is defined to give the information in strings relative to the considered class of processes. The earlier derived minimum description length criterion for estimation of parameters, including their number, is given a fundamental information-theoretic justification by showing that its estimators achieve the information in the strings. It is also shown that one cannot do prediction in Gaussian autoregressive moving average (ARMA) processes below a bound, which is determined by the information in the data.
Optimal, efficient, recursive edge detection filters The design of an optimal, efficient, infinite-impulse-response (IIR) edge detection filter is described. J. Canny (1986) approached the problem by formulating three criteria desired in any edge detection filter: good detection, good localization, and low spurious response. He maximized the product of the first two criteria while keeping the spurious response criterion constant. Using the variational approach, he derived a set of finite extent step edge detection filters corresponding to various values of the spurious response criterion, approximating the filters by the first derivative of a Gaussian. A more direct approach is described in this paper. The three criteria are formulated as appropriate for a filter of infinite impulse response, and the calculus of variations is used to optimize the composite criteria. Although the filter derived is also well approximated by the first derivative of a Gaussian, a superior recursively implemented approximation is achieved directly. The approximating filter is separable into two linear filters operating in two orthogonal directions, allowing for parallel edge detection processing. The implementation is very simple and computationally efficient.
Strategies for information requirements determination Correct and complete information requirements are key ingredients in planning organizational information systems and in implementing information systems applications. Yet, there has been relatively little research on information requirements determination, and there are relatively few practical, well-formulated procedures for obtaining complete, correct information requirements. Methods for obtaining and documenting information requirements are proposed, but they tend to be presented as general solutions rather than alternative methods for implementing a chosen strategy of requirements determination. This paper identifies two major levels of requirements: the organizational information requirements reflected in a planned portfolio of applications and the detailed information requirements to be implemented in a specific application. The constraints on humans as information processors are described in order to explain why "asking" users for information requirements may not yield a complete, correct set. Various strategies for obtaining information requirements are explained. Examples are given of methods that fit each strategy. A contingency approach is then presented for selecting an information requirements determination strategy. The contingency approach is explained both for defining organizational information requirements and for defining specific, detailed requirements in the development of an application.
A superimposition control construct for distributed systems A control structure called a superimposition is proposed. The structure contains schematic abstractions of processes called roletypes in its declaration. Each roletype may be bound to processes from a basic distributed algorithm, and the operations of the roletype will then execute interleaved with those of the basic processes, over the same state space. This structure captures a kind of modularity natural for distributed programming, which previously has been treated using a macro-like implantation of code. The elements of a superimposition are identified, a syntax is suggested, correctness criteria are defined, and examples are presented.
The architecture of a Linda coprocessor We describe the architecture of a coprocessor that supports the communication primitives of the Linda parallel programming environment in hardware. The coprocessor is a critical element in the architecture of the Linda Machine, an MIMD parallel processing system that is designed top down from the specifications of Linda. Communication in Linda programs takes place through a logically shared associative memory mechanism called tuple space. The Linda Machine, however, has no physically shared memory. The microprogrammable coprocessor implements distributed protocols for executing tuple space operations over the Linda Machine communication network. The coprocessor has been designed and is in the process of fabrication. We discuss the projected performance of the coprocessor and compare it with software Linda implementations.
Abstraction of objects by conceptual clustering Closely tied to first-order predicate logic, the formalism of conceptual graphs constitutes a knowledge representation language. The abstraction of systems presents several advantages. It helps to render complex systems more understandable, thus facilitating their analysis and their conception. Our approach to conceptual graph abstraction, or conceptual clustering, is based on rectangular decomposition. It produces a set of clusters representing similarities between subsets of the objects to be abstracted, organized into a hierarchy of classes: the Knowledge Space. Some conceptual clustering methods already exist. Our approach is distinguishable from other approaches insofar as it allows a gain in space and time.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.071111
0.066667
0.066667
0.016667
0.008333
0.001481
0.000125
0.000003
0
0
0
0
0
0
Selecting a software development process This paper focuses on methods of selecting a software development process. It describes the analysis and design methods used to develop large and complex systems. It provides guidelines for the use of Object Oriented and Structured Design variants. Although the selection of one methodology over the other is not clear-cut, a series of rules to apply in the selection process is given. The multiplicity of client/server architectures, plus several programming languages and fourth generation languages, further complicates the selection. This paper proposes that the idea of following only one methodology is sometimes inappropriate. It also emphasizes the factors to consider when one methodology is chosen. There are several factors to be considered, including the size and complexity of the project, the programming language(s) to be used, the experience level of project staff, and, finally, performance considerations.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system, resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time-wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Representing programs in multiparadigm software development environments In this paper, we describe a canonical program representation, Semantic Program Graphs (SPGs), and we show how SPGs can act as the foundation for multiparadigm software development environments. Using SPGs as the basis for program representation allows developers to see different views of programs that correspond to different ways of thinking about them, and it allows editors to be created so that the underlying program may be edited using any of the paradigms. As the sole program representation, SPGs also facilitate communication between paradigms: changes made in one view can be immediately reflected in all other views.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system, resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time-wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Use cases for distributed real-time software architectures This paper describes how use cases can be applied to the architectural design of distributed real-time applications. In order to benefit from use cases in distributed real-time design, it is necessary to extend use cases, particularly in the design phase, when important design decisions need to be made. To achieve this, use cases are integrated with CODARTS (Concurrent Design Approach for Real-Time Systems) distributed design concepts. Three different types of architectural use cases are described, client/server use cases, subscription use cases and real-time control use cases. Different forms of message communication are associated with the different use case types. The software architecture of the distributed real-time system is achieved by composing it from the use cases.
Object-oriented modeling and design
On visual formalisms The higraph, a general kind of diagramming object, forms a visual formalism of topological nature. Higraphs are suited for a wide array of applications to databases, knowledge representation, and, most notably, the behavioral specification of complex concurrent systems using the higraph-based language of statecharts.
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The mystery of the tower revealed: a non-reflective description of the reflective tower Abstract In an important series of papers [8, 9], Brian Smith has discussed the nature of programs that know about their text and the context in which they are executed. He called this kind of knowledge reflection. Smith proposed a programming language, called 3-LISP, which embodied such self-knowledge in the domain of metacircular interpreters. Every 3-LISP program is interpreted by a metacircular interpreter, also written in 3-LISP. This gives rise to a picture of an infinite tower of metacircular interpreters, each being interpreted by the one above it. Such a metaphor poses a serious challenge for conventional modes of understanding of programming languages. In our earlier work on reflection [4], we showed how a useful species of reflection could be modeled without the use of towers. In this paper, we give a semantic account of the reflective tower. This account is self-contained in the sense that it does not employ reflection to explain reflection.
Statecharts: A visual formalism for complex systems Abstract. We present a broad extension of the conventional formalism of state machines and state diagrams, that is relevant to the specification and design of complex discrete-event systems, such as multi-computer real-time systems, communication protocols and digital control units. Our diagrams, which we call statecharts, extend conventional state-transition diagrams with essentially three elements, dealing, respectively, with the notions of hierarchy, concurrency and communication. These transform the language of state diagrams into a highly structured and economical description language. Statecharts are thus compact and expressive (small diagrams can express complex behavior) as well as compositional and modular. When coupled with the capabilities of computerized graphics, statecharts enable viewing the description at different levels of detail, and make even very large specifications manageable and comprehensible. In fact, we intend to demonstrate here that statecharts counter many of the objections raised against conventional state diagrams, and thus appear to render specification by diagrams an attractive and plausible approach. Statecharts can be used either as a stand-alone behavioral description or as part of a more general design methodology that deals also with the system's other aspects, such as functional decomposition and data-flow specification. We also discuss some practical experience that was gained over the last three years in applying the statechart formalism to the specification of a particularly complex system.
Reasoning Algebraically about Loops We show here how to formalize different kinds of loop constructs within the refinement calculus, and how to use this formalization to derive general loop transformation rules. The emphasis is on using algebraic methods for reasoning about equivalence and refinement of loops, rather than looking at operational ways of reasoning about loops in terms of their execution sequences. We apply the algebraic reasoning techniques to derive a collection of different loop transformation rules that have been found important in practical program derivations: merging and reordering of loops, data refinement of loops with stuttering transitions and atomicity refinement of loops.
Separation and information hiding We investigate proof rules for information hiding, using the recent formalism of separation logic. In essence, we use the separating conjunction to partition the internal resources of a module from those accessed by the module's clients. The use of a logical connective gives rise to a form of dynamic partitioning, where we track the transfer of ownership of portions of heap storage between program components. It also enables us to enforce separation in the presence of mutable data structures with embedded addresses that may be aliased.
Joining specification statements The specification statement allows us to easily express what a program statement does. This paper shows how refinement of specification statements can be directly expressed using the predicate calculus. It also shows that the specification statements interpreted as predicate transformers form a complete lattice, and that this lattice is the lattice of conjunctive predicate transformers. The join operator of this lattice is constructed as a specification statement. The join operators of two interesting sublattices of the set of specification statements are also investigated.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system, resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time-wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.2
0.001325
0.000325
0
0
0
0
0
0
0
0
0
0
0
Heuristic models of fuzzy time series for forecasting Song and Chissom first proposed the definitions of fuzzy time series and time-invariant and variant models of fuzzy time series. Chen then proposed arithmetic operations to replace the complex computations in Song and Chissom's models. This study proposes heuristic models by integrating problem-specific heuristic knowledge with Chen's model to improve forecasting. This is because Chen's model was easy to calculate, was straightforward to integrate heuristic knowledge, and forecast better than the others. Both university enrollment and futures index are chosen as the forecasting targets. The empirical analyses show that the heuristic models reflect the fluctuations in fuzzy time series better and provide better overall forecasting results than the previous models.
A Multivariate Heuristic Model for Fuzzy Time-Series Forecasting Fuzzy time-series models have been widely applied due to their ability to handle nonlinear data directly and because no rigid assumptions for the data are needed. In addition, many such models have been shown to provide better forecasting results than their conventional counterparts. However, since most of these models require complicated matrix computations, this paper proposes the adoption of a multivariate heuristic function that can be integrated with univariate fuzzy time-series models into multivariate models. Such a multivariate heuristic function can easily be extended and integrated with various univariate models. Furthermore, the integrated model can handle multiple variables to improve forecasting results and, at the same time, avoid complicated computations due to the inclusion of multiple variables.
A fuzzy seasonal ARIMA model for forecasting This paper proposes a fuzzy seasonal ARIMA (FSARIMA) forecasting model, which combines the advantages of the seasonal time series ARIMA (SARIMA) model and the fuzzy regression model. It is used to forecast two seasonal time series data of the total production value of the Taiwan machinery industry and the soft drink time series. The intention of this paper is to provide business which are affected by diversified management with a new method to conduct short-term forecasting. This model includes both interval models with interval parameters and the possible distribution of future value. Based on the results of practical application, it can be shown that this model makes good forecasts and is realistic. Furthermore, this model makes it possible for decision makers to forecast the best and worst estimates based on fewer observations than the SARIMA model.
AN ENHANCED DETERMINISTIC FUZZY TIME SERIES FORECASTING MODEL The study of fuzzy time series has attracted great interest and is expected to expand rapidly. Various forecasting models, including high-order models, have been proposed to improve forecasting accuracy or reduce computational cost. However, there exist two important issues, namely, rule redundancy and high-order redundancy, that have not yet been investigated. This article proposes a novel forecasting model to tackle such issues. It overcomes the major hurdle of determining the k-order in high-order models and is enhanced to allow the handling of multi-factor forecasting problems by removing the overhead of deriving all fuzzy logic relationships beforehand. Two novel performance evaluation metrics are also formally derived for comparing the performances of related forecasting models. Experimental results demonstrate that the proposed forecasting model outperforms the existing models in efficiency.
A new hybrid approach based on SARIMA and partial high order bivariate fuzzy time series forecasting model In the literature, there have been many studies using fuzzy time series for the purpose of forecasting. The most studied model is the first order fuzzy time series model. In this model, an observation of fuzzy time series is obtained by using the previous observation. In other words, only the first lagged variable is used when constructing the first order fuzzy time series model. Therefore, this model cannot be sufficient for some time series, such as seasonal time series, which are an important class of time series models. Besides, the time series encountered in real life have not only an autoregressive (AR) structure but also a moving average (MA) structure. The fuzzy time series models available in the literature are AR structured and are not appropriate for MA structured time series. In this paper, a hybrid approach is proposed in order to analyze seasonal fuzzy time series. The proposed hybrid approach is based on a partial high order bivariate fuzzy time series forecasting model, which is first introduced in this paper. The order of this model is determined by utilizing the Box-Jenkins method. In order to show the efficiency of the proposed hybrid method, real time series are analyzed with this method. The results obtained from the proposed method are compared with those of other methods. As a result, it is observed that more accurate results are obtained from the proposed hybrid method.
Deterministic vector long-term forecasting for fuzzy time series In the last decade, fuzzy time series have received more attention due to their ability to deal with the vagueness and incompleteness inherent in time series data. Although various improvements, such as high-order models, have been developed to enhance the forecasting performance of fuzzy time series, their forecasting capability is mostly limited to short-term time spans and the forecasting of a single future value in one step. This paper presents a new method to overcome this shortcoming, called deterministic vector long-term forecasting (DVL). The proposed method, built on the basis of our previous deterministic forecasting method that does not require the overhead of determining the order number, as in other high-order models, utilizes a vector quantization technique to support forecasting if there are no matching historical patterns, which is usually the case with long-term forecasting. The vector forecasting method is further realized by seamlessly integrating it with the sliding window scheme. Finally, the forecasting effectiveness and stability of DVL are validated and compared by performing Monte Carlo simulations on real-world data sets.
A mathematical perspective for software measures research Basic principles which necessarily underlie software measures research are analysed. In the prevailing paradigm for the validation of software measures, there is a fundamental assumption that the sets of measured documents are ordered and that measures should report these orders. The authors describe mathematically the nature of such orders. Consideration of these orders suggests a hierarchy of software document measures, a methodology for developing new measures and a general approach to the analytical evaluation of measures. They also point out the importance of units for any type of measurement and stress the perils of equating document structure complexity and psychological complexity.
Machine Learning This exciting addition to the McGraw-Hill Series in Computer Science focuses on the concepts and techniques that contribute to the rapidly changing field of machine learning--including probability and statistics, artificial intelligence, and neural networks--unifying them all in a logical and coherent manner. Machine Learning serves as a useful reference tool for software developers and researchers, as well as an outstanding text for college students. Table of contents: Chapter 1. Introduction; Chapter 2. Concept Learning and the General-to-Specific Ordering; Chapter 3. Decision Tree Learning; Chapter 4. Artificial Neural Networks; Chapter 5. Evaluating Hypotheses; Chapter 6. Bayesian Learning; Chapter 7. Computational Learning Theory; Chapter 8. Instance-Based Learning; Chapter 9. Inductive Logic Programming; Chapter 10. Analytical Learning; Chapter 11. Combining Inductive and Analytical Learning; Chapter 12. Reinforcement Learning.
Goal-Based Requirements Analysis Goals are a logical mechanism for identifying, organizing and justifying software requirements. Strategies are needed for the initial identification and construction of goals. In this paper we discuss goals from the perspective of two themes: goal analysis and goal evolution. We begin with an overview of the goal-based method we have developed and summarize our experiences in applying our method to a relatively large example. We illustrate some of the issues that practitioners face when using a goal-based approach to specify the requirements for a system and close the paper with a discussion of needed future research on goal-based requirements analysis and evolution. Keywords: goal identification, goal elaboration, goal refinement, scenario analysis, requirements engineering, requirements methods
A logic covering undefinedness in program proofs Recursive definition often results in partial functions; iteration gives rise to programs which may fail to terminate for some inputs. Proofs about such functions or programs should be conducted in logical systems which reflect the possibility of "undefined values". This paper provides an axiomatization of such a logic together with examples of its use.
Design And Implementation Of A Low Complexity Lossless Video Codec A low complexity lossless video codec design, which is an extension to the well-known CALIC system, is presented. It starts with a reversible color space transform to decorrelate the video signal components. A gradient-adjusted prediction scheme facilitated with an error feedback mechanism follows to obtain the prediction value for each pixel. Finally, an adaptive Golomb-Rice coding scheme in conjunction with a context modeling technique to determine the K value adaptively is applied to encode the prediction errors faithfully. The proposed scheme exhibits a compression performance comparable to that of CALIC but with almost one half the computing complexity. It also outperforms other well recognized schemes such as the JPEG-LS scheme by 12% in compression ratio. A chip design using TSMC 0.18um technology was also developed. It features a throughput rate of 13.83 M pixels per second and a design gate count of 35k plus 3.7kB memory.
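The Golomb-Rice stage described above can be sketched for a single non-negative prediction residual (a minimal illustration; in the codec the parameter k is determined adaptively via context modeling, whereas here it is simply passed in):

```python
def golomb_rice_encode(value, k):
    """Golomb-Rice code for a non-negative integer with parameter k:
    the quotient value >> k in unary (ones terminated by a zero),
    followed by the k-bit binary remainder."""
    q = value >> k
    r = value & ((1 << k) - 1)
    bits = "1" * q + "0"             # unary quotient, 0-terminated
    if k > 0:
        bits += format(r, f"0{k}b")  # k-bit binary remainder
    return bits
```

For example, encoding 9 with k = 2 gives quotient 2 and remainder 1, i.e. the codeword "11001". Small k suits small residuals; larger k shortens codes for large ones, which is why adapting k per context pays off.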
Expressing the relationships between multiple views in requirements specification The authors generalize and formalize the definition of a ViewPoint to facilitate its manipulation for composite system development. A ViewPoint is defined to be a loosely-coupled, locally managed object encapsulating representation knowledge, development process knowledge and partial specification knowledge about a system and its domain. In attempting to integrate multiple requirements specification ViewPoints, overlaps must be identified and expressed, complementary participants made to interact and cooperate, and contradictions resolved. The notion of inter-ViewPoint communication is addressed as a vehicle for ViewPoint integration. The communication model presented straddles both the method construction stage during which inter-ViewPoint relationships are expressed, and the method application stage during which these relationships are enacted
Verifying task-based specifications in conceptual graphs A conceptual model is a model of real world concepts and application domains as perceived by users and developers. It helps developers investigate and represent the semantics of the problem domain, as well as communicate among themselves and with users. In this paper, we propose the use of task-based specifications in conceptual graphs (TBCG) to construct and verify a conceptual model. Task-based specification methodology is used to serve as the mechanism to structure the knowledge captured in the conceptual model; whereas conceptual graphs are adopted as the formalism to express task-based specifications and to provide a reasoning capability for the purpose of verification. Verifying a conceptual model is performed on model specifications of a task through constraints satisfaction and relaxation techniques, and on process specifications of the task based on operators and rules of inference inherent in conceptual graphs.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.02017
0.021178
0.019662
0.019083
0.016263
0.011791
0
0
0
0
0
0
0
0
Stochastic Model of Block Segmentation Based on Improper Quadtree and Optimal Code under the Bayes Criterion Most previous studies on lossless image compression have focused on improving preprocessing functions to reduce the redundancy of pixel values in real images. However, we assume stochastic generative models directly on pixel values and focus on achieving the theoretical limit of the assumed models. In this study, we propose a stochastic model based on improper quadtrees. We theoretically derive the optimal code for the proposed model under the Bayes criterion. In general, Bayes-optimal codes require an exponential order of calculation with respect to the data lengths. However, we propose an efficient algorithm that takes a polynomial order of calculation without losing optimality by assuming a novel prior distribution.
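The quadtree block segmentation underlying such models can be illustrated with a simple recursive splitter (a sketch under the assumption of a uniformity threshold as the stopping rule; the paper instead weighs subdivisions under a Bayes-optimal code, which this toy does not implement):

```python
def quadtree(block, threshold=0):
    """Recursively split a square 2^n x 2^n block of pixel values into
    four quadrants until each leaf is uniform (max - min <= threshold).
    Returns a nested list mirroring the tree; leaves are mean values."""
    flat = [v for row in block for v in row]
    if max(flat) - min(flat) <= threshold or len(block) == 1:
        return sum(flat) / len(flat)  # leaf: represent block by its mean
    h = len(block) // 2
    quads = [[row[:h] for row in block[:h]],   # top-left
             [row[h:] for row in block[:h]],   # top-right
             [row[:h] for row in block[h:]],   # bottom-left
             [row[h:] for row in block[h:]]]   # bottom-right
    return [quadtree(q, threshold) for q in quads]
```

A uniform block stays a single leaf, while any non-uniform block splits into four children, each segmented recursively in turn.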
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
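The "particularly simple definition" referred to above can be sketched in weakest-precondition terms (the symbol α for the predicate-translating function, a generalized abstraction function, is notation assumed here, not taken from the paper): a concrete program C data-refines an abstract program A via α when, for every abstract postcondition φ,

```latex
\alpha\bigl(\mathrm{wp}(A, \varphi)\bigr) \;\Rightarrow\; \mathrm{wp}(C, \alpha(\varphi))
```

On this reading, establishing a data refinement reduces to proving a predicate implication, which is what lets refinements be calculated directly rather than discharged as separate proof obligations.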
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
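The flip-move tabu scheme with an aspiration criterion that the abstract describes can be sketched for a single-constraint 0/1 knapsack (illustrative only: the one-bit-flip neighborhood, tenure and iteration counts are assumptions of this sketch, and the paper's extreme-point strategies, Target Analysis and learning machinery are omitted):

```python
def tabu_knapsack(values, weights, capacity, iters=200, tenure=5):
    """Tabu search over 0/1 vectors: at each step flip the single bit
    giving the best feasible neighbor, forbid re-flipping that bit for
    `tenure` iterations, and let the aspiration criterion override the
    tabu status of any move that improves on the best solution found."""
    n = len(values)
    x = [0] * n
    best, best_val = x[:], 0
    tabu = {}  # item index -> iteration until which the flip is tabu
    for t in range(iters):
        candidates = []
        for i in range(n):
            y = x[:]
            y[i] ^= 1
            w = sum(wi for wi, yi in zip(weights, y) if yi)
            if w > capacity:
                continue  # infeasible neighbor
            v = sum(vi for vi, yi in zip(values, y) if yi)
            if tabu.get(i, -1) >= t and v <= best_val:
                continue  # tabu move that fails the aspiration test
            candidates.append((v, i))
        if not candidates:
            break
        v, i = max(candidates)
        x[i] ^= 1
        tabu[i] = t + tenure
        if v > best_val:
            best, best_val = v and x[:], v
            best = x[:]
    return best, best_val
```

On the classic instance values (60, 100, 120), weights (10, 20, 30), capacity 50, the search reaches the optimum 220 by taking item 3 and then item 2.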
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundations on the traditional refinement calculus of Action Systems and its direct extension, the timewise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power related issues, are satisfied.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
1
0
0
0
0
0
0
0
0
0
0
0
0
0
OWLPath: An OWL Ontology-Guided Query Editor Most Semantic Web technology-based applications need users to have a deep background on the formal underpinnings of ontology languages and some basic skills in these technologies. Generally, only experts in the field meet these requirements. In this paper, we present OWLPath, a natural language-query editor guided by multilanguage OWL-formatted ontologies. This application allows nonexpert users to easily create SPARQL queries that can be issued over most existing ontology storage systems. Our approach is a fully fledged solution backed with a proof-of-concept implementation and the empirical results of two challenging use cases: one in the domain of e-finance and the other in e-tourism.
Ontology, Metadata, and Semiotics The Internet is a giant semiotic system. It is a massive collection of Peirce's three kinds of signs: icons, which show the form of something; indices, which point to something; and symbols, which represent something according to some convention. But current proposals for ontologies and metadata have overlooked some of the most important features of signs. A sign has three aspects: it is (1) an entity that represents (2) another entity to (3) an agent. By looking only at the signs themselves, some metadata proposals have lost sight of the entities they represent and the agents (human, animal, or robot) that interpret them. With its three branches of syntax, semantics, and pragmatics, semiotics provides guidelines for organizing and using signs to represent something to someone for some purpose. Besides representation, semiotics also supports methods for translating patterns of signs intended for one purpose to other patterns intended for different but related purposes. This article shows how the fundamental semiotic primitives are represented in semantically equivalent notations for logic, including controlled natural languages and various computer languages.
Semantic parameterization: A process for modeling domain descriptions Software engineers must systematically account for the broad scope of environmental behavior, including nonfunctional requirements, intended to coordinate the actions of stakeholders and software systems. The Inquiry Cycle Model (ICM) provides engineers with a strategy to acquire and refine these requirements by having domain experts answer six questions: who, what, where, when, how, and why. Goal-based requirements engineering has led to the formalization of requirements to answer the ICM questions about when, how, and why goals are achieved, maintained, or avoided. In this article, we present a systematic process called Semantic Parameterization for expressing natural language domain descriptions of goals as specifications in description logic. The formalization of goals in description logic allows engineers to automate inquiries using who, what, and where questions, completing the formalization of the ICM questions. The contributions of this approach include new theory to conceptually compare and disambiguate goal specifications that enables querying goals and organizing goals into specialization hierarchies. The artifacts in the process include a dictionary that aligns the domain lexicon with unique concepts, distinguishing between synonyms and polysemes, and several natural language patterns that aid engineers in mapping common domain descriptions to formal specifications. Semantic Parameterization has been empirically validated in three case studies on policy and regulatory descriptions that govern information systems in the finance and health-care domains.
An Overview of KRL, a Knowledge Representation Language
Implementing Remote Procedure Calls Remote procedure calls (RPC) are a useful paradigm for providing communication across a network between programs written in a high level language. This paper describes a package, written as part of the Cedar project, providing a remote procedure call facility. The paper describes the options that face a designer of such a package, and the decisions we made. We describe the overall structure of our RPC mechanism, our facilities for binding RPC clients, the transport level communication protocol, and some performance measurements. We include descriptions of some optimisations we used to achieve high performance and to minimize the load on server machines that have many clients. Our primary aim in building an RPC package was to make the building of distributed systems easier. Previous protocols were sufficiently hard to use that only members of a select group of communication experts were willing to undertake the construction of distributed systems. We hoped to overcome this by providing a communication paradigm as close as possible to the familiar facilities of our high level languages. To achieve this aim, we concentrated on making remote calls efficient, and on making the semantics of remote calls as close as possible to those of local calls.
Alloy: a lightweight object modelling notation Alloy is a little language for describing structural properties. It offers a declaration syntax compatible with graphical object models, and a set-based formula syntax powerful enough to express complex constraints and yet amenable to a fully automatic semantic analysis. Its meaning is given by translation to an even smaller (formally defined) kernel. This paper presents the language in its entirety, and explains its motivation, contributions and deficiencies.
Semantic grammar: an engineering technique for constructing natural language understanding systems One of the major stumbling blocks to more effective use of computers by naive users is the lack of natural means of communication between the user and the computer system. This report discusses a paradigm for constructing efficient and friendly man-machine interface systems involving subsets of natural language for limited domains of discourse. As such this work falls somewhere between highly constrained formal language query systems and unrestricted natural language understanding systems. The primary purpose of this research is not to advance our theoretical understanding of natural language but rather to put forth a set of techniques for embedding both semantic/conceptual and pragmatic information into a useful natural language interface module. Our intent has been to produce a front end system which enables the user to concentrate on his problem or task rather than making him worry about how to communicate his ideas or questions to the machine.
Recursive functions of symbolic expressions and their computation by machine, Part I This paper, in LaTeX, partly supported by ARPA (ONR) grant N00014-94-1-0775 to Stanford University, where John McCarthy has been since 1962. Copied with minor notational changes from CACM, April 1960. If you want the exact typography, look there. Current address: John McCarthy, Computer Science Department, Stanford, CA 94305 (email: jmc@cs.stanford.edu, URL: http://www-formal.stanford.edu/jmc/) by starting with the class of expressions called S-expressions and the functions called...
A study of cross-validation and bootstrap for accuracy estimation and model selection We review accuracy estimation methods and compare the two most common methods: cross-validation and bootstrap. Recent experimental results on artificial data and theoretical results in restricted settings have shown that for selecting a good classifier from a set of classifiers (model selection), ten-fold cross-validation may be better than the more expensive leave-one-out cross-validation. We report on a large-scale experiment--over half a million runs of C4.5 and a Naive-Bayes algorithm--to estimate the effects of different parameters on these algorithms on real-world datasets. For cross-validation we vary the number of folds and whether the folds are stratified or not; for bootstrap, we vary the number of bootstrap samples. Our results indicate that for real-world datasets similar to ours, the best method to use for model selection is ten-fold stratified cross-validation, even if computation power allows using more folds.
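The ten-fold stratified cross-validation recommended above hinges on assigning samples to folds so that each fold preserves the overall class proportions; a minimal sketch (round-robin dealing per class; the fold count k is a parameter, and this is an illustration rather than the authors' experimental harness) is:

```python
from collections import defaultdict

def stratified_kfold(labels, k=10):
    """Assign sample indices to k folds so that every fold keeps
    roughly the same class proportions as the full dataset."""
    by_class = defaultdict(list)
    for i, y in enumerate(labels):
        by_class[y].append(i)
    folds = [[] for _ in range(k)]
    for indices in by_class.values():
        for j, i in enumerate(indices):
            folds[j % k].append(i)  # deal indices round-robin per class
    return folds
```

Each fold then serves once as the test set while the rest train the classifier; with 10 samples per class and k = 5, every fold receives exactly 2 samples of each class.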
A Theory of Prioritizing Composition An operator for the composition of two processes, where one process has priority over the other process, is studied. Processes are described by action systems, and data refinement is used for transforming processes. The operator is shown to be compositional, i.e. monotonic with respect to refinement. It is argued that this operator is adequate for modelling priorities as found in programming languages and operating systems. Rules for introducing priorities and for raising and lowering priorities of processes are given. Dynamic priorities are modelled with special priority variables which can be freely mixed with other variables and the prioritising operator in program development. A number of applications show the use of prioritising composition for modelling and specification in general.
Inheritance of proofs The Curry-Howard isomorphism, a fundamental property shared by many type theories, establishes a direct correspondence between programs and proofs. This suggests that the same structuring principles that ease programming should be useful for proving as well. To exploit object-oriented structuring mechanisms for verification, we extend the object-model of Pierce and Turner, based on the higher-order typed λ-calculus F≤ω, with a logical component. By enriching the (functional) signature of objects with a specification, methods and their correctness proofs are packed together in objects. The uniform treatment of methods and proofs gives rise in a natural way to object-oriented proving principles - including inheritance of proofs, late binding of proofs, and encapsulation of proofs - as analogues to object-oriented programming principles. We have used Lego, a type-theoretic proof checker, to explore the feasibility of this approach. (C) 1998 John Wiley & Sons, Inc.
Software engineering for parallel systems Current approaches to software engineering practice for parallel systems are reviewed. The parallel software designer has not only to address the issues involved in the characterization of the application domain and the underlying hardware platform, but, in many instances, the production of portable, scalable software is desirable. In order to accommodate these requirements, a number of specific techniques and tools have been proposed, and these are discussed in this review in the framework of the parallel software life-cycle. The paper outlines the role of formal methods in the practical production of parallel software, but its main focus is the emergence of development methodologies and environments. These include CASE tools and run-time support systems, as well as the use of methods taken from experience of conventional software development. Because of the particular emphasis on performance of parallel systems, work on performance evaluation and monitoring systems is considered.
Procedures and atomicity refinement The introduction of an early return from a (remote) procedure call can increase the degree of parallelism in a parallel or distributed algorithm modeled by an action system. We define a return statement for procedures in an action systems framework and show that it corresponds to carrying out an atomicity refinement.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.2
0.133333
0.033333
0
0
0
0
0
0
0
0
0
0
0
Conceptual Structures: Knowledge Visualization and Reasoning, 16th International Conference on Conceptual Structures, ICCS 2008, Toulouse, France, July 7-11, 2008, Proceedings
Report from the Joint W3C/IETF URI Planning Interest Group: Uniform Resource Identifiers (URIs), URLs, and Uniform Resource Names (URNs): Clarifications and Recommendations
Modeling Real Reasoning In this article we set out to develop a mathematical model of real-life human reasoning. The most successful attempt to do this, classical formal logic, achieved its success by restricting attention to formal reasoning within pure mathematics; more precisely, the process of proving theorems in axiomatic systems. Within the framework of mathematical logic, a logical proof consists of a finite sequence σ1, σ2, ..., σn of statements, such that for each i = 1, ..., n, σi is either an assumption for the argument (possibly an axiom), or else follows from one or more of σ1, ..., σi−1 by a rule of logic.
The symbol grounding problem There has been much discussion recently about the scope and limits of purely symbolic models of the mind and about the proper role of connectionism in cognitive modeling. This paper describes the "symbol grounding problem": How can the semantic interpretation of a formal symbol system be made intrinsic to the system, rather than just parasitic on the meanings in our heads? How can the meanings of the meaningless symbol tokens, manipulated solely on the basis of their (arbitrary) shapes, be grounded in anything but other meaningless symbols? The problem is analogous to trying to learn Chinese from a Chinese/Chinese dictionary alone. A candidate solution is sketched: Symbolic representations must be grounded bottom-up in nonsymbolic representations of two kinds: (1) iconic representations, which are analogs of the proximal sensory projections of distal objects and events, and (2) categorical representations, which are learned and innate feature detectors that pick out the invariant features of object and event categories from their sensory projections. Elementary symbols are the names of these object and event categories, assigned on the basis of their (nonsymbolic) categorical representations. Higher-order (3) symbolic representations, grounded in these elementary symbols, consist of symbol strings describing category membership relations (e.g. "An X is a Y that is Z"). Connectionism is one natural candidate for the mechanism that learns the invariant features underlying categorical representations, thereby connecting names to the proximal projections of the distal objects they stand for. In this way connectionism can be seen as a complementary component in a hybrid nonsymbolic/symbolic model of the mind, rather than a rival to purely symbolic modeling. Such a hybrid model would not have an autonomous symbolic "module," however; the symbolic functions would emerge as an intrinsically "dedicated" symbol system as a consequence of the bottom-up grounding of categories' names in their sensory representations. Symbol manipulation would be governed not just by the arbitrary shapes of the symbol tokens, but by the nonarbitrary shapes of the icons and category invariants in which they are grounded.
On visual formalisms The higraph, a general kind of diagramming object, forms a visual formalism of topological nature. Higraphs are suited for a wide array of applications to databases, knowledge representation, and, most notably, the behavioral specification of complex concurrent systems using the higraph-based language of statecharts.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
The use of goals to surface requirements for evolving systems This paper addresses the use of goals to surface requirements for the redesign of existing or legacy systems. Goals are widely recognized as important precursors to system requirements, but the process of identifying and abstracting them has not been researched thoroughly. We present a summary of a goal-based method (GBRAM) for uncovering hidden issues, goals, and requirements and illustrate its application to a commercial system, an Intranet-based electronic commerce application, evaluating the method in the process. The core techniques comprising GBRAM are the systematic application of heuristics and inquiry questions for the analysis of goals, scenarios and obstacles. We conclude by discussing the lessons learned through applying goal refinement in the field and the implications for future research.
Optimal, efficient, recursive edge detection filters The design of an optimal, efficient, infinite-impulse-response (IIR) edge detection filter is described. J. Canny (1986) approached the problem by formulating three criteria desired in any edge detection filter: good detection, good localization, and low spurious response. He maximized the product of the first two criteria while keeping the spurious response criterion constant. Using the variational approach, he derived a set of finite extent step edge detection filters corresponding to various values of the spurious response criterion, approximating the filters by the first derivative of a Gaussian. A more direct approach is described in this paper. The three criteria are formulated as appropriate for a filter of infinite impulse response, and the calculus of variations is used to optimize the composite criteria. Although the filter derived is also well approximated by the first derivative of a Gaussian, a superior recursively implemented approximation is achieved directly. The approximating filter is separable into two linear filters operating in two orthogonal directions, allowing for parallel edge detection processing. The implementation is very simple and computationally efficient.
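The recursive (IIR) implementation idea can be illustrated with a crude one-dimensional sketch: a causal plus anticausal first-order smoother followed by a central difference. This is an illustrative stand-in under assumed coefficients, not the paper's actual optimal filter.

```python
def smooth_recursive(signal, a=0.5):
    """First-order IIR smoothing, run causally then anticausally.
    The pole position 'a' (in (0,1)) is an illustrative choice."""
    n = len(signal)
    fwd = [0.0] * n
    for i in range(n):                     # causal (left-to-right) pass
        prev = fwd[i - 1] if i else 0.0
        fwd[i] = (1 - a) * signal[i] + a * prev
    out = [0.0] * n
    acc = 0.0
    for i in range(n - 1, -1, -1):         # anticausal (right-to-left) pass
        acc = (1 - a) * fwd[i] + a * acc
        out[i] = acc
    return out

def edge_response(signal, a=0.5):
    """Smooth recursively, then take a central difference as the
    derivative estimate; peaks mark edge locations."""
    s = smooth_recursive(signal, a)
    if len(s) < 3:
        return [0.0] * len(s)
    return [0.0] + [s[i + 1] - s[i - 1] for i in range(1, len(s) - 1)] + [0.0]
```

On a step signal the response peaks at the step, and each pass costs a constant number of operations per sample regardless of the effective smoothing width, which is the point of the recursive formulation.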
Design problem solving: a task analysis I propose a task structure for design by analyzing a general class of methods that I call propose-critique-modify methods. The task structure is constructed by identifying a range of methods for each task. For each method, the knowledge needed and the subtasks that it sets up are identified. This recursive style of analysis provides a framework in which we can understand a number of particular proposals for design problem solving as specific combinations of tasks, methods, and subtasks. Most of the subtasks are not really specific to design as such. The analysis shows that there is no one ideal method for design, and good design problem solving is a result of recursively selecting methods based on a number of criteria, including knowledge availability. How the task analysis can help in knowledge acquisition and system design is discussed.
Beyond models and metaphors: visual formalisms in user interface design The user interface has both syntactic functions (supplying commands and arguments to programs) and semantic functions (visually presenting application semantics and supporting problem solving cognition). The authors argue that though both functions are important, it is time to devote more resources to the problems of the semantic interface. Complex problem solving activities, e.g. for design and analysis tasks, benefit from clear visualizations of application semantics in the user interface. Designing the semantic interface requires computational building blocks capable of representing and visually presenting application semantics in a clear, precise way. The authors argue that neither mental models nor metaphors provide a basis for designing and implementing such building blocks, but that visual formalisms do. They compare the benefits of mental models, metaphors and visual formalisms as the basis for designing the user interface, with particular attention to the practical solutions each provides to application developers.
A Software Development Environment for Improving Productivity
The navigation toolkit The problem
Procedures and atomicity refinement The introduction of an early return from a (remote) procedure call can increase the degree of parallelism in a parallel or distributed algorithm modeled by an action system. We define a return statement for procedures in an action systems framework and show that it corresponds to carrying out an atomicity refinement.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.24, 0.24, 0.24, 0.12, 0.00013, 0, 0, 0, 0, 0, 0, 0, 0, 0
Degrees of acyclicity for hypergraphs and relational database schemes Database schemes (which, intuitively, are collections of table skeletons) can be viewed as hypergraphs. (A hypergraph is a generalization of an ordinary undirected graph, such that an edge need not contain exactly two nodes, but can instead contain an arbitrary nonzero number of nodes.) A class of "acyclic" database schemes was recently introduced. A number of basic desirable properties of database schemes have been shown to be equivalent to acyclicity. This shows the naturalness of the concept. However, unlike the situation for ordinary, undirected graphs, there are several natural, nonequivalent notions of acyclicity for hypergraphs (and hence for database schemes). Various desirable properties of database schemes are considered and it is shown that they fall into several equivalence classes, each completely characterized by the degree of acyclicity of the scheme. The results are also of interest from a purely graph-theoretic viewpoint. The original notion of acyclicity has the counterintuitive property that a subhypergraph of an acyclic hypergraph can be cyclic. This strange behavior does not occur for the new degrees of acyclicity that are considered.
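One of the acyclicity notions in question (α-acyclicity) can be tested with the GYO reduction; the sketch below is a straightforward, unoptimized Python rendering. The last test case mirrors the abstract's remark that a subhypergraph of an acyclic hypergraph can be cyclic.

```python
from collections import Counter

def is_alpha_acyclic(hyperedges):
    """GYO reduction: a hypergraph is alpha-acyclic iff repeatedly
    (a) deleting vertices that occur in only one hyperedge and
    (b) deleting hyperedges contained in another hyperedge
    empties the hypergraph."""
    edges = [set(e) for e in hyperedges]
    changed = True
    while changed:
        changed = False
        # (a) drop vertices that occur in exactly one hyperedge
        counts = Counter(v for e in edges for v in e)
        for e in edges:
            lonely = {v for v in e if counts[v] == 1}
            if lonely:
                e -= lonely
                changed = True
        if any(not e for e in edges):       # discard emptied edges
            edges = [e for e in edges if e]
            changed = True
        # (b) drop an edge contained in some other edge
        for i, e in enumerate(edges):
            if any(i != j and e <= f for j, f in enumerate(edges)):
                edges.pop(i)
                changed = True
                break
    return not edges
```

The triangle {AB, BC, CA} is cyclic, yet adding the edge ABC makes the whole hypergraph α-acyclic, since the reduction can then absorb the small edges into the big one.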
Integrity Checking in a Logic-Oriented ER Model
Query Optimization Techniques Utilizing Path Indexes in Object-Oriented Database Systems We propose query optimization techniques that fully utilize the advantages of path indexes in object-oriented database systems. Although path indexes provide an efficient access to complex objects, little research has been done on query optimization that fully utilizes path indexes. We first devise a generalized index intersection technique, adapted to the structure of the path index extended from conventional indexes, for utilizing multiple (path) indexes to access each class in a query. We...
A Graphical Query Language Based on an Extended E-R Model
The Object Flow Model: A Formal Framework for Describing the Dynamic Construction, Destruction and Interaction of Complex Objects This research complements active object-oriented database management systems by providing a formal, yet conceptually-natural model for complex object construction and destruction. The Object Flow Model (OFM), introduced in this paper, assumes an object-oriented database for the rich structural description of objects and for the specification of methods to manipulate objects. The OFM contributes a third component, the Object Flow Diagram (OFD), which provides a visual formalism to describe how multiple objects and events can actively invoke processing steps, how objects can become part of progressively more complex objects, and how complex objects can be picked apart. The OFD thus provides an invocation mechanism that is more general than a single message and a processing mechanism that may invoke multiple methods (so long as they apply to either the input or output objects). The development of the OFD was influenced by conceptual modeling languages and discrete event simulation languages and the formal semantics of the OFD is based on work in deductive databases.
Behavioural Constraints Using Events
A visual framework for modelling with heterogeneous notations This paper presents a visual framework for organizing models of systems which allows a mixture of notations, diagrammatic or text-based, to be used. The framework is based on the use of templates which can be nested and sometimes flattened. It is modular and can be used to structure the constraint space of the system, making it scalable with appropriate tool support. It is also flexible and extensible: users can choose which notations to use, mix them and add new notations or templates. The goal of this work is to provide more intuitive and expressive languages and frameworks to support the construction and presentation of rich and precise models.
Enhancing compositional reachability analysis with context constraints Compositional techniques have been proposed for traditional reachability analysis in order to introduce modularity and to control the state explosion problem. While modularity has been achieved, state explosion is still a problem. Indeed, this problem may even be exacerbated as a locally minimised subsystem may contain many states and transitions forbidden by its context or environments. This paper presents a method to alleviate this problem effectively by including context constraints in local subsystem minimisation. The global behaviour generated using the method is observationally equivalent to that generated by compositional reachability analysis without the inclusion of context constraints. Context constraints, specified as interface processes, are restrictions imposed by the environment on subsystem behaviour. The minimisation produces a simplified machine that describes the behaviour of the subsystem constrained by its context. This machine can also be used as a substitute for the original subsystem in the subsequent steps of the compositional reachability analysis. Interface processes capturing context constraints can be specified by users or automatically constructed using a simple algorithm. The concepts in the paper are illustrated with a client/server system.
MetaEdit+: A Fully Configurable Multi-User and Multi-Tool CASE and CAME Environment Computer Aided Software Engineering (CASE) environments have spread at a lower pace than expected. One reason for this is the immaturity of existing environments in supporting development in-the-large and by-many and their inability to address the varying needs of the software developers. In this paper we report on the development of a next generation CASE environment called MetaEdit+. The environment seeks to overcome all the above deficiencies, but in particular pays attention to catering for the varying needs of the software developers. MetaEdit+ is a multi-method, multi-tool platform for both CASE and Computer Aided Method Engineering (CAME). As a CASE tool it establishes a versatile and powerful multi-tool environment which enables flexible creation, maintenance, manipulation, retrieval and representation of design information among multiple developers. As a CAME environment it offers an easy-to-use yet powerful environment for method specification, integration, management and re-use. The paper explains the motivation for developing MetaEdit+, its design goals and philosophy and discusses the functionality of the CAME tools.
Domain-Specific Automatic Programming Domain knowledge is crucial to an automatic programming system and the interaction between domain knowledge and programming at the current time. The NIX project at Schlumberger-Doll Research has been investigating this issue in the context of two application domains related to oil well logging. Based on these experiments we have developed a framework for domain-specific automatic programming. Within the framework, programming is modeled in terms of two activities, formalization and implementation, each of which transforms descriptions of the program as it proceeds through intermediate states of development. The activities and transformations may be used to characterize the interaction of programming knowledge and domain knowledge in an automatic programming system.
Extending the Entity-Relationship Approach for Dynamic Modeling Purposes
Logarithmical hopping encoding: a low computational complexity algorithm for image compression LHE (logarithmical hopping encoding) is a computationally efficient image compression algorithm that exploits the Weber-Fechner law to encode the error between colour component predictions and the actual value of such components. More concretely, for each pixel, luminance and chrominance predictions are calculated as a function of the surrounding pixels and then the error between the predictions and the actual values is logarithmically quantised. The main advantage of LHE is that although it is capable of achieving a low-bit rate encoding with high quality results in terms of peak signal-to-noise ratio (PSNR) and image quality metrics with full-reference (FSIM) and non-reference (blind/referenceless image spatial quality evaluator), its time complexity is O(n) and its memory complexity is O(1). Furthermore, an enhanced version of the algorithm is proposed, where the output codes provided by the logarithmical quantiser are used in a pre-processing stage to estimate the perceptual relevance of the image blocks. This allows the algorithm to downsample the blocks with low perceptual relevance, thus improving the compression rate. The performance of LHE is especially remarkable when the bit per pixel rate is low, showing much better quality, in terms of PSNR and FSIM, than JPEG and slightly lower quality than JPEG-2000 but being more computationally efficient.
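The Weber-Fechner-style quantisation of prediction errors can be sketched as follows; the level count and the logarithmically spaced hop magnitudes are hypothetical illustrations, not the actual LHE hop table.

```python
def log_quantize(error, levels=8, max_err=255):
    """Map a signed prediction error to one of a few logarithmically
    spaced 'hops'; small errors get fine resolution, large ones coarse."""
    sign = 1 if error >= 0 else -1
    mag = min(abs(error), max_err)
    # hop magnitudes spaced logarithmically between 1 and max_err
    hops = [max_err ** (k / (levels - 1)) for k in range(levels)]
    # pick the hop whose magnitude is closest to the error magnitude
    idx = min(range(levels), key=lambda k: abs(hops[k] - mag))
    return sign * idx, sign * hops[idx]     # (code, reconstructed error)
```

Because hop spacing grows with magnitude, the quantisation error stays roughly proportional to the signal, which is what the Weber-Fechner law predicts the eye tolerates.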
Report from the Joint W3C/IETF URI Planning Interest Group: Uniform Resource Identifiers (URIs), URLs, and Uniform Resource Names (URNs): Clarifications and Recommendations
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.104357, 0.100092, 0.100092, 0.100092, 0.050057, 0.025039, 0.004495, 0.000049, 0.000021, 0.000004, 0.000001, 0, 0, 0
A model driven architecture approach for user interface generation focused on content personalization Model Driven Architecture (MDA) has gained attention from the human-computer interface community because of its capability of code generation from abstract models and transformations. MDA approaches and tools in this context usually include the personalization of interface design elements (such as I/O fields, screen resolution, screen size, and so on) based on some information about the context of use. However, to really achieve personalization, it is important to consider not only the container but also the content, that is, which information should be provided in each situation. This paper presents an MDA approach in this direction. Our goal is to define during user interface design what information should be personalized for a specific domain considering the context information. To address this goal, a context model and a domain ontology are used as central elements for model design and transformations.
On Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema.In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
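A minimal sketch of the move/tabu/aspiration machinery on a single-constraint 0/1 knapsack (flip moves, fixed tenure, value-based aspiration); this omits the paper's advanced-level strategies, probabilistic measures, and target analysis.

```python
def tabu_knapsack(values, weights, capacities, iters=200, tenure=5):
    """Tabu search sketch for a 0/1 multiconstraint knapsack.
    One variable is flipped per move; a recently flipped variable is
    tabu unless flipping it beats the best value found (aspiration)."""
    n = len(values)
    def feasible(sol):
        return all(sum(w[i] * sol[i] for i in range(n)) <= c
                   for w, c in zip(weights, capacities))
    def value(sol):
        return sum(values[i] * sol[i] for i in range(n))
    x = [0] * n
    best, best_val = x[:], 0
    tabu = {}                        # variable index -> iteration it stays tabu until
    for t in range(iters):
        candidates = []
        for i in range(n):
            y = x[:]
            y[i] ^= 1                # flip move
            if not feasible(y):
                continue
            v = value(y)
            if tabu.get(i, -1) >= t and v <= best_val:
                continue             # tabu, and no aspiration override
            candidates.append((v, i, y))
        if not candidates:
            break
        v, i, y = max(candidates)    # best admissible move (may be downhill)
        x = y
        tabu[i] = t + tenure
        if v > best_val:
            best, best_val = y[:], v
    return best, best_val
```

On the toy instance below (capacity 8, weights 5/4/3), the search escapes the greedy first pick and reaches the optimum {item 0, item 2} of value 14.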
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Constructing specifications by combining parallel elaborations An incremental approach to construction is proposed, with the virtue of offering considerable opportunity for mechanized support. Following this approach one builds a specification through a series of elaborations that incrementally adjust a simple initial specification. Elaborations perform both refinements, adding further detail, and adaptations, retracting oversimplifications and tailoring approximations to the specifics of the task. It is anticipated that the vast majority of elaborations can be concisely described to a mechanism that will then perform them automatically. When elaborations are independent, they can be applied in parallel, leading to diverging specifications that must later be recombined. The approach is intended to facilitate comprehension and maintenance of specifications, as well as their initial construction.
Representing open requirements with a fragment-based specification The paper describes and evaluates an alternative representation scheme for software applications in which the requirements are poorly understood or dynamic (i.e., open). The discussion begins with a classification of requirements specification properties and their relationship to the software process. Emphasis is placed on the representation schemes that are most appropriate for projects with open requirements in a flexible development setting. Fragment-based specifications, which capture a conceptual model of the product under development, are recommended for such applications. The paper describes an environment, in production use since 1980, that employs this form of specification. Evaluations of the environment's economic benefit and of the specification scheme's properties follow. A final section contains observations about the nature of future software applications and the environments necessary to support their development and maintenance
Distributed Intelligent Agents In Retsina, the authors have developed a distributed collection of software agents that cooperate asynchronously to perform goal-directed information retrieval and integration for supporting a variety of decision-making tasks. Examples for everyday organizational decision making and financial portfolio management demonstrate its effectiveness.
Generating, integrating, and activating thesauri for concept-based document retrieval A blackboard-based document management system that uses a neural network spreading-activation algorithm which lets users traverse multiple thesauri is discussed. Guided by heuristics, the algorithm activates related terms in the thesauri and converges on the most pertinent concepts. The system provides two control modes: a browsing module and an activation module that determine the sequence of operations. With the browsing module, users have full control over which knowledge sources to browse and what terms to select. The system's query formation; the retrieving, ranking and selection of documents; and thesaurus activation are described.
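The spreading-activation step over linked thesaurus terms can be sketched as below; the decay factor, firing threshold, and example links are invented for illustration and are not the system's actual heuristics.

```python
def spread_activation(graph, seeds, decay=0.5, iters=3, fire_threshold=0.1):
    """Spreading activation over a weighted term graph (thesaurus links).
    Each node above the firing threshold fires once, pushing decayed
    activation to its neighbours; returns terms ranked by activation."""
    act = dict(seeds)                       # term -> current activation
    fired = set()
    for _ in range(iters):
        pulses = {}
        for node, a in list(act.items()):
            if a < fire_threshold or node in fired:
                continue
            fired.add(node)
            for nbr, w in graph.get(node, []):
                pulses[nbr] = pulses.get(nbr, 0.0) + a * w * decay
        for nbr, p in pulses.items():
            act[nbr] = act.get(nbr, 0.0) + p
    return sorted(act.items(), key=lambda kv: -kv[1])
```

Starting from a single seed term, activation fans out along weighted links and decays with distance, so nearby, strongly linked concepts end up ranked highest.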
Document ranking and the vector-space model Efficient and effective text retrieval techniques are critical in managing the increasing amount of textual information available in electronic form. Yet text retrieval is a daunting task because it is difficult to extract the semantics of natural language texts. Many problems must be resolved before natural language processing techniques can be effectively applied to a large collection of texts. Most existing text retrieval techniques rely on indexing keywords. Unfortunately, keywords or index terms alone cannot adequately capture the document contents, resulting in poor retrieval performance. Yet keyword indexing is widely used in commercial systems because it is still the most viable way by far to process large amounts of text. Using several simplifications of the vector-space model for text retrieval queries, the authors seek the optimal balance between processing efficiency and retrieval effectiveness as expressed in relevant document rankings
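A minimal version of the keyword-indexed vector-space ranking the abstract discusses: raw term-frequency times inverse-document-frequency weights and cosine similarity. Real systems use more refined weighting and normalization schemes; this is only a sketch.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build sparse tf-idf vectors for a list of tokenized documents."""
    n = len(docs)
    df = Counter(t for d in docs for t in set(d))        # document frequency
    idf = {t: math.log(n / df[t]) for t in df}
    return [{t: c * idf[t] for t, c in Counter(d).items()} for d in docs]

def cosine(u, v):
    """Cosine similarity between two sparse vectors (dicts)."""
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def rank(query, docs):
    """Rank document indexes by similarity to the tokenized query.
    The query is folded into the collection when computing idf, a
    simplification that avoids unseen-term corner cases."""
    vecs = tfidf_vectors(docs + [query])
    qv, dvs = vecs[-1], vecs[:-1]
    return sorted(range(len(docs)), key=lambda i: -cosine(qv, dvs[i]))
```

For a query sharing no terms with a document, the similarity is zero, so such documents always fall to the bottom of the ranking.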
Tolerant planning and negotiation in generating coordinated movement plans in an automated factory Plan robustness is important for real world applications where modelling imperfections often result in execution deviations. The concept of tolerant planning is suggested as one of the ways to build robust plans. Tolerant planning achieves this aim by being tolerant of an agent's own execution deviations. When applied to multi-agent domains, it has the additional characteristic of being tolerant of other agents' deviant behaviour. Tolerant planning thus defers dynamic replanning until execution errors become excessive. The underlying strategy is to provide more than ample resources for agents to achieve their goals. Such redundancies aggravate the resource contention problem. To counter this, the iterative negotiation mechanism is suggested. It requires agents to be skillful in negotiating with other agents to resolve conflicts in such a way as to minimize compromising one's own tolerances and yet being benevolent in helping others find a feasible plan.
Automated assistance for conflict resolution in multiple perspective systems analysis and operation Interest in developing systems from multiple system perspectives, both in representational form and in semantic content, is becoming more commonplace. However, automated support for the tasks involved is relatively scarce. In this paper, we describe our ongoing effort to provide a single networked server interface to a variety of multiple perspective systems analysis tools. We illustrate our tool suite with two types of systems analysis: intra-organizational and inter-organizational. 1. Multiple Perspective System Analysis Researchers from a variety of backgrounds have converged on a single approach to representing development information: multiple perspectives. From the initial elicitation of user requirements to the tracking of multiple program versions, multiple representations have become commonplace throughout the life-cycle. While each aspect of the life-cycle has different needs, in all cases multiple product perspectives provide common benefits: version control, concurrent development, and increased reuse. Increasingly, researchers recognize the need for not only multiple product versions, but multiple representational forms (e.g., Petri nets and state transition diagrams), as well as multiple original sources (e.g., user requirements and management requirements). While multiple perspectives are useful throughout the life-cycle, this paper focuses on the early stages of development, specifically analysis of system requirements. In addition to multiple requirements perspectives, we also want to include user goals and preferences. In fact, our users include all stakeholders: those agents who affect or are affected by the eventual artifact. By considering the origination of design goals, one can assist the negotiation between interacting goals as well as their relaxation against problem constraints. We call this paradigm multiple perspective systems analysis.
Successful strategies for user participation in systems development Past MIS research has indicated a mixed relationship between user participation and user satisfaction with system development projects, suggesting that user participation is not equally effective in all situations. This has led researchers to investigate the contexts within which user participation can be used to improve user satisfaction. This study builds on this past body of research by examining the relationship between specific user participative behaviors and user satisfaction in different contextual situations in order to identify the most successful participative behaviors. To do this, data were collected from 151 independent system development projects in eight different organizations. The context of development was described by two factors--task complexity and system complexity. As suggested in the literature, the combination of these two contextual factors determines the need for user participation. The relationship between specific participative behaviors and user satisfaction was then examined where the need for participation was high and those results were compared with situations with a lower need for participation. Not all participative behaviors were equally effective in all situations. Depending on the level of task complexity and system complexity, some user participative behaviors resulted in improved user satisfaction, while others had no relationship with satisfaction. The results add to earlier studies by identifying those specific user participative behaviors most beneficial under different contexts. The implications apply to both practitioners involved in the development of systems and academicians seeking to explain where and how user participation should be used. Strategies based on the results are suggested for the most appropriate involvement for users during system development.
Seven basic principles of software engineering This paper attempts to distill the large number of individual aphorisms on good software engineering into a small set of basic principles. Seven principles have been determined which form a reasonably independent and complete set. These are: (1) manage using a phased life-cycle plan; (2) perform continuous validation; (3) maintain disciplined product control; (4) use modern programming practices; (5) maintain clear accountability for results; (6) use better and fewer people; (7) maintain a commitment to improve the process. The overall rationale behind this set of principles is discussed, followed by a more detailed discussion of each of the principles.
A Conceptual Framework for Requirements Engineering. A framework for assessing research and practice in requirements engineering is proposed. The framework is used to survey state of the art research contributions and practice. The framework considers a task activity view of requirements, and elaborates different views of requirements engineering (RE) depending on the starting point of a system development. Another perspective is to analyse RE from different conceptions of products and their properties. RE research is examined within this framework and then placed in the context of how it extends current system development methods and systems analysis techniques.
A Method for Refining Atomicity in Parallel Algorithms Parallel programs are described as action systems. These are basically nondeterministic do-od programs that can be executed in both a sequential and a parallel fashion. A method for refining the atomicity of actions in a parallel program is described. This allows derivation of parallel programs by stepwise refinement, starting from an initial high-level and sequential program and ending in a parallel program for shared memory or message passing architectures. A calculus of refinements is used as a framework for the derivation method. The notion of correctness being preserved by the refinements is total correctness. The method is especially suited for derivation of parallel algorithms for MIMD-type multiprocessor systems.
Fast Convolution We present a very simple and fast algorithm to compute the convolution of an arbitrary sequence x with a sequence of a specific type, a. The sequence a is any linear combination of polynomials, exponentials and trigonometric terms. The number of steps for computing the convolution depends on a certain complexity of a and not on its length, thus making it feasible to convolve a sequence with very large kernels fast. Computing the convolution (correlation, filtering) of a sequence x together with a fixed sequence a is one of the ubiquitous operations in graphics, image and signal processing. Often the sequence a is a polynomial, exponential or trigonometric function sampled at discrete points or a piecewise sum of such terms, such as splines.
Hardware/software codesign: a perspective No abstract available.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
1.016126
0.012038
0.011837
0.011837
0.011837
0.011837
0.006214
0.004062
0.002042
0.0003
0.000011
0
0
0
Lossless Compression of DNA Microarray Images with Inversion Coder DNA microarray images are used to identify and monitor gene activity, or expression. In this study, we investigate the performance of the inversion coding technique in lossless compression of DNA microarray images. We show that inversion coding outperforms commonly used entropy coders and generic image compressors.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time-wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore power-related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
A Feedback-Based Multi-Classifier System The multi-classifier approach is a widespread strategy used in many difficult classification problems. Traditionally, in a multi-classifier approach, a classification decision based on the combination of a multitude of classifiers is expected to outperform the decisions of each individual classifier. Therefore, in multi-classifier systems, the potential of the whole set of classifiers is only exploited at the level of the final decision, in which the contributions of all classifiers are used by combining their individual decisions. This paper shows a feedback-based multi-classifier system in which the multi-classifier approach is used not only for providing the final decision, but also for improving the performance of the individual classifiers, by means of a closed-loop strategy. The experimental tests have been carried out in the field of hand-written numeral recognition. The results demonstrate the effectiveness of the proposed approach and its superiority with respect to the traditional approach.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Procedures and atomicity refinement The introduction of an early return from a (remote) procedure call can increase the degree of parallelism in a parallel or distributed algorithm modeled by an action system. We define a return statement for procedures in an action systems framework and show that it corresponds to carrying out an atomicity refinement.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Distributed Field Estimation Using Sensor Networks Based on H∞ Consensus Filtering.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the timewise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Event-Triggered State Estimation For Markovian Jumping Impulsive Neural Networks With Interval Time-Varying Delays This paper investigates the event-triggered state estimation problem of Markovian jumping impulsive neural networks with interval time-varying delays. The purpose is to design a state estimator to estimate system states through available output measurements. In the neural networks, there are a set of modes, which are determined by a Markov chain. A Markovian jumping time-delay impulsive neural networks model is employed to describe the event-triggered scheme and the network-related behaviour, such as transmission delay, data package dropout and disorder. The proposed event-triggered scheme is used to determine whether the sampled state information should be transmitted. The discrete delays are assumed to be time-varying and belong to a given interval, which means that the lower and upper bounds of the interval time-varying delays are available. First, we design a state observer to estimate the neuron states. Second, based on a novel Lyapunov-Krasovskii functional (LKF) with triple-integral terms and using an improved inequality, several sufficient conditions are derived. The derived conditions are formulated in terms of a set of linear matrix inequalities, under which the estimation error system is globally asymptotically stable in the mean square sense. Finally, numerical examples are given to show the effectiveness and superiority of the results.
On Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the timewise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Semantic and structural delineation of market scenarios by the event bush method Considered is the retrospective application of a new method of knowledge engineering, the event bush, to a real collision that took place in the North-American market of cool sparkling drinks in the 1980s. The paper briefly introduces the modeled task, provides an outline of the method, presents the results of modeling and discusses them, stressing new opportunities for market analysis and directions of further work. The results of modeling provide ground to reasonably expect improvement of consulting and advising services with application of the event bush method.
On Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the timewise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Completeness and Consistency Analysis on Requirements of Distributed Event-Driven Systems For many event-driven systems, completeness and consistency (C&C) are the most important characteristics of the software requirements. This paper presents a systematic approach to perform C&C analysis on the requirements, and an intelligent approach to correcting the inconsistencies identified. A formal scenario model is used to represent requirements such that scenario elements of condition guards, events and actions can be extracted automatically. Condition guards associated with the same event are constructed into a tree on which to perform completeness analysis and supplement missing specification. Consistency analysis focuses on three types of inconsistencies and is performed according to the intra-relations among condition guards and inter-relations with actions. An algorithm of inconsistency correction is proposed to guide eliminating the inconsistencies identified interactively. Finally, we provide an example of a car-alarm system to illustrate the proposed process and techniques.
On Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the timewise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Modified Rice-Golomb Code for Predictive Coding of Integers with Real-valued Predictions Rice-Golomb codes are widely used in practice to encode integer-valued prediction residuals. However, in lossless coding of audio, image, and video, especially those involving linear predictors, the predictions are from the real domain. In this paper, we have modified and extended the Rice-Golomb code so that it can operate at fractional precision to efficiently exploit the real-valued predictions. Coding at arbitrarily small precision allows the residuals to be modeled with the Laplace distribution instead of its discrete counterpart, namely the two-sided geometric distribution (TSGD). Unlike the Rice-Golomb code, which maps equally probable opposite-signed residuals to different integers, the proposed coding scheme is symmetric in the sense that, at arbitrarily small precision, it assigns codewords of equal length to equally probable residual intervals. The symmetry of both the Laplace distribution and the code facilitates the analysis of the proposed coding scheme to determine the average code-length and the optimal value of the associated coding parameter. Experimental results demonstrate that the proposed scheme, by making efficient use of real-valued predictions, achieves better compression as compared to the conventional scheme.
Optimal prefix codes for sources with two-sided geometric distributions A complete characterization of optimal prefix codes for off-centered, two-sided geometric distributions of the integers is presented. These distributions are often encountered in lossless image compression applications, as probabilistic models for image prediction residuals. The family of optimal codes described is an extension of the Golomb codes, which are optimal for one-sided geometric distributions. The new family of codes allows for encoding of prediction residuals at a complexity similar to that of Golomb codes, without recourse to the heuristic approximations frequently used when modifying a code designed for nonnegative integers so as to apply to the encoding of any integer. Optimal decision rules for choosing among a lower complexity subset of the optimal codes, given the distribution parameters, are also investigated, and the relative redundancy of the subset with respect to the full family of optimal codes is bounded
Optimal source codes for geometrically distributed integer alphabets (Corresp.) Let P(i) = (1 − θ)θ^i be a probability assignment on the set of nonnegative integers, where θ is an arbitrary real number, 0 < θ < 1. We show that an optimal binary source code for this probability assignment is constructed as follows. Let l be the integer satisfying θ^l + θ^(l+1) ≤ 1 < θ^l + θ^(l−1) and represent each nonnegative integer i as i = lj + r, where j = ⌊i/l⌋ is the integer part of i/l and r = i mod l. Encode j by a unary code (i.e., j zeros followed by a single one), and encode r by a Huffman code, using codewords of length ⌊log₂ l⌋ for r < 2^(⌊log₂ l⌋+1) − l, and length ⌊log₂ l⌋ + 1 otherwise. An optimal code for the nonnegative integers is the concatenation of those two codes.
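The construction in the abstract above is concrete enough to sketch in code. The following is a minimal illustration (function names are ours; the truncated-binary encoding of the remainder is the standard Golomb-code convention the abstract's Huffman step amounts to):

```python
def golomb_parameter(theta: float) -> int:
    """Smallest l with theta**l + theta**(l+1) <= 1, per the construction above."""
    l = 1
    while theta**l + theta**(l + 1) > 1:
        l += 1
    return l

def golomb_encode(i: int, l: int) -> str:
    """Unary code for j = i // l, then a truncated binary code for r = i % l."""
    j, r = divmod(i, l)
    unary = "0" * j + "1"               # j zeros followed by a single one
    k = l.bit_length() - 1              # floor(log2 l)
    threshold = 2 ** (k + 1) - l        # first `threshold` remainders get k bits
    if r < threshold:
        tail = format(r, "b").zfill(k) if k > 0 else ""
    else:
        tail = format(r + threshold, "b").zfill(k + 1)
    return unary + tail
```

For θ = 1/2 the parameter is l = 1 and the code degenerates to plain unary; larger θ (flatter distributions) yields larger l and shorter unary prefixes.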
Dynamic Huffman coding This note shows how to maintain a prefix code that remains optimum as the weights change. A Huffman tree with nonnegative integer weights can be represented in such a way that any weight w at level l can be increased or decreased by unity in O(l) steps, preserving minimality of the weighted path length. One-pass algorithms for file compression can be based on such a representation.
Sources Which Maximize the Choice of a Huffman Coding Tree
Fast Constant Division Routines When there is no division circuit available, the arithmetical function of division is normally performed by a library subroutine. The library subroutine normally allows both the divisor and the dividend to be variables, and requires the execution of hundreds of assembly instructions. This correspondence provides a fast algorithm for performing the integer division of a variable by a predetermined divisor. Based upon this algorithm, an efficient division routine has been constructed for each odd divisor up to 55. These routines may be implemented in assembly languages, in microcodes, and in special-purpose circuits.
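The correspondence's shift-and-add routines for particular odd divisors are not reproduced in the abstract; as an illustrative sketch of the general idea behind fast division by a known constant, here is the standard multiply-by-precomputed-reciprocal construction (a well-known alternative technique, not the paper's own algorithm): choosing s = N + ⌈log₂ d⌉ and m = ⌊2^s / d⌋ + 1 makes (x·m) >> s equal to x // d for every N-bit x, because the rounding error m·d − 2^s is at most d ≤ 2^⌈log₂ d⌉.

```python
def reciprocal(d: int, bits: int = 32):
    """Precompute (m, s) so that (x * m) >> s == x // d for all 0 <= x < 2**bits."""
    s = bits + (d - 1).bit_length()     # bits + ceil(log2 d)
    m = (1 << s) // d + 1               # ceiling of 2**s / d
    return m, s

def div_const(x: int, d: int, bits: int = 32):
    """Integer division by the constant d using one multiply and one shift."""
    m, s = reciprocal(d, bits)
    return (x * m) >> s
```

In hardware or assembly, m is computed once at design time, so the runtime cost per division is a single widening multiply and a shift rather than a subroutine of hundreds of instructions.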
On ordering color maps for lossless predictive coding Linear predictive techniques perform poorly when used with color-mapped images where pixel values represent indices that point to color values in a look-up table. Reordering the color table, however, can lead to a lower entropy of prediction errors. In this paper, we investigate the problem of ordering the color table such that the absolute sum of prediction errors is minimized. The problem turns out to be intractable, even for the simple case of one-dimensional (1-D) prediction schemes. We give two heuristic solutions for the problem and use them for ordering the color table prior to encoding the image by lossless predictive techniques. We demonstrate that significant improvements in actual bit rates can be achieved over dictionary-based coding schemes that are commonly employed for color-mapped images
A calculus of refinements for program derivations A calculus of program refinements is described, to be used as a tool for the step-by-step derivation of correct programs. A derivation step is considered correct if the new program preserves the total correctness of the old program. This requirement is expressed as a relation of (correct) refinement between nondeterministic program statements. The properties of this relation are studied in detail. The usual sequential statement constructors are shown to be monotone with respect to this relation and it is shown how refinement between statements can be reduced to a proof of total correctness of the refining statement. A special emphasis is put on the correctness of replacement steps, where some component of a program is replaced by another component. A method by which assertions can be added to statements to justify replacements in specific contexts is developed. The paper extends the weakest precondition technique of Dijkstra to proving correctness of larger program derivation steps, thus providing a unified framework for the axiomatic, the stepwise refinement and the transformational approach to program construction and verification.
Distributed data structures in Linda A distributed data structure is a data structure that can be manipulated by many parallel processes simultaneously. Distributed data structures are the natural complement to parallel program structures, where a parallel program (for our purposes) is one that is made up of many simultaneously active, communicating processes. Distributed data structures are impossible in most parallel programming languages, but they are supported in the parallel language Linda and they are central to Linda programming style. We outline Linda, then discuss some distributed data structures that have arisen in Linda programming experiments to date. Our intent is neither to discuss the design of the Linda system nor the performance of Linda programs, though we do comment on both topics; we are concerned instead with a few of the simpler and more basic techniques made possible by a language model that, we argue, is subtly but fundamentally different in its implications from most others.This material is based upon work supported by the National Science Foundation under Grant No. MCS-8303905. Jerry Leichter is supported by a Digital Equipment Corporation Graduate Engineering Education Program fellowship.
Class-based n-gram models of natural language We address the problem of predicting a word from previous words in a sample of text. In particular, we discuss n-gram models based on classes of words. We also discuss several statistical algorithms for assigning words to classes based on the frequency of their co-occurrence with other words. We find that we are able to extract classes that have the flavor of either syntactically based groupings or semantically based groupings, depending on the nature of the underlying statistics.
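The class-based bigram factorization the abstract above refers to, p(w_i | w_{i−1}) ≈ p(w_i | c(w_i)) · p(c(w_i) | c(w_{i−1})), can be sketched as follows. This is a toy maximum-likelihood version with a given word-to-class map; the paper's contribution of *inducing* the classes statistically is not reproduced here.

```python
from collections import Counter

def class_bigram_model(tokens, word_class):
    """Estimate p(w | c(w)) and p(c' | c) by maximum likelihood from a token list."""
    emit = Counter()         # (class, word) counts
    trans = Counter()        # (class, class) counts for adjacent token pairs
    class_count = Counter()
    for prev, cur in zip(tokens, tokens[1:]):
        trans[(word_class[prev], word_class[cur])] += 1
    for w in tokens:
        c = word_class[w]
        emit[(c, w)] += 1
        class_count[c] += 1

    def p(cur, prev):
        """Class-based bigram probability p(cur | prev)."""
        c_prev, c_cur = word_class[prev], word_class[cur]
        p_emit = emit[(c_cur, cur)] / class_count[c_cur]
        total = sum(n for (c, _), n in trans.items() if c == c_prev)
        p_trans = trans[(c_prev, c_cur)] / total if total else 0.0
        return p_emit * p_trans

    return p
```

Because probabilities are shared across all words of a class, the model has far fewer parameters than a word bigram model, which is the point of the class-based approach for sparse data.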
Towards the Proper Integration of Extra-Functional Requirements In spite of the many achievements in software engineering, proper treatment of extra-functional requirements (also known as non-functional requirements) within the software development process is still a challenge to our discipline. The application of functionality-biased software development methodologies can lead to major contradictions in the joint modelling of functional and extra-functional requirements. Based on a thorough discussion on the nature of extra-functional requirements as well as on open issues in coping with them, this paper emphasizes the role of extra-functional requirements in the software development process. Particularly, a framework supporting the explicit integration of extra-functional requirements into a conventional phase-driven process model is proposed and outlined.
Manipulating and documenting software structures using SHriMP views An effective approach to program understanding involves browsing, exploring, and creating views that document software structures at different levels of abstraction. While exploring the myriad of relationships in a multi-million line legacy system, one can easily loose context. One approach to alleviate this problem is to visualize these structures using fisheye techniques. This paper introduces Simple Hierarchical Multi-Perspective views (SHriMPs). The SHriMP visualization technique has been incorporated into the Rigi reverse engineering system. This greatly enhances Rigi's capabilities for documenting design patterns and architectural diagrams that span multiple levels of abstraction. The applicability and usefulness of SHriMPs is illustrated with selected program understanding tasks.
Matching conceptual graphs as an aid to requirements re-use The types of knowledge used during requirements acquisition are identified and a tool to aid in this process, ReqColl (Requirements Collector) is introduced. The tool uses conceptual graphs to represent domain concepts and attempts to recognise new concepts through the use of a matching facility. The overall approach to requirements capture is first described and the approach to matching illustrated informally. The detailed procedure for matching conceptual graphs is then given. Finally ReqColl is compared to similar work elsewhere and some future research directions indicated.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.24
0.009138
0.007522
0.001643
0.000516
0.000066
0.000005
0
0
0
0
0
0
0
Workflow Modeling A discussion of workflow models and process description languages is presented. The relationship between data, function and coordination aspects of the process is discussed, and a claim is made that more than one model view (or representation) is needed in order to grasp the complexity of process modeling. The basis of a new model is proposed, showing that more expressive models can be built by supporting asynchronous events and batch activities, matched by powerful run-time support.
On Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
Where Do Operations Come From? A Multiparadigm Specification Technique We propose a technique to help people organize and write complex specifications, exploiting the best features of several different specification languages. Z is supplemented, primarily with automata and grammars, to provide a rigorous and systematic mapping from input stimuli to convenient operations and arguments for the Z specification. Consistency analysis of the resulting specification is based on the structural rules. The technique is illustrated by two examples, a graphical human-computer interface and a telecommunications system.
Towards an Automatic Integration of Statecharts The integration of statecharts is part of an integration methodology for object oriented views. Statecharts are the most important language for the representation of the behaviour of objects and are used in many object oriented modeling techniques, e.g. in UML ([23]). In this paper we focus on the situation where the behaviour of an object type is represented in several statecharts, which have to be integrated into a single statechart. The presented approach allows an automatic integration process but gives the designer possibilities to make own decisions to guide the integration process and to achieve qualitative design goals.
A Conceptual Graph Model for W3C Resource Description Framework With the aim of building a "Semantic Web", the content of the documents must be explicitly represented through metadata in order to enable contents-guided search. Our approach is to exploit a standard language (RDF, recommended by W3C) for expressing such metadata and to interpret these metadata in conceptual graphs (CG) in order to exploit querying and inferencing capabilities enabled by CG formalism. The paper presents our mapping of RDF into CG and its interest in the context of the semantic Web.
Procedures and atomicity refinement The introduction of an early return from a (remote) procedure call can increase the degree of parallelism in a parallel or distributed algorithm modeled by an action system. We define a return statement for procedures in an action systems framework and show that it corresponds to carrying out an atomicity refinement.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Stabilization by using artificial delays: An LMI approach. Static output-feedback stabilization for the nth order vector differential equations by using artificial multiple delays is considered. Under assumption of the stabilizability of the system by a static feedback that depends on the output and its derivatives up to the order n−1, a delayed static output-feedback is found that stabilizes the system. The conditions for the stability analysis of the resulting closed-loop system are given in terms of simple LMIs. It is shown that the LMIs are always feasible for appropriately chosen gains and small enough delays. Robust stability analysis in the presence of uncertain time-varying delays and stochastic perturbation of the system coefficients is provided. Numerical examples including chains of three and four integrators that are stabilized by static output-feedbacks with multiple delays illustrate the efficiency of the method.
Stability analysis of uncertain sampled-data systems with incremental delay using looped-functionals The robust stability analysis of asynchronous and uncertain sampled-data systems with constant incremental input delay is addressed in the looped-functional framework. These functionals have been shown to be suitable for the analysis of impulsive systems as they allow one to express discrete-time stability conditions in an affine way, enabling then the consideration of uncertain and time-varying systems. The stability conditions are obtained by first reformulating the sampled-data system as an impulsive system, and by then considering a tailored looped-functional along with Wirtinger's inequality, a recently introduced inequality that has been shown to be less conservative than Jensen's inequality. Several examples are given for illustration.
New stability conditions for systems with distributed delays In the present paper, sufficient conditions for the exponential stability of linear systems with infinite distributed delays are presented. Such systems arise in population dynamics, in traffic flow models, in networked control systems, in PID controller design and in other engineering problems. In the early Lyapunov-based analysis of systems with distributed delays (Kolmanovskii & Myshkis, 1999), the delayed terms were treated as perturbations, where it was assumed that the system without the delayed term is asymptotically stable. Later, for the case of constant kernels and finite delays, less conservative conditions were derived under the assumption that the corresponding system with the zero-delay is stable (Chen & Zheng, 2007). We will generalize these results to the infinite delay case by extending the corresponding Jensen's integral inequalities and Lyapunov-Krasovskii constructions. Our main challenge is the stability conditions for systems with gamma-distributed delays, where the delay is stabilizing, i.e. the corresponding system with the zero-delay as well as the system without the delayed term are not asymptotically stable. Here the results are derived by using augmented Lyapunov functionals. Polytopic uncertainties in the system matrices can be easily included in the analysis. Numerical examples illustrate the efficiency of the method. Thus, for the traffic flow model on the ring, where the delay is stabilizing, the resulting stability region is close to the theoretical one found in Michiels, Morarescu, and Niculescu (2009) via the frequency domain analysis.
Wirtinger-based integral inequality: Application to time-delay systems In the last decade, the Jensen inequality has been intensively used in the context of time-delay or sampled-data systems since it is an appropriate tool to derive tractable stability conditions expressed in terms of linear matrix inequalities (LMIs). However, it is also well-known that this inequality introduces an undesirable conservatism in the stability conditions and looking at the literature, reducing this gap is a relevant issue and always an open problem. In this paper, we propose an alternative inequality based on the Fourier Theory, more precisely on the Wirtinger inequalities. It is shown that this resulting inequality encompasses the Jensen one and also leads to tractable LMI conditions. In order to illustrate the potential gain of employing this new inequality with respect to the Jensen one, two applications on time-delay and sampled-data stability analysis are provided.
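For reference, the Wirtinger-based bound discussed in the abstract above is commonly stated as follows (transcribed from the time-delay literature in standard notation; R ≻ 0 is the weighting matrix and x a differentiable signal, as usually assumed):

```latex
\int_a^b \dot{x}^\top(u)\, R\, \dot{x}(u)\, \mathrm{d}u
  \;\ge\; \frac{1}{b-a}\,\Omega_0^\top R\,\Omega_0
  \;+\; \frac{3}{b-a}\,\Omega_1^\top R\,\Omega_1,
\qquad
\Omega_0 = x(b)-x(a), \quad
\Omega_1 = x(b)+x(a)-\frac{2}{b-a}\int_a^b x(u)\,\mathrm{d}u .
```

Dropping the second (nonnegative) term recovers Jensen's inequality, which is why the Wirtinger-based bound is never weaker and generally less conservative.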
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
A semantics of multiple inheritance The aim of this paper is to present a clean semantics of multiple inheritance and to show that, in the context of strongly-typed, statically-scoped languages, a sound typechecking algorithm exists. Multiple inheritance is also interpreted in a broad sense: instead of being limited to objects, it is extended in a natural way to union types and to higher-order functional types. This constitutes a semantic basis for the unification of functional and object-oriented programming.
The Manchester prototype dataflow computer The Manchester project has developed a powerful dataflow processor based on dynamic tagging. This processor is large enough to tackle realistic applications and exhibits impressive speedup for programs with sufficient parallelism.
A lazy evaluator A different way to execute pure LISP programs is presented. It delays the evaluation of parameters and list structures without ever having to perform more evaluation steps than the usual method. Although the central idea can be found in earlier work this paper is of interest since it treats a rather well-known language and works out an algorithm which avoids full substitution. A partial correctness proof using Scott-Strachey semantics is sketched in a later section.
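The delaying mechanism the abstract above describes, in modern terms a call-by-need thunk that is forced at most once, can be illustrated outside LISP. This is a hypothetical Python rendering of the idea, not the paper's evaluator:

```python
class Thunk:
    """A suspended computation, evaluated at most once (call-by-need)."""
    def __init__(self, compute):
        self._compute = compute
        self._done = False
        self._value = None

    def force(self):
        if not self._done:
            self._value = self._compute()
            self._compute = None       # drop the closure once memoized
            self._done = True
        return self._value

def integers_from(n):
    """A conceptually infinite list (head, delayed tail); safe because the
    tail is a Thunk that is only built when forced."""
    return (n, Thunk(lambda: integers_from(n + 1)))

def take(k, stream):
    """Force just enough of the stream to collect its first k elements."""
    out = []
    while k > 0:
        head, tail = stream
        out.append(head)
        stream = tail.force()
        k -= 1
    return out
```

Memoization is what distinguishes this from plain call-by-name: forcing the same thunk twice performs the underlying computation only once, matching the paper's claim of never taking more evaluation steps than the usual method.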
Modelling information flow for organisations: A review of approaches and future challenges. Modelling is a classic approach to understanding complex problems that can be achieved diagrammatically to visualise concepts, and mathematically to analyse attributes of concepts. An organisation as a communicating entity is made up of constructs in which people can have access to information and speak to each other. Modelling information flow for organisations is a challenging task that enables analysts and managers to better understand how to: organise and coordinate processes, eliminate redundant information flows and processes, minimise the duplication of information and manage the sharing of intra- and inter-organisational information.
From Action Systems to Modular Systems Action systems are used to extend program refinement methods for sequential programs, as described in the refinement calculus, to parallel and reactive system refinement. They provide a general description of reactive systems, capable of modeling terminating, possibly aborting and infinitely repeating systems. We show how to extend the action system model to refinement of modular systems. A module may export and import variables, it may provide access procedures for other modules, and it may itself access procedures of other modules. Modules may have autonomous internal activity and may execute in parallel or in sequence. Modules may be nested within each other. They may communicate by shared variables, shared actions, a generalized form of remote procedure calls and by persistent data structures. Both synchronous and asynchronous communication between modules is supported. The paper shows how a single framework can be used for both the specification of large systems, the modular decomposition of the system into smaller units and the refinement of the modules into program modules that can be described in a standard programming language and executed on standard hardware.
A Software Development Environment for Improving Productivity
The navigation toolkit The problem
Analogical retrieval in reuse-oriented requirements engineering Computational mechanisms are presented for analogical retrieval of domain knowledge as a basis for intelligent tool-based assistance for requirements engineers. A first mechanism, called the domain matcher, retrieves object system models which describe key features for new problems. A second mechanism, called the problem classifier, reasons with analogical mappings inferred by the domain matcher to detect potential incompleteness, overspecification and inconsistencies in entered facts and requirements. Both mechanisms are embedded in AIR, a toolkit that provides co-operative reuse-oriented assistance for requirements engineers.
Cognitive Relaying With Transceiver Hardware Impairments Under Interference Constraints. In this letter, we analyze the performance of cognitive amplify-and-forward multirelay networks with active direct link in the presence of relay transceiver hardware impairments. Considering distortion noises on both interference and main data links, we derive tight closed-form outage probability expressions and their asymptotic behavior for partial relay selection (PRS) and opportunistic relay se...
1.2
0.04
0.028571
0.000429
0
0
0
0
0
0
0
0
0
0
Query log driven web search results clustering Different important studies in Web search results clustering have recently shown increasing performances motivated by the use of external resources. Following this trend, we present a new algorithm called Dual C-Means, which provides a theoretical background for clustering in different representation spaces. Its originality relies on the fact that external resources can drive the clustering process as well as the labeling task in a single step. To validate our hypotheses, a series of experiments are conducted over different standard datasets and in particular over a new dataset built from the TREC Web Track 2012 to take into account query logs information. The comprehensive empirical evaluation of the proposed approach demonstrates its significant advantages over traditional clustering and labeling techniques.
A comparison of extrinsic clustering evaluation metrics based on formal constraints There is a wide set of evaluation metrics available to compare the quality of text clustering algorithms. In this article, we define a few intuitive formal constraints on such metrics which shed light on which aspects of the quality of a clustering are captured by different metric families. These formal constraints are validated in an experiment involving human assessments, and compared with other constraints proposed in the literature. Our analysis of a wide range of metrics shows that only BCubed satisfies all formal constraints. We also extend the analysis to the problem of overlapping clustering, where items can simultaneously belong to more than one cluster. As Bcubed cannot be directly applied to this task, we propose a modified version of Bcubed that avoids the problems found with other metrics.
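BCubed, the metric the abstract above singles out as satisfying all the formal constraints, averages a per-item precision and recall. The version below is a compact implementation for hard (non-overlapping) clusterings, written from the standard definition rather than from the article's notation; the extended version for overlapping clustering is not covered here.

```python
def bcubed(clustering, gold):
    """BCubed precision and recall for a hard clustering.
    clustering, gold: dicts mapping each item to its cluster id / category id."""
    items = list(clustering)

    def avg(score):
        return sum(score(i) for i in items) / len(items)

    def precision_of(i):
        # Fraction of i's cluster that shares i's gold category.
        same_cluster = [j for j in items if clustering[j] == clustering[i]]
        return sum(gold[j] == gold[i] for j in same_cluster) / len(same_cluster)

    def recall_of(i):
        # Fraction of i's gold category that landed in i's cluster.
        same_cat = [j for j in items if gold[j] == gold[i]]
        return sum(clustering[j] == clustering[i] for j in same_cat) / len(same_cat)

    return avg(precision_of), avg(recall_of)
```

Putting everything in one cluster drives recall to 1 while precision drops, and singleton clusters do the reverse, which is exactly the trade-off the per-item averaging is designed to expose.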
A general evaluation measure for document organization tasks A number of key Information Access tasks -- Document Retrieval, Clustering, Filtering, and their combinations -- can be seen as instances of a generic {\em document organization} problem that establishes priority and relatedness relationships between documents (in other words, a problem of forming and ranking clusters). As far as we know, no analysis has been made yet on the evaluation of these tasks from a global perspective. In this paper we propose two complementary evaluation measures -- Reliability and Sensitivity -- for the generic Document Organization task which are derived from a proposed set of formal constraints (properties that any suitable measure must satisfy). In addition to be the first measures that can be applied to any mixture of ranking, clustering and filtering tasks, Reliability and Sensitivity satisfy more formal constraints than previously existing evaluation metrics for each of the subsumed tasks. Besides their formal properties, its most salient feature from an empirical point of view is their strictness: a high score according to the harmonic mean of Reliability and Sensitivity ensures a high score with any of the most popular evaluation metrics in all the Document Retrieval, Clustering and Filtering datasets used in our experiments.
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The mystery of the tower revealed: a non-reflective description of the reflective tower Abstract In an important series of papers [8, 9], Brian Smith has discussed the nature of programs that know about their text and the context in which they are executed. He called this kind of knowledge reflection. Smith proposed a programming language, called 3-LISP, which embodied such self-knowledge in the domain of metacircular interpreters. Every 3-LISP program is interpreted by a metacircular interpreter, also written in 3-LISP. This gives rise to a picture of an infinite tower of metacircular interpreters, each being interpreted by the one above it. Such a metaphor poses a serious challenge for conventional modes of understanding of programming languages. In our earlier work on reflection [4], we showed how a useful species of reflection could be modeled without the use of towers. In this paper, we give a semantic account of the reflective tower. This account is self-contained in the sense that it does not employ reflection to explain reflection.
Statecharts: A visual formalism for complex systems Abstract. We present a broad extension of the conventional formalism of state machines and state diagrams, that is relevant to the specification and design of complex discrete-event systems, such as multi-computer real-time systems, communication protocols and digital control units. Our diagrams, which we call statecharts, extend conventional state-transition diagrams with essentially three elements, dealing, respectively, with the notions of hierarchy, concurrency and communication. These transform the language of state diagrams into a highly structured and economical description language. Statecharts are thus compact and expressive (small diagrams can express complex behavior) as well as compositional and modular. When coupled with the capabilities of computerized graphics, statecharts enable viewing the description at different levels of detail, and make even very large specifications manageable and comprehensible. In fact, we intend to demonstrate here that statecharts counter many of the objections raised against conventional state diagrams, and thus appear to render specification by diagrams an attractive and plausible approach. Statecharts can be used either as a stand-alone behavioral description or as part of a more general design methodology that deals also with the system's other aspects, such as functional decomposition and data-flow specification. We also discuss some practical experience that was gained over the last three years in applying the statechart formalism to the specification of a particularly complex system.
A calculus of refinements for program derivations A calculus of program refinements is described, to be used as a tool for the step-by-step derivation of correct programs. A derivation step is considered correct if the new program preserves the total correctness of the old program. This requirement is expressed as a relation of (correct) refinement between nondeterministic program statements. The properties of this relation are studied in detail. The usual sequential statement constructors are shown to be monotone with respect to this relation and it is shown how refinement between statements can be reduced to a proof of total correctness of the refining statement. A special emphasis is put on the correctness of replacement steps, where some component of a program is replaced by another component. A method by which assertions can be added to statements to justify replacements in specific contexts is developed. The paper extends the weakest precondition technique of Dijkstra to proving correctness of larger program derivation steps, thus providing a unified framework for the axiomatic, the stepwise refinement and the transformational approach to program construction and verification.
Symbolic Model Checking Symbolic model checking is a powerful formal specification and verification method that has been applied successfully in several industrial designs. Using symbolic model checking techniques it is possible to verify industrial-size finite state systems. State spaces with up to 10^30 states can be exhaustively searched in minutes. Models with more than 10^120 states have been verified using special techniques.
2009 Data Compression Conference (DCC 2009), 16-18 March 2009, Snowbird, UT, USA
Voice as sound: using non-verbal voice input for interactive control We describe the use of non-verbal features in voice for direct control of interactive applications. Traditional speech recognition interfaces are based on an indirect, conversational model. First the user gives a direction and then the system performs certain operation. Our goal is to achieve more direct, immediate interaction like using a button or joystick by using lower-level features of voice such as pitch and volume. We are developing several prototype interaction techniques based on this idea, such as "control by continuous voice", "rate-based parameter control by pitch," and "discrete parameter control by tonguing." We have implemented several prototype systems, and they suggest that voice-as-sound techniques can enhance traditional voice recognition approach.
An ontological model of an information system An ontological model of an information system that provides precise definitions of fundamental concepts like system, subsystem, and coupling is proposed. This model is used to analyze some static and dynamic properties of an information system and to examine the question of what constitutes a good decomposition of an information system. Some of the major types of information system formalisms that bear on the authors' goals and their respective strengths and weaknesses relative to the model are briefly reviewed. Also articulated are some of the fundamental notions that underlie the model. Those basic notions are then used to examine the nature and some dynamics of system decomposition. The model's predictive power is discussed.
Characterizing plans as a set of constraints—the <I-N-OVA> model—a framework for comparative analysis This paper presents an approach to representing and manipulating plans based on a model of plans as a set of constraints. The <I-N-OVA> model is used to characterise the plan representation used within O-Plan and to relate this work to emerging formal analyses of plans and planning. This synergy of practical and formal approaches can stretch the formal methods to cover realistic plan representations as needed for real problem solving, and can improve the analysis that is possible for production planning systems. <I-N-OVA> is intended to act as a bridge to improve dialogue between a number of communities working on formal planning theories, practical planning systems and systems engineering process management methodologies. It is intended to support new work on automatic manipulation of plans, human communication about plans, principled and reliable acquisition of plan information, and formal reasoning about plans.
Maintaining a legacy: towards support at the architectural level An organization that develops large, software intensive systems with a long lifetime will encounter major changes in the market requirements, the software development environment, including its platform, and the target platform. In order to meet the challenges associated with these changes, software development has to undergo major changes as well. Especially when these systems are successful, and hence become an asset, particular care shall be taken to maintain this legacy; large systems with a long lifetime tend to become very complex and difficult to understand. Software architecture plays a vital role in the development of large software systems. For the purpose of maintenance, an up-to-date explicit description of the software architecture of a system supports understanding and comprehension of it, amongst other things. However, many large, complex systems do not have an up-to-date documented software architecture. Particularly in cases where these systems have a long lifetime, the (natural) turnover of personnel will make it very likely that many employees contributing to previous generations of the system are no longer available. A need to 'recover' the software architecture of the system may become prevalent, facilitating the understanding of the system, providing ways to improve its maintainability and quality and to control architectural changes. This paper gives an overview of an on-going effort to improve the maintainability and quality of a legacy system, and describes the recent introduction of support at the architectural level for program understanding and complexity control. Copyright (C) 2000 John Wiley & Sons, Ltd.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.2
0.0125
0.003774
0
0
0
0
0
0
0
0
0
0
0
Compression of 3D MRI images based on symmetry in prediction-error field Three-dimensional MRI images, which are powerful tools for the diagnosis of many diseases, require large storage space. A number of lossless compression schemes exist for this purpose. In this paper we propose a new approach for the compression of these images which exploits the inherent symmetry that exists in 3D MRI images. A block matching routine is employed to work on the symmetrical characteristics of these images. Another type of block matching is also applied to eliminate the inter-slice temporal correlations. The obtained results outperform the existing standard compression techniques.
Exploiting Interframe Redundancies in the Lossless Compression of 3D Medical Images In the lossless compression of 3D medical images, a non-linear approach is required to exploit the interframe redundancies. Even an optimal 3D linear predictor does not outperform the very simple 2D Lossless JPEG no. 7 predictor. However, the intraframe prediction errors from the previous frame can be used as a context in the current frame. Using this interframe context-modeling approach, a reduction of the compressed bit rate by 0.5 to 1 bit per pixel (i.e., 10-20%) could be found for both Lossless JPEG and JPEG-LS.
Recent Developments in Context-Based Predictive Techniques for Lossless Image Compression In this paper we describe some recent developments that have taken place in context-based predictive coding, in response to the JPEG/JBIG committee's recent call for proposals for a new international standard on lossless compression of continuous-tone images. We describe the different prediction techniques that were proposed and give a performance comparison. We describe the notion of context-base...
Context-based, adaptive, lossless image coding We propose a context-based, adaptive, lossless image codec (CALIC). The codec obtains higher lossless compression of continuous-tone images than other lossless image coding techniques in the literature. This high coding efficiency is accomplished with relatively low time and space complexities. The CALIC puts heavy emphasis on image data modeling. A unique feature of the CALIC is the use of a large number of modeling contexts (states) to condition a nonlinear predictor and adapt the predictor to varying source statistics. The nonlinear predictor can correct itself via an error feedback mechanism by learning from its mistakes under a given context in the past. In this learning process, the CALIC estimates only the expectation of prediction errors conditioned on a large number of different contexts rather than estimating a large number of conditional error probabilities. The former estimation technique can afford a large number of modeling contexts without suffering from the context dilution problem of insufficient counting statistics as in the latter approach, nor from excessive memory use. The low time and space complexities are also attributed to efficient techniques for forming and quantizing modeling contexts
Formal Derivation of Strongly Correct Concurrent Programs. Summary A method is described for deriving concurrent programs which are consistent with the problem specifications and free from deadlock and from starvation. The programs considered are expressed by nondeterministic repetitive selections of pairs of synchronizing conditions and subsequent actions. An iterative, convergent calculus is developed for synthesizing the invariant and synchronizing conditions which guarantee strong correctness. These conditions are constructed as limits of recurrences associated with the specifications and the actions. An alternative method for deriving starvation-free programs by use of auxiliary variables is also given. The applicability of the techniques presented is discussed through various examples; their use for verification purposes is illustrated as well.
A mathematical perspective for software measures research Basic principles which necessarily underlie software measures research are analysed. In the prevailing paradigm for the validation of software measures, there is a fundamental assumption that the sets of measured documents are ordered and that measures should report these orders. The authors describe mathematically, the nature of such orders. Consideration of these orders suggests a hierarchy of software document measures, a methodology for developing new measures and a general approach to the analytical evaluation of measures. They also point out the importance of units for any type of measurement and stress the perils of equating document structure complexity and psychological complexity
Distributed snapshots: determining global states of distributed systems This paper presents an algorithm by which a process in a distributed system determines a global state of the system during a computation. Many problems in distributed systems can be cast in terms of the problem of detecting global states. For instance, the global state detection algorithm helps to solve an important class of problems: stable property detection. A stable property is one that persists: once a stable property becomes true it remains true thereafter. Examples of stable properties are “computation has terminated,” “ the system is deadlocked” and “all tokens in a token ring have disappeared.” The stable property detection problem is that of devising algorithms to detect a given stable property. Global state detection can also be used for checkpointing.
ACE: building interactive graphical applications
A study of cross-validation and bootstrap for accuracy estimation and model selection We review accuracy estimation methods and compare the two most common methods: cross-validation and bootstrap. Recent experimental results on artificial data and theoretical results in restricted settings have shown that for selecting a good classifier from a set of classifiers (model selection), ten-fold cross-validation may be better than the more expensive leave-one-out cross-validation. We report on a large-scale experiment--over half a million runs of C4.5 and a Naive-Bayes algorithm--to estimate the effects of different parameters on these algorithms on real-world datasets. For cross-validation we vary the number of folds and whether the folds are stratified or not; for bootstrap, we vary the number of bootstrap samples. Our results indicate that for real-world datasets similar to ours, the best method to use for model selection is ten-fold stratified cross-validation, even if computation power allows using more folds.
A Theory of Prioritizing Composition An operator for the composition of two processes, where one process has priority over the other process, is studied. Processes are described by action systems, and data refinement is used for transforming processes. The operator is shown to be compositional, i.e. monotonic with respect to refinement. It is argued that this operator is adequate for modelling priorities as found in programming languages and operating systems. Rules for introducing priorities and for raising and lowering priorities of processes are given. Dynamic priorities are modelled with special priority variables which can be freely mixed with other variables and the prioritising operator in program development. A number of applications show the use of prioritising composition for modelling and specification in general.
Inheritance of proofs The Curry-Howard isomorphism, a fundamental property shared by many type theories, establishes a direct correspondence between programs and proofs. This suggests that the same structuring principles that ease programming should be useful for proving as well. To exploit object-oriented structuring mechanisms for verification, we extend the object-model of Pierce and Turner, based on the higher-order typed lambda-calculus F≤ω, with a logical component. By enriching the (functional) signature of objects with a specification, methods and their correctness proofs are packed together in objects. The uniform treatment of methods and proofs gives rise in a natural way to object-oriented proving principles - including inheritance of proofs, late binding of proofs, and encapsulation of proofs - as analogues to object-oriented programming principles. We have used Lego, a type-theoretic proof checker, to explore the feasibility of this approach. (C) 1998 John Wiley & Sons, Inc.
Software engineering for parallel systems Current approaches to software engineering practice for parallel systems are reviewed. The parallel software designer has not only to address the issues involved in the characterization of the application domain and the underlying hardware platform, but, in many instances, the production of portable, scalable software is desirable. In order to accommodate these requirements, a number of specific techniques and tools have been proposed, and these are discussed in this review in the framework of the parallel software life-cycle. The paper outlines the role of formal methods in the practical production of parallel software, but its main focus is the emergence of development methodologies and environments. These include CASE tools and run-time support systems, as well as the use of methods taken from experience of conventional software development. Because of the particular emphasis on performance of parallel systems, work on performance evaluation and monitoring systems is considered.
Maintaining a legacy: towards support at the architectural level An organization that develops large, software intensive systems with a long lifetime will encounter major changes in the market requirements, the software development environment, including its platform, and the target platform. In order to meet the challenges associated with these changes, software development has to undergo major changes as well. Especially when these systems are successful, and hence become an asset, particular care shall be taken to maintain this legacy; large systems with a long lifetime tend to become very complex and difficult to understand. Software architecture plays a vital role in the development of large software systems. For the purpose of maintenance, an up-to-date explicit description of the software architecture of a system supports understanding and comprehension of it, amongst other things. However, many large, complex systems do not have an up-to-date documented software architecture. Particularly in cases where these systems have a long lifetime, the (natural) turnover of personnel will make it very likely that many employees contributing to previous generations of the system are no longer available. A need to 'recover' the software architecture of the system may become prevalent, facilitating the understanding of the system, providing ways to improve its maintainability and quality and to control architectural changes. This paper gives an overview of an on-going effort to improve the maintainability and quality of a legacy system, and describes the recent introduction of support at the architectural level for program understanding and complexity control. Copyright (C) 2000 John Wiley & Sons, Ltd.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.2
0.2
0.008333
0.000791
0
0
0
0
0
0
0
0
0
0
Heuristic algorithms and learning techniques: applications to the graph coloring problem.
On Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the timewise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
score_0 .. score_13: 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Exits in the Refinement Calculus Although many programming languages contain exception handling mechanisms, their formal treatment — necessary for rigorous development — can be complex. Nevertheless, this paper presents a simple incorporation of exit commands and exception blocks into a rigorous program development method. The refinement calculus, chosen for the exercise, is a method of developing imperative programs. It is based on weakest preconditions, although they are not used explicitly during program construction; they merely justify the general method. In the style of the refinement calculus, program development laws are given that introduce and allow the manipulation of exits. The soundness of the new laws is shown using weakest preconditions (as for the existing refinement calculus laws). The extension of weakest preconditions needed to handle exits is a variation on earlier work of Cristian; the variation is necessary to handle nondeterminism.
Stepwise Refinement of Action Systems A method for the formal development of provably correct parallel algorithms by stepwise refinement is presented. The entire derivation procedure is carried out in the context of purely sequential programs. The resulting parallel algorithms can be efficiently executed on different architectures. The methodology is illustrated by showing the main derivation steps in a construction of a parallel algorithm for matrix multiplication.
A generalization of Dijkstra's calculus Dijkstra's calculus of guarded commands can be generalized and simplified by dropping the law of the excluded miracle. This paper gives a self-contained account of the generalized calculus from first principles through the semantics of recursion. The treatment of recursion uses the fixpoint method from denotational semantics. The paper relies only on the algebraic properties of predicates; individual states are not mentioned (except for motivation). To achieve this, we apply the correspondence between programs and predicates that underlies predicative programming.The paper is written from the axiomatic semantic point of view, but its contents can be described from the denotational semantic point of view roughly as follows: The Plotkin-Apt correspondence between wp semantics and the Smyth powerdomain is extended to a correspondence between the full wp/wlp semantics and the Plotkin powerdomain extended with the empty set.
An Overview of KRL, a Knowledge Representation Language
Implementing Remote procedure calls Remote procedure calls (RPC) are a useful paradigm for providing communication across a network between programs written in a high level language. This paper describes a package, written as part of the Cedar project, providing a remote procedure call facility. The paper describes the options that face a designer of such a package, and the decisions we made. We describe the overall structure of our RPC mechanism, our facilities for binding RPC clients, the transport level communication protocol, and some performance measurements. We include descriptions of some optimisations we used to achieve high performance and to minimize the load on server machines that have many clients. Our primary aim in building an RPC package was to make the building of distributed systems easier. Previous protocols were sufficiently hard to use that only members of a select group of communication experts were willing to undertake the construction of distributed systems. We hoped to overcome this by providing a communication paradigm as close as possible to the familiar facilities of our high level languages. To achieve this aim, we concentrated on making remote calls efficient, and on making the semantics of remote calls as close as possible to those of local calls.
Alloy: a lightweight object modelling notation Alloy is a little language for describing structural properties. It offers a declaration syntax compatible with graphical object models, and a set-based formula syntax powerful enough to express complex constraints and yet amenable to a fully automatic semantic analysis. Its meaning is given by translation to an even smaller (formally defined) kernel. This paper presents the language in its entirety, and explains its motivation, contributions and deficiencies.
Semantic grammar: an engineering technique for constructing natural language understanding systems One of the major stumbling blocks to more effective use of computers by naive users is the lack of natural means of communication between the user and the computer system. This report discusses a paradigm for constructing efficient and friendly man-machine interface systems involving subsets of natural language for limited domains of discourse. As such this work falls somewhere between highly constrained formal language query systems and unrestricted natural language understanding systems. The primary purpose of this research is not to advance our theoretical understanding of natural language but rather to put forth a set of techniques for embedding both semantic/conceptual and pragmatic information into a useful natural language interface module. Our intent has been to produce a front end system which enables the user to concentrate on his problem or task rather than making him worry about how to communicate his ideas or questions to the machine.
Symbolic Model Checking Symbolic model checking is a powerful formal specification and verification method that has been applied successfully in several industrial designs. Using symbolic model checking techniques it is possible to verify industrial-size finite state systems. State spaces with up to 10^30 states can be exhaustively searched in minutes. Models with more than 10^120 states have been verified using special techniques.
Joining specification statements The specification statement allows us to easily express what a program statement does. This paper shows how refinement of specification statements can be directly expressed using the predicate calculus. It also shows that the specification statements interpreted as predicate transformers form a complete lattice, and that this lattice is the lattice of conjunctive predicate transformers. The join operator of this lattice is constructed as a specification statement. The join operators of two interesting sublattices of the set of specification statements are also investigated.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
A Conceptual Graph Model for W3C Resource Description Framework With the aim of building a "Semantic Web", the content of the documents must be explicitly represented through metadata in order to enable contents-guided search. Our approach is to exploit a standard language (RDF, recommended by W3C) for expressing such metadata and to interpret these metadata in conceptual graphs (CG) in order to exploit querying and inferencing capabilities enabled by CG formalism. The paper presents our mapping of RDF into CG and its interest in the context of the semantic Web.
Procedures and atomicity refinement The introduction of an early return from a (remote) procedure call can increase the degree of parallelism in a parallel or distributed algorithm modeled by an action system. We define a return statement for procedures in an action systems framework and show that it corresponds to carrying out an atomicity refinement.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
score_0 .. score_13: 1.2, 0.005, 0.002353, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Procedures, parameters, and abstraction: Separate concerns. The notions of procedures, parameters, and abstraction are by convention treated together in methods of imperative program development. Rules for preserving correctness in such developments can be complex. We show that the three concerns can be separated, and we give simple rules for each. Crucial to this is the ability to embed specification—representing abstraction—directly within programs; with this we can use the elegant copy rule of ALGOL-60 to treat procedure calls, whether abstract or not. Our contribution is in simplifying the use of the three features, whether separately or together, and in the proper location of any difficulties that do arise. The aliasing problem, for example, is identified as a “loss of monotonicity” with respect to program refinement.
A Case Study in Representing a Model: to Z or not to Z? As part of the Domino project on distributed system management, a model of 'Delegation of Authority' was created. A formal description method was used as the basis of the model in order to achieve precision and generality. Z was chosen for this purpose, supplemented by Prolog to animate the specification so that it could be validated with examples. It was found that other representation methods were necessary for visualising the model and for meaningful communication in discussions between colleagues. Three different methods were used for discussions: plain English, an ad hoc graphical method for representing domain structures and Petri net diagrams. In this paper we discuss the roles of each method of representation, its uses and limitations, and their inter-relationship. Formal interpretations in Z of the graphical methods are shown.
From MooZ to Eiffel - A Rigorous Approach to System Development We propose a method for refining MooZ specifications into Eiffel programs. MooZ is an object-oriented extension of the Z model based specification language and Eiffel is a programming language which is also based on the object-oriented paradigm. We present the refinement method and then we illustrate its application to part of an Industrial Maintenance System.
Synthesizing structured analysis and object-based formal specifications Structured Analysis (SA) is a widely-used software development method. SA specifications are based on Data Flow Diagrams (DFD's), Data Dictionaries (DD's) and Process Specifications (P-Specs). As used in practice, SA specifications are not formal. Seemingly orthogonal approaches to specifications are those using formal, object-based, abstract model specification languages, e.g., VDM, Z, Larch/C++ and SPECS. These languages support object-based software development in that they are designed to specify abstract data types (ADT's). We suggest formalizing SA specifications by: (i) formally specifying flow value types as ADT's in DD's, (ii) formally specifying P-Specs using both the assertional style of the aforementioned specification languages and ADT operations defined in DD's, and (iii) adopting a formal semantics for DFD "execution steps". The resulting formalized SA specifications, DFD-SPECS, are well-suited to the specification of distributed or concurrent systems. We provide an example DFD-SPEC for a client-server system with a replicated server. When synthesized with our recent results in the direct execution of formal, model-based specifications, DFD-SPECS will also support the direct execution of specifications of concurrent or distributed systems.
Refining Specifications to Logic Programs The refinement calculus provides a framework for the stepwise development of imperative programs from specifications. In this paper we study a refinement calculus for deriving logic programs. Dealing with logic programs rather than imperative programs has the dual advantages that, due to the expressive power of logic programs, the final program is closer to the original specification, and each refinement step can achieve more. Together these reduce the overall number of derivation steps.
Supporting contexts in program refinement A program can be refined either by transforming the whole program or by refining one of its components. The refinement of a component is, for the main part, independent of the remainder of the program. However, refinement of a component can depend on the context of the component for information about the variables that are in scope and what their types are. The refinement can also take advantage of additional information, such as any precondition the component can assume. The aim of this paper is to introduce a technique, which we call program window inference, to handle such contextual information during derivations in the refinement calculus. The idea is borrowed from a technique, called window inference, for handling context in theorem proving. Window inference is the primary proof paradigm of the Ergo proof editor. This tool has been extended to mechanize refinement using program window inference.
Laws of data refinement A specification language typically contains sophisticated data types that are expensive or even impossible to implement. Their replacement with simpler or more efficiently implementable types during the programming process is called data refinement. We give a new formal definition of data refinement and use it to derive some basic laws. The derived laws are constructive in that used in conjunction with the known laws of procedural refinement they allow us to calculate a new specification from a given one in which variables are to be replaced by other variables of a different type.
Data refinement by calculation Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but possibly more complex; the purpose of the data refinement in that case is to make progress in program construction from more abstract to more concrete formulations. A recent trend in program construction is to calculate programs from their specifications; that contrasts with proving that a given program satisfies some specification. We investigate to what extent the trend can be applied to data refinement.
Multi-process systems analysis using event b: application to group communication systems We introduce a method to elicit and to structure using Event B, processes interacting with ad hoc dynamic architecture. A Group Communication System is used as the investigation support. The method takes account of the evolving structure of the interacting processes; it guides the user to structure the abstract system that models his/her requirements. The method also integrates property verification using both theorem proving and model checking. A B specification of a GCS is built using the proposed approach and its stated properties are verified using a B theorem prover and a B model checker.
A distributed algorithm to implement n-party rendezvous The concept of n-party rendezvous has been proposed to implement synchronous communication among an arbitrary number of concurrent, asynchronous processes. The problem of implementing n-party rendezvous captures two central issues in the design of distributed systems: exclusion and synchronization. This paper describes a simple, distributed algorithm, referred to as the event manager algorithm, to implement n-party rendezvous. It also compares the performance of this algorithm with an existing algorithm for this problem.
Petri nets in software engineering The central issue of this contribution is a methodology for the use of nets in practical systems design. We show how nets of channels and agencies allow for a continuous and systematic transition from informal and unprecise to precise and formal specifications. This development methodology leads to the representation of dynamic systems behaviour (using Pr/T-Nets) which is apt to rapid prototyping and formal correctness proofs.
Moment images, polynomial fit filters, and the problem of surface interpolation A uniform hierarchical procedure for processing incomplete image data is described. It begins with the computation of local moments within windows centered on each output sample point. Arrays of such measures, called moment images, are computed efficiently through the application of a series of small kernel filters. A polynomial surface is then fit to the available image data within a local neighborhood of each sample point. Best-fit polynomials are obtained from the corresponding local moments. The procedure, hierarchical polynomial fit filtering, yields a multiresolution set of low-pass filtered images. The set of low-pass images is combined by multiresolution interpolation to form a smooth surface passing through the original image data.
Refinement-Preserving translation from event-b to register-voice interactive systems The state-based formal method Event-B relies on the concept of correct stepwise development, ensured by discharging corresponding proof obligations. The register-voice interactive systems (rv-IS) formalism is a recent approach for developing software systems using both structural state-based as well as interaction-based composition operators. One of the most interesting features of the rv-IS formalism is the structuring of the components interactions. In order to study whether a more structured (rv-IS inspired) interaction approach can significantly ease the proof obligation effort needed for correct development in Event-B, we need to devise a way of integrating these formalisms. In this paper we propose a refinement-based translation from Event-B to rv-IS, exemplified with a file transfer protocol modelled in both formalisms.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
score_0 .. score_13: 1.014912, 0.0125, 0.0125, 0.0125, 0.0075, 0.005195, 0.002044, 0.000578, 0.00009, 0.000015, 0, 0, 0, 0
Verifying task-based specifications in conceptual graphs A conceptual model is a model of real world concepts and application domains as perceived by users and developers. It helps developers investigate and represent the semantics of the problem domain, as well as communicate among themselves and with users. In this paper, we propose the use of task-based specifications in conceptual graphs (TBCG) to construct and verify a conceptual model. Task-based specification methodology is used to serve as the mechanism to structure the knowledge captured in the conceptual model; whereas conceptual graphs are adopted as the formalism to express task-based specifications and to provide a reasoning capability for the purpose of verification. Verifying a conceptual model is performed on model specifications of a task through constraints satisfaction and relaxation techniques, and on process specifications of the task based on operators and rules of inference inherent in conceptual graphs.
The role of knowledge in software development Software development is knowledge-intensive. Many concepts have been developed to ease or guide the processing of knowledge in software development, including information hiding, modularity, objects, functions and procedures, patterns, and more. These concepts are supported by various methods, approaches, and tools using symbols, graphics, and languages. Some are formal; others are semiformal or simply made up of key practices. Methods and approaches in software engineering are often based on the results of empirical observations or on individual success stories.
Organizing usability work to fit the full product range
Knowledge Representation And Reasoning In Software Engineering It has been widely recognized that in order to solve difficult problems using computers one will usually have to use a great deal of knowledge (often domain specific), rather than a few general principles. The intent of this special issue was to study how this attitude has affected research on tools for improved software productivity and quality. Many such tools and problems related to them were discussed at a Workshop on the Development of Intelligent and Cooperative Information Systems, held in Niagara-on-the-Lake in April 1991, from which the idea for this issue originated.
Knowledge management and the dynamic nature of knowledge Knowledge management (KM) or knowledge sharing in organizations is based on an understanding of knowledge creation and knowledge transfer. In implementation, KM is an effort to benefit from the knowledge that resides in an organization by using it to achieve the organization's mission. The transfer of tacit or implicit knowledge to explicit and accessible formats, the goal of many KM projects, is challenging, controversial, and endowed with ongoing management issues. This article argues that effective knowledge management in many disciplinary contexts must be based on understanding the dynamic nature of knowledge itself. The article critiques some current thinking in the KM literature and concludes with a view towards knowledge management programs built around knowledge as a dynamic process.
Evaluating a domain-specialist-oriented knowledge management system We discuss the evaluation of software tools whose purpose is to assist humans to perform complex creative tasks. We call these creative task assistants (CTAs) and use as a case study CODE4, a CTA designed to allow domain specialists to manage their own knowledge base. We present an integrated process involving evaluation of usability, attractiveness and feature contribution, the latter two being the focus. To illustrate attractiveness evaluation, we assess whether CODE4 has met its objective of having users not trained in logical formalisms choose the tool to represent and manipulate knowledge in a computer. By studying use of the tool by its intended users, we conclude that it has met this objective. To illustrate feature contribution evaluation, we assess what aspects of CODE4 have in fact led to its success. To do this, we study what tasks are performed by users, and what features of both knowledge representation and user interface are exercised. We find that features for manipulating the inheritance hierarchy and naming concepts are considered the most valuable. Our overall conclusion is that those developing or researching CTAs would benefit from using the three types of evaluation in order to make effective decisions about the evolution of their products.
The CG Formalism as an Ontolingua for Web-Oriented Representation Languages The semantic Web entails the standardization of representation mechanisms so that the knowledge contained in a Web document can be retrieved and processed on a semantic level. RDF seems to be the emerging encoding scheme for that purpose. However, there are many different sorts of documents on the Web that do not use RDF as their primary coding scheme. It is expected that many one-to-one mappings between pairs of document representation formalisms will eventually arise. This would create a situation where a young standard such as RDF would generate update problems for all these mappings as it evolves, which is inevitable. Rather, we advocate the use of a common Ontolingua for all these encoding formalisms. Though there may be many knowledge representation formalisms suited for that task, we advocate the use of the conceptual graph formalism.
Software development: two approaches to animation of Z specifications using Prolog Formal methods rely on the correctness of the formal requirements specification, but this correctness cannot be proved. This paper discusses the use of software tools to assist in the validation of formal specifications and advocates a system by which Z specifications may be animated as Prolog programs. Two Z/Prolog translation strategies are explored; formal program synthesis and structure simulation. The paper explains why the former proved to be unsuccessful and describes the techniques developed for implementing the latter approach, with the aid of case studies
Abstraction-based software development A five-year experience with abstraction-based software-development techniques in the university environment indicates that the investment required to support the paradigm in practice is returned in terms of greater ability to control complexity in large projects—provided there exists a set of software tools sufficient to support the approach.
Specifying software requirements for complex systems: new techniques and their application This paper concerns new techniques for making requirements specifications precise, concise, unambiguous, and easy to check for completeness and consistency. The techniques are well-suited for complex real-time software systems; they were developed to document the requirements of existing flight software for the Navy's A-7 aircraft. The paper outlines the information that belongs in a requirements document and discusses the objectives behind the techniques. Each technique is described and illustrated with examples from the A-7 document. The purpose of the paper is to introduce the A-7 document as a model of a disciplined approach to requirements specification; the document is available to anyone who wishes to see a fully worked-out example of the approach.
Simulation of hepatological models: a study in visual interactive exploration of scientific problems In many different fields of science and technology, visual expressions formed by diagrams, sketches, plots and even images are traditionally used to communicate not only data but also procedures. When these visual expressions are systematically used within a scientific community, bi-dimensional notations often develop which allow the construction of complex messages from sets of primitive icons. This paper discusses how these notations can be translated into visual languages and organized into an interactive environment designed to improve the user's ability to explore scientific problems. To facilitate this translation, the use of Conditional Attributed Rewriting Systems has been extended to visual language definition. The case of a visual language in the programming of a simulation of populations of hepatic cells is studied. A discussion is given of how such a visual language allows the construction of programs through the combination of graphical symbols which are familiar to the physician or which schematize shapes familiar to him in that they resemble structures he observes in real experiments. It is also shown how such a visual approach allows the user to focus on the solution of his problems, avoiding any request for unnecessary precision and most requests for house-keeping data during the interaction.
The Semantics of Statecharts in HOL Statecharts are used to produce operational specifications in the CASE tool STATEMATE. This tool provides some analysis capabilities such as reachability of states, but formal methods offer the potential of linking more powerful requirements analysis with CASE tools. To provide this link, it is necessary to have a rigorous semantics for the specification notation. In this paper we present an operational semantics for Statecharts in quantifier free higher order logic, embedded in the theorem prover HOL.
TAER: time-aware entity retrieval - exploiting the past to find relevant entities in news articles Retrieving entities instead of just documents has become an important task for search engines. In this paper we study entity retrieval for news applications, and in particular the importance of the news trail history (i.e., past related articles) in determining the relevant entities in current articles. This is an important problem in applications that display retrieved entities to the user, together with the news article. We analyze and discuss some statistics about entities in news trails, unveiling some unknown findings such as the persistence of relevance over time. We focus on the task of query dependent entity retrieval over time. For this task we evaluate several features, and show that their combination significantly improves performance.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
score_0 .. score_13: 1.041171, 0.044352, 0.044352, 0.044352, 0.040249, 0.040249, 0.020125, 0.004928, 0.000003, 0, 0, 0, 0, 0
Fuzzy Bang-Bang control problem under granular differentiability In this paper, a class of fuzzy optimal control problem called the fuzzy Bang-Bang (FBB) control problem is revisited. The FBB control problem aims at finding control inputs which transfer the states of an uncertain dynamical system to the origin in a minimum time. Since the conditions and/or parameters of the dynamical system are uncertain, the FBB control problem is a challenging task. To address the problem, we first introduce the concepts of the granular integral and derivative of fuzzy functions whose domain is uncertain. In addition, the notion of granular partial derivative of a multivariable fuzzy function whose variables are fuzzy functions with uncertain domains is presented. Then, we propose a theorem which is proved to be applicable to the FBB control problem. Moreover, taking the relative-distance-measure fuzzy interval arithmetic and horizontal membership functions into consideration, we further give complementary theorems to ensure that if the problem has a solution, then the controller assumes its boundary values. The simulated results confirm this theoretical conclusion. These findings may enrich our insight into the behavior of the uncertain dynamical system subjected to the FBB control problem, and help predict the uncertain trajectories.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time-wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power related issues, are satisfied.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Formal specification and design of a message router Formal derivation refers to a family of design techniques that entail the development of programs which are guaranteed to be correct by construction. Only limited industrial use of such techniques (e.g., UNITY-style specification refinement) has been reported in the literature, and there is a great need for methodological developments aimed at facilitating their application to complex problems. This article examines the formal specification and design of a message router in an attempt to identify those methodological elements that are likely to contribute to successful industrial uses of program derivation. Although the message router cannot be characterized as being industrial grade, it is a sophisticated problem that poses significant specification and design challenges—its apparent simplicity is rather deceiving. The main body of the article consists of a complete formal specification of the router and a series of successive refinements that eventually lead to an immediate construction of a correct UNITY program. Each refinement is accompanied by its design rationale and is explained in a manner accessible to a broad audience. We use this example to make the case that program derivation provides a good basis for introducing rigor in the design strategy, regardless of the degrees of formality one is willing to consider.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time-wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Large Discriminative Structured Set Prediction Modeling With Max-Margin Markov Network for Lossless Image Coding Inherent statistical correlation for context-based prediction and structural interdependencies for local coherence is not fully exploited in existing lossless image coding schemes. This paper proposes a novel prediction model where the optimal correlated prediction for a set of pixels is obtained in the sense of the least code length. It not only exploits the spatial statistical correlations for the optimal prediction directly based on 2D contexts, but also formulates the data-driven structural interdependencies to make the prediction error coherent with the underlying probability distribution for coding. Under the joint constraints for local coherence, max-margin Markov networks are incorporated to combine support vector machines structurally to make max-margin estimation for a correlated region. Specifically, it aims to produce multiple predictions in the blocks with the model parameters learned in such a way that the distinction between the actual pixel and all possible estimations is maximized. It is proved that, with the growth of sample size, the prediction error is asymptotically upper bounded by the training error under the decomposable loss function. Incorporated into the lossless image coding framework, the proposed model outperforms most prediction schemes reported.
Fast and Context-Free Lossless Image Compression Algorithm Based on JPEG-LS A fast, lossless and context-free image compression algorithm based on the median edge predictor and Golomb coder of JPEG-LS is proposed. Compared to the JPEG-LS standard, the most expensive parts with respect to memory space requirements and computational complexity are omitted. We show that these parts of the JPEG-LS codec only contribute to the compression performance in a very limited way. In order to avoid the context memory, we apply fast adaptive Golomb coder parameterization and distribution de-emphasis. The new algorithm features very low memory footprint and dissolves widespread data-dependencies to enable parallelization. Depending on run-mode and predictor choice, we measure throughput speedup ranging between factors 3 and 8 with compression performance loss between 9% and 13% respectively.
Least squares-adapted edge-look-ahead prediction with run-length encodings for lossless compression of images Many coding methods are more efficient with certain types of images than others. In particular, run-length coding is very useful for coding areas of little changes. Adaptive predictive coding achieves high coding efficiency for fast changing areas like edges. In this paper, we propose a switching coding scheme that will combine the advantages of both run-length and adaptive linear predictive coding (RALP) for lossless compression of images. For pixels in slowly varying areas, run-length coding is used; otherwise LS (least square)-adapted predictive coding is used. Instead of performing LS adaptation in a pixel-by-pixel manner, we adapt the predictor coefficients only when an edge is detected so that the computational complexity can be significantly reduced. For this, we propose an edge detector using only causal pixels. This way, the predictor can look ahead if the coding pixel is around an edge and initiate the LS adaptation in advance to prevent the occurrence of a large prediction error. With the proposed switching structure, very good prediction results can be obtained in both slowly varying areas and pixels around boundaries as we will see in the experiments.
Lossless medical image compression using geometry-adaptive partitioning and least square-based prediction. To improve the compression rates for lossless compression of medical images, an efficient algorithm, based on irregular segmentation and region-based prediction, is proposed in this paper. Considering that the first step of a region-based compression algorithm is segmentation, this paper proposes a hybrid method by combining geometry-adaptive partitioning and quadtree partitioning to achieve adaptive irregular segmentation for medical images. Then, least square (LS)-based predictors are adaptively designed for each region (regular subblock or irregular subregion). The proposed adaptive algorithm not only exploits spatial correlation between pixels but it utilizes local structure similarity, resulting in efficient compression performance. Experimental results show that the average compression performance of the proposed algorithm is 10.48, 4.86, 3.58, and 0.10% better than that of JPEG 2000, CALIC, EDP, and JPEG-LS, respectively.
Recent Developments in Context-Based Predictive Techniques for Lossless Image Compression In this paper we describe some recent developments that have taken place in context-based predictive coding, in response to the JPEG/JBIG committee's recent call for proposals for a new international standard on lossless compression of continuous-tone images. We describe the different prediction techniques that were proposed and give a performance comparison.
Formal Derivation of Strongly Correct Concurrent Programs. Summary A method is described for deriving concurrent programs which are consistent with the problem specifications and free from deadlock and from starvation. The programs considered are expressed by nondeterministic repetitive selections of pairs of synchronizing conditions and subsequent actions. An iterative, convergent calculus is developed for synthesizing the invariant and synchronizing conditions which guarantee strong correctness. These conditions are constructed as limits of recurrences associated with the specifications and the actions. An alternative method for deriving starvation-free programs by use of auxiliary variables is also given. The applicability of the techniques presented is discussed through various examples; their use for verification purposes is illustrated as well.
Hypertext: An Introduction and Survey First Page of the Article
A field study of the software design process for large systems The problems of designing large software systems were studied through interviewing personnel from 17 large projects. A layered behavioral model is used to analyze how three of these problems—the thin spread of application domain knowledge, fluctuating and conflicting requirements, and communication bottlenecks and breakdowns—affected software productivity and quality through their impact on cognitive, social, and organizational processes.
Different perspectives on information systems: problems and solutions The paper puts information systems (IS) research dealing with IS problems into perspective. IS problems are surveyed and classified. Using the IS research framework suggested by Ives, Hamilton, and Davis, research into IS problems is classified into several perspectives whose relevance in coping with the problems is discussed. Research perspectives focusing on IS operations environment, IS development process, IS development organization, IS development methods, and IS theories are distinguished. The paper concludes with suggestions for future research and how to deal with IS problems in practice.
Drawing graphs nicely using simulated annealing The paradigm of simulated annealing is applied to the problem of drawing graphs “nicely.” Our algorithm deals with general undirected graphs with straight-line edges, and employs several simple criteria for the aesthetic quality of the result. The algorithm is flexible, in that the relative weights of the criteria can be changed. For graphs of modest size it produces good results, competitive with those produced by other methods, notably, the “spring method” and its variants.
Automatic construction of networks of concepts characterizing document databases Two East-bloc computing knowledge bases, both based on a semantic network structure, were created automatically from large, operational textual databases using two statistical algorithms. The knowledge bases were evaluated in detail in a concept-association experiment based on recall and recognition tests. In the experiment, one of the knowledge bases, which exhibited the asymmetric link property, outperformed four experts in recalling relevant concepts in East-bloc computing. The knowledge base, which contained 20000 concepts (nodes) and 280000 weighted relationships (links), was incorporated as a thesaurus-like component in an intelligent retrieval system. The system allowed users to perform semantics-based information management and information retrieval via interactive, conceptual relevance feedback
UTP and Sustainability Hoare and He’s approach to unifying theories of programming, UTP, is a dozen years old. In spite of the importance of its ideas, UTP does not seem to be attracting due interest. The purpose of this article is to discuss why that is the case, and to consider UTP’s destiny. To do so it analyses the nature of UTP, focusing primarily on unification, and makes suggestions to expand its use.
Task Structures As A Basis For Modeling Knowledge-Based Systems Recently, there has been an increasing interest in improving the reliability and quality of AI systems. As a result, a number of approaches to knowledge-based systems modeling have been proposed. However, these approaches are limited in formally verifying the intended functionality and behavior of a knowledge-based system. In this article, we proposed a formal treatment to task structures to formally specify and verify knowledge-based systems modeled using these structures. The specification of a knowledge-based system modeled using task structures has two components: a model specification that describes static properties of the system, and a process specification that characterizes dynamic properties of the system. The static properties of a system are described by two models: a model about domain objects (domain model), and a model about the problem-solving states (state model). The dynamic properties of the system are characterized by (1) using the notion of state transition to explicitly describe what the functionality of a task is, and (2) specifying the sequence of tasks and interactions between tasks (i.e., behavior of a system) using task state expressions (TSE). The task structure extended with the proposed formalism not only provides a basis for detailed functional decomposition with procedure abstraction embedded in, but also facilitates the verification of the intended functionality and behavior of a knowledge-based system. © 1997 John Wiley & Sons, Inc.
Cognitive Relaying With Transceiver Hardware Impairments Under Interference Constraints. In this letter, we analyze the performance of cognitive amplify-and-forward multirelay networks with active direct link in the presence of relay transceiver hardware impairments. Considering distortion noises on both interference and main data links, we derive tight closed-form outage probability expressions and their asymptotic behavior for partial relay selection (PRS) and opportunistic relay se...
1.1
0.1
0.1
0.1
0.008333
0
0
0
0
0
0
0
0
0
Sub-Channel Assignment, Power Allocation, and User Scheduling for Non-Orthogonal Multiple Access Networks. In this paper, we study the resource allocation and user scheduling problem for a downlink non-orthogonal multiple access network where the base station allocates spectrum and power resources to a set of users. We aim to jointly optimize the sub-channel assignment and power allocation to maximize the weighted total sum-rate while taking into account user fairness. We formulate the sub-channel allocation problem as equivalent to a many-to-many two-sided user-subchannel matching game in which the set of users and sub-channels are considered as two sets of players pursuing their own interests. We then propose a matching algorithm, which converges to a two-side exchange stable matching after a limited number of iterations. A joint solution is thus provided to solve the sub-channel assignment and power allocation problems iteratively. Simulation results show that the proposed algorithm greatly outperforms the orthogonal multiple access scheme and a previous non-orthogonal multiple access scheme.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one a designer must show that conditions on both functional and temporal properties, and furthermore power related issues, are satisfied.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Identifying Quality-Requirement Conflicts Despite well-specified functional and interface requirements, many software projects have failed because they had a poor set of quality-attribute requirements. To find the right balance of quality-attribute requirements, you must identify the conflicts among desired quality attributes and work out a balance of attribute satisfaction. We have developed The Quality Attribute Risk and Conflict Consultant, a knowledge-based tool that can be used early in the system life cycle to identify potential conflicts. QARCC operates in the context of the WinWin system, a groupware support system that determines software and system requirements as negotiated win conditions. This article summarizes our experiences developing the QARCC-1 prototype using an early version of WinWin, and our integration of the resulting improvements into QARCC-2.
Scenario-based Analysis of Non-Functional Requirements In this paper, we propose an analysis method that describes scenario templates for NFRs, with heuristics for scenario generation, elaboration and validation. The paper is constructed in four sections. First the method is outlined, then the case study used to illustrate the method is introduced. This is followed by examples of the NFR scenario templates and the scenario based analysis is described via the case study. The paper concludes with a brief discussion.
The architecture tradeoff analysis method This paper presents the Architecture Tradeoff Analysis Method (ATAM), a structured technique for understanding the tradeoffs inherent in the architectures of software intensive systems. This method was developed to provide a principled way to evaluate a software architecture's fitness with respect to multiple competing quality attributes: modifiability, security, performance, availability, and so forth. These attributes interact—improving one often comes at the price of worsening one or more of the others—as is shown in the paper, and the method helps us to reason about architectural decisions that affect quality attribute interactions. The ATAM is a spiral model of design: one of postulating candidate architectures followed by analysis and risk mitigation, leading to refined architectures.
Knowledge maintenance: the state of the art In the software and knowledge engineering literature we can find maintenance strategies offered to maintain seven main types of knowledge: words; sentences; behavioral knowledge; and meta-knowledge. Meta-knowledge divides into problem solving methods, quality knowledge, fix knowledge, social knowledge and processing activities. There are five main ways in which these seven knowledge types are processed: acquire, operationalise, fault, fix and preserve. We review systems that contribute to these 7*5 = 35 types of knowledge maintenance to make the following conclusions. First, open issues with the current maintenance research are identified. These include (a) areas that are not being addressed by any researcher; (b) the recursive maintenance problem; and (c) drawbacks with rapid acquire systems and the operationalisation KM assumption. Secondly, a process is described for commissioning a new maintenance tool. Thirdly, a general common principle for maintenance (search-space reflection) is isolated.
Negotiation in distributed artificial intelligence: drawing from human experience Distributed artificial intelligence and cooperative problem solving deal with agents who allocate resources, make joint decisions and develop plans. Negotiation may be important for interacting agents who make sequential decisions. We discuss the use of negotiation in conflict resolution in distributed AI and select those elements of human negotiations that can help artificial agents better to resolve conflicts. Problem evolution, a basic aspect of negotiation, can be represented using the restructurable modelling method of developing decision support systems. Restructurable modelling is implemented in the knowledge-based generic decision analysis and simulation system Negoplan. Experiments show that Negoplan can effectively help resolve individual and group conflicts in negotiation.
Surfacing Root Requirements Interactions from Inquiry Cycle Requirements Documents Systems requirements errors are numerous, persistent, and expensive. To detect such errors, and focus on critical ones during the development of a requirements document, we have defined Root Requirements Analysis. This simple technique is based on: generalizing requirements to form root requirements, exhaustively comparing the root requirements, and applying simple metrics to the resultant comparison matrix. Root Requirements Analysis is effective. In the case study described in this article, the technique finds that 36 percent of the case's root requirements interactions result in problems which require further analysis. Moreover, the technique provides a specific, operational procedure to guide the efficient iterative resolution of identified requirements conflicts. The process of Root Requirements Analysis itself is not specific to a particular methodology. It can be applied directly to requirements in a variety of forms, as well as to the documentation of requirements development. We took this latter approach in the case study illustrating how Root Requirements Analysis can augment the Inquiry Cycle model of requirements development. Finally, the technique is amenable to support through collaborative CASE tools, as we demonstrate with our DEALSCRIBE prototype.
Tolerant planning and negotiation in generating coordinated movement plans in an automated factory Plan robustness is important for real world applications where modelling imperfections often result in execution deviations. The concept of tolerant planning is suggested as one of the ways to build robust plans. Tolerant planning achieves this aim by being tolerant of an agent's own execution deviations. When applied to multi-agent domains, it has the additional characteristic of being tolerant of other agents' deviant behaviour. Tolerant planning thus defers dynamic replanning until execution errors become excessive. The underlying strategy is to provide more than ample resources for agents to achieve their goals. Such redundancies aggravate the resource contention problem. To counter this, the iterative negotiation mechanism is suggested. It requires agents to be skillful in negotiating with other agents to resolve conflicts in such a way as to minimize compromising one's own tolerances and yet being benevolent in helping others find a feasible plan.
Database design with common sense business reasoning and learning Automated database design systems embody knowledge about the database design process. However, their lack of knowledge about the domains for which databases are being developed significantly limits their usefulness. A methodology for acquiring and using general world knowledge about business for database design has been developed and implemented in a system called the Common Sense Business Reasoner, which acquires facts about application domains and organizes them into a hierarchical, context-dependent knowledge base. This knowledge is used to make intelligent suggestions to a user about the entities, attributes, and relationships to include in a database design. A distance function approach is employed for integrating specific facts, obtained from individual design sessions, into the knowledge base (learning) and for applying the knowledge to subsequent design problems (reasoning).
PARIS: a system for reusing partially interpreted schemas This paper describes PARIS, an implemented system that facilitates the reuse of partially interpreted schemas. A schema is a program and specification with abstract, or uninterpreted, entities. Different interpretations of those entities will produce different programs. The PARIS System maintains a library of such schemas and provides an interactive mechanism to interpret a schema into a useful program by means of partially automated matching and verification procedures.
Information modeling in the time of the revolution Information modeling is concerned with the construction of computer-based symbol structures which capture the meaning of information and organize it in ways that make it understandable and useful to people. Given that information is becoming an ubiquitous, abundant and precious resource, its modeling is serving as a core technology for information systems engineering. We present a brief history of information modeling techniques in Computer Science and briefly survey such techniques developed within Knowledge Representation (Artificial Intelligence), Data Modeling (Databases), and Requirements Analysis (Software Engineering and Information Systems). We then offer a characterization of information modeling techniques which classifies them according to their ontologies, i.e., the type of application for which they are intended, the set of abstraction mechanisms (or, structuring principles) they support, as well as the tools they provide for building, analyzing, and managing application models. The final component of the paper uses the proposed characterization to assess particular information modeling techniques and draw conclusions about the advances that have been achieved in the field.
Software engineering for security: a roadmap Is there such a thing anymore as a software system that doesn't need to be secure? Almost every software-controlled system faces threats from potential adversaries, from Internet-aware client applications running on PCs, to complex telecommunications and power systems accessible over the Internet, to commodity software with copy protection mechanisms. Software engineers must be cognizant of these threats and engineer systems with credible defenses, while still delivering value to customers. In this paper, we present our perspectives on the research issues that arise in the interactions between software engineering and security.
Animating Conceptual Graphs This paper addresses operational aspects of conceptual graph systems. This paper is an attempt to formalize operations within a conceptual graph system by using conceptual graphs themselves to describe the mechanism. We outline a unifying approach that can integrate the notions of a fact base, type definitions, actor definitions, messages, and the assertion and retraction of graphs. Our approach formalizes the notion of type expansion and actor definitions, and in the process also formalizes the notion for any sort of formal assertion in a conceptual graph system. We introduce definitions as concept types called assertional types which are animated through a concept type called an assertional event. We illustrate the assertion of a type definition, a nested definition and an actor definition, using one extended example. We believe this mechanism has immediate and far-reaching value in offering a self-contained, yet animate conceptual graph system architecture.
Nondeterminacy and recursion via stacks and games The weakest-precondition interpretation of recursive procedures is developed for a language with a combination of unbounded demonic choice and unbounded angelic choice. This compositional formal semantics is proved to be equal to a game-theoretic operational semantics. Two intermediate stages are exploited. One step consists of unfolding the declaration of the recursive procedures. Fixpoint induction is used to prove the validity of this step. The compositional semantics of the unfolded declaration is proved to be equal to a formal semantics of a stack implementation of the recursive procedures. After an introduction to boolean two-person games, this stack semantics is shown to correspond to a game-theoretic operational semantics.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
1.01019
0.016024
0.014983
0.0096
0.008342
0.008142
0.007791
0.003983
0.001819
0.000169
0.000003
0
0
0
Timed Communicating Object Z This paper describes a timed, multithreaded object modeling notation for specifying real-time, concurrent, and reactive systems. The notation Timed Communicating Object Z (TCOZ) builds on Object Z's strengths in modeling complex data and algorithms, and on Timed CSP's strengths in modeling process control and real-time interactions. TCOZ is novel in that it includes timing primitives, properly separates process control and data/algorithm issues and supports the modeling of true multithreaded concurrency. TCOZ is particularly well-suited for specifying complex systems whose components have their own thread of control. The expressiveness of the notation is demonstrated by a case study in specifying a multilift system that operates in real-time.
Visualization with Hierarchically Structured Trees for an Explanation Reasoning System This paper is concerned with an application of drawing hierarchically structured trees. The tree drawing is applied to an explanation reasoning system. The reasoning is based on synthetic abduction (hypothesis) that gets a case from a rule and a result. In other words, the system searches a proper environment to get a desired result. In order that the system may be reliably related to the number of rules which are used to get the answer, we visualize a process of reasoning to show how rules have concern with the process. Since the process of reasoning in the system makes a hierarchically structured tree, the visualization of reasoning is a drawing of a hierarchically structured tree. We propose a method of visualization that is applicable to the explanation reasoning system.
HGV: A Library for Hierarchies, Graphs, and Views We introduce the base architecture of a software library which combines graphs, hierarchies, and views and describes the interactions between them. Each graph may have arbitrarily many hierarchies and each hierarchy may have arbitrarily many views. Both the hierarchies and the views can be added and removed dynamically from the corresponding graph and hierarchy, respectively. The software library shall serve as a platform for algorithms and data structures on hierarchically structured graphs. Such graphs become increasingly important and occur in special applications, e.g., call graphs in software engineering or biochemical pathways, with a particular need to manipulate and draw graphs.
On visual formalisms The higraph, a general kind of diagramming object, forms a visual formalism of topological nature. Higraphs are suited for a wide array of applications to databases, knowledge representation, and, most notably, the behavioral specification of complex concurrent systems using the higraph-based language of statecharts.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
A semantics of multiple inheritance The aim of this paper is to present a clean semantics of multiple inheritance and to show that, in the context of strongly-typed, statically-scoped languages, a sound typechecking algorithm exists. Multiple inheritance is also interpreted in a broad sense: instead of being limited to objects, it is extended in a natural way to union types and to higher-order functional types. This constitutes a semantic basis for the unification of functional and object-oriented programming.
The Manchester prototype dataflow computer The Manchester project has developed a powerful dataflow processor based on dynamic tagging. This processor is large enough to tackle realistic applications and exhibits impressive speedup for programs with sufficient parallelism.
A lazy evaluator A different way to execute pure LISP programs is presented. It delays the evaluation of parameters and list structures without ever having to perform more evaluation steps than the usual method. Although the central idea can be found in earlier work this paper is of interest since it treats a rather well-known language and works out an algorithm which avoids full substitution. A partial correctness proof using Scott-Strachey semantics is sketched in a later section.
Modelling information flow for organisations: A review of approaches and future challenges. Modelling is a classic approach to understanding complex problems that can be achieved diagrammatically to visualise concepts, and mathematically to analyse attributes of concepts. An organisation as a communicating entity is made up of constructs in which people can have access to information and speak to each other. Modelling information flow for organisations is a challenging task that enables analysts and managers to better understand how to: organise and coordinate processes, eliminate redundant information flows and processes, minimise the duplication of information and manage the sharing of intra- and inter-organisational information.
A Theory of Prioritizing Composition An operator for the composition of two processes, where one process has priority over the other process, is studied. Processes are described by action systems, and data refinement is used for transforming processes. The operator is shown to be compositional, i.e. monotonic with respect to refinement. It is argued that this operator is adequate for modelling priorities as found in programming languages and operating systems. Rules for introducing priorities and for raising and lowering priorities of processes are given. Dynamic priorities are modelled with special priority variables which can be freely mixed with other variables and the prioritising operator in program development. A number of applications show the use of prioritising composition for modelling and specification in general.
An ontological model of an information system An ontological model of an information system that provides precise definitions of fundamental concepts like system, subsystem, and coupling is proposed. This model is used to analyze some static and dynamic properties of an information system and to examine the question of what constitutes a good decomposition of an information system. Some of the major types of information system formalisms that bear on the authors' goals and their respective strengths and weaknesses relative to the model are briefly reviewed. Also articulated are some of the fundamental notions that underlie the model. Those basic notions are then used to examine the nature and some dynamics of system decomposition. The model's predictive power is discussed.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one a designer must show that conditions on both functional and temporal properties, and furthermore power related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.1
0.1
0.033333
0.000325
0
0
0
0
0
0
0
0
0
0
Software verification and validation in practice and theory (Position Statement) The techniques and tools that are available to the developer of practical software systems today are oriented towards the detection of errors in a test environment. That is, the software has been designed, coded, walked through, and tested by the designers, and is now to be subjected to outside verification and validation. Among the test tools that can be used to aid in the testing process are JAVS, FAVS, RXVP, PACE, and PET.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore power related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Applying synthesis principles to create responsive software systems The engineering of new software systems is a process of iterative refinement. Each refinement step involves understanding the problem, creating the proposed solution, describing or representing it, and assessing its viability. The assessment includes evaluating its correctness, its feasibility, and its preferability (when there are alternatives). Many factors affect preferability, such as maintainability, responsiveness, reliability, usability, etc. This discussion focuses on only one, the responsiveness of the software: that is, the response time or throughput as seen by the users. The understanding, creation, representation, and assessment steps are repeated until the proposed product of the refinement step "passes" the assessment.
Software Performance Engineering Performance is critical to the success of today's software systems. However, many software products fail to meet their performance objectives when they are initially constructed. Fixing these problems is costly and causes schedule delays, cost overruns, lost productivity, damaged customer relations, missed market windows, lost revenues, and a host of other difficulties. This chapter presents software performance engineering (SPE), a systematic, quantitative approach to constructing software systems that meet performance objectives. SPE begins early in the software development process to model the performance of the proposed architecture and high-level design. The models help to identify potential performance problems when they can be fixed quickly and economically.
Software CAD: a revolutionary approach A research project is described in which an experimental software CAD environment called the Carleton embedded system design environment (CAEDE), oriented toward embedded systems and Ada, was developed to provide a demonstration of the concept and to serve as a research testbed. The major contribution of CAEDE is a demonstration of a visual paradigm which combines semantic depth and syntactic shallowness, relative to Ada, in a manner that makes it possible for the embedded-system designer to work in terms of abstract machines while still thinking Ada. A secondary contribution is the identification of Prolog as a promising approach for supporting tool development in an environment which supports the visual paradigm. Also described are experimental tools for temporal analysis, performance analysis, and the generation of skeleton Ada code.
Automatic synthesis of SARA design models from system requirements In this research in design automation, two views are employed as the requirements of a system, namely, the functional requirements and the operations concept. A requirement analyst uses data flow diagrams and system verification diagrams (SVDs) to represent the functional requirements and the operations concept, respectively. System Architect's Apprentice (SARA) is an environment-supported method for designing hardware and software systems. A knowledge-based system, called the design assistant, was built to help the system designer to transform requirements stated in one particular collection of design languages. The SVD requirement specification features and the SARA design models are reviewed. The knowledge-based tool for synthesizing a particular domain of SARA design from the requirements is described, and an example is given to illustrate this synthesis process. This example shows the rules used and how they are applied. An evaluation of the approach is given.
Software Requirements Analysis for Real-Time Process-Control Systems A set of criteria is defined to help find errors in software requirements specifications. Only analysis criteria that examine the behavioral description of the computer are considered. The behavior of the software is described in terms of observable phenomena external to the software. Particular attention is focused on the properties of robustness and lack of ambiguity. The criteria are defined using an abstract state-machine model for generality. Using these criteria, analysis procedures can be defined for particular state-machine modeling languages to provide semantic analysis of real-time process-control software requirements.
Petri net-based object-oriented modelling of distributed systems This paper presents an object-oriented approach for building distributed systems. An example taken from the field of computer integrated manufacturing systems is taken as a guideline. According to this approach a system is built up through three steps: control and synchronization aspects for each class of objects are treated first using PROT nets, which are a high-level extension to Petri nets; then data are introduced specifying the internal states of the objects as well as the messages they send each other; finally the connections between the objects are introduced by means of a data flow diagram between classes. The implementation uses Ada as the target language, exploiting its tasking and structuring mechanisms. The flexibility of the approach and the possibility of using a knowledge-based user interface promote rapid prototyping and reusability.
The Refinement Paradigm: The Interaction of Coding and Efficiency Knowledge in Program Synthesis A refinement paradigm for implementing a high-level specification in a low-level target language is discussed. In this paradigm, coding and analysis knowledge work together to produce an efficient program in the target language. Since there are many possible implementations for a given specification of a program, searching knowledge is applied to increase the efficiency of the process of finding a good implementation. For example, analysis knowledge is applied to determine upper and lower cost bounds on alternate implementations, and these bounds are used to measure the potential impact of different design decisions and to decide which alternatives should be pursued. In this paper we also describe a particular implementation of this program synthesis paradigm, called PSI/SYN, that has automatically implemented a number of programs in the domain of symbolic processing.
SA-ER: A Methodology that Links Structured Analysis and Entity-Relationship Modeling for Database Design
Stepwise Refinement of Action Systems A method for the formal development of provably correct parallel algorithms by stepwise refinement is presented. The entire derivation procedure is carried out in the context of purely sequential programs. The resulting parallel algorithms can be efficiently executed on different architectures. The methodology is illustrated by showing the main derivation steps in a construction of a parallel algorithm for matrix multiplication.
A simple approach to specifying concurrent systems Over the past few years, I have developed an approach to the formal specification of concurrent systems that I now call the transition axiom method. The basic formalism has already been described in [12] and [1], but the formal details tend to obscure the important concepts. Here, I attempt to explain these concepts without discussing the details of the underlying formalism.Concurrent systems are not easy to specify. Even a simple system can be subtle, and it is often hard to find the appropriate abstractions that make it understandable. Specifying a complex system is a formidable engineering task. We can understand complex structures only if they are composed of simple parts, so a method for specifying complex systems must have a simple conceptual basis. I will try to demonstrate that the transition axiom method provides such a basis. However, I will not address the engineering problems associated with specifying real systems. Instead, the concepts will be illustrated with a series of toy examples that are not meant to be taken seriously as real specifications.Are you proposing a specification language? No. The transition axiom method provides a conceptual and logical foundation for writing formal specifications; it is not a specification language. The method determines what a specification must say; a language determines in detail how it is said.What do you mean by a formal specification? I find it helpful to view a specification as a contract between the user of a system and its implementer. The contract should tell the user everything he must know to use the system, and it should tell the implementer everything he must know about the system to implement it. In principle, once this contract has been agreed upon, the user and the implementer have no need for further communication. 
(This view describes the function of the specification; it is not meant as a paradigm for how systems should be built.)For a specification to be formal, the question of whether an implementation satisfies the specification must be reducible to the question of whether an assertion is provable in some mathematical system. To demonstrate that he has met the terms of the contract, the implementer should resort to logic rather than contract law. This does not mean that an implementation must be accompanied by a mathematical proof. It does mean that it should be possible, in principle though not necessarily in practice, to provide such a proof for a correct implementation. The existence of a formal basis for the specification method is the only way I know to guarantee that specifications are unambiguous.Ultimately, the systems we specify are physical objects, and mathematics cannot prove physical properties. We can prove properties only of a mathematical model of the system; whether or not the system correctly implements the model must remain a question of law and not of mathematics.Just what is a system? By "system," I mean anything that interacts with its environment in a discrete (digital) fashion across a well-defined boundary. An airline reservation system is such a system, where the boundary might be drawn between the agents using the system, who are part of the environment, and the terminals, which are part of the system. A Pascal procedure is a system whose environment is the rest of the program, with which it interacts by responding to procedure calls and accessing global variables. Thus, the system being specified may be just one component of a larger system.A real system has many properties, ranging from its response time to the color of the cabinet. No formal method can specify all of these properties. Which ones can be specified with the transition axiom method? 
The transition axiom method specifies the behavior of a system—that is, the sequence of observable actions it performs when interacting with the environment. More precisely, it specifies two classes of behavioral properties: safety and liveness properties. Safety properties assert what the system is allowed to do, or equivalently, what it may not do. Partial correctness is an example of a safety property, asserting that a program may not generate an incorrect answer. Liveness properties assert what the system must do. Termination is an example of a liveness property, asserting that a program must eventually generate an answer. (Alpern and Schneider [2] have formally defined these two classes of properties.) In the transition axiom method, safety and liveness properties are specified separately.There are important behavioral properties that cannot be specified by the transition axiom method; these include average response time and probability of failure. A transition axiom specification can provide a formal model with which to analyze such properties,1 but it cannot formally specify them.There are also important nonbehavioral properties of systems that one might want to specify, such as storage requirements and the color of the cabinet. These lie completely outside the realm of the method.Why specify safety and liveness properties separately? There is a single formalism that underlies a transition axiom specification, so there is no formal separation between the specification of safety and liveness properties. However, experience indicates that different methods are used to reason about the two kinds of properties and it is convenient in practice to separate them. I consider the ability to decompose a specification into liveness and safety properties to be one of the advantages of the method. 
(One must prove safety properties in order to verify liveness properties, but this is a process of decomposing the proof into smaller lemmas.)Can the method specify real-time behavior? Worst-case behavior can be specified, since the requirement that the system must respond within a certain length of time can be expressed as a safety property—namely, that the clock is not allowed to reach a certain value without the system having responded. Average response time cannot be expressed as a safety or liveness property.The transition axiom method can assert that some action either must occur (liveness) or must not occur (safety). Can it also assert that it is possible for the action to occur? No. A specification serves as a contractual constraint on the behavior of the system. An assertion that the system may or may not do something provides no constraint and therefore serves no function as part of the formal specification. Specification methods that include such assertions generally use them as poor substitutes for liveness properties. Some methods cannot specify that a certain input must result in a certain response, specifying instead that it is possible for the input to be followed by the response. Every specification I have encountered that used such assertions was improved by replacing the possibility assertions with liveness properties that more accurately expressed the system's informal requirements.Imprecise wording can make it appear that a specification contains a possibility assertion when it really doesn't. For example, one sometimes states that it must be possible for a transmission line to lose messages. However, the specification does not require that the loss of messages be possible, since this would prohibit an implementation that guaranteed no messages were lost. The specification might require that something happens (a liveness property) or doesn't happen (a safety property) despite the loss of messages. 
Or, the statement that messages may be lost might simply be a comment about the specification, observing that it does not require that all messages be delivered, and not part of the actual specification.If a safety property asserts that some action cannot happen, doesn't its negation assert that the action is possible? In a formal system, one must distinguish the logical formula A from the assertion ⊢ A, which means that A is provable in the logic; ⊢ A is not a formula of the logic itself. In the logic underlying the transition axiom method, if A represents a safety property asserting that some action is impossible, then the negation of A, which is the formula ¬A, asserts that the action must occur. The action's possibility is expressed by the negation of ⊢ A, which is a metaformula and not a formula within the logic. See [10] for more details.
Non-interference through determinism The standard approach to the specification of a secure system is to present a (usually state-based) abstract security model separately from the specification of the system's functional requirements, and to establish a correspondence between the two specifications. This complex treatment has resulted in development methods distinct from those usually advocated for general applications.We provide a novel and intellectually satisfying formulation of security properties in a process algebraic framework, and show that these are preserved under refinement. We relate the results to a more familiar state-based (Z) specification methodology. There are efficient algorithms for verifying our security properties using model checking.
Fuzzy logic as a basis for reusing task‐based specifications
LANSF: a protocol modelling environment and its implementation SUMMARY LANSF is a software package that was originally designed as a tool to investigate the behaviour of medium access control (MAC) level protocols. These protocols form an interesting class of distributed computations: timing of events is the key factor in them. The protocol definition language of LANSF is based on C, and protocols are specified (programmed) as collections of communicating, interrupt-driven processes. These specifications are executable: an event-driven emulator of MAC-level communication phenomena forms the foundation of the implementation. Some tools for debugging, testing, and validation of protocol specifications are provided. We present key features of LANSF at the syntactic level, comment informally on the semantics of these features, and highlight some implementation issues. A complete example of a LANSF application is discussed in the Appendix.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.2496
0.2496
0.0832
0.0624
0.009244
0.002933
0.000054
0.000002
0
0
0
0
0
0
IN service specification using the KANNEL language
On Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore power related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Visually lossless screen content coding using HEVC base-layer. This paper presents a novel two-layer coding framework targeting visually lossless compression of screen content video. The proposed framework employs the conventional HEVC standard for the base layer. For the enhancement layer, a hybrid spatial and temporal block-prediction mechanism is introduced to guarantee a small energy of the error residual. Spatial prediction is generally chosen for dynamic areas, while temporal prediction yields better results for static areas in a video frame. The prediction residual is quantized based on whether a given block is static or dynamic. Run-length coding, Golomb-based binarization and context-based arithmetic coding are employed to efficiently code the quantized residual and form the enhancement layer. Performance evaluations using 4:4:4 screen content sequences show that, for visually lossless video quality, the proposed system significantly saves bit-rate compared to the two-layer lossless HEVC framework.
On Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time-wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Identifying brain abnormalities from electroencephalogram using evolutionary gravitational neocognitron neural network Nowadays, brain abnormality is one of the most dangerous neurological disorders; it occurs because of birth defects, brain stroke, brain injuries, genetic mutation and brain tumor. This brain disorder creates continued melancholia, bipolar disorder, post-traumatic stress disorder (PTSD) and so on. Because of this serious impact, brain abnormalities need to be identified at the beginning stage to eliminate difficulties in humans' day-to-day life. Hence, an automatic brain abnormality prediction process is created by utilizing the electroencephalogram (EEG) to avoid future risk factors. Accordingly, this paper introduces the evolutionary gravitational neocognitron neural network (GNNN) for recognizing brain abnormalities in an effective manner; it is especially suited for humans in the war field. Initially, the EEG signal is collected from the patient; unwanted signal information is eliminated using multi-linear principal component analysis; various features are extracted from the pre-processed signal using the affine invariant component analysis method, and greedy globally optimized features are chosen. The chosen features are analyzed using a multi-layer virtual cortex model for predicting abnormal features. Finally, the brain abnormality prediction process is developed using the MATLAB tool, and its efficiency is examined using F-measure, Matthews correlation coefficient, error rate, sensitivity, specificity, and accuracy. In this way, the proposed framework effectively recognizes brain abnormality with a precision of up to 99.48% and a low error rate.
On Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time-wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
1
0
0
0
0
0
0
0
0
0
0
0
0
0
A novel method for solving the fully neutrosophic linear programming problems The most widely used technique for solving and optimizing real-life problems is linear programming (LP), due to its simplicity and efficiency. However, in order to handle imprecise data, neutrosophic set theory plays a vital role: it simulates the human decision-making process by considering all aspects of a decision (i.e., agree, not sure and disagree). Keeping these advantages, in the present work we introduce neutrosophic LP models whose parameters are represented by trapezoidal neutrosophic numbers, and we present a technique for solving them. The presented approach is illustrated with some numerical examples and shows its superiority over the state of the art by comparison. Finally, we conclude that the proposed approach is simpler, more efficient and more capable of solving LP models than other methods.
The structured design of cryptographically good s-boxes We describe a design procedure for the s-boxes of private key cryptosystems constructed as substitution-permutation networks (DES-like cryptosystems). Our procedure is proven to construct s-boxes which are bijective, are highly nonlinear, possess the strict avalanche criterion, and have output bits which act (virtually) independently when any single input bit is complemented. Furthermore, our procedure is very efficient: we have generated approximately 60 such 4 × 4 s-boxes in a few seconds of CPU time on a SUN workstation.
UniSpaCh: A text-based data hiding method using Unicode space characters This paper proposes a text-based data hiding method to insert external information into Microsoft Word document. First, the drawback of low embedding efficiency in the existing text-based data hiding methods is addressed, and a simple attack, DASH, is proposed to reveal the information inserted by the existing text-based data hiding methods. Then, a new data hiding method, UniSpaCh, is proposed to counter DASH. The characteristics of Unicode space characters with respect to embedding efficiency and DASH are analyzed, and the selected Unicode space characters are inserted into inter-sentence, inter-word, end-of-line and inter-paragraph spacings to encode external information while improving embedding efficiency and imperceptivity of the embedded information. UniSpaCh is also reversible where the embedded information can be removed to completely reconstruct the original Microsoft Word document. Experiments were carried out to verify the performance of UniSpaCh as well as comparing it to the existing space-manipulating data hiding methods. Results suggest that UniSpaCh offers higher embedding efficiency while exhibiting higher imperceptivity of white space manipulation when compared to the existing methods considered. In the best case scenario, UniSpaCh produces output document of size almost 9 times smaller than that of the existing method.
A hybrid whale optimization algorithm based on local search strategy for the permutation flow shop scheduling problem. The flow shop scheduling problem is one of the most important types of scheduling with a large number of real-world applications. In this paper, we propose a new algorithm that integrates the Whale Optimization Algorithm (WOA) with a local search strategy for tackling the permutation flow shop scheduling problem. The Largest Rank Value (LRV) rule enables the algorithm to deal with the discrete search space of the problem. The diversity of candidate schedules is improved using a swap mutation operation as well. In addition, the insert-reversed block operation is adopted to escape from local optima. The proposed hybrid whale algorithm (HWA) is incorporated with the Nawaz–Enscore–Ham (NEH) heuristic to improve its performance. It is observed that HWA gives competitive results compared to existing algorithms.
2-Levels of clustering strategy to detect and locate copy-move forgery in digital images Determining whether a digital image is authentic is a key purpose of image forensic science; it can be a sensitive task when images are used as necessary proof that influences a judgment. Although there are several different manipulation attacks, copy-move is considered one of the most common and immediate, in which a region is copied twice in order to give different information about the same scene, which can be considered an issue of information integrity. The detection of this kind of manipulation has recently been handled using methods based on SIFT. SIFT features are used to detect image features and determine matched points. Clustering is a key step that always follows SIFT matching, in order to group similar matched points into clusters. The ability of an image forensic tool lies in assessing the transformation applied between the two duplicated copies of one region and locating them correctly. Detecting copy-move forgery is not a new approach, but a new clustering approach is proposed here using a 2-level clustering strategy based on spatial and transformation domains; no prior information about the investigated image or the number of clusters to be created is necessary. Results from different datasets prove that the proposed method is able to identify the altered areas with high reliability and to deal with multiple cloning.
A historical perspective of speech recognition What do we know now that we did not know 40 years ago?
An incremental ant colony optimization based approach to task assignment to processors for multiprocessor scheduling. Optimized task scheduling is one of the most important challenges to achieve high performance in multiprocessor environments such as parallel and distributed systems. Most introduced task-scheduling algorithms are based on the so-called list scheduling technique. The basic idea behind list scheduling is to prepare a sequence of nodes in the form of a list for scheduling by assigning them some priority measurements, and then repeatedly removing the node with the highest priority from the list and allocating it to the processor providing the earliest start time (EST). Therefore, it can be inferred that the makespans obtained are dominated by two major factors: (1) which order of tasks should be selected (sequence subproblem); (2) how the selected order should be assigned to the processors (assignment subproblem). A number of good approaches for overcoming the task sequence dilemma have been proposed in the literature, while the task assignment problem has not been studied much. The results of this study prove that assigning tasks to the processors using the traditional EST method is not optimum; in addition, a novel approach based on the ant colony optimization algorithm is introduced, which can find far better solutions.
Robust and Imperceptible Dual Watermarking for Telemedicine Applications In this paper, the effects of different error correction codes on the robustness and imperceptibility of a discrete wavelet transform and singular value decomposition based dual watermarking scheme are investigated. Text and image watermarks are embedded into a cover radiological image for their potential application in secure and compact medical data transmission. Four different error correcting codes, namely Hamming, Bose–Ray-Chaudhuri–Hocquenghem (BCH), Reed–Solomon and a hybrid error correcting code (BCH and repetition code), are considered for encoding the text watermark in order to achieve additional robustness for sensitive text data such as the patient identification code. Performance of the proposed algorithm is evaluated against a number of signal processing attacks by varying the strength of watermarking and the cover image modalities. The experimental results demonstrate that this algorithm provides better robustness without affecting the quality of the watermarked image. This algorithm combines the advantages and removes the disadvantages of the two transform techniques. Of the three error correcting codes tested, it has been found that Reed–Solomon shows the best performance. Further, a hybrid model of two of the error correcting codes (BCH and repetition code) is concatenated and implemented, and it is found that the hybrid code achieves better results in terms of robustness. This paper provides a detailed analysis of the obtained experimental results.
Separation and information hiding We investigate proof rules for information hiding, using the recent formalism of separation logic. In essence, we use the separating conjunction to partition the internal resources of a module from those accessed by the module's clients. The use of a logical connective gives rise to a form of dynamic partitioning, where we track the transfer of ownership of portions of heap storage between program components. It also enables us to enforce separation in the presence of mutable data structures with embedded addresses that may be aliased.
Incorporating usability into requirements engineering tools The development of a computer system requires the definition of a precise set of properties or constraints that the system must satisfy with maximum economy and efficiency. This definition process requires a significant amount of communication between the requestor and the developer of the system. In recent years, several methodologies and tools have been proposed to improve this communication process. This paper establishes a framework for examining the methodologies and techniques, charting the progress made, and identifying opportunities to improve the communication capabilities of a requirements engineering tool.
Non-interference through determinism The standard approach to the specification of a secure system is to present a (usually state-based) abstract security model separately from the specification of the system's functional requirements, and establishing a correspondence between the two specifications. This complex treatment has resulted in development methods distinct from those usually advocated for general applications.We provide a novel and intellectually satisfying formulation of security properties in a process algebraic framework, and show that these are preserved under refinement. We relate the results to a more familiar state-based (Z) specification methodology. There are efficient algorithms for verifying our security properties using model checking.
Matching language and hardware for parallel computation in the Linda Machine The Linda Machine is a parallel computer that has been designed to support the Linda parallel programming environment in hardware. Programs in Linda communicate through a logically shared associative memory called tuple space. The goal of the Linda Machine project is to implement Linda's high-level shared-memory abstraction efficiently on a nonshared-memory architecture. The authors describe the machine's special-purpose communication network and its associated protocols, the design of the Linda coprocessor, and the way its interaction with the network supports global access to tuple space. The Linda Machine is in the process of fabrication. The authors discuss the machine's projected performance and compare this to software versions of Linda.
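A minimal, purely software sketch of the tuple-space abstraction that the Linda Machine implements in hardware might look like the following (a toy model: real Linda's rd/in block until a match arrives, whereas this sketch returns None, and the class and method names are ours):

```python
import threading

class TupleSpace:
    """Minimal in-memory sketch of Linda's tuple space.  out() deposits
    a tuple; rd() reads a matching tuple; in_() reads and removes one.
    None in a template acts as a wildcard field."""
    def __init__(self):
        self._tuples = []
        self._lock = threading.Lock()

    def out(self, tup):
        with self._lock:
            self._tuples.append(tup)

    def _match(self, template, tup):
        return len(template) == len(tup) and all(
            t is None or t == v for t, v in zip(template, tup))

    def rd(self, template):
        with self._lock:
            for tup in self._tuples:
                if self._match(template, tup):
                    return tup
        return None

    def in_(self, template):
        with self._lock:
            for i, tup in enumerate(self._tuples):
                if self._match(template, tup):
                    return self._tuples.pop(i)
        return None
```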
Refinement in Object-Z and CSP In this paper we explore the relationship between refinement in Object-Z and refinement in CSP. We prove with a simple counterexample that refinement within Object-Z, established using the standard simulation rules, does not imply failures-divergences refinement in CSP. This contradicts accepted results.Having established that data refinement in Object-Z and failures refinement in CSP are not equivalent we identify alternative refinement orderings that may be used to compare Object-Z classes and CSP processes. When reasoning about concurrent properties we need the strength of the failures-divergences refinement ordering and hence identify equivalent simulation rules for Object-Z. However, when reasoning about sequential properties it is sufficient to work within the simpler relational semantics of Object-Z. We discuss an alternative denotational semantics for CSP, the singleton failures semantic model, which has the same information content as the relational model of Object-Z.
Reversible data hiding by adaptive group modification on histogram of prediction errors. In this work, the conventional histogram shifting (HS) based reversible data hiding (RDH) methods are first analyzed and discussed. Then, a novel HS based RDH method is put forward by using the proposed Adaptive Group Modification (AGM) on the histogram of prediction errors. Specifically, in the proposed AGM method, multiple bins are vacated based on their magnitudes and frequencies of occurrences by employing an adaptive strategy. The design goals are to maximize hiding elements while minimizing shifting and modification elements to maintain image high quality by giving priority to the histogram bins utilized for hiding. Furthermore, instead of hiding only one bit at a time, the payload is decomposed into segments and each segment is hidden by modifying a triplet of prediction errors to suppress distortion. Experimental results show that the proposed AGM technique outperforms the current state-of-the-art HS based RDH methods. As a representative result, the proposed method achieves an improvement of 4.30 dB in terms of PSNR when 105,000 bits are hidden into the test Lenna image.
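The conventional single-peak histogram-shifting scheme that this work generalizes can be sketched as follows (a toy version on a flat pixel list, assuming a chosen peak bin and an empty zero bin with peak < zero; the paper's AGM method instead adaptively selects multiple bins and hides payload segments in triplets):

```python
def hs_embed(pixels, bits, peak, zero):
    """Toy histogram-shifting embed.  Pixels strictly between peak and
    zero are shifted up by 1 to vacate bin peak+1; each pixel equal to
    peak then carries one payload bit (0: stay at peak, 1: peak+1)."""
    assert peak < zero and all(p != zero for p in pixels)
    marked, it = [], iter(bits)
    for p in pixels:
        if peak < p < zero:
            marked.append(p + 1)                  # shifted, carries no bit
        elif p == peak:
            b = next(it, None)
            marked.append(p if b in (None, 0) else p + 1)
        else:
            marked.append(p)                      # outside [peak, zero]
    return marked

def hs_extract(pixels, peak, zero):
    """Recover the payload bits and the original pixels (reversibility)."""
    bits, orig = [], []
    for p in pixels:
        if p == peak:
            bits.append(0); orig.append(peak)
        elif p == peak + 1:                       # can only be a 1-bit carrier
            bits.append(1); orig.append(peak)
        elif peak + 1 < p <= zero:
            orig.append(p - 1)                    # undo the shift
        else:
            orig.append(p)
    return bits, orig
```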
Ponder: Realising Enterprise Viewpoint Concepts This paper introduces the Ponder language for specifying distributed object enterprise concepts. Ponder, is a declarative language, which permits the specification of policies in terms of obligations, permissions and prohibitions and provides the means for defining roles, relationships and their configurations in nested communities. Ponder provides a concrete representation of most of the concepts of the Enterprise Viewpoint. The design of the language incorporates lessons drawn from several years of research on policy for security and distributed systems management as well as policy conflict analysis. The various language constructs are presented through a scenario for the operation, administration and maintenance of a mobile telecommunication network.
ANTLR: a predicated-LL(k) parser generator In this paper, we introduce the ANTLR (ANother Tool for Language Recognition) parser generator, which addresses all these issues. ANTLR is a component of the Purdue Compiler Construction Tool Set (PCCTS)
Specifying privacy policies with P3P and EPAL: lessons learned As computing becomes more ubiquitous and Internet use continues to rise, it is increasingly important for organizations to construct accurate and effective privacy policies that document their information handling and usage practices. Most privacy policies are derived and specified in a somewhat ad-hoc manner, leading to policies that are of limited use to the consumers they are intended to serve. To make privacy policies more readable and enforceable, two privacy policy specification languages have emerged, P3P and EPAL. This paper discusses a case study in which the authors systematically formalized two real and complex, healthcare website privacy statements, and measured the results against well-known requirements engineering criteria.
Commitment analysis to operationalize software requirements from privacy policies Online privacy policies describe organizations’ privacy practices for collecting, storing, using, and protecting consumers’ personal information. Users need to understand these policies in order to know how their personal information is being collected, stored, used, and protected. Organizations need to ensure that the commitments they express in their privacy policies reflect their actual business practices, especially in the United States where the Federal Trade Commission regulates fair business practices. Requirements engineers need to understand the privacy policies to know the privacy practices with which the software must comply and to ensure that the commitments expressed in these privacy policies are incorporated into the software requirements. In this paper, we present a methodology for obtaining requirements from privacy policies based on our theory of commitments, privileges, and rights, which was developed through a grounded theory approach. This methodology was developed from a case study in which we derived software requirements from seventeen healthcare privacy policies. We found that legal-based approaches do not provide sufficient coverage of privacy requirements because privacy policies focus primarily on procedural practices rather than legal practices.
Legally \"reasonable\" security requirements: A 10-year FTC retrospective Growth in electronic commerce has enabled businesses to reduce costs and expand markets by deploying information technology through new and existing business practices. However, government laws and regulations require businesses to employ reasonable security measures to thwart risks associated with this technology. Because many security vulnerabilities are only discovered after attacker exploitation, regulators update their interpretation of reasonable security to stay current with emerging threats. With a focus on determining what businesses must do to comply with these changing interpretations of the law, we conducted an empirical, multi-case study to discover and measure the meaning and evolution of ''reasonable'' security by examining 19 regulatory enforcement actions by the U.S. Federal Trade Commission (FTC) over a 10 year period. The results reveal trends in FTC enforcement actions that are institutionalizing security knowledge as evidenced by 39 security requirements that mitigate 110 legal security vulnerabilities.
Analyzing Goal Semantics for Rights, Permissions, and Obligations Software requirements, rights, permissions, obligations, and operations of policy enforcing systems are often misaligned. Our goal is to develop tools and techniques that help requirements engineers and policy makers bring policies and system requirements into better alignment. Goals from requirements engineering are useful for distilling natural language policy statements into structured descriptions of these interactions; however, they are limited in that they are not easy to compare with one another despite sharing common semantic features. In this paper, we describe a process called semantic parameterization that we use to derive semantic models from goals mined from privacy policy documents. We present example semantic models that enable comparing policy statements and present a template method for generating natural language policy statements (and ultimately requirements) from unique semantic models. The semantic models are described by a context-free grammar called KTL that has been validated within the context of the most frequently expressed goals in over 100 Internet privacy policy documents. KTL is supported by a policy analysis tool that supports queries and policy statement generation.
Knowledge Visualization from Conceptual Structures This paper addresses the problem of automatically generating displays from conceptual graphs for visualization of the knowledge contained in them. Automatic display generation is important in validating the graphs and for communicating the knowledge they contain. Displays may be classified as literal, schematic, or pictorial, and also as static versus dynamic. At this time prototype software has been developed to generate static schematic displays of graphs representing knowledge of digital systems. The prototype software generates displays in two steps, by first joining basis displays associated with basis graphs from which the graph to be displayed is synthesized, and then assigning screen coordinates to the display elements. Other strategies for mapping conceptual graphs to schematic displays are also discussed. Keywords Visualization, Representation Mapping, Conceptual Graphs, Schematic Diagrams, Pictures
Abstraction-based software development A five-year experience with abstraction-based software-development techniques in the university environment indicates that the investment required to support the paradigm in practice is returned in terms of greater ability to control complexity in large projects—provided there exists a set of software tools sufficient to support the approach.
Specification-based prototyping for embedded systems Specification of software for safety critical, embedded computer systems has been widely addressed in literature. To achieve the high level of confidence in a specification's correctness necessary in many applications, manual inspections, formal verification, and simulation must be used in concert. Researchers have successfully addressed issues in inspection and verification; however, results in the areas of execution and simulation of specifications have not made as large an impact as desired.In this paper we present an approach to specification-based prototyping which addresses this issue. It combines the advantages of rigorous formal specifications and rapid systems prototyping. The approach lets us refine a formal executable model of the system requirements to a detailed model of the software requirements. Throughout this refinement process, the specification is used as a prototype of the proposed software. Thus, we guarantee that the formal specification of the system is always consistent with the observed behavior of the prototype. The approach is supported with the NIMBUS environment, a framework that allows the formal specification to execute while interacting with software models of its embedding environment or even the physical environment itself (hardware-in-the-loop simulation).
A review of the state of the practice in requirements modeling We have conducted a field study of ten organizations to find out what they currently do to define, interpret, analyze and use the requirements for their software systems and products. By "field study" we refer to a series of in-depth, structured interviews with practitioners of various kinds. In this paper we summarize the findings of this field study and explain the implications for improving practice either by organizational and methodological interventions or by introducing new technology.
A generalization of Dijkstra's calculus Dijkstra's calculus of guarded commands can be generalized and simplified by dropping the law of the excluded miracle. This paper gives a self-contained account of the generalized calculus from first principles through the semantics of recursion. The treatment of recursion uses the fixpoint method from denotational semantics. The paper relies only on the algebraic properties of predicates; individual states are not mentioned (except for motivation). To achieve this, we apply the correspondence between programs and predicates that underlies predicative programming. The paper is written from the axiomatic semantic point of view, but its contents can be described from the denotational semantic point of view roughly as follows: The Plotkin-Apt correspondence between wp semantics and the Smyth powerdomain is extended to a correspondence between the full wp/wlp semantics and the Plotkin powerdomain extended with the empty set.
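For reference, the familiar weakest-precondition rules that the generalized calculus starts from can be summarized as follows (a standard textbook statement, not a reproduction of the paper's development):

```latex
% Basic predicate-transformer rules (skip, assignment, composition):
\begin{align*}
  wp(\mathit{skip},\, Q) &= Q \\
  wp(x := E,\, Q)        &= Q[x := E] \\
  wp(S_1;\, S_2,\, Q)    &= wp(S_1,\, wp(S_2,\, Q))
\end{align*}
% Dijkstra's law of the excluded miracle demands wp(S, \mathit{false}) = \mathit{false};
% dropping it admits "miraculous" statements and simplifies the algebra,
% with recursion handled as a fixpoint of the associated transformer.
```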
Robust contour decomposition using a constant curvature criterion The problem of decomposing an extended boundary or contour into simple primitives is addressed with particular emphasis on Laplacian-of-Gaussian zero-crossing contours. A technique is introduced for partitioning such contours into constant curvature segments. A nonlinear 'blip' filter matched to the impairment signature of the curvature computation process, an overlapped voting scheme, and a sequential contiguous segment extraction mechanism are used. This technique is insensitive to reasonable changes in algorithm parameters and robust to noise and minor viewpoint-induced distortions in the contour shape, such as those encountered between stereo image pairs. The results vary smoothly with the data, and local perturbations induce only local changes in the result. Robustness and insensitivity are experimentally verified.
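The core step, estimating a discrete curvature along the contour and grouping it into constant-curvature runs, can be sketched as follows (turning angles on a polyline with a greedy split; the paper's pipeline additionally applies the matched "blip" filter, overlapped voting, and sequential contiguous extraction):

```python
import math

def turning_angles(points):
    """Signed turning angle at each interior vertex of a polyline,
    a discrete stand-in for curvature."""
    angles = []
    for (x0, y0), (x1, y1), (x2, y2) in zip(points, points[1:], points[2:]):
        a1 = math.atan2(y1 - y0, x1 - x0)
        a2 = math.atan2(y2 - y1, x2 - x1)
        d = a2 - a1
        while d <= -math.pi:  # wrap into (-pi, pi]
            d += 2 * math.pi
        while d > math.pi:
            d -= 2 * math.pi
        angles.append(d)
    return angles

def constant_curvature_segments(angles, tol=1e-6):
    """Greedily split the angle sequence into maximal runs whose
    turning angle stays within tol of the run's first value."""
    segments, start = [], 0
    for i in range(1, len(angles)):
        if abs(angles[i] - angles[start]) > tol:
            segments.append((start, i))
            start = i
    segments.append((start, len(angles)))
    return segments
```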
Nonrepetitive Colorings of Graphs—A Survey
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
Further results on passivity analysis for uncertain neural networks with discrete and distributed delays. The problem of passivity analysis of uncertain neural networks (UNNs) with discrete and distributed delays is considered. By constructing a suitable augmented Lyapunov-Krasovskii functional (LKF) and combining a novel integral inequality with a convex approach to estimate the derivative of the proposed LKF, improved sufficient conditions to guarantee passivity of the concerned neural networks are established within the framework of linear matrix inequalities (LMIs), which can be solved easily by various efficient convex optimization algorithms. Two numerical examples are provided to demonstrate the enlargement of the feasible region of the proposed criteria by comparison of maximum allowable delay bounds.
Dissipativity analysis for neural networks with two-delay components using an extended reciprocally convex matrix inequality. This paper focuses on the problem of strictly (Q,S,R)-γ-dissipativity analysis for neural networks with two-delay components. Based on the dynamic delay interval method, a Lyapunov–Krasovskii functional is constructed. By solving its self-positive definite and derivative negative definite conditions via an extended reciprocally convex matrix inequality, several new sufficient conditions that guarantee the neural networks strictly (Q,S,R)-γ-dissipative are derived. Furthermore, the dissipativity analysis of neural networks with two-delay components is extended to the stability analysis. Finally, two numerical examples are employed to illustrate the advantages of the proposed method.
A generalized free-matrix-based integral inequality for stability analysis of time-varying delay systems. This paper focuses on the delay-dependent stability problem of time-varying delay systems. A generalized free-matrix-based integral inequality (GFMBII) is presented. This inequality is able to deal with time-varying delay systems without using the reciprocal convexity lemma. It overcomes the drawback that the Bessel–Legendre inequality is inconvenient to cope with a time-varying delay system as the resultant bound contains a reciprocal convexity. Through the use of the derived inequality and by constructing a suitable Lyapunov–Krasovskii function (LKF), improved stability criteria are presented in the form of linear matrix inequalities (LMIs). Two numerical examples are carried out to demonstrate that the results outperform the state of the art in the literature.
An overview of recent developments in Lyapunov-Krasovskii functionals and stability criteria for recurrent neural networks with time-varying delays. Global asymptotic stability is an important issue for wide applications of recurrent neural networks with time-varying delays. The Lyapunov–Krasovskii functional method is a powerful tool to check the global asymptotic stability of a delayed recurrent neural network. When the Lyapunov–Krasovskii functional method is employed, three steps are necessary in order to derive a global asymptotic stability criterion: (i) constructing a Lyapunov–Krasovskii functional, (ii) estimating the derivative of the Lyapunov–Krasovskii functional, and (iii) formulating a global asymptotic stability criterion. This paper provides an overview of recent developments in each step with insightful understanding. In the first step, some existing Lyapunov–Krasovskii functionals for stability of delayed recurrent neural networks are anatomized. In the second step, a free-weighting matrix approach, an integral inequality approach and its recent developments, reciprocally convex inequalities and S-procedure are analyzed in detail. In the third step, linear convex and quadratic convex approaches, together with the refinement of allowable delay sets are reviewed. Finally, some challenging issues are presented to guide the future research.
Stability Analysis of Neural Networks With Time-Varying Delay by Constructing Novel Lyapunov Functionals. This paper presents two novel Lyapunov functionals for analyzing the stability of neural networks with time-varying delay. Based on our newly proposed Lyapunov functionals and a relaxed Wirtinger-based integral inequality, new stability criteria are derived in the form of linear matrix inequalities. A comprehensive comparison of results is given to illustrate the newly proposed stability criteria ...
An improved reciprocally convex inequality and an augmented Lyapunov-Krasovskii functional for stability of linear systems with time-varying delay. This paper is concerned with stability of a linear system with a time-varying delay. First, an improved reciprocally convex inequality including some existing ones as its special cases is derived. Compared with an extended reciprocally convex inequality recently reported, the improved reciprocally convex inequality can provide a maximum lower bound with less slack matrix variables for some reciprocally convex combinations. Second, an augmented Lyapunov–Krasovskii functional is tailored for the use of a second-order Bessel–Legendre inequality. Third, a stability criterion is derived by employing the proposed reciprocally convex inequality and the augmented Lyapunov–Krasovskii functional. Finally, two well-studied numerical examples are given to show that the obtained stability criterion can produce a larger upper bound of the time-varying delay than some existing criteria.
Comparison of bounding methods for stability analysis of systems with time-varying delays. Integral inequalities for quadratic functions play an important role in the derivation of delay-dependent stability criteria for linear time-delay systems. Based on the Jensen inequality, a reciprocally convex combination approach was introduced by Park et al. (2011) for deriving delay-dependent stability criteria, which achieves the same upper bound of the time-varying delay as the one obtained using Moon et al.'s inequality. Recently, a new inequality called the Wirtinger-based inequality, which encompasses the Jensen inequality, was proposed by Seuret and Gouaisbaut (2013) for the stability analysis of time-delay systems. In this paper, it is revealed that the reciprocally convex combination approach is effective only with the use of the Jensen inequality. When the Jensen inequality is replaced by the Wirtinger-based inequality, Moon et al.'s inequality together with convex analysis can lead to less conservative stability conditions than the reciprocally convex combination inequality. Moreover, we prove that the feasibility of an LMI condition derived by Moon et al.'s inequality and convex analysis implies the feasibility of an LMI condition induced by the reciprocally convex combination inequality. Finally, the efficiency of the methods is demonstrated by some numerical examples, even though the corresponding system with zero delay as well as the system without the delayed term is not stable.
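The two integral bounds being compared here can be stated side by side; for a delay h > 0 and a matrix R > 0:

```latex
% Jensen inequality:
\[
-\int_{t-h}^{t} \dot{x}^{T}(s)\, R\, \dot{x}(s)\, ds
  \le -\frac{1}{h}\, \Omega_{1}^{T} R\, \Omega_{1},
\qquad \Omega_{1} = x(t) - x(t-h).
\]
% Wirtinger-based inequality (Seuret and Gouaisbaut, 2013), which
% encompasses Jensen's bound by adding a second, nonpositive term:
\[
-\int_{t-h}^{t} \dot{x}^{T}(s)\, R\, \dot{x}(s)\, ds
  \le -\frac{1}{h}\, \Omega_{1}^{T} R\, \Omega_{1}
      -\frac{3}{h}\, \Omega_{2}^{T} R\, \Omega_{2},
\qquad \Omega_{2} = x(t) + x(t-h) - \frac{2}{h}\int_{t-h}^{t} x(s)\, ds.
\]
```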
Improved delay-range-dependent stability criteria for linear systems with time-varying delays This paper is concerned with the stability analysis of linear systems with time-varying delays in a given range. A new type of augmented Lyapunov functional is proposed which contains some triple-integral terms. In the proposed Lyapunov functional, the information on the lower bound of the delay is fully exploited. Some new stability criteria are derived in terms of linear matrix inequalities without introducing any free-weighting matrices. Numerical examples are given to illustrate the effectiveness of the proposed method.
State-Based Model Checking of Event-Driven System Requirements It is demonstrated how model checking can be used to verify safety properties for event-driven systems. SCR tabular requirements describe required system behavior in a format that is intuitive, easy to read, and scalable to large systems (e.g. the software requirements for the A-7 military aircraft). Model checking of temporal logics has been established as a sound technique for verifying properties of hardware systems. An automated technique for formalizing the semiformal SCR requirements and for transforming the resultant formal specification onto a finite structure that a model checker can analyze has been developed. This technique was effective in uncovering violations of system invariants in both an automobile cruise control system and a water-level monitoring system.
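At its core, checking a safety invariant over a finite transition system is a reachability search. The following is only a sketch of that underlying idea, with a hypothetical water-level example; the work described above translates SCR tabular requirements into input for a full temporal-logic model checker:

```python
from collections import deque

def check_invariant(initial, transitions, invariant):
    """Explicit-state reachability check: explore every state reachable
    from `initial` via `transitions` and return the first state found
    that violates `invariant`, or None if the invariant holds everywhere."""
    seen, frontier = {initial}, deque([initial])
    while frontier:
        state = frontier.popleft()
        if not invariant(state):
            return state          # counterexample state
        for nxt in transitions(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return None

# Hypothetical water-level monitor: the level drifts up and down one
# unit at a time, and the modeled pump caps it below 8.
def steps(level):
    if level < 8:
        yield level + 1
    if level > 0:
        yield level - 1
```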
Proving acceptability properties of relaxed nondeterministic approximate programs Approximate program transformations such as skipping tasks [29, 30], loop perforation [21, 22, 35], reduction sampling [38], multiple selectable implementations [3, 4, 16, 38], dynamic knobs [16], synchronization elimination [20, 32], approximate function memoization [11],and approximate data types [34] produce programs that can execute at a variety of points in an underlying performance versus accuracy tradeoff space. These transformed programs have the ability to trade accuracy of their results for increased performance by dynamically and nondeterministically modifying variables that control their execution. We call such transformed programs relaxed programs because they have been extended with additional nondeterminism to relax their semantics and enable greater flexibility in their execution. We present language constructs for developing and specifying relaxed programs. We also present proof rules for reasoning about properties [28] which the program must satisfy to be acceptable. Our proof rules work with two kinds of acceptability properties: acceptability properties [28], which characterize desired relationships between the values of variables in the original and relaxed programs, and unary acceptability properties, which involve values only from a single (original or relaxed) program. The proof rules support a staged reasoning approach in which the majority of the reasoning effort works with the original program. Exploiting the common structure that the original and relaxed programs share, relational reasoning transfers reasoning effort from the original program to prove properties of the relaxed program. We have formalized the dynamic semantics of our target programming language and the proof rules in Coq and verified that the proof rules are sound with respect to the dynamic semantics. 
Our Coq implementation enables developers to obtain fully machine-checked verifications of their relaxed programs.
Computer-Aided Computing Formal program design methods are most useful when supported with suitable mechanization. This need for mechanization has long been apparent, but there have been doubts whether verification technology could cope with the problems of scale and complexity. Though there is very little compelling evidence either way at this point, several powerful mechanical verification systems are now available for experimentation. Using SRI's PVS as one representative example, we argue that the technology of...
Operational specification languages The “operational approach” to software development is based on separation of problem-oriented and implementation-oriented concerns, and features executable specifications and transformational implementation. “Operational specification languages” are executable specification languages designed to fit the goals, assumptions, and strategies of the operational approach. This paper defines the operational approach and surveys the existing operational specification languages, viz., the graphic notation of the Jackson System Development method, PAISLey, Gist, and modern applicative languages.
The Architecture of Lisp Machines First Page of the Article
Delay-partitioning approach to stability analysis of generalized neural networks with time-varying delay via new integral inequality. On the basis of establishing a new integral inequality composed of a set of adjustable slack matrix variables, this paper mainly focuses on further improved stability criteria for a class of generalized neural networks (GNNs) with time-varying delay by a delay-partitioning approach. A newly augmented Lyapunov–Krasovskii functional (LKF) containing triple-integral terms is constructed by decomposing the integral interval. The new integral inequality, together with Peng–Park's integral inequality and the free-matrix-based integral inequality (FMII), is adopted to effectively reduce the enlargement in bounding the derivative of the LKF. Therefore, less conservative results can be expected in terms of LMIs. Finally, two numerical examples are included to show that the proposed method is less conservative than existing ones.
A stepwise refinement heuristic for protocol construction A stepwise refinement heuristic to construct distributed systems is presented. The heuristic is based on a conditional refinement relation between system specifications, and a “Marking”. It is applied to construct four sliding window protocols that provide reliable data transfer over unreliable communication channels. The protocols use modulo-N sequence numbers. The first protocol is for channels that can only lose messages in transit. By refining this protocol, we obtain three protocols for channels that can lose, reorder, and duplicate messages in transit. The protocols herein are less restrictive and easier to implement than sliding window protocols previously studied in the protocol verification literature.
A relational notation for state transition systems A relational notation for specifying state transition systems is presented. Several refinement relations between specifications are defined. To illustrate the concepts and methods, three specifications of the alternating-bit protocol are given. The theory is applied to explain auxiliary variables. Other applications of the theory to protocol verification, composition, and conversion are discussed. The approach is compared with previously published approaches.
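The alternating-bit protocol used as the running example above can be exercised with a small deterministic simulation (a sketch with a scripted loss pattern, not the paper's relational specification; names and the loss model are ours):

```python
def alternating_bit(data, drop_script):
    """Simulate the alternating-bit protocol over a lossy channel.
    drop_script is a sequence of booleans consumed one per channel
    crossing (data frame or ack); True means that crossing is lost.
    Once the script is exhausted the channel is reliable.
    Returns the items delivered to the receiver, in order, once each."""
    drops = iter(drop_script)
    lost = lambda: next(drops, False)
    delivered, sender_bit, expected = [], 0, 0
    for item in data:
        while True:                      # retransmit until acknowledged
            if lost():                   # data frame lost in transit
                continue
            if expected == sender_bit:   # fresh frame: deliver exactly once
                delivered.append(item)
                expected ^= 1
            ack = sender_bit             # receiver acks the received bit
            if lost():                   # ack lost; sender retransmits
                continue
            if ack == sender_bit:        # matching ack: advance the bit
                sender_bit ^= 1
                break
    return delivered
```

Retransmissions after a lost ack are recognized by the receiver's expected bit, so no item is delivered twice.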
Time-Dependent Distributed Systems: Proving Safety, Liveness and Real-Time Properties Most communication protocol systems utilize timers to implement real-time constraints between event occurrences. Such systems are said to betime-dependent if the real-time constraints are crucial to their correct operation. We present a model for specifying and verifying time-dependent distributed systems. We consider networks of processes that communicate with one another by message-passing. Each process has a set of state variables and a set of events. An event is described by a predicate that relates the values of the network's state variables immediately before to their values immediately after the event occurrence. The predicate embodies specifications of both the event's enabling condition and action. Inference rules for both safety and liveness properties are presented. Real-time progress properties can be verified as safety properties. We illustrate with three sliding window data transfer protocols that use modulo-2 sequence numbers. The first protocol operates over channels that only lose messages. It is a time-independent protocol. The second and third protocols operate over channels that lose, reorder, and duplicate messages. For their correct operation, it is necessary that messages in the channels have bounded lifetimes. They are time-dependent protocols.
A simple approach to specifying concurrent systems Over the past few years, I have developed an approach to the formal specification of concurrent systems that I now call the transition axiom method. The basic formalism has already been described in [12] and [1], but the formal details tend to obscure the important concepts. Here, I attempt to explain these concepts without discussing the details of the underlying formalism.Concurrent systems are not easy to specify. Even a simple system can be subtle, and it is often hard to find the appropriate abstractions that make it understandable. Specifying a complex system is a formidable engineering task. We can understand complex structures only if they are composed of simple parts, so a method for specifying complex systems must have a simple conceptual basis. I will try to demonstrate that the transition axiom method provides such a basis. However, I will not address the engineering problems associated with specifying real systems. Instead, the concepts will be illustrated with a series of toy examples that are not meant to be taken seriously as real specifications.Are you proposing a specification language? No. The transition axiom method provides a conceptual and logical foundation for writing formal specifications; it is not a specification language. The method determines what a specification must say; a language determines in detail how it is said.What do you mean by a formal specification? I find it helpful to view a specification as a contract between the user of a system and its implementer. The contract should tell the user everything he must know to use the system, and it should tell the implementer everything he must know about the system to implement it. In principle, once this contract has been agreed upon, the user and the implementer have no need for further communication. 
(This view describes the function of the specification; it is not meant as a paradigm for how systems should be built.)For a specification to be formal, the question of whether an implementation satisfies the specification must be reducible to the question of whether an assertion is provable in some mathematical system. To demonstrate that he has met the terms of the contract, the implementer should resort to logic rather than contract law. This does not mean that an implementation must be accompanied by a mathematical proof. It does mean that it should be possible, in principle though not necessarily in practice, to provide such a proof for a correct implementation. The existence of a formal basis for the specification method is the only way I know to guarantee that specifications are unambiguous.Ultimately, the systems we specify are physical objects, and mathematics cannot prove physical properties. We can prove properties only of a mathematical model of the system; whether or not the system correctly implements the model must remain a question of law and not of mathematics.Just what is a system? By "system," I mean anything that interacts with its environment in a discrete (digital) fashion across a well-defined boundary. An airline reservation system is such a system, where the boundary might be drawn between the agents using the system, who are part of the environment, and the terminals, which are part of the system. A Pascal procedure is a system whose environment is the rest of the program, with which it interacts by responding to procedure calls and accessing global variables. Thus, the system being specified may be just one component of a larger system.A real system has many properties, ranging from its response time to the color of the cabinet. No formal method can specify all of these properties. Which ones can be specified with the transition axiom method? 
The transition axiom method specifies the behavior of a system—that is, the sequence of observable actions it performs when interacting with the environment. More precisely, it specifies two classes of behavioral properties: safety and liveness properties. Safety properties assert what the system is allowed to do, or equivalently, what it may not do. Partial correctness is an example of a safety property, asserting that a program may not generate an incorrect answer. Liveness properties assert what the system must do. Termination is an example of a liveness property, asserting that a program must eventually generate an answer. (Alpern and Schneider [2] have formally defined these two classes of properties.) In the transition axiom method, safety and liveness properties are specified separately.

There are important behavioral properties that cannot be specified by the transition axiom method; these include average response time and probability of failure. A transition axiom specification can provide a formal model with which to analyze such properties, but it cannot formally specify them.

There are also important nonbehavioral properties of systems that one might want to specify, such as storage requirements and the color of the cabinet. These lie completely outside the realm of the method.

Why specify safety and liveness properties separately? There is a single formalism that underlies a transition axiom specification, so there is no formal separation between the specification of safety and liveness properties. However, experience indicates that different methods are used to reason about the two kinds of properties, and it is convenient in practice to separate them. I consider the ability to decompose a specification into liveness and safety properties to be one of the advantages of the method.
(One must prove safety properties in order to verify liveness properties, but this is a process of decomposing the proof into smaller lemmas.)

Can the method specify real-time behavior? Worst-case behavior can be specified, since the requirement that the system must respond within a certain length of time can be expressed as a safety property—namely, that the clock is not allowed to reach a certain value without the system having responded. Average response time cannot be expressed as a safety or liveness property.

The transition axiom method can assert that some action either must occur (liveness) or must not occur (safety). Can it also assert that it is possible for the action to occur? No. A specification serves as a contractual constraint on the behavior of the system. An assertion that the system may or may not do something provides no constraint and therefore serves no function as part of the formal specification. Specification methods that include such assertions generally use them as poor substitutes for liveness properties. Some methods cannot specify that a certain input must result in a certain response, specifying instead that it is possible for the input to be followed by the response. Every specification I have encountered that used such assertions was improved by replacing the possibility assertions with liveness properties that more accurately expressed the system's informal requirements.

Imprecise wording can make it appear that a specification contains a possibility assertion when it really doesn't. For example, one sometimes states that it must be possible for a transmission line to lose messages. However, the specification does not require that the loss of messages be possible, since this would prohibit an implementation that guaranteed no messages were lost. The specification might require that something happens (a liveness property) or doesn't happen (a safety property) despite the loss of messages.
Or, the statement that messages may be lost might simply be a comment about the specification, observing that it does not require that all messages be delivered, and not part of the actual specification.

If a safety property asserts that some action cannot happen, doesn't its negation assert that the action is possible? In a formal system, one must distinguish the logical formula A from the assertion ⊢ A, which means that A is provable in the logic; ⊢ A is not a formula of the logic itself. In the logic underlying the transition axiom method, if A represents a safety property asserting that some action is impossible, then the negation of A, which is the formula ¬A, asserts that the action must occur. The action's possibility is expressed by the negation of ⊢ A, which is a metaformula and not a formula within the logic. See [10] for more details.
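The safety/liveness distinction drawn in this abstract is commonly sketched in linear temporal logic (a standard formalization, not the paper's own notation; `error`, `request`, and `respond` are hypothetical atomic propositions):

```latex
% Safety: nothing bad ever happens, e.g. no incorrect answer is produced.
\text{Safety:}\quad \Box\,\lnot\mathit{error}
% Liveness: something good eventually happens, e.g. every request is answered.
\text{Liveness:}\quad \Box\,(\mathit{request} \rightarrow \Diamond\,\mathit{respond})
```

A safety property can be refuted by a finite prefix of a behavior; a liveness property never can, which is one reason the two are reasoned about differently.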
The existence of refinement mappings Refinement mappings are used to prove that a lower-level specification correctly implements a higher-level one. The authors consider specifications consisting of a state machine (which may be infinite-state) that specifies safety requirements and an arbitrary supplementary property that specifies liveness requirements. A refinement mapping from a lower-level specification S₁ to a higher-level one S₂ is a mapping from S₁'s state space to S₂'s state space that maps steps of S₁'s state machine to steps of S₂'s state machine and maps behaviors allowed by S₁ to behaviors allowed by S₂. It is shown that under reasonable assumptions about the specifications, if S₁ implements S₂, then by adding auxiliary variables to S₁ one can guarantee the existence of a refinement mapping. This provides a completeness result for a practical hierarchical specification method.
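The mapping idea can be seen concretely on a toy pair of state machines (an illustration only, not the authors' formalism): an hour-minute clock refines an hour clock, and the refinement mapping simply projects away the minute. All names below are invented for the sketch.

```python
# S1 is an hour-minute clock, S2 an hour clock. The refinement mapping
# projects S1's state onto S2's state space; every S1 step must then map
# to an S2 step or to a stutter (no visible change).

def s1_next(state):
    """Lower-level machine: an (hour, minute) clock ticking by minutes."""
    hr, mn = state
    return (hr, mn + 1) if mn < 59 else ((hr + 1) % 24, 0)

def s2_step_ok(h1, h2):
    """Higher-level machine: an hour clock; a step either stutters or ticks."""
    return h2 == h1 or h2 == (h1 + 1) % 24

def refinement(state):
    """The refinement mapping: forget the minute."""
    return state[0]

# Check: every reachable S1 step maps to a legal S2 step (or a stutter).
state = (0, 0)
for _ in range(24 * 60):
    nxt = s1_next(state)
    assert s2_step_ok(refinement(state), refinement(nxt))
    state = nxt
```

The paper's completeness result concerns the converse situation: when no such mapping exists directly, auxiliary (history/prophecy) variables can be added to S₁ so that one does.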
Refinement, Decomposition, and Instantiation of Discrete Models: Application to Event-B We argue that formal modeling should be the starting point for any serious development of computer systems. This claim poses a challenge for modeling: it must first cope with the constraints and scale of serious developments; only then is it a suitable starting point. We present three techniques, refinement, decomposition, and instantiation, that we consider indispensable for modeling large and complex systems. The vehicle of our presentation is Event-B, but the techniques themselves do not depend on it.
Introducing Dynamic Constraints in B In B, the expression of dynamic constraints is notoriously missing. In this paper, we make various proposals for introducing them. They all express, in different complementary ways, how a system is allowed to evolve. Such descriptions are independent of the proposed evolutions of the system, which are defined, as usual, by means of a number of operations. Some proof obligations are thus proposed in order to reconcile the two points of view. We have been very careful to ensure that these proposals are compatible with refinement. They are illustrated by several little examples, and a larger one. In a series of small appendices, we also give some theoretical foundations to our approach. In writing this paper, we have been heavily influenced by the pioneering works of Z. Manna and A. Pnueli [11], L. Lamport [10], R. Back [5] and M. Butler [6].
Superposed Automata Nets
Integrated Formal Methods, Third International Conference, IFM 2002, Turku, Finland, May 15-18, 2002, Proceedings
Higher Order Software A Methodology for Defining Software The key to software reliability is to design, develop, and manage software with a formalized methodology which can be used by computer scientists and applications engineers to describe and communicate interfaces between systems. These interfaces include: software to software; software to other systems; software to management; as well as discipline to discipline within the complete software development process. The formal methodology of Higher Order Software (HOS), specifically aimed toward large-scale multiprogrammed/multiprocessor systems, is dedicated to systems reliability. With six axioms as the basis, a given system and all of its interfaces are defined as if they were one complete and consistent computable system. Some of the derived theorems provide for: reconfiguration of real-time multiprogrammed processes, communication between functions, and prevention of data and timing conflicts.
Inquiry-Based Requirements Analysis This approach emphasizes pinpointing where and when information needs occur; at its core is the inquiry cycle model, a structure for describing and supporting discussions about system requirements. The authors use a case study to describe the model's conversation metaphor, which follows analysis activities from requirements elicitation and documentation through refinement.
Optical Character Recognition - a Survey.
Conceptual Structures: Knowledge Visualization and Reasoning, 16th International Conference on Conceptual Structures, ICCS 2008, Toulouse, France, July 7-11, 2008, Proceedings
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
Problem-solution mapping in object-oriented design Six expert Smalltalk programmers and three expert procedural programmers were observed as they worked on a gourmet shopping design problem; they were asked to think aloud about what was going through their minds as they worked. These verbal protocols were recorded and examined for ways in which the programmers' understanding of the problem domain affected the design process; most of our examples are from the three Smalltalk programmers who focussed most on the mapping from problem to solution. We characterize the problem entities that did appear as solution objects, the active nature of the mapping process, and ways in which the resultant objects went beyond their problem analogs.
Prototyping interactive information systems Applying prototype-oriented development processes to computerized application systems significantly improves the likelihood that useful systems will be developed and that the overall development cycle will be shortened. The prototype development methodology and development tool presented here have been widely applied to the development of interactive information systems in the commercial data processing setting. The effectiveness and relationship to other applications is discussed.
Requirements specification: learning object, process, and data methodologies
The cognitive consequences of object-oriented design The most valuable tools or methodologies supporting the design of interactive systems are those that simultaneously ease the process of design and improve the usability of the resulting system. We consider the potential of the object-oriented paradigm in providing this dual function. After briefly reviewing what is known about the design process and some important characteristics of object-oriented programming and design, we speculate on the possible cognitive consequences of this paradigm for problem understanding, problem decomposition, and design result. We conclude with research issues raised by our analysis.
Frameworks for interactive, extensible, information-intensive applications We describe a set of application frameworks designed especially to support information-intensive applications in complex domains, where the visual organization of an application's information is critical. Our frameworks, called visual formalisms, provide the semantic structures and editing operations, as well as the visual layout algorithms, needed to create a complete application. Examples of visual formalisms include tables, panels, graphs, and outlines. They are designed to be extended both by programmers, through subclassing, and by end users, through an integrated extension language.
Extending the Entity-Relationship Approach for Dynamic Modeling Purposes
Software design using SADT. SADT™ (Structured Analysis and Design Technique) is a graphical language for describing systems. In this paper we indicate the role of SADT in software design. The graphical language provides a powerful design vocabulary in which a designer can concisely and unambiguously express his design. SADT is compatible with widely used contemporary design methodologies, including Structured Design and the Jackson method.
Data Flow Structures for System Specification and Implementation Data flow representations are used increasingly as a formal modeling tool for the specification of systems. While we often think of systems in this form, developers have been reluctant to implement data flow constructs directly because languages and operating systems have traditionally not encouraged (or supported) such an approach. This paper describes the use of data flow structures for system analysis and a set of facilities that make direct implementation of data flow convenient and natural.
Design problem solving: a task analysis I propose a task structure for design by analyzing a general class of methods that I call propose-critique-modify methods. The task structure is constructed by identifying a range of methods for each task. For each method, the knowledge needed and the subtasks that it sets up are identified. This recursive style of analysis provides a framework in which we can understand a number of particular proposals for design problem solving as specific combinations of tasks, methods, and subtasks. Most of the subtasks are not really specific to design as such. The analysis shows that there is no one ideal method for design, and good design problem solving is a result of recursively selecting methods based on a number of criteria, including knowledge availability. How the task analysis can help in knowledge acquisition and system design is discussed.
Recursive functions of symbolic expressions and their computation by machine, Part I This paper in LaTeX, partly supported by ARPA (ONR) grant N00014-94-1-0775 to Stanford University, where John McCarthy has been since 1962. Copied with minor notational changes from CACM, April 1960. If you want the exact typography, look there. Current address: John McCarthy, Computer Science Department, Stanford, CA 94305 (email: jmc@cs.stanford.edu, URL: http://www-formal.stanford.edu/jmc/) ... by starting with the class of expressions called S-expressions and the functions called...
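The eval idea the paper builds on S-expressions can be sketched in a few lines of Python, using lists in place of cons cells (a loose illustration of the universal-function construction, not McCarthy's M-expression formalism):

```python
# A minimal evaluator over list-structured S-expressions, supporting the
# elementary forms quote, atom, eq, car, cdr, cons, and cond.

def s_eval(e, env):
    if isinstance(e, str):                      # an atom: look it up
        return env[e]
    op = e[0]
    if op == "quote":
        return e[1]
    if op == "atom":
        return not isinstance(s_eval(e[1], env), list)
    if op == "eq":
        return s_eval(e[1], env) == s_eval(e[2], env)
    if op == "car":
        return s_eval(e[1], env)[0]
    if op == "cdr":
        return s_eval(e[1], env)[1:]
    if op == "cons":
        return [s_eval(e[1], env)] + s_eval(e[2], env)
    if op == "cond":
        for test, branch in e[1:]:
            if s_eval(test, env):
                return s_eval(branch, env)
    raise ValueError(f"unknown form: {e}")

assert s_eval(["car", ["quote", ["a", "b", "c"]]], {}) == "a"
assert s_eval(["cons", ["quote", "x"], ["quote", ["y"]]], {}) == ["x", "y"]
```

The point of the original paper is precisely that such an evaluator can itself be written as an S-expression and interpreted by the same machinery.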
Knowledge management and the dynamic nature of knowledge Knowledge management (KM) or knowledge sharing in organizations is based on an understanding of knowledge creation and knowledge transfer. In implementation, KM is an effort to benefit from the knowledge that resides in an organization by using it to achieve the organization's mission. The transfer of tacit or implicit knowledge to explicit and accessible formats, the goal of many KM projects, is challenging, controversial, and endowed with ongoing management issues. This article argues that effective knowledge management in many disciplinary contexts must be based on understanding the dynamic nature of knowledge itself. The article critiques some current thinking in the KM literature and concludes with a view towards knowledge management programs built around knowledge as a dynamic process.
Supporting Unanticipated Dynamic Adaptation of Application Behaviour The need to dynamically modify running applications arises in systems that must adapt to changes in their environment, in updating long-running systems that cannot be halted and restarted, and in monitoring and debugging systems without the need to recompile and restart them. Relatively few architectures have explored the meaning and possibilities of applying behavioural modifications to already running applications without static preparation of the application. The desirable characteristics of an architecture for dynamic modification include support for non-invasive association of new behaviour with the application, support for modular reusable components encapsulating the new behaviour and support for dynamic association (and deassociation) of new behaviour with any class or object of the application. The Iguana/J architecture explores unanticipated dynamic modification, and demonstrates how these characteristics may be supported in an interpreted language without extending the language, without a preprocessor, and without requiring the source code of the application. This paper describes the Iguana/J programmer's model and how it addresses some acknowledged issues in dynamic adaptation and separation of concerns, describes how Iguana/J is implemented, and gives examples of applying Iguana/J.
Abstraction of objects by conceptual clustering Closely tied to first-order predicate logic, the formalism of conceptual graphs constitutes a knowledge representation language. The abstraction of systems presents several advantages: it helps to render complex systems more understandable, thus facilitating their analysis and their design. Our approach to conceptual graph abstraction, or conceptual clustering, is based on rectangular decomposition. It produces a set of clusters representing similarities between subsets of the objects to be abstracted, organized into a hierarchy of classes: the Knowledge Space. Some conceptual clustering methods already exist. Our approach is distinguished from them insofar as it allows a gain in space and time.
Evaluating Recommender Systems for Technology Enhanced Learning: A Quantitative Survey The increasing number of publications on recommender systems for Technology Enhanced Learning (TEL) evidence a growing interest in their development and deployment. In order to support learning, recommender systems for TEL need to consider specific requirements, which differ from the requirements for recommender systems in other domains like e-commerce. Consequently, these particular requirements motivate the incorporation of specific goals and methods in the evaluation process for TEL recommender systems. In this article, the diverse evaluation methods that have been applied to evaluate TEL recommender systems are investigated. A total of 235 articles are selected from major conferences, workshops, journals and books where relevant work have been published between 2000 and 2014. These articles are quantitatively analysed and classified according to the following criteria: type of evaluation methodology, subject of evaluation, and effects measured by the evaluation. Results from the survey suggest that there is a growing awareness in the research community of the necessity for more elaborate evaluations. At the same time, there is still substantial potential for further improvements. This survey highlights trends and discusses strengths and shortcomings of the evaluation of TEL recommender systems thus far, thereby aiming to stimulate researchers to contemplate novel evaluation approaches.
A Semantically Enriched Context-Aware OER Recommendation Strategy and Its Application to a Computer Science OER Repository This paper describes a knowledge-based strategy for recommending educational resources—worked problems, exercises, quiz questions, and lecture notes—to learners in the first two courses in the introductory sequence of a computer science major (CS1 and CS2). The goal of the recommendation strategy is to provide support for personalized access to the resources that exist in open educational repositories. The strategy uses: 1) a description of the resources based on metadata standards enriched by ontology-based semantic indexing, and 2) contextual information about the user (her knowledge of that particular field of learning). The results of an experimental analysis of the strategy's performance are presented. These demonstrate that the proposed strategy offers a high level of personalization and can be adapted to the user. An application of the strategy to a repository of computer science open educational resources was well received by both educators and students and had promising effects on the student performance and dropout rates.
The Use of Machine Learning Algorithms in Recommender Systems: A Systematic Review. •A survey of machine learning (ML) algorithms in recommender systems (RSs) is provided.•The surveyed studies are classified in different RS categories.•The studies are classified based on the types of ML algorithms and application domains.•The studies are also analyzed according to main and alternative performance metrics.•LNCS and EWSA are the main sources of studies in this research field.
Mining Educational Data to Predict Students' Academic Performance. Data mining is the process of extracting useful information from a huge amount of data. One of the most common applications of data mining is the use of different algorithms and tools to estimate future events based on previous experiences. In this context, many researchers have been using data mining techniques to support and solve challenges in higher education. There are many challenges facing this level of education, one of which is helping students to choose the right course to improve their success rate. An early prediction of students' grades may help to solve this problem and improve students' performance, selection of courses, success rate and retention. In this paper we use different classification techniques in order to build a performance prediction model, which is based on previous students' academic records. The model can be easily integrated into a recommender system that can help students in their course selection, based on their and other graduated students' grades. Our model uses two of the most recognised decision tree classification algorithms: ID3 and J48. The advantages of such a system have been presented along with a comparison in performance between the two algorithms.
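ID3 and J48, the two classifiers named above, both choose splits by entropy-based gain. A minimal sketch of that core computation, on invented grade records (the attribute and label names are made up for illustration):

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(records, attribute, target):
    """Entropy reduction obtained by splitting the records on one attribute."""
    base = entropy([r[target] for r in records])
    split = 0.0
    for v in {r[attribute] for r in records}:
        subset = [r[target] for r in records if r[attribute] == v]
        split += len(subset) / len(records) * entropy(subset)
    return base - split

# Toy records: did the student pass, given prior grades? (invented data)
records = [
    {"math": "A", "cs1": "A", "passed": "yes"},
    {"math": "A", "cs1": "B", "passed": "yes"},
    {"math": "C", "cs1": "B", "passed": "no"},
    {"math": "C", "cs1": "A", "passed": "no"},
]
# "math" perfectly separates the classes here, so it has the larger gain
# and would be chosen as the root split.
assert information_gain(records, "math", "passed") > \
       information_gain(records, "cs1", "passed")
```

ID3 applies this selection recursively to grow the tree; J48 (C4.5) additionally normalizes by split entropy and prunes.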
Student Performance Prediction Using Collaborative Filtering Methods This paper shows how to utilize collaborative filtering methods for student performance prediction. These methods are often used in recommender systems. The basic idea of such systems is to utilize the similarity of users based on their ratings of the items in the system. We have decided to employ these techniques in the educational environment to predict student performance. We calculate the similarity of students utilizing their study results, represented by the grades of their previously passed courses. As a real-world example we show results of the performance prediction of students who attended courses at Masaryk University. We describe the data, processing phase, evaluation, and finally the results proving the success of this approach.
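The similarity-of-students idea can be sketched as plain user-based collaborative filtering (one common variant, not necessarily the paper's exact method; the students and grades below are invented):

```python
# Predict a student's grade in a course as the similarity-weighted average
# of other students' grades in that course, with similarity computed from
# the courses both students have already completed.
from math import sqrt

def cosine(u, v):
    """Cosine similarity over the courses both students have grades for."""
    common = [c for c in u if c in v]
    if not common:
        return 0.0
    dot = sum(u[c] * v[c] for c in common)
    nu = sqrt(sum(u[c] ** 2 for c in common))
    nv = sqrt(sum(v[c] ** 2 for c in common))
    return dot / (nu * nv)

def predict(grades, student, course):
    """Similarity-weighted average of other students' grades in the course."""
    num = den = 0.0
    for other, g in grades.items():
        if other == student or course not in g:
            continue
        w = cosine(grades[student], g)
        num += w * g[course]
        den += w
    return num / den if den else None

grades = {
    "ann":  {"calc": 4.0, "algo": 3.5, "db": 4.0},
    "bob":  {"calc": 2.0, "algo": 2.5, "db": 2.0},
    "carl": {"calc": 4.0, "algo": 3.5},   # "db" grade to be predicted
}
pred = predict(grades, "carl", "db")     # lands between bob's and ann's grades
```

Since carl's profile matches ann's exactly on the shared courses, the prediction is pulled toward ann's db grade.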
Implementing Remote procedure calls Remote procedure calls (RPC) are a useful paradigm for providing communication across a network between programs written in a high level language. This paper describes a package, written as part of the Cedar project, providing a remote procedure call facility. The paper describes the options that face a designer of such a package, and the decisions we made. We describe the overall structure of our RPC mechanism, our facilities for binding RPC clients, the transport level communication protocol, and some performance measurements. We include descriptions of some optimisations we used to achieve high performance and to minimize the load on server machines that have many clients. Our primary aim in building an RPC package was to make the building of distributed systems easier. Previous protocols were sufficiently hard to use that only members of a select group of communication experts were willing to undertake the construction of distributed systems. We hoped to overcome this by providing a communication paradigm as close as possible to the familiar facilities of our high level languages. To achieve this aim, we concentrated on making remote calls efficient, and on making the semantics of remote calls as close as possible to those of local calls.
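The stub structure such an RPC package is built around can be sketched as follows (a toy illustration with JSON marshalling and an in-process "network"; the Cedar package itself, of course, ran over a real transport with its own binding and wire protocol):

```python
# Client stub marshals the call, a stand-in "network" carries the bytes,
# and the server stub unmarshals, dispatches, and marshals the result.
import json

def remote_add(a, b):          # the procedure exported by the "server"
    return a + b

SERVER_TABLE = {"remote_add": remote_add}   # binding: name -> procedure

def server_dispatch(packet: bytes) -> bytes:
    """Server stub: unmarshal, call the real procedure, marshal the result."""
    msg = json.loads(packet.decode())
    result = SERVER_TABLE[msg["proc"]](*msg["args"])
    return json.dumps({"result": result}).encode()

def client_call(proc: str, *args):
    """Client stub: marshal the call; the caller sees ordinary call semantics."""
    packet = json.dumps({"proc": proc, "args": list(args)}).encode()
    reply = server_dispatch(packet)   # stands in for the network round trip
    return json.loads(reply.decode())["result"]

assert client_call("remote_add", 2, 3) == 5
```

The point the abstract makes is visible even here: the caller writes `client_call("remote_add", 2, 3)` and never touches packets, which is what makes distributed programs feel like ordinary ones.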
Feedback stabilization of some event graph models The authors introduce several notions of stability for event graph models, timed or not. The stability is similar to the boundedness notion for Petri nets. The event graph models can be controlled by an output feedback which takes information from some observable transitions and can disable some controllable transitions. The controller itself is composed of an event graph. In this framework the authors solve the corresponding stabilization problems, i.e., they determine whether such a controller can prevent the explosion of the number of tokens.
Automated consistency checking of requirements specifications This article describes a formal analysis technique, called consistency checking, for automatic detection of errors, such as type errors, nondeterminism, missing cases, and circular definitions, in requirements specifications. The technique is designed to analyze requirements specifications expressed in the SCR (Software Cost Reduction) tabular notation. As background, the SCR approach to specifying requirements is reviewed. To provide a formal semantics for the SCR notation and a foundation for consistency checking, a formal requirements model is introduced; the model represents a software system as a finite-state automaton which produces externally visible outputs in response to changes in monitored environmental quantities. Results of two experiments are presented which evaluated the utility and scalability of our technique for consistency checking in a real-world avionics application. The role of consistency checking during the requirements phase of software development is discussed.
Further Improvement of Free-Weighting Matrices Technique for Systems With Time-Varying Delay A novel method is proposed in this note for stability analysis of systems with a time-varying delay. Appropriate Lyapunov functional and augmented Lyapunov functional are introduced to establish some improved delay-dependent stability criteria. Less conservative results are obtained by considering the additional useful terms (which are ignored in previous methods) when estimating the upper bound of the derivative of Lyapunov functionals and introducing the new free-weighting matrices. The resulting criteria are extended to the stability analysis for uncertain systems with time-varying structured uncertainties and polytopic-type uncertainties. Numerical examples are given to demonstrate the effectiveness and the benefits of the proposed method
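Augmented Lyapunov functionals of the kind this note refines typically have the shape below (a generic textbook form, not the specific functional of the note; here h(t) is the time-varying delay with 0 ≤ h(t) ≤ h, and P, Q, R are the decision matrices of the resulting LMIs):

```latex
V(x_t) = x^{T}(t)\,P\,x(t)
  + \int_{t-h(t)}^{t} x^{T}(s)\,Q\,x(s)\,\mathrm{d}s
  + \int_{-h}^{0}\!\int_{t+\theta}^{t} \dot{x}^{T}(s)\,R\,\dot{x}(s)\,\mathrm{d}s\,\mathrm{d}\theta,
  \qquad P, Q, R \succ 0 .
```

The conservatism of a criterion then hinges on how tightly the integral terms in V̇ are bounded, which is where the free-weighting matrices enter.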
Protocol verification as a hardware design aid The role of automatic formal protocol verification in hardware design is considered. Principles are identified that maximize the benefits of protocol verification while minimizing the labor and computation required. A new protocol description language and verifier (both called Murφ) are described, along with experiences in applying them to two industrial protocols that were developed as part of hardware designs.
Executable requirements for embedded systems An approach to requirements specification for embedded systems, based on constructing an executable model of the proposed system interacting with its environment, is proposed. The approach is explained, motivated, and related to data-oriented specification techniques. Portions of a specification language embodying it are introduced, and illustrated with an extended example in which the requirements for a process-control system are developed incrementally.
Involutions On Relational Program Calculi The standard Galois connection between the relational and predicate-transformer models of sequential programming (defined in terms of weakest precondition) confers a certain similarity between them. This paper investigates the extent to which the important involution on transformers (which, for instance, interchanges demonic and angelic nondeterminism, and reduces the two kinds of simulation in the relational model to one kind in the transformer model) carries over to relations. It is shown that no exact analogue exists; that the two complement-based involutions are too weak to be of much use; but that the translation to relations of transformer involution under the Galois connection is just strong enough to support Boolean-algebra style reasoning, a claim that is substantiated by proving properties of deterministic computations. Throughout, the setting is that of the guarded-command language augmented by the usual specification commands; and where possible algebraic reasoning is used in place of the more conventional semantic reasoning.
Reactive and Real-Time Systems Course: How to Get the Most Out of it The paper describes the syllabus and the students’ projects from a graduate course on the subject of “Reactive and Real-Time Systems”, taught at Tel-Aviv University and at the Open University of Israel. The course focuses on the development of provably correct reactive real-time systems. The course combines theoretical issues with practical implementation experience, trying to make things as tangible as possible. Hence, the mathematical and logical frameworks introduced are followed by presentation of relevant software tools and the students’ projects are implemented using these tools. The course is planned so that no special purpose hardware is needed and so that all software tools used are freely available from various Internet sites and can be installed quite easily. This makes our course attractive to institutions and instructors for which purchasing and maintaining a special lab is not feasible due to budget, space, or time limitations (as in our case). In the paper we elaborate on the rationale behind the syllabus and the selection of the students’ projects, presenting an almost complete description of a sample design of one team’s project.
Reversible Denoising and Lifting Based Color Component Transformation for Lossless Image Compression An undesirable side effect of reversible color space transformation, which consists of lifting steps (LSs), is that while removing correlation it contaminates transformed components with noise from other components. Noise affects particularly adversely the compression ratios of lossless compression algorithms. To remove correlation without increasing noise, a reversible denoising and lifting step (RDLS) was proposed that integrates denoising filters into LS. Applying RDLS to color space transformation results in a new image component transformation that is perfectly reversible despite involving the inherently irreversible denoising; the first application of such a transformation is presented in this paper. For the JPEG-LS, JPEG 2000, and JPEG XR standard algorithms in lossless mode, the application of RDLS to the RDgDb color space transformation with simple denoising filters is especially effective for images in the native optical resolution of acquisition devices. It results in improving compression ratios of all those images in cases when unmodified color space transformation either improves or worsens ratios compared with the untransformed image. The average improvement is 5.0–6.0% for two out of the three sets of such images, whereas average ratios of images from standard test-sets are improved by up to 2.2%. For the efficient image-adaptive determination of filters for RDLS, a couple of fast entropy-based estimators of compression effects that may be used independently of the actual compression algorithm are investigated and an immediate filter selection method based on the detector precision characteristic model driven by image acquisition parameters is introduced.
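Why a denoising filter can sit inside a lifting step without breaking reversibility can be sketched in a few lines (an illustration of the RDLS idea only; the paper's actual filters and color transforms are more elaborate):

```python
# The denoising filter is applied only to samples that pass through the
# lifting step unchanged, so the inverse step can recompute exactly the
# same filtered value, even though the filter itself is irreversible.

def denoise(v):
    """Any fixed function works here, even a lossy, irreversible one."""
    return round(v * 0.5)

def forward(x1, x2):
    """Lifting step with integrated denoising (the RDLS idea, simplified)."""
    return x1, x2 - denoise(x1)       # x1 passes through unchanged

def inverse(x1, d):
    return x1, d + denoise(x1)        # the same denoise(x1) is recomputable

for x1, x2 in [(10, 7), (255, 0), (-3, 100)]:
    assert inverse(*forward(x1, x2)) == (x1, x2)
```

The compression benefit comes from the subtraction operating on a smoothed predictor, so less noise leaks into the detail signal `d`.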
score_0..score_13: 1.2, 0.2, 0.2, 0.2, 0.2, 0, 0, 0, 0, 0, 0, 0, 0, 0
Select Z Bibliography This bibliography contains a list of references concerned with the formal Z notation that are either available as published papers, books, selected technical reports, or on-line. Some references on the related B-Method are also included. The bibliography is in alphabetical order by author name(s).
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Towards an Automatic Integration of Statecharts The integration of statecharts is part of an integration methodology for object oriented views. Statecharts are the most important language for the representation of the behaviour of objects and are used in many object oriented modeling techniques, e.g. in UML ([23]). In this paper we focus on the situation where the behaviour of an object type is represented in several statecharts, which have to be integrated into a single statechart. The presented approach allows an automatic integration process but gives the designer possibilities to make own decisions to guide the integration process and to achieve qualitative design goals.
A Conceptual Graph Model for W3C Resource Description Framework With the aim of building a "Semantic Web", the content of the documents must be explicitly represented through metadata in order to enable contents-guided search. Our approach is to exploit a standard language (RDF, recommended by W3C) for expressing such metadata and to interpret these metadata in conceptual graphs (CG) in order to exploit querying and inferencing capabilities enabled by CG formalism. The paper presents our mapping of RDF into CG and its interest in the context of the semantic Web.
Procedures and atomicity refinement The introduction of an early return from a (remote) procedure call can increase the degree of parallelism in a parallel or distributed algorithm modeled by an action system. We define a return statement for procedures in an action systems framework and show that it corresponds to carrying out an atomicity refinement.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
score_0..score_13: 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
SemEval-2015 Task 10: Sentiment Analysis in Twitter In this paper, we describe the 2015 iteration of the SemEval shared task on Sentiment Analysis in Twitter. This was the most popular sentiment analysis shared task to date with more than 40 teams participating in each of the last three years. This year’s shared task competition consisted of five sentiment prediction subtasks. Two were reruns from previous years: (A) sentiment expressed by a phrase in the context of a tweet, and (B) overall sentiment of a tweet. We further included three new subtasks asking to predict (C) the sentiment towards a topic in a single tweet, (D) the overall sentiment towards a topic in a set of tweets, and (E) the degree of prior polarity of a phrase.
Learning Sentiment-Specific Word Embedding For Twitter Sentiment Classification We present a method that learns word embedding for Twitter sentiment classification in this paper. Most existing algorithms for learning continuous word representations typically only model the syntactic context of words but ignore the sentiment of text. This is problematic for sentiment analysis as they usually map words with similar syntactic context but opposite sentiment polarity, such as good and bad, to neighboring word vectors. We address this issue by learning sentiment-specific word embedding (SSWE), which encodes sentiment information in the continuous representation of words. Specifically, we develop three neural networks to effectively incorporate the supervision from sentiment polarity of text (e.g. sentences or tweets) in their loss functions. To obtain large scale training corpora, we learn the sentiment-specific word embedding from massive distant-supervised tweets collected by positive and negative emoticons. Experiments on applying SSWE to a benchmark Twitter sentiment classification dataset in SemEval 2013 show that (1) the SSWE feature performs comparably with hand-crafted features in the top-performed system; (2) the performance is further improved by concatenating SSWE with existing feature set.
Contextualized Sarcasm Detection on Twitter. Sarcasm requires some shared knowledge between speaker and audience; it is a profoundly contextual phenomenon.  Most computational approaches to sarcasm detection, however, treat it as a purely linguistic matter, using information such as lexical cues and their corresponding sentiment as predictive features.  We show that by including extra-linguistic information from the context of an utterance on Twitter — such as properties of the author, the audience and the immediate communicative environment — we are able to achieve gains in accuracy compared to purely linguistic features in the detection of this complex phenomenon, while also shedding light on features of interpersonal interaction that enable sarcasm in conversation.
FEUP at SemEval-2017 Task 5: Predicting Sentiment Polarity and Intensity with Financial Word Embeddings. This paper presents the approach developed at the Faculty of Engineering of University of Porto, to participate in SemEval 2017, Task 5: Fine-grained Sentiment Analysis on Financial Microblogs and News. The task consisted in predicting a real continuous variable from -1.0 to +1.0 representing the polarity and intensity of sentiment concerning companies/stocks mentioned in short texts. We modeled the task as a regression analysis problem and combined traditional techniques such as pre-processing short texts, bag-of-words representations and lexical-based features with enhanced financial specific bag-of-embeddings. We used an external collection of tweets and news headlines mentioning companies/stocks from S&P 500 to create financial word embeddings which are able to capture domain-specific syntactic and semantic similarities. The resulting approach obtained a cosine similarity score of 0.69 in sub-task 5.1 - Microblogs and 0.68 in sub-task 5.2 - News Headlines.
Searching Social Updates for Topic-centric Entities.
NRC-Canada: Building the State-of-the-Art in Sentiment Analysis of Tweets. In this paper, we describe how we created two state-of-the-art SVM classifiers, one to detect the sentiment of messages such as tweets and SMS (message-level task) and one to detect the sentiment of a term within a message (term-level task). Our submissions stood first in both tasks on tweets, obtaining an F-score of 69.02 in the message-level task and 88.93 in the term-level task. We implemented a variety of surface-form, semantic, and sentiment features, including lexicons generated from tweets with sentiment-word hashtags, and from tweets with emoticons. In the message-level task, the lexicon-based features provided a gain of 5 F-score points over all others. Both of our systems can be replicated using publicly available resources.
The Party Is Over Here: Structure and Content in the 2010 Election.
System effectiveness, user models, and user utility: a conceptual framework for investigation There is great interest in producing effectiveness measures that model user behavior in order to better model the utility of a system to its users. These measures are often formulated as a sum over the product of a discount function of ranks and a gain function mapping relevance assessments to numeric utility values. We develop a conceptual framework for analyzing such effectiveness measures based on classifying members of this broad family of measures into four distinct families, each of which reflects a different notion of system utility. Within this framework we can hypothesize about the properties that such a measure should have and test those hypotheses against user and system data. Along the way we present a collection of novel results about specific measures and relationships between them.
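The family of measures the abstract above describes — a sum over ranks of a discount function times a gain function of relevance — can be sketched with DCG as one concrete member; the log discount and exponential gain below are common conventions, used here purely as an illustration rather than as the paper's own definitions.

```python
import math

def discount(rank):
    # Log-harmonic discount of 1-based rank, as used by DCG.
    return 1.0 / math.log2(rank + 1)

def gain(rel):
    # Exponential gain mapping a graded relevance judgment to utility.
    return 2 ** rel - 1

def dcg(relevances):
    # Effectiveness as sum over ranks of discount(rank) * gain(relevance).
    return sum(discount(i) * gain(rel)
               for i, rel in enumerate(relevances, start=1))
```

Swapping in a different `discount` (e.g. rank-biased) or `gain` yields the other members of the family the paper classifies.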
Formal methods: state of the art and future directions
Strategies for information requirements determination Correct and complete information requirements are key ingredients in planning organizational information systems and in implementing information systems applications. Yet, there has been relatively little research on information requirements determination, and there are relatively few practical, well-formulated procedures for obtaining complete, correct information requirements. Methods for obtaining and documenting information requirements are proposed, but they tend to be presented as general solutions rather than alternative methods for implementing a chosen strategy of requirements determination. This paper identifies two major levels of requirements: the organizational information requirements reflected in a planned portfolio of applications and the detailed information requirements to be implemented in a specific application. The constraints on humans as information processors are described in order to explain why "asking" users for information requirements may not yield a complete, correct set. Various strategies for obtaining information requirements are explained. Examples are given of methods that fit each strategy. A contingency approach is then presented for selecting an information requirements determination strategy. The contingency approach is explained both for defining organizational information requirements and for defining specific, detailed requirements in the development of an application.
On acyclic colorings of planar graphs The conjecture of B. Grünbaum on the existence of an admissible vertex coloring of every planar graph with 5 colors, in which every bichromatic subgraph is acyclic, is proved and some corollaries of this result are discussed in the present paper.
Making Distortions Comprehensible This paper discusses visual information representation from the perspective of human comprehension. The distortion viewing paradigm is an appropriate focus for this discussion as its motivation has always been to create more understandable displays. While these techniques are becoming increasingly popular for exploring images that are larger than the available screen space, in fact users sometimes report confusion and disorientation. We provide an overview of structural changes made in response to this phenomenon and examine methods for incorporating visual cues based on human perceptual skills.
A knowledge representation language for requirements engineering Requirements engineering, the phase of software development where the users' needs are investigated, is more and more shifting its concern from the target system towards its environment. A new generation of languages is needed to support the definition of application domain knowledge and the behavior of the universe around the computer. This paper assesses the applicability of classical knowledge representation techniques to this purpose. Requirements engineers insist, however, more on natural representation, whereas expert systems designers insist on efficient automatic use of the knowledge. Given this priority of expressiveness, two candidates emerge: the semantic networks and the techniques based on logic. They are combined in a language called the ERAE model, which is illustrated on examples, and compared to other requirements engineering languages.
Analysis and Design of Secure Massive MIMO Systems in the Presence of Hardware Impairments. To keep the hardware costs of future communications systems manageable, the use of low-cost hardware components is desirable. This is particularly true for the emerging massive multiple-input multiple-output (MIMO) systems which equip base stations (BSs) with a large number of antenna elements. However, low-cost transceiver designs will further accentuate the hardware impairments, which are presen...
score_0..score_13: 1.120567, 0.12, 0.12, 0.12, 0.065, 0.040189, 0.010626, 0.000006, 0, 0, 0, 0, 0, 0
A secure fragile watermarking scheme based on chaos-and-hamming code In this work, a secure fragile watermarking scheme is proposed. Images are protected and any modification to an image is detected using a novel hybrid scheme combining a two-pass logistic map with Hamming code. For security purposes, the two-pass logistic map scheme contains a private key to resist the vector quantization (VQ) attacks even though the embedding scheme is block independent. To ensure image integrity, watermarks are embedded into the to-be-protected images which are generated using the Hamming code technique. Experimental results show that the proposed scheme has satisfactory protection ability and can detect and locate various malicious tampering via image insertion, erasing, blurring, sharpening, contrast modification, and even burst bits. Additionally, experiments prove that the proposed scheme successfully resists VQ attacks.
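The Hamming-code side of such a scheme can be illustrated with the textbook Hamming(7,4) code: four watermark bits are protected by three parity bits, so a single tampered bit can be located and flipped back. This sketch is a generic illustration of the coding step, not the paper's actual embedding.

```python
def hamming74_encode(d):
    # d = [d1, d2, d3, d4]; parity bits at positions 1, 2, 4 of the codeword.
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4   # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4   # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4   # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c):
    # The syndrome is the binary index of the flipped position (0 = no error).
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3
    c = list(c)
    if syndrome:
        c[syndrome - 1] ^= 1   # flip the tampered bit back
    return c

code = hamming74_encode([1, 0, 1, 1])
tampered = list(code)
tampered[4] ^= 1               # simulate one tampered watermark bit
assert hamming74_correct(tampered) == code
```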
Image encryption using the two-dimensional logistic chaotic map Chaos maps and chaotic systems have been proved to be useful and effective for cryptography. In our study, the two-dimensional logistic map with complicated basin structures and attractors are first used for image encryption. The proposed method adopts the classic framework of the permutation-substitution network in cryptography and thus ensures both confusion and diffusion properties for a secure cipher. The proposed method is able to encrypt an intelligible image into a random-like one from the statistical point of view and the human visual system point of view. Extensive simulation results using test images from the USC-SIPI image database demonstrate the effectiveness and robustness of the proposed method. Security analysis results of using both the conventional and the most recent tests show that the encryption quality of the proposed method reaches or excels the current state-of-the-art methods. Similar encryption ideas can be applied to digital data in other formats (e.g., digital audio and video). We also publish the cipher MATLAB open-source-code under the web page https://sites.google.com/site/tuftsyuewu/source-code. (c) 2012 SPIE and IS&T. [DOI: 10.1117/1.JEI.21.1.013014]
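The substitution half of a chaos-based cipher like the one above can be sketched with the simpler one-dimensional logistic map: iterate the map, quantize each state to a byte, and XOR it with the image bytes. This is a minimal illustration of the idea, not the paper's 2D-map permutation-substitution network; the seed and parameter values are arbitrary.

```python
def logistic_keystream(x0, r, n, burn_in=100):
    # Iterate x_{k+1} = r * x_k * (1 - x_k), quantizing each state to a byte.
    x = x0
    for _ in range(burn_in):           # discard the transient
        x = r * x * (1 - x)
    stream = []
    for _ in range(n):
        x = r * x * (1 - x)
        stream.append(int(x * 256) % 256)
    return stream

def xor_cipher(data, key=(0.3579, 3.99)):
    # Substitution step only: XOR bytes with the chaotic keystream.
    ks = logistic_keystream(key[0], key[1], len(data))
    return bytes(b ^ k for b, k in zip(data, ks))

pixels = bytes([12, 200, 37, 255, 0])
assert xor_cipher(xor_cipher(pixels)) == pixels   # XOR is its own inverse
```

A full scheme would precede this with a key-driven pixel permutation (the confusion step) and would not rely on floating-point reproducibility across platforms.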
Underwater image dehazing using joint trilateral filter This paper describes a novel method to enhance underwater images by image dehazing. Scattering and color change are two major problems of distortion for underwater imaging. Scattering is caused by large suspended particles, such as turbid water which contains abundant particles. Color change or color distortion corresponds to the varying degrees of attenuation encountered by light traveling in the water with different wavelengths, rendering ambient underwater environments dominated by a bluish tone. Our key contributions are proposed a new underwater model to compensate the attenuation discrepancy along the propagation path, and proposed a fast joint trigonometric filtering dehazing algorithm. The enhanced images are characterized by reduced noised level, better exposedness of the dark regions, improved global contrast while the finest details and edges are enhanced significantly. In addition, our method is comparable to higher quality than the state-of-the-art methods by assuming in the latest image evaluation systems.
Image authentication algorithm with recovery capabilities based on neural networks in the DCT domain In this study, the authors propose an image authentication algorithm in the DCT domain based on neural networks. The watermark is constructed from the image to be watermarked. It consists of the average value of each 8 × 8 block of the image. Each average value of a block is inserted in another supporting block sufficiently distant from the protected block to prevent simultaneous deterioration of the image and the recovery data during local image tampering. Embedding is performed in the middle frequency coefficients of the DCT transform. In addition, a neural network is trained and used later to recover tampered regions of the image. Experimental results show that the proposed method is robust to JPEG compression and can not only localise alterations but also recover them.
Self authenticating medical X-ray images for telemedicine applications. Telemedicine focuses on sharing medical images over a network among doctors for better consultation. Hence medical images must be protected from unwanted modifications (intentional/unintentional) over the network. Digital watermarking can be used for this purpose. ROI-based watermarking generally hides ROI (Region of Interest) information in NROI (non-ROI) regions. Most of the existing watermarking methods used in telemedicine are block based. The main drawback of block-based watermarking is the localization of the exact tampered pixels, as even if a single pixel of a block is altered the whole block is detected as tampered. Thus this paper proposes a pixel-based watermarking technique so that alterations can be detected exactly. The method works in three phases: first, it separates the ROI and NROI; second, depending on the relative size of the ROI and NROI, the ROI information is embedded into the NROI using LSBs; third, a tamper detection algorithm is proposed for detecting the tampered pixels exactly. The performance of the proposed method, when compared with state-of-the-art block-based approaches, is found to be best, with a PSNR value of 33 dB and 80% accuracy on average.
Hybrid watermarking of medical images for ROI authentication and recovery Medical image data require strict security, confidentiality and integrity. To achieve these stringent requirements, we propose a hybrid watermarking method which embeds a robust watermark in the region of non-interest (RONI) for achieving security and confidentiality, while integrity control is achieved by inserting a fragile watermark into the region of the interest (ROI). First the information to be modified in ROI is separated and is inserted into RONI, which later is used in recovery of the original ROI. Secondly, to avoid the underflow and overflow, a location map is generated for embedding the watermark block-wise by leaving the suspected blocks. This avoids the preprocessing step of histogram modification. The image visual quality, as well as tamper localization, is evaluated. We use weighted peak signal to noise ratio for measuring image quality of watermarked images. Experimental results show that the proposed method outperforms the existing hybrid watermarking techniques.
"Like Having a Really Bad PA": The Gulf between User Expectation and Experience of Conversational Agents. The past four years have seen the rise of conversational agents (CAs) in everyday life. Apple, Microsoft, Amazon, Google and Facebook have all embedded proprietary CAs within their software and, increasingly, conversation is becoming a key mode of human-computer interaction. Whilst we have long been familiar with the notion of computers that speak, the investigative concern within HCI has been upon multimodality rather than dialogue alone, and there is no sense of how such interfaces are used in everyday life. This paper reports the findings of interviews with 14 users of CAs in an effort to understand the current interactional factors affecting everyday use. We find user expectations dramatically out of step with the operation of the systems, particularly in terms of known machine intelligence, system capability and goals. Using Norman's 'gulfs of execution and evaluation' [30] we consider the implications of these findings for the design of future systems.
A comparison of multiprocessor task scheduling algorithms with communication costs Both parallel and distributed network environment systems play a vital role in the improvement of high performance computing. Of primary concern when analyzing these systems is multiprocessor task scheduling. Therefore, this paper addresses the challenge of scheduling multiprocessor tasks of parallel programs, represented as a directed acyclic task graph (DAG), for execution on multiprocessors with communication costs. Moreover, we investigate an alternative paradigm, where genetic algorithms (GAs) have recently received much attention, which is a class of robust stochastic search algorithms for various combinatorial optimization problems. We design a new encoding mechanism with a multi-functional chromosome that uses the priority representation, the so-called priority-based multi-chromosome (PMC). PMC can efficiently represent a task schedule and assign tasks to processors. The proposed priority-based GA has shown effective performance in various parallel environments for scheduling methods.
Formal methods: state of the art and future directions
Stepwise Refinement of Control Software - A Case Study Using RAISE We develop a control program for a realistic automation problem by stepwise refinement. We focus on exemplifying appropriate levels of abstraction for the refinement steps. By using phases as a means for abstraction, safety requirements are specified on a high level of abstraction and can be verified using process algebra. The case study is carried out using the RAISE specification language, and we report on some experiences using the RAISE tool set.
A General Scheme for Breadth-First Graph Traversal. We survey an algebra of formal languages suitable to deal with graph algorithms. As an example of its use we derive a general scheme for breadth-first graph traversal. This general scheme is then applied to a reachability and a shortest path problem. In books about algorithmic graph theory algorithms are usually presented without formal specification and formal development. Some approaches, in contrast, provide a more precise treatment of graph algorithms, resulting in...
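The two instantiations the abstract mentions, reachability and shortest paths, both fall out of one generic breadth-first scheme. The sketch below is a plain Python rendering of that idea, not the paper's formal-language algebra.

```python
from collections import deque

def bfs_layers(graph, source):
    # Generic breadth-first scheme: explore the graph layer by layer,
    # recording the first (hence shortest, in edge count) distance found.
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in graph.get(u, []):
            if v not in dist:          # first visit = shortest unweighted path
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

g = {'a': ['b', 'c'], 'b': ['d'], 'c': ['d'], 'd': []}
reachable = set(bfs_layers(g, 'a'))    # reachability instance
hops_to_d = bfs_layers(g, 'a')['d']    # shortest-path instance
```

The same loop yields both applications: the key set of `dist` answers reachability, while its values are the shortest distances.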
Compact and localized distributed data structures This survey concerns the role of data structures for compactly storing and representing various types of information in a localized and distributed fashion. Traditional approaches to data representation are based on global data structures, which require access to the entire structure even if the sought information involves only a small and local set of entities. In contrast, localized data representation schemes are based on breaking the information into small local pieces, or labels, selected in a way that allows one to infer information regarding a small set of entities directly from their labels, without using any additional (global) information. The survey concentrates mainly on combinatorial and algorithmic techniques, such as adjacency and distance labeling schemes and interval schemes for routing, and covers complexity results on various applications, focusing on compact localized schemes for message routing in communication networks.
Reflection in direct style A reflective language enables us to access, inspect, and/or modify the language semantics from within the same language framework. Although the degree of semantics exposure differs from one language to another, the most powerful approach, referred to as the behavioral reflection, exposes the entire language semantics (or the language interpreter) that defines behavior of user programs for user inspection/modification. In this paper, we deal with the behavioral reflection in the context of a functional language Scheme. In particular, we show how to construct a reflective interpreter where user programs are interpreted by the tower of metacircular interpreters and have the ability to change any parts of the interpreters during execution. Its distinctive feature compared to the previous work is that the metalevel interpreters observed by users are written in direct style. Based on the past attempt of the present author, the current work solves the level-shifting anomaly by defunctionalizing and inspecting the top of the continuation frames. The resulting system enables us to freely go up and down the levels and access/modify the direct-style metalevel interpreter. This is in contrast to the previous system where metalevel interpreters were written in continuation-passing style (CPS) and only CPS functions could be exposed to users for modification.
Reversible data hiding by adaptive group modification on histogram of prediction errors. In this work, the conventional histogram shifting (HS) based reversible data hiding (RDH) methods are first analyzed and discussed. Then, a novel HS based RDH method is put forward by using the proposed Adaptive Group Modification (AGM) on the histogram of prediction errors. Specifically, in the proposed AGM method, multiple bins are vacated based on their magnitudes and frequencies of occurrences by employing an adaptive strategy. The design goals are to maximize hiding elements while minimizing shifting and modification elements to maintain image high quality by giving priority to the histogram bins utilized for hiding. Furthermore, instead of hiding only one bit at a time, the payload is decomposed into segments and each segment is hidden by modifying a triplet of prediction errors to suppress distortion. Experimental results show that the proposed AGM technique outperforms the current state-of-the-art HS based RDH methods. As a representative result, the proposed method achieves an improvement of 4.30 dB in terms of PSNR when 105,000 bits are hidden into the test Lenna image.
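The conventional histogram-shifting baseline that the abstract builds on can be shown in a few lines: pick a peak bin and an empty zero bin, shift the bins between them by one to vacate a slot next to the peak, and hide one bit in each peak-valued pixel. This is the classic single-bin scheme for illustration, not the proposed adaptive group modification.

```python
def embed_hs(pixels, bits, peak, zero):
    # Classic histogram shifting: peak < zero, and the zero bin must be
    # empty in the cover image. Capacity = number of pixels equal to peak.
    assert all(v != zero for v in pixels)
    it = iter(bits)
    out = []
    for v in pixels:
        if peak < v < zero:
            out.append(v + 1)              # shift to vacate bin peak+1
        elif v == peak:
            out.append(v + next(it))       # hide one bit per peak pixel
        else:
            out.append(v)
    return out

def extract_hs(pixels, peak, zero):
    bits, out = [], []
    for v in pixels:
        if v == peak:
            bits.append(0); out.append(v)
        elif v == peak + 1:
            bits.append(1); out.append(peak)   # restore the hidden pixel
        elif peak + 1 < v <= zero:
            out.append(v - 1)                  # undo the shift
        else:
            out.append(v)
    return bits, out
```

Extraction recovers both the payload and the exact original pixels, which is what makes the scheme reversible; the proposed AGM generalizes this by vacating multiple bins adaptively and hiding payload segments in triplets of prediction errors.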
1.203333
0.203333
0.203333
0.203333
0.203333
0.068222
0.001667
0.000139
0
0
0
0
0
0
Matching with Externalities
The Lattice Structure of the Set of Stable Matchings with Multiple Partners We continue recent work on the matching problem for firms and workers, and show that, for a suitable ordering, the set of stable matchings is a lattice.
Conflict and Coincidence of Interest in Job Matching: Some New Results and Open Questions The game-theoretic solution to certain job assignment problems allows the interests of agents on the same side of the market (e.g., firms or workers) to be simultaneously maximized. This is shown to follow from the lattice structure of the set of stable outcomes. However, it is shown that in a more general class of problems the optimality results persist, but the lattice structures do not. Thus this paper raises as many questions as it answers.
Core many-to-one matchings by fixed-point methods We characterize the core many-to-one matchings as fixed points of a map. Our characterization gives an algorithm for finding core allocations; the algorithm is efficient and simple to implement. Our characterization does not require substitutable preferences, so it is separate from the structure needed for the non-emptiness of the core. When preferences are substitutable, our characterization gives a simple proof of the lattice structure of core matchings, and it gives a method for computing the join and meet of two core matchings.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
A semantics of multiple inheritance The aim of this paper is to present a clean semantics of multiple inheritance and to show that, in the context of strongly-typed, statically-scoped languages, a sound typechecking algorithm exists. Multiple inheritance is also interpreted in a broad sense: instead of being limited to objects, it is extended in a natural way to union types and to higher-order functional types. This constitutes a semantic basis for the unification of functional and object-oriented programming.
The Manchester prototype dataflow computer The Manchester project has developed a powerful dataflow processor based on dynamic tagging. This processor is large enough to tackle realistic applications and exhibits impressive speedup for programs with sufficient parallelism.
A lazy evaluator A different way to execute pure LISP programs is presented. It delays the evaluation of parameters and list structures without ever having to perform more evaluation steps than the usual method. Although the central idea can be found in earlier work this paper is of interest since it treats a rather well-known language and works out an algorithm which avoids full substitution. A partial correctness proof using Scott-Strachey semantics is sketched in a later section.
Modelling information flow for organisations: A review of approaches and future challenges. Modelling is a classic approach to understanding complex problems that can be achieved diagrammatically to visualise concepts, and mathematically to analyse attributes of concepts. An organisation as a communicating entity is made up of constructs in which people can have access to information and speak to each other. Modelling information flow for organisations is a challenging task that enables analysts and managers to better understand how to: organise and coordinate processes, eliminate redundant information flows and processes, minimise the duplication of information and manage the sharing of intra- and inter-organisational information.
From Action Systems to Modular Systems Action systems are used to extend program refinement methods for sequential programs, as described in the refinement calculus, to parallel and reactive system refinement. They provide a general description of reactive systems, capable of modeling terminating, possibly aborting and infinitely repeating systems. We show how to extend the action system model to refinement of modular systems. A module may export and import variables, it may provide access procedures for other modules, and it may itself access procedures of other modules. Modules may have autonomous internal activity and may execute in parallel or in sequence. Modules may be nested within each other. They may communicate by shared variables, shared actions, a generalized form of remote procedure calls and by persistent data structures. Both synchronous and asynchronous communication between modules is supported. The paper shows how a single framework can be used for both the specification of large systems, the modular decomposition of the system into smaller units and the refinement of the modules into program modules that can be described in a standard programming language and executed on standard hardware.
A Software Development Environment for Improving Productivity First Page of the Article
The navigation toolkit The problem
Maintaining a legacy: towards support at the architectural level An organization that develops large, software intensive systems with a long lifetime will encounter major changes in the market requirements, the software development environment, including its platform, and the target platform. In order to meet the challenges associated with these changes, software development has to undergo major changes as well. Especially when these systems are successful, and hence become an asset, particular care shall be taken to maintain this legacy; large systems with a long lifetime tend to become very complex and difficult to understand. Software architecture plays a vital role in the development of large software systems. For the purpose of maintenance, an up-to-date explicit description of the software architecture of a system supports understanding and comprehension of it, amongst other things. However, many large, complex systems do not have an up-to-date documented software architecture. Particularly in cases where these systems have a long lifetime, the (natural) turnover of personnel will make it very likely that many employees contributing to previous generations of the system are no longer available. A need to 'recover' the software architecture of the system may become prevalent, facilitating the understanding of the system, providing ways to improve its maintainability and quality and to control architectural changes. This paper gives an overview of an on-going effort to improve the maintainability and quality of a legacy system, and describes the recent introduction of support at the architectural level for program understanding and complexity control. Copyright (C) 2000 John Wiley & Sons, Ltd.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.2
0.028571
0.028571
0.006667
0
0
0
0
0
0
0
0
0
0
Survey of Deep Learning Paradigms for Speech Processing Over the past decades, a particular focus has been given to research on machine learning techniques for speech processing applications. However, in the past few years, research has focused on using deep learning for speech processing applications. This new machine learning field has become a very attractive area of study and has remarkably better performance than the others in the various speech processing applications. This paper presents a brief survey of the application of deep learning to various speech processing tasks such as speech separation, speech enhancement, speech recognition, speaker recognition, emotion recognition, language recognition, music recognition, speech data retrieval, etc. The survey goes on to cover the use of Auto-Encoder, Generative Adversarial Network, Restricted Boltzmann Machine, Deep Belief Network, Deep Neural Network, Convolutional Neural Network, Recurrent Neural Network and Deep Reinforcement Learning for speech processing. Additionally, it focuses on the various speech databases and evaluation metrics used by deep learning algorithms for performance evaluation.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power related issues, are satisfied.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Consensus-based distributed unscented target tracking in wireless sensor networks with state-dependent noise. •Generalized unscented information filter (GUIF) is developed in an information-space framework.•State-dependent noise of ranging and bearing sensors are considered for state estimation of a moving target.•Average consensus algorithm is employed to obtain a distributed implementation of GUIF.•Mean-square boundedness of state estimation error is guaranteed for any number of consensus steps.
Distributed particle filtering in agent networks: A survey, classification, and comparison Distributed particle filter (DPF) algorithms are sequential state estimation algorithms that are executed by a set of agents. Some or all of the agents perform local particle filtering and interact with other agents to calculate a global state estimate. DPF algorithms are attractive for large-scale, nonlinear, and non-Gaussian distributed estimation problems that often occur in applications involving agent networks (ANs). In this article, we present a survey, classification, and comparison of various DPF approaches and algorithms available to date. Our emphasis is on decentralized ANs that do not include a central processing or control unit.
Information Weighted Consensus Filters and Their Application in Distributed Camera Networks. Due to their high fault-tolerance and scalability to large networks, consensus-based distributed algorithms have recently gained immense popularity in the sensor networks community. Large-scale camera networks are a special case. In a consensus-based state estimation framework, multiple neighboring nodes iteratively communicate with each other, exchanging their own local information about each target's state with the goal of converging to a single state estimate over the entire network. However, the state estimation problem becomes challenging when some nodes have limited observability of the state. In addition, the consensus estimate is suboptimal when the cross-covariances between the individual state estimates across different nodes are not incorporated in the distributed estimation framework. The cross-covariance is usually neglected because the computational and bandwidth requirements for its computation become unscalable for a large network. These limitations can be overcome by noting that, as the state estimates at different nodes converge, the information at each node becomes correlated. This fact can be utilized to compute the optimal estimate by proper weighting of the prior state and measurement information. Motivated by this idea, we propose information-weighted consensus algorithms for distributed maximum a posteriori parameter estimation, and their extension to the information-weighted consensus filter (ICF) for state estimation. We compare the performance of the ICF with existing consensus algorithms analytically, as well as experimentally by considering the scenario of a distributed camera network under various operating conditions.
Stability of consensus extended Kalman filter for distributed state estimation. The paper addresses consensus-based networked estimation of the state of a nonlinear dynamical system. The focus is on a family of distributed state estimation algorithms which relies on the extended Kalman filter linearization paradigm. Consensus is exploited in order to fuse the information, both prior and novel, available in each network node. It is shown that the considered family of distributed Extended Kalman Filters enjoys local stability properties, under minimal requirements of network connectivity and system collective observability. A simulation case-study concerning target tracking with a network of nonlinear (angle and range) position sensors is worked out in order to show the effectiveness of the considered nonlinear consensus filter.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
A semantics of multiple inheritance The aim of this paper is to present a clean semantics of multiple inheritance and to show that, in the context of strongly-typed, statically-scoped languages, a sound typechecking algorithm exists. Multiple inheritance is also interpreted in a broad sense: instead of being limited to objects, it is extended in a natural way to union types and to higher-order functional types. This constitutes a semantic basis for the unification of functional and object-oriented programming.
The Manchester prototype dataflow computer The Manchester project has developed a powerful dataflow processor based on dynamic tagging. This processor is large enough to tackle realistic applications and exhibits impressive speedup for programs with sufficient parallelism.
A lazy evaluator A different way to execute pure LISP programs is presented. It delays the evaluation of parameters and list structures without ever having to perform more evaluation steps than the usual method. Although the central idea can be found in earlier work this paper is of interest since it treats a rather well-known language and works out an algorithm which avoids full substitution. A partial correctness proof using Scott-Strachey semantics is sketched in a later section.
Modelling information flow for organisations: A review of approaches and future challenges. Modelling is a classic approach to understanding complex problems that can be achieved diagrammatically to visualise concepts, and mathematically to analyse attributes of concepts. An organisation as a communicating entity is made up of constructs in which people can have access to information and speak to each other. Modelling information flow for organisations is a challenging task that enables analysts and managers to better understand how to: organise and coordinate processes, eliminate redundant information flows and processes, minimise the duplication of information and manage the sharing of intra- and inter-organisational information.
From Action Systems to Modular Systems Action systems are used to extend program refinement methods for sequential programs, as described in the refinement calculus, to parallel and reactive system refinement. They provide a general description of reactive systems, capable of modeling terminating, possibly aborting and infinitely repeating systems. We show how to extend the action system model to refinement of modular systems. A module may export and import variables, it may provide access procedures for other modules, and it may itself access procedures of other modules. Modules may have autonomous internal activity and may execute in parallel or in sequence. Modules may be nested within each other. They may communicate by shared variables, shared actions, a generalized form of remote procedure calls and by persistent data structures. Both synchronous and asynchronous communication between modules is supported. The paper shows how a single framework can be used for both the specification of large systems, the modular decomposition of the system into smaller units and the refinement of the modules into program modules that can be described in a standard programming language and executed on standard hardware.
A Software Development Environment for Improving Productivity First Page of the Article
The navigation toolkit The problem
Analogical retrieval in reuse-oriented requirements engineering Computational mechanisms are presented for analogical retrieval of domain knowledge as a basis for intelligent tool-based assistance for requirements engineers. A first mechanism, called the domain matcher, retrieves object system models which describe key features for new problems. A second mechanism, called the problem classifier, reasons with analogical mappings inferred by the domain matcher to detect potential incompleteness, overspecification and inconsistencies in entered facts and requirements. Both mechanisms are embedded in AIR, a toolkit that provides co-operative reuse-oriented assistance for requirements engineers.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
1.2
0.05
0.013333
0.003774
0
0
0
0
0
0
0
0
0
0
Model-driven assessment of system dependability Designers of complex real-time systems need to address dependability requirements early on in the development process. This paper presents a model-based approach that allows developers to analyse the dependability of use cases and to discover more reliable and safe ways of designing the interactions of the system with the environment. The hardware design and the dependability of the hardware to be used also needs to be considered. We use a probabilistic extension of statecharts to formally model the interaction requirements defined in the use cases. The model is then evaluated analytically based on the success and failure probabilities of events. The analysis may lead to further refinement of the use cases by introducing detection and recovery measures to ensure dependable system interaction. A visual modelling environment for our extended statecharts formalism supporting automatic probability analysis has been implemented in AToM3, A Tool for Multi-formalism and Meta-Modelling. Our approach is illustrated with an elevator control system case study.
On visual formalisms The higraph, a general kind of diagramming object, forms a visual formalism of topological nature. Higraphs are suited for a wide array of applications to databases, knowledge representation, and, most notably, the behavioral specification of complex concurrent systems using the higraph-based language of statecharts.
An Overview of KRL, a Knowledge Representation Language
Formal Derivation of Strongly Correct Concurrent Programs. Summary: A method is described for deriving concurrent programs which are consistent with the problem specifications and free from deadlock and from starvation. The programs considered are expressed by nondeterministic repetitive selections of pairs of synchronizing conditions and subsequent actions. An iterative, convergent calculus is developed for synthesizing the invariant and synchronizing conditions which guarantee strong correctness. These conditions are constructed as limits of recurrences associated with the specifications and the actions. An alternative method for deriving starvation-free programs by use of auxiliary variables is also given. The applicability of the techniques presented is discussed through various examples; their use for verification purposes is illustrated as well.
Simulation of hepatological models: a study in visual interactive exploration of scientific problems In many different fields of science and technology, visual expressions formed by diagrams, sketches, plots and even images are traditionally used to communicate not only data but also procedures. When these visual expressions are systematically used within a scientific community, bi-dimensional notations often develop which allow the construction of complex messages from sets of primitive icons. This paper discusses how these notations can be translated into visual languages and organized into an interactive environment designed to improve the user's ability to explore scientific problems. To facilitate this translation, the use of Conditional Attributed Rewriting Systems has been extended to visual language definition. The case of a visual language in the programming of a simulation of populations of hepatic cells is studied. A discussion is given of how such a visual language allows the construction of programs through the combination of graphical symbols which are familiar to the physician or which schematize shapes familiar to him in that they resemble structures he observes in real experiments. It is also shown how such a visual approach allows the user to focus on the solution of his problems, avoiding any request for unnecessary precision and most requests for house-keeping data during the interaction.
Object-oriented modeling and design
Reasoning Algebraically about Loops We show here how to formalize different kinds of loop constructs within the refinement calculus, and how to use this formalization to derive general loop transformation rules. The emphasis is on using algebraic methods for reasoning about equivalence and refinement of loops, rather than looking at operational ways of reasoning about loops in terms of their execution sequences. We apply the algebraic reasoning techniques to derive a collection of different loop transformation rules that have been found important in practical program derivations: merging and reordering of loops, data refinement of loops with stuttering transitions and atomicity refinement of loops.
Separation and information hiding We investigate proof rules for information hiding, using the recent formalism of separation logic. In essence, we use the separating conjunction to partition the internal resources of a module from those accessed by the module's clients. The use of a logical connective gives rise to a form of dynamic partitioning, where we track the transfer of ownership of portions of heap storage between program components. It also enables us to enforce separation in the presence of mutable data structures with embedded addresses that may be aliased.
Joining specification statements The specification statement allows us to easily express what a program statement does. This paper shows how refinement of specification statements can be directly expressed using the predicate calculus. It also shows that the specification statements interpreted as predicate transformers form a complete lattice, and that this lattice is the lattice of conjunctive predicate transformers. The join operator of this lattice is constructed as a specification statement. The join operators of two interesting sublattices of the set of specification statements are also investigated.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
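The first-level TS mechanism the abstract describes can be sketched in a few lines. What follows is a minimal illustrative implementation under stated assumptions, not the authors' code: the single-flip move structure, the fixed tenure, the aspiration rule (a tabu move is admitted only when it improves the best known solution) and the diversification fallback are all simplifications chosen for the sketch.

```python
import random

def tabu_search_knapsack(values, weights, capacities, iterations=200, tenure=7, seed=0):
    """Toy tabu search for the 0/1 multiconstraint knapsack problem.

    values[j]     -- profit of item j
    weights[i][j] -- weight of item j in constraint i
    capacities[i] -- capacity of constraint i
    A move flips one item in or out of the knapsack; a flipped item stays
    tabu for `tenure` iterations unless flipping it back would beat the
    best solution found so far (a simple aspiration criterion).
    """
    rng = random.Random(seed)
    n = len(values)
    x = [0] * n              # start from the empty, trivially feasible solution
    tabu_until = [0] * n     # iteration until which flipping item j is tabu

    def feasible(sol):
        return all(sum(w[j] * sol[j] for j in range(n)) <= c
                   for w, c in zip(weights, capacities))

    def value(sol):
        return sum(v * s for v, s in zip(values, sol))

    best, best_val = x[:], value(x)
    for it in range(1, iterations + 1):
        candidates = []
        for j in range(n):
            y = x[:]
            y[j] = 1 - y[j]
            if not feasible(y):
                continue
            v = value(y)
            if it < tabu_until[j] and v <= best_val:
                continue  # tabu and no aspiration: reject the move
            candidates.append((v, j, y))
        if not candidates:
            # diversification fallback: best feasible move, ignoring tabu status
            for j in range(n):
                y = x[:]
                y[j] = 1 - y[j]
                if feasible(y):
                    candidates.append((value(y), j, y))
            if not candidates:
                break
        v, j, x = max(candidates, key=lambda t: (t[0], rng.random()))
        tabu_until[j] = it + tenure
        if v > best_val:
            best, best_val = x[:], v
    return best, best_val
```

For example, on profits (10, 7, 4) with a single constraint of weights (3, 2, 2) and capacity 4, the optimum is to pack the last two items for a value of 11, and the aspiration and fallback rules are what let the search escape the greedy first choice. The paper's advanced-level strategies, learning and target analysis sit on top of a skeleton like this one.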
Procedures and atomicity refinement The introduction of an early return from a (remote) procedure call can increase the degree of parallelism in a parallel or distributed algorithm modeled by an action system. We define a return statement for procedures in an action systems framework and show that it corresponds to carrying out an atomicity refinement.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
Requirements modeling for embedded realtime systems Requirements engineering is the process of defining the goals and constraints of the system and specifying the system's domain of operation. Requirements activities may span the entire life cycle of the system development, refining the system specification and ultimately leading to an implementation. This chapter presents methodologies for the entire process from identifying requirements, modeling the domain, and mapping requirements to architectures. We detail multiple activities, approaches, and aspects of the requirements gathering process, with the ultimate goal of guiding the reader in selecting and handling the most appropriate process for the entire lifecycle of a project. Special focus is placed on the challenges posed by the embedded systems. We present several modeling approaches for requirements engineering and ways of integrating real-time extensions and quality properties into the models. From requirements models we guide the reader in deriving architectures as realizations of core requirements and present an example alongside with a formal verification approach based on the SPIN model checker.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
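The fetch-and-add primitive highlighted in the abstract atomically returns a cell's old value while adding an increment to it, which lets many processors claim work without serializing on a critical section. Below is a minimal software model in Python; the class name and the fill scenario are hypothetical illustrations, and the real primitive is combined inside the network switches rather than guarded by a lock.

```python
import threading

class FetchAndAddCell:
    """Software model of the fetch-and-add primitive: atomically
    return the old value and add the increment."""
    def __init__(self, value=0):
        self._value = value
        self._lock = threading.Lock()

    def fetch_and_add(self, inc):
        with self._lock:
            old = self._value
            self._value += inc
            return old

def parallel_fill(num_threads=8, items_per_thread=100):
    """Classic use: many threads claim disjoint slots of a shared array
    with no coordination beyond fetch-and-add on one counter."""
    counter = FetchAndAddCell(0)
    slots = [None] * (num_threads * items_per_thread)

    def worker(tid):
        for _ in range(items_per_thread):
            i = counter.fetch_and_add(1)  # each index is handed out exactly once
            slots[i] = tid

    threads = [threading.Thread(target=worker, args=(t,))
               for t in range(num_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return slots
```

Every slot ends up written by exactly one thread, regardless of scheduling, because no two calls to `fetch_and_add` can return the same old value.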
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
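The central idea of the abstract, an abstraction function generalized to a predicate transformer, is often stated as a simulation condition. The following is one common formulation, not necessarily the paper's exact notation: with $\alpha$ taking abstract predicates to concrete ones, $A$ an abstract statement and $C$ its proposed concrete replacement,

```latex
% Data refinement as simulation: for every abstract postcondition q,
% the abstraction of A's weakest precondition implies C's weakest
% precondition of the abstracted postcondition.
\forall q \,.\;\; \alpha\big(\mathit{wp}(A, q)\big) \;\Rightarrow\; \mathit{wp}\big(C, \alpha(q)\big)
```

Because $\alpha$ is itself a predicate transformer, this condition composes with ordinary refinement steps, which is what allows data refinements to be calculated directly rather than discharged as a separate proof obligation.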
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power related issues, are satisfied.
An integrated platform of collaborative project management and silicon intellectual property management for IC design industry Due to the rapid growth of the consumer electronics market, there are urgent needs for effective execution of integrated circuit (IC) collaborative design projects and better utilization of silicon intellectual property (SIP) as design knowledge on a virtual design cyber-space. In order to provide the IC industry a well-integrated design environment, this research proposes an integrated information platform which combines the project management (PM) module, the design knowledge (e.g., SIP) management module and the collaborative working environment for the IC industry. With the project management module, the R&D team leaders can better monitor, coordinate and control design tasks among partners and their schedules to shorten the time to market. Moreover, a knowledge exchange environment, supported by the design knowledge management module, enriches design chain productivity and efficiency. Finally, the integration of collaborative working space is the propeller to enable collaborative design. With the unique platform that integrates collaborative project and SIP knowledge management, IC companies gain competitive advantages when working as a virtual design team.
A Trace-Based Refinement Calculus for Shared-Variable Parallel Programs We present a refinement calculus for shared-variable parallel programs. The calculus allows the stepwise formal derivation of a low-level implementation from a trusted high-level specification. It is based on a trace-theoretic semantic model that supports local variable declaration and fair parallel composition. Compositionality is achieved through assumption-commitment reasoning. The refinement rules are syntax-directed in the sense that each rule corresponds to a specific language construct. The calculus is applicable to terminating and nonterminating programs and supports reasoning about liveness properties like termination and eventual entry. A detailed example is given and related work is reviewed.
Stepwise Refinement of Action Systems A method for the formal development of provably correct parallel algorithms by stepwise refinement is presented. The entire derivation procedure is carried out in the context of purely sequential programs. The resulting parallel algorithms can be efficiently executed on different architectures. The methodology is illustrated by showing the main derivation steps in a construction of a parallel algorithm for matrix multiplication.
The specification statement Dijkstra's programming language is extended by specification statements, which specify parts of a program “yet to be developed.” A weakest precondition semantics is given for these statements so that the extended language has a meaning as precise as the original.The goal is to improve the development of programs, making it closer to manipulations within a single calculus. The extension does this by providing one semantic framework for specifications and programs alike: Developments begin with a program (a single specification statement) and end with a program (in the executable language). And the notion of refinement or satisfaction, which normally relates a specification to its possible implementations, is automatically generalized to act between specifications and between programs as well.A surprising consequence of the extension is the appearance of miracles: program fragments that do not satisfy Dijkstra's Law of the Excluded Miracle. Uses for them are suggested.
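The specification statement and its weakest-precondition semantics can be summarized as follows; this is a standard rendering whose notational details may differ from the paper's:

```latex
% w:[pre, post] may change only the variables w; it assumes pre
% initially and establishes post.
wp\big(w{:}[\mathit{pre}, \mathit{post}],\, R\big) \;\equiv\; \mathit{pre} \,\wedge\, \big(\forall\, w \,.\; \mathit{post} \Rightarrow R\big)
```

Taking $\mathit{post} \equiv \mathit{false}$ makes the inner quantification vacuously true, so the weakest precondition reduces to $\mathit{pre}$ for every $R$; with $\mathit{pre} \equiv \mathit{true}$ this gives $wp(S, \mathit{false}) \equiv \mathit{true}$, violating Dijkstra's Law of the Excluded Miracle. This is exactly the kind of miracle the abstract mentions.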
The mystery of the tower revealed: a non-reflective description of the reflective tower In an important series of papers [8, 9], Brian Smith has discussed the nature of programs that know about their text and the context in which they are executed. He called this kind of knowledge reflection. Smith proposed a programming language, called 3-LISP, which embodied such self-knowledge in the domain of metacircular interpreters. Every 3-LISP program is interpreted by a metacircular interpreter, also written in 3-LISP. This gives rise to a picture of an infinite tower of metacircular interpreters, each being interpreted by the one above it. Such a metaphor poses a serious challenge for conventional modes of understanding of programming languages. In our earlier work on reflection [4], we showed how a useful species of reflection could be modeled without the use of towers. In this paper, we give a semantic account of the reflective tower. This account is self-contained in the sense that it does not employ reflection to explain reflection.
Statecharts: A visual formalism for complex systems We present a broad extension of the conventional formalism of state machines and state diagrams, that is relevant to the specification and design of complex discrete-event systems, such as multi-computer real-time systems, communication protocols and digital control units. Our diagrams, which we call statecharts, extend conventional state-transition diagrams with essentially three elements, dealing, respectively, with the notions of hierarchy, concurrency and communication. These transform the language of state diagrams into a highly structured and economical description language. Statecharts are thus compact and expressive (small diagrams can express complex behavior) as well as compositional and modular. When coupled with the capabilities of computerized graphics, statecharts enable viewing the description at different levels of detail, and make even very large specifications manageable and comprehensible. In fact, we intend to demonstrate here that statecharts counter many of the objections raised against conventional state diagrams, and thus appear to render specification by diagrams an attractive and plausible approach. Statecharts can be used either as a stand-alone behavioral description or as part of a more general design methodology that deals also with the system's other aspects, such as functional decomposition and data-flow specification. We also discuss some practical experience that was gained over the last three years in applying the statechart formalism to the specification of a particularly complex system.
A calculus of refinements for program derivations A calculus of program refinements is described, to be used as a tool for the step-by-step derivation of correct programs. A derivation step is considered correct if the new program preserves the total correctness of the old program. This requirement is expressed as a relation of (correct) refinement between nondeterministic program statements. The properties of this relation are studied in detail. The usual sequential statement constructors are shown to be monotone with respect to this relation and it is shown how refinement between statements can be reduced to a proof of total correctness of the refining statement. A special emphasis is put on the correctness of replacement steps, where some component of a program is replaced by another component. A method by which assertions can be added to statements to justify replacements in specific contexts is developed. The paper extends the weakest precondition technique of Dijkstra to proving correctness of larger program derivation steps, thus providing a unified framework for the axiomatic, the stepwise refinement and the transformational approach to program construction and verification.
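The refinement relation the abstract studies is commonly defined via weakest preconditions; the rendering below is standard and may differ in detail from the paper's notation:

```latex
% S is correctly refined by S' iff S' satisfies every total-correctness
% specification that S does.
S \sqsubseteq S' \;\equiv\; \forall Q \,.\; \mathit{wp}(S, Q) \Rightarrow \mathit{wp}(S', Q)
```

Monotonicity of the sequential statement constructors with respect to $\sqsubseteq$ is what licenses the replacement steps discussed in the abstract: refining a component in context refines the whole program.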
Separation and information hiding We investigate proof rules for information hiding, using the recent formalism of separation logic. In essence, we use the separating conjunction to partition the internal resources of a module from those accessed by the module's clients. The use of a logical connective gives rise to a form of dynamic partitioning, where we track the transfer of ownership of portions of heap storage between program components. It also enables us to enforce separation in the presence of mutable data structures with embedded addresses that may be aliased.
Joining specification statements The specification statement allows us to easily express what a program statement does. This paper shows how refinement of specification statements can be directly expressed using the predicate calculus. It also shows that the specification statements interpreted as predicate transformers form a complete lattice, and that this lattice is the lattice of conjunctive predicate transformers. The join operator of this lattice is constructed as a specification statement. The join operators of two interesting sublattices of the set of specification statements are also investigated.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time-wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore power-related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.2
0.0025
0.001124
0
0
0
0
0
0
0
0
0
0
0
Modified region decomposition method and optimal depth decision tree in the recognition of non-uniform sized characters – An experimentation with Kannada characters In contrast to English alphabets, some characters in Indian languages such as Kannada, Hindi, Telugu may have either horizontal or vertical or both the extensions making it difficult to enclose every such character in a standard rectangular grid as done quite often in character recognition research. In this work, an improved method is proposed for the recognition of such characters (especially Kannada characters), which can have spread in vertical and horizontal directions. The method uses a standard sized rectangle which can circumscribe standard sized characters. This rectangle can be interpreted as a two-dimensional, 3 x 3 structure of nine parts which we define as bricks. This structure is also interpreted as consecutively placed three row structures of three bricks each or adjacently placed three column structures of three bricks each. It is obvious that non-uniform sized characters cannot be contained within the standard rectangle of nine bricks. The work presented here proposes to take up such cases. If the character has horizontal extension, then the rectangle is extended horizontally by adding one column structure of three bricks at a time, until the character is encapsulated. Likewise, for vertically extended characters, one row structure is added at a time. For the characters which are smaller than the standard rectangle, one column structure is removed at a time till the character fits in the shrunk rectangle. Thus, the character is enclosed in a rectangular structure of m x n bricks where m >= 3 and n >= 1. The recognition is carried out intelligently by examining certain selected bricks only instead of all m x n bricks.
The recognition is done based on an optimal depth logical decision tree developed during the Learning phase and does not require any mathematical computation.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Procedures and atomicity refinement The introduction of an early return from a (remote) procedure call can increase the degree of parallelism in a parallel or distributed algorithm modeled by an action system. We define a return statement for procedures in an action systems framework and show that it corresponds to carrying out an atomicity refinement.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Linking theories in probabilistic programming This paper presents a theory of probabilistic programming based on relational calculus through a series of stages; each stage concentrates on a different and smaller class of program, defined by the healthiness conditions of increasing strength. At each stage we show that the notations of the probabilistic language conserve the healthiness conditions of their operands, and that every theory conserves the definition of recursion.
Probabilistic predicate transformers Probabilistic predicates generalize standard predicates over a state space; with probabilistic predicate transformers one thus reasons about imperative programs in terms of probabilistic pre- and postconditions. Probabilistic healthiness conditions generalize the standard ones, characterizing “real” probabilistic programs, and are based on a connection with an underlying relational model for probabilistic execution; in both contexts demonic nondeterminism coexists with probabilistic choice. With the healthiness conditions, the associated weakest-precondition calculus seems suitable for exploring the rigorous derivation of small probabilistic programs.
Guarded commands, nondeterminacy and formal derivation of programs So-called “guarded commands” are introduced as a building block for alternative and repetitive constructs that allow nondeterministic program components for which at least the activity evoked, but possibly even the final state, is not necessarily uniquely determined by the initial state. For the formal derivation of programs expressed in terms of these constructs, a calculus will be be shown.
An Overview of KRL, a Knowledge Representation Language
Implementing Remote procedure calls Remote procedure calls (RPC) are a useful paradigm for providing communication across a network between programs written in a high level language. This paper describes a package, written as part of the Cedar project, providing a remote procedure call facility. The paper describes the options that face a designer of such a package, and the decisions we made. We describe the overall structure of our RPC mechanism, our facilities for binding RPC clients, the transport level communication protocol, and some performance measurements. We include descriptions of some optimisations we used to achieve high performance and to minimize the load on server machines that have many clients. Our primary aim in building an RPC package was to make the building of distributed systems easier. Previous protocols were sufficiently hard to use that only members of a select group of communication experts were willing to undertake the construction of distributed systems. We hoped to overcome this by providing a communication paradigm as close as possible to the familiar facilities of our high level languages. To achieve this aim, we concentrated on making remote calls efficient, and on making the semantics of remote calls as close as possible to those of local calls.
Alloy: a lightweight object modelling notation Alloy is a little language for describing structural properties. It offers a declaration syntax compatible with graphical object models, and a set-based formula syntax powerful enough to express complex constraints and yet amenable to a fully automatic semantic analysis. Its meaning is given by translation to an even smaller (formally defined) kernel. This paper presents the language in its entirety, and explains its motivation, contributions and deficiencies.
Semantic grammar: an engineering technique for constructing natural language understanding systems One of the major stumbling blocks to more effective use of computers by naive users is the lack of natural means of communication between the user and the computer system. This report discusses a paradigm for constructing efficient and friendly man-machine interface systems involving subsets of natural language for limited domains of discourse. As such this work falls somewhere between highly constrained formal language query systems and unrestricted natural language understanding systems. The primary purpose of this research is not to advance our theoretical understanding of natural language but rather to put forth a set of techniques for embedding both semantic/conceptual and pragmatic information into a useful natural language interface module. Our intent has been to produce a front end system which enables the user to concentrate on his problem or task rather than making him worry about how to communicate his ideas or questions to the machine.
Recursive functions of symbolic expressions and their computation by machine, Part I This paper in LaTeX, partly supported by ARPA (ONR) grant N00014-94-1-0775 to Stanford University where John McCarthy has been since 1962. Copied with minor notational changes from CACM, April 1960. If you want the exact typography, look there. Current address, John McCarthy, Computer Science Department, Stanford, CA 94305, (email: jmc@cs.stanford.edu), (URL: http://www-formal.stanford.edu/jmc/) by starting with the class of expressions called S-expressions and the functions called...
Joining specification statements The specification statement allows us to easily express what a program statement does. This paper shows how refinement of specification statements can be directly expressed using the predicate calculus. It also shows that the specification statements interpreted as predicate transformers form a complete lattice, and that this lattice is the lattice of conjunctive predicate transformers. The join operator of this lattice is constructed as a specification statement. The join operators of two interesting sublattices of the set of specification statements are also investigated.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time-wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore power-related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.2
0.011111
0.002817
0
0
0
0
0
0
0
0
0
0
0
Automatic generation of entity-oriented summaries for reputation management Producing online reputation summaries for an entity (company, brand, etc.) is a focused summarization task with a distinctive feature: issues that may affect the reputation of the entity take priority in the summary. In this paper we (i) present a new test collection of manually created (abstractive and extractive) reputation reports which summarize tweet streams for 31 companies in the banking and automobile domains; (ii) propose a novel methodology to evaluate summaries in the context of online reputation monitoring, which profits from an analogy between reputation reports and the problem of diversity in search; and (iii) provide empirical evidence that producing reputation reports is different from a standard summarization problem, and incorporating priority signals is essential to address the task effectively.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time-wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore power-related issues, are satisfied.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
score_0–score_13: 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
A New Fast Multi-Context Method for Lossless Image Coding A new efficient and fast context-based lossless image coding method, named Multi-ctx, is presented in the paper. High performance is obtained because predictors are chosen from a very large database, while the simple rules that determine contexts keep the computational complexity of the method low. The technique can easily be adapted to a specific class of applications, as the algorithm can be trained on a set of exemplary images. To demonstrate its potential, it was trained on a set of 45 images of various types and its performance compared with that of several widely used fast techniques. Indeed, the new method appears to be the best, even better than the CALIC approach.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news also carries over when we consider the program complexity of module checking. As good news, we show that for the commonly used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Lexicon-based methods for sentiment analysis We present a lexicon-based approach to extracting sentiment from text. The Semantic Orientation CALculator (SO-CAL) uses dictionaries of words annotated with their semantic orientation (polarity and strength), and incorporates intensification and negation. SO-CAL is applied to the polarity classification task, the process of assigning a positive or negative label to a text that captures the text's opinion towards its main subject matter. We show that SO-CAL's performance is consistent across domains and in completely unseen data. Additionally, we describe the process of dictionary creation, and our use of Mechanical Turk to check dictionaries for consistency and reliability.
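A minimal sketch of the lexicon-based scoring idea: the tiny dictionaries, the percentage intensifiers, and the fixed polarity-shift amount for negation are illustrative stand-ins for SO-CAL's actual resources, not its real dictionaries:

```python
# Hypothetical mini-resources; SO-CAL's real dictionaries are far larger.
LEXICON = {"good": 3, "great": 4, "bad": -3, "terrible": -5}
INTENSIFIERS = {"very": 0.25, "slightly": -0.50}   # percentage modifiers
NEGATORS = {"not", "never"}

def so_score(tokens):
    """Sum the semantic orientation of lexicon words, applying
    intensification multiplicatively and handling negation as a fixed
    shift toward the opposite polarity (rather than a plain sign flip)."""
    score, modifier, negated = 0.0, 0.0, False
    for tok in (t.lower() for t in tokens):
        if tok in INTENSIFIERS:
            modifier = INTENSIFIERS[tok]
        elif tok in NEGATORS:
            negated = True
        elif tok in LEXICON:
            so = LEXICON[tok] * (1.0 + modifier)
            if negated:
                so += 4 if so < 0 else -4      # shift by 4, an assumed amount
            score += so
            modifier, negated = 0.0, False
    return score
```

The shift-based negation captures why "not terrible" comes out mildly negative rather than strongly positive, which a pure sign flip would get wrong.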
Location, Location, Location: How Network Embeddedness Affects Project Success in Open Source Systems The community-based model for software development in open source environments is becoming a viable alternative to traditional firm-based models. To better understand the workings of open source environments, we examine the effects of network embeddedness---or the nature of the relationship among projects and developers---on the success of open source projects. We find that considerable heterogeneity exists in the network embeddedness of open source projects and project managers. We use a visual representation of the affiliation network of projects and developers as well as a formal statistical analysis to demonstrate this heterogeneity and to investigate how these structures differ across projects and project managers. Our main results surround the effect of this differential network embeddedness on project success. We find that network embeddedness has strong and significant effects on both technical and commercial success, but that those effects are quite complex. We use latent class regression analysis to show that multiple regimes exist and that some of the effects of network embeddedness are positive under some regimes and negative under others. We use project age and number of page views to provide insights into the direction of the effect of network embeddedness on project success. Our findings show that different aspects of network embeddedness have powerful but subtle effects on project success and suggest that this is a rich environment for further study.
Web-based adaptive application mobility. Application mobility is achieved when migrating an application with its code, states and all related information from one device to another during its execution. As traditional applications are being challenged by apps, we believe it is time to rethink the notion of application mobility and explore the concept from a web perspective. More specifically, this paper aims to address the challenges associated with application mobility through mapping these against features within modern web technologies, analyzing their ability to meet the specific requirements of application mobility. An architectural proposal is described where we deploy application mobility using web technologies. We show that the emerging HTML5 standard, along with related APIs and frameworks, can provide an environment for delivering application mobility that strongly meets the requirements of support for offline work and heterogeneous environments.
TweetCred: Real-Time Credibility Assessment of Content on Twitter. During sudden onset crisis events, the presence of spam, rumors and fake content on Twitter reduces the value of information contained on its messages (or "tweets"). A possible solution to this problem is to use machine learning to automatically evaluate the credibility of a tweet, i.e. whether a person would deem the tweet believable or trustworthy. This has been often framed and studied as a supervised classification problem in an off-line (post-hoc) setting. In this paper, we present a semi-supervised ranking model for scoring tweets according to their credibility. This model is used in TweetCred, a real-time system that assigns a credibility score to tweets in a user's timeline. TweetCred, available as a browser plug-in, was installed and used by 1,127 Twitter users within a span of three months. During this period, the credibility score for about 5.4 million tweets was computed, allowing us to evaluate TweetCred in terms of response time, effectiveness and usability. To the best of our knowledge, this is the first research work to develop a real-time system for credibility on Twitter, and to evaluate it on a user base of this size.
Sentiment strength detection for the social web Sentiment analysis is concerned with the automatic extraction of sentiment-related information from text. Although most sentiment analysis addresses commercial tasks, such as extracting opinions from product reviews, there is increasing interest in the affective dimension of the social web, and Twitter in particular. Most sentiment analysis algorithms are not ideally suited to this task because they exploit indirect indicators of sentiment that can reflect genre or topic instead. Hence, such algorithms used to process social web texts can identify spurious sentiment patterns caused by topics rather than affective phenomena. This article assesses an improved version of the algorithm SentiStrength for sentiment strength detection across the social web that primarily uses direct indications of sentiment. The results from six diverse social web data sets (MySpace, Twitter, YouTube, Digg, RunnersWorld, BBCForums) indicate that SentiStrength 2 is successful in the sense of performing better than a baseline approach for all data sets in both supervised and unsupervised cases. SentiStrength is not always better than machine-learning approaches that exploit indirect indicators of sentiment, however, and is particularly weaker for positive sentiment in news-related discussions. Overall, the results suggest that, even unsupervised, SentiStrength is robust enough to be applied to a wide variety of different social web contexts.
Popmine: Tracking Political Opinion On The Web The automatic content analysis of mass media in the social sciences has become necessary and possible with the rise of social media and computational power. One particularly promising avenue of research concerns the use of opinion mining. We design and implement the POPmine system which is able to collect texts from web-based conventional media (news items in mainstream media sites) and social media (blogs and Twitter) and to process those texts, recognizing topics and political actors, analyzing relevant linguistic units, and generating indicators of both frequency of mention and polarity (positivity/negativity) of mentions to political actors across sources, types of sources, and across time.
Building a sentiment lexicon for social judgement mining We present a methodology for automatically enlarging a Portuguese sentiment lexicon for mining social judgments from text, i.e., detecting opinions on human entities. Starting from publicly available language resources, the identification of human adjectives is performed through the combination of a linguistic-based strategy, for extracting human adjective candidates from corpora, and machine learning for filtering the human adjectives from the candidate list. We then create a graph of the synonymic relations among the human adjectives, which is built from multiple open thesauri. The graph provides distance features for training a model for polarity assignment. Our initial evaluation shows that this method produces results at least as good as the best that have been reported for this task.
Simulating simple user behavior for system effectiveness evaluation Information retrieval effectiveness evaluation typically takes one of two forms: batch experiments based on static test collections, or lab studies measuring actual users interacting with a system. Test collection experiments are sometimes viewed as introducing too many simplifying assumptions to accurately predict the usefulness of a system to its users. As a result, there is great interest in creating test collections and measures that better model user behavior. One line of research involves developing measures that include a parameterized user model; choosing a parameter value simulates a particular type of user. We propose that these measures offer an opportunity to more accurately simulate the variance due to user behavior, and thus to analyze system effectiveness to a simulated user population. We introduce a Bayesian procedure for producing sampling distributions from click data, and show how to use statistical tools to quantify the effects of variance due to parameter selection.
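One way to picture producing a sampling distribution over a user-model parameter from click data is a Beta posterior over an RBP-style persistence probability; the Beta(1, 1) prior and the counts used below are assumptions for illustration, not the paper's exact procedure:

```python
import random

def sample_persistence(continues, stops, n=2000, seed=0):
    """Draw a sampling distribution for an RBP-style persistence parameter
    from click data: `continues` counts the times users moved on to the next
    result, `stops` the times they quit. With a Beta(1, 1) prior (an assumed
    choice), the posterior is Beta(1 + continues, 1 + stops); each draw can
    drive one simulated user, so the spread of an effectiveness measure over
    the draws reflects variance due to user behaviour."""
    rng = random.Random(seed)
    return [rng.betavariate(1 + continues, 1 + stops) for _ in range(n)]
```

Scoring a system once per drawn parameter value then yields a distribution of effectiveness over a simulated user population rather than a single point estimate.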
Relevance based language models We explore the relation between classical probabilistic models of information retrieval and the emerging language modeling approaches. It has long been recognized that the primary obstacle to effective performance of classical models is the need to estimate a relevance model: probabilities of words in the relevant class. We propose a novel technique for estimating these probabilities using the query alone. We demonstrate that our technique can produce highly accurate relevance models, addressing important notions of synonymy and polysemy. Our experiments show relevance models outperforming baseline language modeling systems on TREC retrieval and TDT tracking tasks. The main contribution of this work is an effective formal method for estimating a relevance model with no training data.
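The core estimate, P(w|R) approximated by mixing document language models weighted by their query likelihood, can be sketched as follows; the Dirichlet smoothing and the toy mu value are illustrative choices, and only a document's own terms are accumulated in this sketch:

```python
from collections import Counter

def relevance_model(query, docs, mu=1.0):
    """RM1-style sketch: estimate P(w | R) by mixing each document's
    Dirichlet-smoothed language model, weighted by the document's query
    likelihood. `docs` is a list of token lists; mu = 1.0 is a toy
    smoothing value, not a tuned one."""
    coll = Counter(t for d in docs for t in d)
    csize = sum(coll.values())

    def p_w_d(w, counts, n):
        return (counts[w] + mu * coll[w] / csize) / (n + mu)

    weighted = []
    for doc in docs:
        counts, n = Counter(doc), len(doc)
        ql = 1.0
        for q in query:
            ql *= p_w_d(q, counts, n)          # query likelihood P(Q | D)
        weighted.append((counts, n, ql))

    z = sum(w for _, _, w in weighted) or 1.0
    rm = Counter()
    for counts, n, w in weighted:
        for t in counts:                        # sketch: doc's own terms only
            rm[t] += (w / z) * p_w_d(t, counts, n)
    return rm
```

On a toy corpus, words co-occurring with the query word in the same documents receive high mass, which is how the estimate captures synonymy without any training data.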
Stepwise Refinement of Control Software - A Case Study Using RAISE We develop a control program for a realistic automation problem by stepwise refinement. We focus on exemplifying appropriate levels of abstraction for the refinement steps. By using phases as a means for abstraction, safety requirements are specified on a high level of abstraction and can be verified using process algebra. The case study is carried out using the RAISE specification language, and we report on some experiences using the RAISE tool set.
Usability analysis with Markov models How hard is it for users to work out how to use interactive devices to achieve their goals, and how can we get this information early enough to influence design? We show that Markov modeling can obtain suitable measures, and we provide formulas that can be used for a large class of systems. We analyze and consider alternative designs for various real examples. We introduce a “knowledge/usability graph,” which shows the impact of even a small amount of user knowledge, and the extent to which designers' knowledge may bias their views of usability. Markov models can be built into design tools, and can therefore be made very convenient for designers to utilize. One would hope that in the future, design tools would include such mathematical analysis, and no new design skills would be required to evaluate devices. A particular concern of this paper is to make the approach accessible. Complete program code and all the underlying mathematics are provided in appendices to enable others to replicate and test all results shown.
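The kind of measure such an analysis produces, e.g. the expected number of user actions to reach a goal state, can be computed from a device's Markov model by a simple fixed-point iteration; the two-state menu device in the usage note is a hypothetical example, not one of the paper's case studies:

```python
def expected_steps(transitions, goal, iters=10_000, tol=1e-12):
    """Expected number of user actions to reach `goal` from each state of a
    Markov model of device use, via fixed-point iteration on
    E[s] = 1 + sum_t P(s -> t) * E[t], with E[goal] = 0.
    `transitions[s]` maps successor states to probabilities."""
    states = [s for s in transitions if s != goal]
    e = {s: 0.0 for s in transitions}
    for _ in range(iters):
        delta = 0.0
        for s in states:
            new = 1.0 + sum(p * e[t] for t, p in transitions[s].items())
            delta = max(delta, abs(new - e[s]))
            e[s] = new
        if delta < tol:
            break
    return e
```

For a device whose menu press reaches the goal with probability 0.5 and otherwise stays in the menu, the expected cost comes out at 2 actions, the kind of number a design tool could surface automatically.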
A research typology for object-oriented analysis and design This article evaluates current research on object-oriented analysis and design (OOAD). Critical components in OOAD are identified and various OOAD techniques (i.e., processes or methods) and representations are compared based on these components. Strong and weak areas in OOAD are identified and areas for future research are discussed in this article.
A proof-based approach to verifying reachability properties This paper presents a formal approach to proving temporal reachability properties, expressed in CTL, on B systems. We are particularly interested in demonstrating that a system can reach a given state by executing a sequence of actions (or operation calls) called a path. Starting with a path, the proposed approach consists in calculating the proof obligations to discharge in order to prove that the path allows the system to evolve in order to verify the desired property. Since these proof obligations are expressed as first logic formulas without any temporal operator, they can be discharged using the prover of AtelierB. Our proposal is illustrated through a case study.
How to merge three different methods for information filtering? Twitter is now a gold marketing tool for entities concerned with online reputation. To automatically monitor the online reputation of entities, systems have to deal with ambiguous entity names, polarity detection and topic detection. We propose three approaches to tackle the first issue: monitoring Twitter in order to find relevant tweets about a given entity. Evaluated within the framework of the RepLab-2013 Filtering task, each of them has been shown competitive with state-of-the-art approaches. Mainly, we investigate how much merging strategies may impact performance on a filtering task according to the evaluation measure.
score_0–score_13: 1.022214, 0.023, 0.023, 0.023, 0.0136, 0.006615, 0.002199, 0.000069, 0, 0, 0, 0, 0, 0
Requirements-Based Testing of Real-Time Systems. Modeling for Testability
Using attributed grammars to test designs and implementations We present a method for generating test cases that can be used throughout the entire life cycle of a program. This method uses attributed translation grammars to generate both inputs and outputs, which can then be used either as is, in order to test the specifications, or in conjunction with automatic test drivers to test an implementation against the specifications. The grammar can generate test cases either randomly or systematically. The attributes are used to guide the generation process, thereby avoiding the generation of many superfluous test cases. The grammar itself not only drives the generation of test cases but also serves as a concise documentation of the test plan. In the paper, we describe the test case generator, show how it works in typical examples, compare it with related techniques, and discuss how it can be used in conjunction with various testing heuristics.
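A toy version of the idea, a grammar whose synthesized attribute supplies the expected output alongside the generated input, might look like this; the expression grammar below is an invented example, not the paper's notation:

```python
import random

def gen_expr(rng, depth=0, max_depth=3):
    """Tiny attributed-grammar test generator (an invented example): each
    production returns both the generated text and its synthesized `value`
    attribute, so every derivation is a ready-made (input, expected output)
    test case."""
    if depth >= max_depth or rng.random() < 0.4:
        n = rng.randint(0, 9)                  # terminal production: a digit
        return str(n), n
    left_t, left_v = gen_expr(rng, depth + 1, max_depth)
    right_t, right_v = gen_expr(rng, depth + 1, max_depth)
    if rng.random() < 0.5:
        return f"({left_t}+{right_t})", left_v + right_v
    return f"({left_t}*{right_t})", left_v * right_v

rng = random.Random(1)                          # seeded for repeatable runs
cases = [gen_expr(rng) for _ in range(5)]
```

Each generated pair can be fed to an implementation under test with the attribute serving as the oracle, and seeding the generator switches between random and systematic (replayable) generation.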
An experiment in technology transfer: PAISLey specification of requirements for an undersea lightwave cable system From May to October 1985 members of the Undersea Systems Laboratory and the Computer Technology Research Laboratory of AT&T Bell Laboratories worked together to apply the executable specification language PAISLey to requirements for the “SL” communications system. This paper describes our experiences and answers three questions based on the results of the experiment: Can SL requirements be specified formally in PAISLey? Can members of the SL project learn to read and write specifications in PAISLey? How would the use of PAISLey affect the productivity of the software-development team and the quality of the resulting software?
A Total System Design Framework
Comparison of analysis techniques for information requirement determination A comparison of systems analysis techniques, the Data Flow Diagram (DFD) and part of the Integrated Definition Method (IDEF0), is done using a new developmental framework.
A comparison of techniques for the specification of external system behavior The elimination of ambiguity, inconsistency, and incompleteness in a Software Requirements Specification (SRS) document is inherently difficult, due to the use of natural language. The focus here is a survey of available techniques designed to reduce these negatives in the documentation of a software product's external behavior.
Callisto: An intelligent project management system
A Taxonomy of Current Issues in Requirements Engineering The purpose of this article is to increase awareness of several requirements specifications issues: (1) the role they play in the full system development life cycle, (2) the diversity of forms they assume, and (3) the problems we continue to face. The article concentrates on ways of expressing requirements rather than ways of generating them. A discussion of various classification criteria for existing requirements specification techniques follows a brief review of requirements specification contents and concerns.
Issues in automated negotiation and electronic commerce: extending the contract net framework In this paper we discuss a number of previously unaddressed issues that arise in automated negotiation among self-interested agents whose rationality is bounded by computational complexity. These issues are presented in the context of iterative task allocation negotiations. First, the reasons why such agents need to be able to choose the stage and level of commitment dynamically are identified. A protocol that allows such choices through conditional commitment breaking penalties is presented. Next, the implications of bounded rationality are analysed. Several tradeoffs between allocated computation and negotiation benefits and risk are enumerated, and the necessity of explicit local deliberation control is substantiated. Techniques for linking negotiation items and multiagent contracts are presented as methods for escaping local optima in the task allocation process. Implementing both methods among self-interested bounded rational agents is discussed. Finally, the problem of message congestion among self-interested agents is described, and alternative remedies are presented.
Domain modelling with hierarchies of alternative viewpoints Domain modelling can be used within requirements engineering to reveal the conceptual models used by the participants, and relate these to one another. However, existing elicitation techniques used in AI adopt a purely cognitive stance, in that they model a single problem- solving agent, and ignore the social and organisational context. This paper describes a framework for representing alternative, conflicting viewpoints in a single domain model. The framework is based on the development of a hierarchy of viewpoint descriptions, where lower levels of the hierarchy contain the conflicts. The hierarchies can be viewed in a number of ways, and hence allow the participants to develop an understanding of one another's perspective. The framework is supported by a set of tools for developing and manipulating these hierarchies.
How to write parallel programs: a guide to the perplexed We present a framework for parallel programming, based on three conceptual classes for understanding parallelism and three programming paradigms for implementing parallel programs. The conceptual classes are result parallelism, which centers on parallel computation of all elements in a data structure; agenda parallelism, which specifies an agenda of tasks for parallel execution; and specialist parallelism, in which specialist agents solve problems cooperatively. The programming paradigms center on live data structures that transform themselves into result data structures; distributed data structures that are accessible to many processes simultaneously; and message passing, in which all data objects are encapsulated within explicitly communicating processes. There is a rough correspondence between the conceptual classes and the programming methods, as we discuss. We begin by outlining the basic conceptual classes and programming paradigms, and by sketching an example solution under each of the three paradigms. The final section develops a simple example in greater detail, presenting and explaining code and discussing its performance on two commercial parallel computers, an 18-node shared-memory multiprocessor and a 64-node distributed-memory hypercube. The middle section bridges the gap between the abstract and the practical by giving an overview of how the basic paradigms are implemented. Our programming discussion and the examples use the parallel language C-Linda for several reasons: the main paradigms are all simple to express in Linda; efficient Linda implementations exist on a wide variety of parallel machines; and a wide variety of parallel programs have been written in Linda.
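Of the three paradigms, message passing is the easiest to sketch without Linda: below, a specialist worker encapsulates its data and communicates only through explicit messages, with Python threads and queues standing in for C-Linda processes and tuple space:

```python
import threading
import queue

def specialist(inbox, outbox):
    """A specialist agent: owns no shared state, receives work as messages,
    and sends results back as messages (None is the shutdown signal)."""
    while True:
        task = inbox.get()
        if task is None:
            break
        outbox.put(task * task)

inbox, outbox = queue.Queue(), queue.Queue()
worker = threading.Thread(target=specialist, args=(inbox, outbox))
worker.start()
for n in range(4):
    inbox.put(n)          # all coordination happens through explicit messages
inbox.put(None)
worker.join()
results = sorted(outbox.get() for _ in range(4))
```

The same task farm rewritten with a shared results list would illustrate the distributed-data-structure paradigm instead; the contrast is in where the data lives, not in what is computed.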
Optical Character Recognition - a Survey.
Feature based classification of computer graphics and real images Photorealistic images can now be created using advanced techniques in computer graphics (CG). Synthesized elements could easily be mistaken for photographic (real) images. Therefore we need to differentiate between CG and real images. In our work, we propose and develop a new framework based on an aggregate of existing features. Our framework has a classification accuracy of 90% when tested on the de facto standard Columbia dataset, which is 4% better than the best results obtained by other prominent methods in this area. We further show that using feature selection it is possible to reduce the feature dimension of our framework from 557 to 80 without a significant loss in performance (≪ 1%). We also investigate different approaches that attackers can use to fool the classification system, including creation of hybrid images and histogram manipulations. We then propose and develop filters to effectively detect such attacks, thereby limiting the effect of such attacks to our classification system.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
scores (score_0–score_13): 1.045447, 0.035127, 0.033557, 0.016897, 0.004762, 0.001699, 0.000531, 0.00031, 0.000164, 0.000057, 0.000001, 0, 0, 0
Application of reversible denoising and lifting steps to DWT in lossless JPEG 2000 for improved bitrates In a previous study, we noticed that the lifting step of a color space transform might increase the amount of noise that must be encoded during compression of an image. To alleviate this problem, we proposed the replacement of lifting steps with reversible denoising and lifting steps (RDLS), which are basically lifting steps integrated with denoising filters. We found the approach effective for some of the tested images. In this study, we apply RDLS to discrete wavelet transform (DWT) in JPEG 2000 lossless coding. We evaluate RDLS effects on bitrates using various denoising filters and a large number of diverse images. We employ a heuristic for image-adaptive RDLS filter selection; based on its empirical outcomes, we also propose a fixed filter selection variant. We find that RDLS significantly improves bitrates of non-photographic images and of images with impulse noise added, while bitrates of photographic images are improved by below 1% on average. Considering that the DWT stage may worsen bitrates of some images, we propose a couple of practical compression schemes based on JPEG 2000 and RDLS. For non-photographic images, we obtain an average bitrate improvement of about 12% for fixed filter selection and about 14% for image-adaptive selection. Denoising is integrated with DWT lifting steps in lossless JPEG 2000. A heuristic is used for image-adaptive selection of denoising filters. Significant bitrate improvements are obtained for nonphotographic images. Consistently good performance is observed on images with impulse noise. Compression schemes with various bitrate-complexity tradeoffs are proposed.
Reversible Denoising and Lifting Based Color Component Transformation for Lossless Image Compression An undesirable side effect of reversible color space transformation, which consists of lifting steps (LSs), is that while removing correlation it contaminates transformed components with noise from other components. Noise affects particularly adversely the compression ratios of lossless compression algorithms. To remove correlation without increasing noise, a reversible denoising and lifting step (RDLS) was proposed that integrates denoising filters into LS. Applying RDLS to color space transformation results in a new image component transformation that is perfectly reversible despite involving the inherently irreversible denoising; the first application of such a transformation is presented in this paper. For the JPEG-LS, JPEG 2000, and JPEG XR standard algorithms in lossless mode, the application of RDLS to the RDgDb color space transformation with simple denoising filters is especially effective for images in the native optical resolution of acquisition devices. It results in improving compression ratios of all those images in cases when unmodified color space transformation either improves or worsens ratios compared with the untransformed image. The average improvement is 5.0–6.0% for two out of the three sets of such images, whereas average ratios of images from standard test-sets are improved by up to 2.2%. For the efficient image-adaptive determination of filters for RDLS, a couple of fast entropy-based estimators of compression effects that may be used independently of the actual compression algorithm are investigated and an immediate filter selection method based on the detector precision characteristic model driven by image acquisition parameters is introduced.
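The RDLS idea in these abstracts can be illustrated with a minimal sketch: a lifting step whose prediction is passed through a denoising filter stays perfectly reversible, because the inverse subtracts exactly the same denoised prediction computed from the unchanged reference component. The 3-tap averaging filter and the sample values below are illustrative stand-ins, not the filters evaluated in the papers.

```python
# Hedged sketch of a reversible denoising and lifting step (RDLS).
# The prediction of component b from component a is denoised before use;
# since the denoiser only reads a, the step inverts exactly.

def smooth(x, i):
    """Illustrative denoising filter: average of a sample and its neighbours."""
    lo, hi = max(i - 1, 0), min(i + 1, len(x) - 1)
    return round((x[lo] + x[i] + x[hi]) / 3)

def rdls_forward(a, b):
    """Replace b with the residual against a denoised prediction from a."""
    return [bi - smooth(a, i) for i, bi in enumerate(b)]

def rdls_inverse(a, d):
    """Add back the identical denoised prediction to recover b exactly."""
    return [di + smooth(a, i) for i, di in enumerate(d)]

a = [10, 12, 11, 40, 41]   # reference component (e.g. one color plane)
b = [9, 13, 12, 39, 42]    # component to be decorrelated
assert rdls_inverse(a, rdls_forward(a, b)) == b   # perfectly reversible
```

The key design point carried over from the papers is that reversibility does not depend on the denoiser being invertible, only on it being a deterministic function of data that the inverse step still has.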
Skipping Selected Steps Of Dwt Computation In Lossless Jpeg 2000 For Improved Bitrates In order to improve bitrates of lossless JPEG 2000, we propose to modify the discrete wavelet transform (DWT) by skipping selected steps of its computation. We employ a heuristic to construct the skipped steps DWT (SS-DWT) in an image-adaptive way and define fixed SS-DWT variants. For a large and diverse set of images, we find that SS-DWT significantly improves bitrates of non-photographic images. From a practical standpoint, the most interesting results are obtained by applying entropy estimation of coding effects for selecting among the fixed SS-DWT variants. This way we get the compression scheme that, as opposed to the general SS-DWT case, is compliant with the JPEG 2000 part 2 standard. It provides average bitrate improvement of roughly 5% for the entire test-set, whereas the overall compression time becomes only 3% greater than that of the unmodified JPEG 2000. Bitrates of photographic and non-photographic images are improved by roughly 0.5% and 14%, respectively. At a significantly increased cost of exploiting a heuristic, selecting the steps to be skipped based on the actual bitrate instead of an estimated one, and by applying reversible denoising and lifting steps to SS-DWT, we have attained greater bitrate improvements of up to about 17.5% for non-photographic images.
New simple and efficient color space transformations for lossless image compression. • New transformation requiring 4 operations per pixel gave the best overall ratios. • New transformation done in 2 operations gave the best JPEG2000 and JPEG XR ratios. • A transformation from human vision system outperformed established ones. • PCA/KLT resulted in ratios inferior to ratios of new and established transformations.
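For a concrete example of the kind of integer lifting-based reversible color transform compared in this abstract, here is the well-known JPEG 2000 reversible component transform (RCT); the paper's new transformations differ and are not reproduced here.

```python
# Hedged sketch: the JPEG 2000 RCT, a lifting-style reversible color
# transform, shown as a familiar stand-in for the transforms the paper
# benchmarks. Floor division keeps every step exactly invertible.

def rct_forward(r, g, b):
    y = (r + 2 * g + b) // 4   # luma-like component (floor keeps it reversible)
    u = b - g                  # chroma difference
    v = r - g                  # chroma difference
    return y, u, v

def rct_inverse(y, u, v):
    g = y - (u + v) // 4       # same floor division undoes the forward step
    b = u + g
    r = v + g
    return r, g, b

print(rct_inverse(*rct_forward(200, 120, 30)))   # (200, 120, 30)
```

The design choice worth noting is that reversibility comes from using the identical floor-divided term in both directions, not from the transform being linear over the integers.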
Context-based lossless interband compression--extending CALIC. This paper proposes an interband version of CALIC (context-based, adaptive, lossless image codec) which represents one of the best performing, practical and general purpose lossless image coding techniques known today. Interband coding techniques are needed for effective compression of multispectral images like color images and remotely sensed images. It is demonstrated that CALIC's techniques of context modeling of DPCM errors lend themselves easily to modeling of higher-order interband correlations that cannot be exploited by simple interband linear predictors alone. The proposed interband CALIC exploits both interband and intraband statistical redundancies, and obtains significant compression gains over its intraband counterpart. On some types of multispectral images, interband CALIC can lead to a reduction in bit rate of more than 20% as compared to intraband CALIC. Interband CALIC only incurs a modest increase in computational cost as compared to intraband CALIC.
Generalized kraft inequality and arithmetic coding Algorithms for encoding and decoding finite strings over a finite alphabet are described. The coding operations are arithmetic involving rational numbers li as parameters such that ∑i2−li≤2−ε. This coding technique requires no blocking, and the per-symbol length of the encoded string approaches the associated entropy within ε. The coding speed is comparable to that of conventional coding methods.
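The decodability condition in this abstract can be checked directly. A minimal sketch follows; the code lengths and ε values are illustrative, not taken from the paper.

```python
# Hedged sketch: checking the generalized Kraft inequality
# sum(2**-l_i) <= 2**-eps for a set of per-symbol code lengths,
# the condition the paper uses to guarantee decodable arithmetic codes.

def kraft_sum(lengths):
    """Return sum(2**-l) over the given code lengths (in bits)."""
    return sum(2.0 ** -l for l in lengths)

def satisfies_generalized_kraft(lengths, eps):
    """True iff sum(2**-l_i) <= 2**-eps; eps = 0 gives the classic inequality."""
    return kraft_sum(lengths) <= 2.0 ** -eps

lengths = [2, 2, 3, 4, 4]                  # illustrative code lengths
print(kraft_sum(lengths))                  # 0.75
print(satisfies_generalized_kraft(lengths, 0.1))   # True: 0.75 <= 2**-0.1
```

Setting ε = 0 recovers the ordinary Kraft inequality; the paper's ε > 0 slack is what bounds how far the per-symbol encoded length can exceed the entropy.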
A universal algorithm for sequential data compression A universal algorithm for sequential data compression is presented. Its performance is investigated with respect to a nonprobabilistic model of constrained sources. The compression ratio achieved by the proposed universal code uniformly approaches the lower bounds on the compression ratios attainable by block-to-variable codes and variable-to-block codes designed to match a completely specified source.
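As a hedged illustration of the dictionary-based parsing behind Ziv–Lempel universal coding: the sketch below follows the later LZ78 variant, which is simpler to show than the paper's exact sliding-window scheme, so treat it as an illustration of the idea only.

```python
# Hedged sketch of LZ78-style universal parsing: the input is split into
# phrases (dictionary_index, next_char), growing the dictionary as it goes.

def lz78_parse(s):
    """Parse s into (dict_index, next_char) phrases; index 0 = empty phrase."""
    dictionary = {"": 0}
    phrases = []
    current = ""
    for ch in s:
        if current + ch in dictionary:
            current += ch              # extend the longest known phrase
        else:
            phrases.append((dictionary[current], ch))
            dictionary[current + ch] = len(dictionary)
            current = ""
    if current:                        # flush a trailing phrase with no new char
        phrases.append((dictionary[current], ""))
    return phrases

def lz78_decode(phrases):
    """Rebuild the original string from the phrase list."""
    table = [""]
    out = []
    for idx, ch in phrases:
        piece = table[idx] + ch
        table.append(piece)
        out.append(piece)
    return "".join(out)

print(lz78_parse("aaabba"))            # [(0, 'a'), (1, 'a'), (0, 'b'), (3, 'a')]
```

The universality claim of the paper corresponds to the fact that, as the dictionary grows, the per-symbol cost of the phrase indices approaches the source's compressibility without any prior statistics.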
List processing in real time on a serial computer A real-time list processing system is one in which the time required by the elementary list operations (e.g. CONS, CAR, CDR, RPLACA, RPLACD, EQ, and ATOM in LISP) is bounded by a (small) constant. Classical implementations of list processing systems lack this property because allocating a list cell from the heap may cause a garbage collection, which process requires time proportional to the heap size to finish. A real-time list processing system is presented which continuously reclaims garbage, including directed cycles, while linearizing and compacting the accessible cells into contiguous locations to avoid fragmenting the free storage pool. The program is small and requires no time-sharing interrupts, making it suitable for microcode. Finally, the system requires the same average time, and not more than twice the space, of a classical implementation, and those space requirements can be reduced to approximately classical proportions by compact list representation. Arrays of different sizes, a program stack, and hash linking are simple extensions to our system, and reference counting is found to be inferior for many applications.
Time-delay systems: an overview of some recent advances and open problems After presenting some motivations for the study of time-delay systems, this paper recalls modifications (models, stability, structure) arising from the presence of the delay phenomenon. A brief overview of some control approaches is then provided, the sliding mode and time-delay controls in particular. Lastly, some open problems are discussed: the constructive use of the delayed inputs, the digital implementation of distributed delays, the control via the delay, and the handling of information related to the delay value.
A Study of The Fragile Base Class Problem In this paper we study the fragile base class problem. This problem occurs in open object-oriented systems employing code inheritance as an implementation reuse mechanism. System developers unaware of extensions to the system developed by its users may produce a seemingly acceptable revision of a base class which may damage its extensions. The fragile base class problem becomes apparent during maintenance of open object-oriented systems, but requires consideration during design. We express the fragile base class problem in terms of a flexibility property. By means of five orthogonal examples, violating the flexibility property, we demonstrate different aspects of the problem. We formulate requirements for disciplining inheritance, and extend the refinement calculus to accommodate for classes, objects, class-based inheritance, and class refinement. We formulate and formally prove a flexibility theorem demonstrating that the restrictions we impose on inheritance are sufficient to permit safe substitution of a base class with its revision in presence of extension classes.
On the Lattice of Specifications: Applications to a Specification Methodology In this paper we investigate the lattice properties of the natural ordering between specifications, which expresses that a specification expresses a stronger requirement than another specification. The lattice-like structure that we uncover is used as a basis for a specification methodology.
Visual Formalisms Revisited The development of an interactive application is a complex task that has to consider data, behavior, intercommunication, architecture and distribution aspects of the modeled system. In particular, it presupposes the successful communication between the customer and the software expert. To enhance this communication most modern software engineering methods recommend to specify the different aspects of a system by visual formalisms. In essence, visual specifications are directed graphs that are interpreted in a particular way for each aspect of the system. They are also intended to be compositional. This means that each node can itself be a graph with a separate meaning. However, the lack of a denotational model for hierarchical graphs often leads to the loss of compositionality. This has severe negative consequences in the development of realistic applications. In this paper we present a simple denotational model (which is by definition compositional) for the architecture and behavior aspects of a system. This model is then used to give a semantics to almost all the concepts occurring in ROOM. Our model also provides a compositional semantics for or-states in statecharts.
A Task-Based Methodology for Specifying Expert Systems A task-based specification methodology for expert system specification that is independent of the problem solving architecture, that can be applied to many expert system applications, that focuses on what the knowledge is, not how it is implemented, that introduces the major concepts involved gradually, and that supports verification and validation is discussed. To evaluate the methodology, a specification of R1/SOAR, an expert system that reimplements a major portion of the R1 expert system, was reverse engineered.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
scores (score_0–score_13): 1.11, 0.12, 0.1, 0.0275, 0.000116, 0.000008, 0.000001, 0, 0, 0, 0, 0, 0, 0
Scientific Computation with JavaSpaces JavaSpaces provides a simple yet expressive mechanism for distributed computing with commodity technology. We discuss the suitability of JavaSpaces for implementing different classes of concurrent computations based on low-level metrics (null messaging and array I/O), and present performance results for several parametric algorithms. We found that although inefficient for communication intensive problems, JavaSpaces yields good speedups for parametric experiments, relative to both sequential Java and C. We also outline a dynamic native compilation technique, which for short, compute-intensive codes further boosts performance without compromising Java portability or extensive algorithm recoding. Discussion and empirical results are presented in the context of our public benchmark suite.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema.In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
A Conceptual Graph Model for W3C Resource Description Framework With the aim of building a "Semantic Web", the content of the documents must be explicitly represented through metadata in order to enable contents-guided search. Our approach is to exploit a standard language (RDF, recommended by W3C) for expressing such metadata and to interpret these metadata in conceptual graphs (CG) in order to exploit querying and inferencing capabilities enabled by CG formalism. The paper presents our mapping of RDF into CG and its interest in the context of the semantic Web.
Procedures and atomicity refinement The introduction of an early return from a (remote) procedure call can increase the degree of parallelism in a parallel or distributed algorithm modeled by an action system. We define a return statement for procedures in an action systems framework and show that it corresponds to carrying out an atomicity refinement.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
scores (score_0–score_13): 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
O-O Requirements Analysis: an Agent Perspective In this paper, we present a formal object-oriented specification language designed for capturing requirements expressed on composite realtime systems. The specification describes the system as a society of 'agents', each of them being characterised (i) by its responsibility with respect to actions happening in the system and (ii) by its time-varying perception of the behaviour of the other agents. On top of the language, we also suggest some methodological guidance by considering a general strategy based on a progressive assignment of responsibilities to agents.
From Organization Models to System Requirements: A 'Cooperating Agents' Approach
Models for supporting the redesign of organizational work Many types of models have been proposed for supporting organizational work. In this paper, we consider models that are used for supporting the redesign of organizational work. These models are used to help discover opportunities for improvements in organizations, introducing information technologies where appropriate. To support the redesign of organizational work, models are needed for describing work configurations, and for identifying issues, exploring alternatives, and evaluating them. Several approaches are presented and compared. The i* framework — consisting of the Strategic Dependency and Strategic Rationale models — is discussed in some detail, as it is expressly designed for modelling and redesigning organizational work. We argue that models which view organizational participants as intentional actors with motivations and intents, and abilities and commitments, are needed to provide richer representations of organizational work to support its effective redesign. The redesign of a bank loan operation is used as illustration.
GRAIL/KAOS: An Environment for Goal-Driven Requirements Analysis, Integration and Layout The KAOS methodology provides a language, a method, and meta-level knowledge for goal-driven requirements elaboration. The language provides a rich ontology for capturing requirements in terms of goals, constraints, objects, actions, agents etc. Links between requirements are represented as well to capture refinements, conflicts, operationalizations, responsibility assignments, etc. The KAOS specification language is a multi-paradigm language with a two-level structure: an outer semantic net layer for declaring concepts, their attributes and links to other concepts, and an inner formal assertion layer for formally defining the concept. The latter combines a real-time temporal logic for the specification of goals, constraints, and objects, and standard pre-/postconditions for the specification of actions and their strengthening to ensure the constraints.
Repository support for multi-perspective requirements engineering Relationships among different modeling perspectives have been systematically investigated focusing either on given notations (e.g. UML) or on domain reference models (e.g. ARIS/SAP). In contrast, many successful informal methods for business analysis and requirements engineering (e.g. JAD) emphasize team negotiation, goal orientation and flexibility of modeling notations. This paper addresses the question how much formal and computerized support can be provided in such settings without destroying their creative tenor. Our solution is based on a novel modeling language, M-Telos, that integrates the adaptability and analysis advantages of the logic-based meta modeling language Telos with a module concept covering the structuring mechanisms of scalable software architectures. It comprises four components: (1) A modular conceptual modeling formalism organizes individual perspectives and their interrelationships. (2) Perspective schemata are linked to a conceptual meta meta model of shared domain terms, thus giving the architecture a semantic meaning and enabling adaptability and extensibility of the network of perspectives. (3) Inconsistency management across perspectives is handled in a goal-oriented manner, by formalizing analysis goals as meta rules which are automatically customized to perspective schemata. (4) Continuous incremental maintenance of inconsistency information is provided by exploiting recent view maintenance techniques from deductive databases. The approach has been implemented as an extension to the ConceptBase meta database management system (available through the web site http://www-i5.Informatik.RWTH-Aachen.de/Cbdor/index.html) and has been applied in a number of real-world requirements engineering projects.
A classification of semantic conflicts in heterogeneous database systems
Specification-based test oracles for reactive systems
On artificial agents for negotiation in electronic commerce A well-established body of research consistently shows that people involved in multiple-issue negotiations frequently select pareto-inferior agreements that “leave money on the table”. Using an evolutionary computation approach, we show how simple, boundedly rational, artificial adaptive agents can learn to perform similarly to humans at stylized negotiations. Furthermore, there is the promise that these agents can be integrated into practicable electronic commerce systems which would not only leave less money on the table, but would enable new types of transactions to be negotiated cost effectively
Protocol verification as a hardware design aid The role of automatic formal protocol verification in hardware design is considered. Principles are identified that maximize the benefits of protocol verification while minimizing the labor and computation required. A new protocol description language and verifier (both called Murφ) are described, along with experiences in applying them to two industrial protocols that were developed as part of hardware designs.
Microanalysis: Acquiring Database Semantics in Conceptual Graphs Relational databases are in widespread use, yet they suffer from serious limitations when one uses them for reasoning about real-world enterprises. This is due to the fact that database relations possess no inherent semantics. This paper describes an approach called microanalysis that we have used to effectively capture database semantics represented by conceptual graphs. The technique prescribes a manual knowledge acquisition process whereby each relation schema is captured in a single conceptual graph. The schema's graph can then easily be instantiated for each tuple in the database forming a set of graphs representing the entire database's semantics. Although our technique originally was developed to capture semantics in a restricted domain of interest, namely database inference detection, we believe that domain-directed microanalysis is a general approach that can be of significant value for databases in many domains. We describe the approach and give a brief example.
Object-oriented development in an industrial environment Object-oriented programming is a promising approach to the industrialization of the software development process. However, it has not yet been incorporated in a development method for large systems. The approaches taken are merely extensions of well-known techniques when 'programming in the small' and do not stand on the firm experience of existing developments methods for large systems. One such technique called block design has been used within the telecommunication industry and relies on a similar paradigm as object-oriented programming. The two techniques together with a third technique, conceptual modeling used for requirement modeling of information systems, have been unified into a method for the development of large systems.
JPEG 2000 performance evaluation and assessment JPEG 2000, the new ISO/ITU-T standard for still image coding, has recently reached the International Standard (IS) status. Other new standards have been recently introduced, namely JPEG-LS and MPEG-4 VTC. This paper provides a comparison of JPEG 2000 with JPEG-LS and MPEG-4 VTC, in addition to older but widely used solutions, such as JPEG and PNG, and well established algorithms, such as SPIHT. Lossless compression efficiency, fixed and progressive lossy rate-distortion performance, as well as complexity and robustness to transmission errors, are evaluated. Region of Interest coding is also discussed and its behavior evaluated. Finally, the set of provided functionalities of each standard is also evaluated. In addition, the principles behind each algorithm are briefly described. The results show that the choice of the “best” standard depends strongly on the application at hand, but that JPEG 2000 supports the widest set of features among the evaluated standards, while providing superior rate-distortion performance in most cases.
Fuzzy Time Series Forecasting With a Probabilistic Smoothing Hidden Markov Model Since its emergence, the study of fuzzy time series (FTS) has attracted more attention because of its ability to deal with the uncertainty and vagueness that are often inherent in real-world data resulting from inaccuracies in measurements, incomplete sets of observations, or difficulties in obtaining measurements under uncertain circumstances. The representation of fuzzy relations that are obtained from a fuzzy time series plays a key role in forecasting. Most of the works in the literature use the rule-based representation, which tends to encounter the problem of rule redundancy. A remedial forecasting model was recently proposed in which the relations were established based on the hidden Markov model (HMM). However, its forecasting performance generally deteriorates when encountering more zero probabilities owing to fewer fuzzy relationships that exist in the historical temporal data. This paper thus proposes an enhanced HMM-based forecasting model by developing a novel fuzzy smoothing method to overcome performance deterioration. To deal with uncertainty more appropriately, the roulette-wheel selection approach is applied to probabilistically determine the forecasting result. The effectiveness of the proposed model is validated through real-world forecasting experiments, and performance comparison with other benchmarks is conducted by a Monte Carlo method.
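The roulette-wheel selection step mentioned in this abstract can be sketched as fitness-proportionate sampling; the candidate forecasts and probabilities below are invented for illustration and are not values from the paper.

```python
# Hedged sketch of roulette-wheel (fitness-proportionate) selection, the
# mechanism the forecasting model uses to probabilistically pick a result.

import random

def roulette_select(candidates, weights, rng=random):
    """Pick one candidate with probability proportional to its weight."""
    total = sum(weights)
    spin = rng.uniform(0, total)       # one spin of the wheel
    cum = 0.0
    for c, w in zip(candidates, weights):
        cum += w
        if spin <= cum:
            return c
    return candidates[-1]              # guard against floating-point drift

forecasts = [18.2, 19.0, 20.5]         # illustrative candidate forecasts
probs = [0.5, 0.3, 0.2]                # illustrative smoothed probabilities
print(roulette_select(forecasts, probs))
```

The point of the probabilistic pick, as the abstract notes, is that it propagates the uncertainty of the fuzzy relations into the forecast instead of always returning the single most likely value.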
Extending statecharts to model system interactions Statecharts are diagrams composed of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are used to model system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems) and of interactions and parallelism (among systems).
Scores (score_0..score_13): 1.032678, 0.050758, 0.030079, 0.025795, 0.025188, 0.025188, 0.013308, 0.008907, 0.004567, 0.000215, 0.000005, 0, 0, 0
Lossless coding of hyperspectral images through components ordering by minimum bidirected spanning tree.
Vector-lifting schemes for lossless coding and progressive archival of multispectral images In this paper, a nonlinear subband decomposition scheme with perfect reconstruction is proposed for lossless coding of multispectral images. The merit of this new scheme is to exploit efficiently both the spatial and the spectral redundancies contained in a multispectral image sequence. Besides, it is suitable for progressive coding, which constitutes a desirable feature for telebrowsing applications. Simulation tests performed on real scenes allow to assess the performances of this new multiresolution coding algorithm. They demonstrate that the achieved compression ratios are higher than those obtained with currently used lossless coders.
Band ordering in lossless compression of multispectral images In this paper, we consider a model of lossless image compression in which each band of a multispectral image is coded using a prediction function involving values from a previously coded band of the compression, and examine how the ordering of the bands affects the achievable compression. We present an efficient algorithm for computing the optimal band ordering for a multispectral image. This algorithm has time complexity O(n²) for an n-band image, while the naive algorithm takes time Ω(n!). A slight variant of the optimal ordering problem that is motivated by some practical concerns is shown to be NP-hard, and hence computationally infeasible, in all cases except for the most trivial possibility. In addition, we report on our experimental findings using the algorithms designed in this paper applied to real multispectral satellite data. The results show that the techniques described here hold great promise for application to real-world compression needs.
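As a rough illustration of the ordering problem above, the following Prim-style sketch builds a prediction tree from a matrix of pairwise prediction costs and returns a coding order. The function, its greedy strategy, and the symmetric-cost assumption are illustrative choices made here, not the paper's algorithm.

```python
def prediction_order(cost, root=0):
    """Greedy Prim-style sketch: build a prediction tree over bands.

    cost[i][j] = bits needed to code band j when predicted from band i
    (assumed symmetric here for simplicity). Returns (order, parent) where
    parent[b] is the reference band used to predict b (None for the root).
    """
    n = len(cost)
    in_tree = {root}
    parent = {root: None}
    order = [root]
    while len(in_tree) < n:
        # Cheapest edge from any coded band to any not-yet-coded band.
        _, i, j = min(
            (cost[i][j], i, j) for i in in_tree for j in range(n) if j not in in_tree
        )
        in_tree.add(j)
        parent[j] = i
        order.append(j)
    return order, parent
```

For example, with a toy 3-band cost matrix where band 1 predicts band 2 cheaply, the sketch codes band 0 first, predicts band 1 from band 0, and band 2 from band 1.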
An Overview of KRL, a Knowledge Representation Language
Implementing Remote procedure calls Remote procedure calls (RPC) are a useful paradigm for providing communication across a network between programs written in a high level language. This paper describes a package, written as part of the Cedar project, providing a remote procedure call facility. The paper describes the options that face a designer of such a package, and the decisions we made. We describe the overall structure of our RPC mechanism, our facilities for binding RPC clients, the transport level communication protocol, and some performance measurements. We include descriptions of some optimisations we used to achieve high performance and to minimize the load on server machines that have many clients. Our primary aim in building an RPC package was to make the building of distributed systems easier. Previous protocols were sufficiently hard to use that only members of a select group of communication experts were willing to undertake the construction of distributed systems. We hoped to overcome this by providing a communication paradigm as close as possible to the familiar facilities of our high level languages. To achieve this aim, we concentrated on making remote calls efficient, and on making the semantics of remote calls as close as possible to those of local calls.
Alloy: a lightweight object modelling notation Alloy is a little language for describing structural properties. It offers a declaration syntax compatible with graphical object models, and a set-based formula syntax powerful enough to express complex constraints and yet amenable to a fully automatic semantic analysis. Its meaning is given by translation to an even smaller (formally defined) kernel. This paper presents the language in its entirety, and explains its motivation, contributions and deficiencies.
Semantic grammar: an engineering technique for constructing natural language understanding systems One of the major stumbling blocks to more effective use of computers by naive users is the lack of natural means of communication between the user and the computer system. This report discusses a paradigm for constructing efficient and friendly man-machine interface systems involving subsets of natural language for limited domains of discourse. As such, this work falls somewhere between highly constrained formal language query systems and unrestricted natural language understanding systems. The primary purpose of this research is not to advance our theoretical understanding of natural language but rather to put forth a set of techniques for embedding both semantic/conceptual and pragmatic information into a useful natural language interface module. Our intent has been to produce a front-end system which enables the user to concentrate on his problem or task rather than making him worry about how to communicate his ideas or questions to the machine.
Recursive functions of symbolic expressions and their computation by machine, Part I This paper, set in LaTeX, was partly supported by ARPA (ONR) grant N00014-94-1-0775 to Stanford University, where John McCarthy has been since 1962. Copied with minor notational changes from CACM, April 1960. If you want the exact typography, look there. Current address: John McCarthy, Computer Science Department, Stanford, CA 94305 (email: jmc@cs.stanford.edu, URL: http://www-formal.stanford.edu/jmc/). The paper begins with the class of expressions called S-expressions and the functions called...
A study of cross-validation and bootstrap for accuracy estimation and model selection We review accuracy estimation methods and compare the two most common methods: cross-validation and bootstrap. Recent experimental results on artificial data and theoretical results in restricted settings have shown that for selecting a good classifier from a set of classifiers (model selection), ten-fold cross-validation may be better than the more expensive leave-one-out cross-validation. We report on a large-scale experiment (over half a million runs of C4.5 and a Naive-Bayes algorithm) to estimate the effects of different parameters of these algorithms on real-world datasets. For cross-validation we vary the number of folds and whether the folds are stratified or not; for bootstrap, we vary the number of bootstrap samples. Our results indicate that for real-world datasets similar to ours, the best method to use for model selection is ten-fold stratified cross-validation, even if computation power allows using more folds.
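The ten-fold stratified cross-validation discussed above can be sketched with a simple round-robin split. This is an illustrative approximation of stratification (it keeps per-class counts balanced across folds), not the authors' exact experimental setup.

```python
from collections import defaultdict

def stratified_kfold(labels, k=10):
    """Yield k (train_idx, test_idx) splits that keep class ratios similar.

    Round-robin assignment of each class's examples to folds is a simple
    way to approximate stratification (a sketch, not a library's exact scheme).
    """
    folds = [[] for _ in range(k)]
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)
    for members in by_class.values():
        for j, idx in enumerate(members):
            folds[j % k].append(idx)
    for i in range(k):
        test = sorted(folds[i])
        train = sorted(idx for j, f in enumerate(folds) if j != i for idx in f)
        yield train, test
```

Each fold then serves once as the held-out test set while the classifier is trained on the remaining folds, and the k accuracy estimates are averaged.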
A Theory of Prioritizing Composition An operator for the composition of two processes, where one process has priority over the other process, is studied. Processes are described by action systems, and data refinement is used for transforming processes. The operator is shown to be compositional, i.e. monotonic with respect to refinement. It is argued that this operator is adequate for modelling priorities as found in programming languages and operating systems. Rules for introducing priorities and for raising and lowering priorities of processes are given. Dynamic priorities are modelled with special priority variables which can be freely mixed with other variables and the prioritising operator in program development. A number of applications show the use of prioritising composition for modelling and specification in general.
Inheritance of proofs The Curry-Howard isomorphism, a fundamental property shared by many type theories, establishes a direct correspondence between programs and proofs. This suggests that the same structuring principles that ease programming should be useful for proving as well. To exploit object-oriented structuring mechanisms for verification, we extend the object-model of Pierce and Turner, based on the higher-order typed λ-calculus F≤ω, with a logical component. By enriching the (functional) signature of objects with a specification, methods and their correctness proofs are packed together in objects. The uniform treatment of methods and proofs gives rise in a natural way to object-oriented proving principles - including inheritance of proofs, late binding of proofs, and encapsulation of proofs - as analogues to object-oriented programming principles. We have used Lego, a type-theoretic proof checker, to explore the feasibility of this approach. (C) 1998 John Wiley & Sons, Inc.
Software engineering for parallel systems Current approaches to software engineering practice for parallel systems are reviewed. The parallel software designer has not only to address the issues involved in the characterization of the application domain and the underlying hardware platform, but, in many instances, the production of portable, scalable software is desirable. In order to accommodate these requirements, a number of specific techniques and tools have been proposed, and these are discussed in this review in the framework of the parallel software life-cycle. The paper outlines the role of formal methods in the practical production of parallel software, but its main focus is the emergence of development methodologies and environments. These include CASE tools and run-time support systems, as well as the use of methods taken from experience of conventional software development. Because of the particular emphasis on performance of parallel systems, work on performance evaluation and monitoring systems is considered.
Maintaining a legacy: towards support at the architectural level An organization that develops large, software-intensive systems with a long lifetime will encounter major changes in the market requirements, the software development environment, including its platform, and the target platform. In order to meet the challenges associated with these changes, software development has to undergo major changes as well. Especially when these systems are successful, and hence become an asset, particular care shall be taken to maintain this legacy; large systems with a long lifetime tend to become very complex and difficult to understand. Software architecture plays a vital role in the development of large software systems. For the purpose of maintenance, an up-to-date explicit description of the software architecture of a system supports understanding and comprehension of it, amongst other things. However, many large, complex systems do not have an up-to-date documented software architecture. Particularly in cases where these systems have a long lifetime, the (natural) turnover of personnel will make it very likely that many employees contributing to previous generations of the system are no longer available. A need to 'recover' the software architecture of the system may become prevalent, facilitating the understanding of the system, providing ways to improve its maintainability and quality and to control architectural changes. This paper gives an overview of an on-going effort to improve the maintainability and quality of a legacy system, and describes the recent introduction of support at the architectural level for program understanding and complexity control. Copyright (C) 2000 John Wiley & Sons, Ltd.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
Scores (score_0..score_13): 1.2, 0.022222, 0.008, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0