Dataset columns: id (stringlengths 1-4), title (stringlengths 13-200), abstract (stringlengths 67-2.93k), keyphrases (sequence), prmu (sequence)
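The keyphrases and prmu columns are parallel lists of equal length. The labels appear to follow the PRMU convention (P = present, R = reordered, M = mixed, U = unseen, judged against the title and abstract); this expansion is an assumption, not stated in the file. A minimal sketch of the "P" check:

```python
def is_present(phrase, text):
    """True if the keyphrase occurs verbatim (case-insensitive) in the text,
    which is what the 'P' label appears to encode."""
    return phrase.lower() in text.lower()

# Checked against a snippet of the first record's abstract:
abstract = ("This paper presents a special hierarchical fuzzy system where the "
            "outputs of the previous layer are not used in the IF-parts, but used "
            "only in the THEN-parts of the fuzzy rules of the current layer.")
```

Under this reading, "hierarchical fuzzy system" would be labeled P for the first record, while "Stone-Weierstrass theorem" (labeled U there) never appears in the text.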
590
Universal approximation by hierarchical fuzzy system with constraints on the fuzzy rule
This paper presents a special hierarchical fuzzy system where the outputs of the previous layer are not used in the IF-parts, but only in the THEN-parts, of the fuzzy rules of the current layer. The proposed scheme can be shown to be a universal approximator to any continuous function on a compact set if complete fuzzy sets are used in the IF-parts of the fuzzy rules with a singleton fuzzifier and center average defuzzifier. From the simulation of a ball and beam control system, it is demonstrated that the proposed scheme approximates the model nonlinear controller with good accuracy using fewer fuzzy rules than the centralized fuzzy system, and that its control performance is comparable to that of the nonlinear controller
[ "universal approximator", "hierarchical fuzzy system", "fuzzy rules", "continuous function", "ball and beam control system", "hierarchical fuzzy logic", "Stone-Weierstrass theorem" ]
[ "P", "P", "P", "P", "P", "M", "U" ]
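The singleton fuzzifier / center-average defuzzifier combination named in the abstract has a compact closed form, y(x) = Σ_l y_l μ_l(x) / Σ_l μ_l(x). A sketch with Gaussian membership functions (the membership shape is an illustrative choice; the paper only requires complete fuzzy sets in the IF-parts):

```python
import math

def fuzzy_output(x, centers, y_values, sigma=1.0):
    """Center-average defuzzifier over rules with Gaussian IF-part
    memberships centered at `centers` and singleton THEN-parts `y_values`."""
    mus = [math.exp(-((x - c) ** 2) / (2 * sigma ** 2)) for c in centers]
    return sum(y * m for y, m in zip(y_values, mus)) / sum(mus)
```

With a single rule the output equals that rule's consequent; with two symmetric rules the output interpolates between them.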
1138
Approximating martingales for variance reduction in Markov process simulation
"Knowledge of either analytical or numerical approximations should enable more efficient simulation estimators to be constructed." This principle seems intuitively plausible and certainly attractive, yet no completely satisfactory general methodology has been developed to exploit it. The authors present a new approach for obtaining variance reduction in Markov process simulation that is applicable to a vast array of different performance measures. The approach relies on the construction of a martingale that is then used as an internal control variate
[ "martingales", "variance reduction", "Markov process simulation", "performance measures", "internal control variate", "approximating martingale-process method", "complex stochastic processes", "single-server queue" ]
[ "P", "P", "P", "P", "P", "M", "M", "U" ]
1281
A notion of non-interference for timed automata
The non-interference property of concurrent systems is a security property concerning the flow of information among different security levels of the system. In this paper we introduce a notion of timed non-interference for real-time systems specified by timed automata. The notion is presented using an automata-based approach and is then also characterized in terms of operations on, and equivalence between, timed languages. The definition is applied to an example of a time-critical system modeling a simplified control of an airplane
[ "timed automata", "concurrent systems", "security property", "real-time systems", "time-critical system", "noninterference notion" ]
[ "P", "P", "P", "P", "P", "M" ]
691
Robust output-feedback control for linear continuous uncertain state delayed systems with unknown time delay
In most real physical systems, the state delay is unknown and independent of other variables. A new stability criterion for uncertain systems with a state time-varying delay is proposed. A robust observer-based control law based on this criterion is then constructed via the sequential quadratic programming method. We also develop a separation property so that the state feedback control law and the observer can be designed independently while maintaining closed-loop system stability. An example illustrates the applicability of the proposed design method
[ "output-feedback control", "state delayed systems", "time delay", "uncertain systems", "state time-varying delay", "observer-based control law", "sequential quadratic programming", "state feedback control law", "closed-loop system stability", "robust control", "linear continuous systems" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "R", "R" ]
1060
Variety identification of wheat using mass spectrometry with neural networks and the influence of mass spectra processing prior to neural network analysis
The performance of matrix-assisted laser desorption/ionisation time-of-flight mass spectrometry with neural networks in wheat variety classification is further evaluated. Two principal issues were studied: (a) the number of varieties that could be classified correctly; and (b) various means of preprocessing mass spectrometric data. The number of wheat varieties tested was increased from 10 to 30. The main pre-processing method investigated was based on Gaussian smoothing of the spectra, but other methods based on normalisation procedures and multiplicative scatter correction of data were also used. With the final method, it was possible to classify 30 wheat varieties with 87% correctly classified mass spectra and a correlation coefficient of 0.90
[ "variety identification", "mass spectra processing", "neural network analysis", "matrix-assisted laser desorption/ionisation time-of-flight mass spectrometry", "wheat variety classification", "mass spectrometric data", "Gaussian smoothing", "normalisation procedures", "multiplicative scatter correction", "correctly classified mass spectra", "correlation coefficient", "pre-processing- method" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "M" ]
1025
Watermarking techniques for electronic delivery of remote sensing images
Earth observation missions have recently attracted growing interest, mainly due to the large number of possible applications capable of exploiting remotely sensed data and images. Along with the increase in market potential, the need arises for protection of the image products. This need is crucial, because the Internet and other public/private networks have become preferred means of data exchange. A critical issue arising when dealing with digital image distribution is copyright protection. Such a problem has been largely addressed by resorting to watermarking technology. A question that naturally arises is whether the requirements imposed by remote sensing imagery are compatible with existing watermarking techniques. On the basis of these motivations, the contribution of this work is twofold: assessment of the requirements imposed by remote sensing applications on watermark-based copyright protection, and modification of two well-established digital watermarking techniques to meet such constraints. More specifically, the concept of near-lossless watermarking is introduced and two possible algorithms matching such a requirement are presented. Experimental results are shown to measure the impact of watermark introduction on a typical remote sensing application, i.e., unsupervised image classification
[ "watermarking techniques", "electronic delivery", "remote sensing images", "Earth observation missions", "digital image distribution", "copyright protection", "digital watermarking", "near-lossless watermarking", "unsupervised image classification" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P" ]
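"Near-lossless" is commonly defined by a hard bound on the per-sample modification the watermark may introduce. A minimal sketch of that constraint (illustrative only; the paper's two algorithms are not reproduced here, and the clamp-after-embedding step is an assumption):

```python
def embed_near_lossless(pixels, watermark, delta=1):
    """Add a watermark signal to pixel values, then clamp each result to
    within +/-delta of the original, bounding the maximum per-pixel error."""
    out = []
    for p, w in zip(pixels, watermark):
        v = p + w
        out.append(max(p - delta, min(p + delta, v)))
    return out
```

The bound is what distinguishes near-lossless embedding from ordinary watermarking, where the distortion is only controlled on average.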
1361
Adaptive scheduling of batch servers in flow shops
Batch servicing is a common way of benefiting from economies of scale in manufacturing operations. Good examples of production systems that allow for batch processing are ovens found in the aircraft industry and in semiconductor manufacturing. In this paper we study the issue of dynamic scheduling of such systems within the context of multi-stage flow shops. So far, a great deal of research has concentrated on the development of control strategies, which only address the batch stage. This paper proposes an integral scheduling approach that also includes succeeding stages. In this way, we aim for shop optimization, instead of optimizing performance for a single stage. Our so-called look-ahead strategy adapts its scheduling decision to shop status, which includes information on a limited number of near-future arrivals. In particular, we study a two-stage flow shop, in which the batch stage is succeeded by a serial stage. The serial stage may be realized by a single machine or by parallel machines. Through an extensive simulation study it is demonstrated how shop performance can be improved by the proposed strategy relative to existing strategies
[ "adaptive scheduling", "batch servers", "flow shops", "batch servicing", "manufacturing operations", "production systems", "ovens", "aircraft industry", "semiconductor manufacturing", "dynamic scheduling", "multi-stage flow shops", "control strategies", "integral scheduling approach", "shop optimization", "look-ahead strategy", "near-future arrivals", "two-stage flow shop", "single machine", "parallel machines", "simulation study" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P" ]
1324
A look at MonacoProfiler 4
The newest profiling program from Monaco Software adds some valuable features: support for up to 8-color printing, profiling for digital cameras, fine-tuning of black generation and tweaking of profile transforms. We tested its ease of use and a few of the advanced functions. In all, it's pretty good
[ "MonacoProfiler 4", "color-correction", "Pantone Hexachrome", "commercial printers" ]
[ "P", "U", "U", "U" ]
771
Pareto-optimal formulations for cost versus colorimetric accuracy trade-offs in printer color management
Color management for the printing of digital images is a challenging task, due primarily to nonlinear ink-mixing behavior and the presence of redundant solutions for print devices with more than three inks. Algorithms for the conversion of image data to printer-specific format are typically designed to achieve a single predetermined rendering intent, such as colorimetric accuracy. We present two CIELAB to CMYK color conversion schemes based on a general Pareto-optimal formulation for printer color management. The schemes operate using a 149-color characterization data set selected to efficiently capture the entire CMYK gamut. The first scheme uses artificial neural networks as transfer functions between the CIELAB and CMYK spaces. The second scheme is based on a reformulation of tetrahedral interpolation as an optimization problem. Characterization data are divided into tetrahedra for the interpolation-based approach using the program Qhull, which removes the common restriction that characterization data be well organized. Both schemes offer user control over trade-off problems such as cost versus reproduction accuracy, allowing for user-specified print objectives and the use of constraints such as maximum allowable ink and maximum allowable ΔE*ab. A formulation for minimization of ink is shown to be particularly favorable, integrating both clipping and gamut compression features into a single methodology
[ "Pareto-optimal formulations", "optimization", "cost versus colorimetric accuracy trade-offs", "printer color management", "nonlinear ink-mixing behavior", "redundant solutions", "rendering intent", "CIELAB to CMYK color conversion schemes", "artificial neural networks", "transfer functions", "tetrahedral interpolation", "tetrahedra", "interpolation-based approach", "user control", "cost versus reproduction accuracy", "user-specified print objectives", "constraints", "maximum allowable ink", "clipping", "gamut compression features", "digital image printing", "image data conversion", "color characterization data set", "Qhull program", "MacBeth ColorChecker chart", "grey component replacement" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "R", "R", "R", "R", "U", "U" ]
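A Pareto-optimal formulation of the cost versus colorimetric accuracy trade-off retains exactly the non-dominated solutions. A minimal filter over hypothetical (ink cost, color error) pairs, both to be minimized:

```python
def pareto_front(points):
    """Return the non-dominated points among (cost, error) pairs,
    where a point is dominated if another point is <= in both coordinates."""
    front = []
    for p in points:
        dominated = any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points)
        if not dominated:
            front.append(p)
    return front
```

Any point on the returned front is a valid answer to a different weighting of the two objectives, which is what lets the user pick a print objective after the fact.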
734
Web services boost integration
Microsoft and IBM have announced products to help their database software co-exist with competitors' offerings. The products use web services technology allowing users to improve integration between databases and application software from rival vendors
[ "Microsoft", "IBM", "database software", "web services technology" ]
[ "P", "P", "P", "P" ]
1409
North American carrier survey: simply the best
Network Magazine carried out a North American carrier survey. Thousands of network engineers gave information on providers' strengths and weaknesses across seven services: private lines, frame relay, ATM, VPNs, dedicated Internet access, Ethernet services, and Web hosting. Respondents also ranked providers on their ability to perform in up to eight categories including customer service, reliability, and price. Users rated more than a dozen providers for each survey. Carriers needed to receive at least 30 votes for inclusion in the survey. Readers were asked to rate carriers on up to nine categories using a scale of 1 (unacceptable) to 5 (excellent). Not all categories are equally important. To try and get at these differences, Network Magazine asked readers to assign a weight to each category. The big winners were VPNs
[ "North American carrier survey", "private lines", "frame relay", "ATM", "VPNs", "dedicated Internet access", "Ethernet services", "Web hosting", "customer service", "reliability", "price", "service providers" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "R" ]
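The survey's reader-assigned category weights reduce a carrier's per-category 1-5 ratings to a single weighted average. A sketch (category names and numbers are hypothetical placeholders, not survey data):

```python
def weighted_score(ratings, weights):
    """Combine per-category ratings (1-5) using reader-assigned weights."""
    total = sum(weights.values())
    return sum(ratings[c] * w for c, w in weights.items()) / total
```

For example, a reader who weights reliability three times as heavily as price would score a carrier rated 5 on reliability and 3 on price as (5*3 + 3*1) / 4 = 4.5.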
1319
Routing security in wireless ad hoc networks
A mobile ad hoc network consists of a collection of wireless mobile nodes that are capable of communicating with each other without the use of a network infrastructure or any centralized administration. MANET is an emerging research area with practical applications. However, wireless MANET is particularly vulnerable due to its fundamental characteristics, such as open medium, dynamic topology, distributed cooperation, and constrained capability. Routing plays an important role in the security of the entire network. In general, routing security in wireless MANETs appears to be a problem that is not trivial to solve. In this article we study the routing security issues of MANETs, and analyze in detail one type of attack, the "black hole" problem, that can easily be employed against the MANETs. We also propose a solution for the black hole problem for ad hoc on-demand distance vector routing protocol
[ "routing security", "wireless ad hoc networks", "mobile ad hoc network", "wireless mobile nodes", "wireless MANET", "open medium", "dynamic topology", "distributed cooperation", "on-demand distance vector routing protocol", "satellite transmission", "home wireless personal area networks" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "U", "M" ]
709
Cooperative mutation based evolutionary programming for continuous function optimization
An evolutionary programming (EP) algorithm adapting a new mutation operator is presented. Unlike most previous EPs, in which each individual is mutated on its own, each individual in the proposed algorithm is mutated in cooperation with the other individuals. This not only enhances convergence speed but also gives more chance to escape from local minima
[ "cooperative mutation based evolutionary programming", "continuous function optimization", "convergence speed", "local minima" ]
[ "P", "P", "P", "P" ]
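A generic flavor of "mutation in cooperation with the other individuals" is to perturb each individual using a better peer as a guide, rather than mutating it in isolation. The operator below is illustrative only, an assumed stand-in for the paper's operator:

```python
import random

def cooperative_mutation(pop, fitness, sigma=0.1, rng=None):
    """Mutate each individual toward a fitter peer (or the population best)
    plus Gaussian noise; minimization is assumed. Illustrative sketch only."""
    rng = rng or random.Random(0)
    best = min(pop, key=fitness)
    out = []
    for x in pop:
        peer = rng.choice(pop)
        guide = peer if fitness(peer) < fitness(x) else best
        out.append([xi + 0.5 * (gi - xi) + rng.gauss(0, sigma)
                    for xi, gi in zip(x, guide)])
    return out
```

Pulling toward fitter individuals speeds convergence, while the shared guidance plus noise gives individuals stuck near a local minimum a chance to be dragged out of it, which is the benefit the abstract claims.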
822
Reinventing broadband
Many believe that broadband providers need to change their whole approach. The future, then, is in reinventing broadband. That means tiered pricing to make broadband more competitive with dial-up access and livelier, more distinct content: video on demand, MP3, and other features exclusive to the fat-pipe superhighway
[ "broadband", "tiered pricing", "video on demand", "MP3", "business plans" ]
[ "P", "P", "P", "P", "U" ]
867
Tracking control of the flexible slider-crank mechanism system under impact
The variable structure control (VSC) and a stabilizer design using the pole placement technique are applied to the tracking control of the flexible slider-crank mechanism under impact. The VSC strategy is employed to track the crank angular position and speed, while the stabilizer design is involved to suppress the flexible vibrations simultaneously. From the theoretical impact consideration, three approaches, including the generalized momentum balance (GMB), the continuous force model (CFM), and the CFM associated with effective mass compensation (EMC), are adopted and derived on the basis of energy and impulse-momentum conservation. Simulation results are provided to demonstrate that the motor-controlled flexible slider-crank mechanism not only accomplishes good tracking of the crank angle trajectory, but also eliminates vibrations of the flexible connecting rod
[ "tracking control", "flexible slider-crank mechanism system", "impact", "variable structure control", "stabilizer design", "pole placement technique", "crank angular position", "flexible vibrations", "generalized momentum balance", "continuous force model", "effective mass compensation", "tracking trajectory", "flexible connecting rod", "conservation laws", "multibody dynamics" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "M", "U" ]
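The impulse-momentum balance underlying a GMB-style impact treatment has a closed form in one dimension: momentum conservation plus a restitution coefficient e yields the post-impact velocities. This is the textbook two-body result, not the paper's mechanism-specific derivation:

```python
def post_impact_velocities(m1, v1, m2, v2, e=1.0):
    """1-D impact via impulse-momentum balance with restitution coefficient e:
    momentum m1*v1 + m2*v2 is conserved, and v2' - v1' = e*(v1 - v2)."""
    total = m1 * v1 + m2 * v2
    v1p = (total - m2 * e * (v1 - v2)) / (m1 + m2)
    v2p = (total + m1 * e * (v1 - v2)) / (m1 + m2)
    return v1p, v2p
```

For equal masses and e = 1 the bodies simply exchange velocities, a quick sanity check on the formulas.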
1434
A simple etalon-stabilized visible laser diode
Visible laser diodes (LDs) are inexpensively available with single-transverse-mode, single-longitudinal-mode operation with a coherence length in the metre range. With constant current bias and constant operating temperature, the optical output power and operating wavelength are stable. A simple and inexpensive way is developed to maintain a constant LD temperature as the temperature of the local environment varies, by monitoring the initially changing wavelength with an external etalon and using this information to apply a heating correction to the monitor photodiode commonly integral to the LD package. The fractional wavelength stability achieved is limited by the solid etalon to 7×10⁻⁶ °C⁻¹
[ "visible laser diode", "single-transverse-mode", "single-longitudinal-mode", "constant current bias", "constant operating temperature", "heating correction", "monitor photodiode", "fractional wavelength stability", "etalon-stabilized laser diode", "index-guided multi-quantum-well", "closed-loop operation", "feedback loop" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "R", "U", "M", "U" ]
1018
Fabrication of polymeric microlens of hemispherical shape using micromolding
Polymeric microlenses play an important role in reducing the size, weight, and cost of optical data storage and optical communication systems. We fabricate polymeric microlenses using the microcompression molding process. The design and fabrication procedures for the mold insert are simplified by using silicon instead of metal. PMMA powder is used as the molding material. Governed by process parameters such as temperature and pressure histories, the micromolding process is controlled to minimize various defects that develop during molding. The radius of curvature and magnification ratio of the fabricated microlens are measured as 150 μm and over 3.0, respectively
[ "micromolding", "polymeric microlenses", "size", "weight", "cost", "optical data storage", "optical communication systems", "microcompression molding process", "fabrication procedures", "mold insertion", "silicon", "PMMA powder", "molding material", "process parameters", "temperature", "pressure", "micromolding process", "magnification ratio", "polymeric microlens fabrication", "hemispherical shape microlens", "design procedures", "300 micron" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "R", "R", "R", "U" ]
987
Proof that the election problem belongs to NF-completeness problems in asynchronous distributed systems
This paper is about the hardness of the election problem in asynchronous distributed systems in which processes can crash but links are reliable. The hardness of the problem is defined with respect to the difficulty of solving it despite failures. It is shown that problems encountered in the system can be classified into three classes: F (fault-tolerant), NF (not fault-tolerant) and NFC (NF-complete). Among these, the problems in the class NFC are the hardest to solve. In this paper, we prove that the election problem belongs to the class NFC and is thus among the most difficult of these problems
[ "election problem", "NF-completeness problems", "asynchronous distributed systems", "distributed computing", "leader election", "failure detectors", "fault-tolerant problems", "not-fault-tolerant problems" ]
[ "P", "P", "P", "M", "M", "M", "R", "M" ]
550
Market watch - air conditioning
After a boom period in the late nineties, the air conditioning market finds itself in something of a lull at present, but manufacturers aren't panicking
[ "market", "air conditioning" ]
[ "P", "P" ]
1105
Fuzzy business [Halden Reactor Project]
The Halden Reactor Project has developed two systems to investigate how signal validation and thermal performance monitoring techniques can be improved. PEANO is an online calibration monitoring system that makes use of artificial intelligence techniques. The system has been tested in cooperation with EPRI and Edan Engineering, using real data from a US PWR plant. These tests showed that PEANO could reliably assess the performance of the process instrumentation at different plant conditions. Real cases of zero and span drifts were successfully detected by the system. TEMPO is a system for thermal performance monitoring and optimisation, which relies on plant-wide first principle models. The system has been installed on a Swedish BWR plant. Results obtained show an overall rms deviation from measured values of a few tenths of a percent, with goodness-of-fit values on the order of 95%. The high accuracy demonstrated is a good basis for detecting possible faults and efficiency losses in steam turbine cycles
[ "Halden Reactor Project", "thermal performance monitoring", "PEANO", "calibration", "artificial intelligence", "PWR", "TEMPO", "BWR", "steam turbine cycles", "fuzzy logic", "steam generators", "feedwater flow" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "M", "M", "U" ]
1140
Computer aided classification of masses in ultrasonic mammography
Frequency compounding was recently investigated for computer aided classification of masses in ultrasonic B-mode images as benign or malignant. The classification was performed using the normalized parameters of the Nakagami distribution at a single region of interest at the site of the mass. A combination of normalized Nakagami parameters from two different images of a mass was undertaken to improve the performance of classification. Receiver operating characteristic (ROC) analysis showed that such an approach resulted in an area of 0.83 under the ROC curve. The aim of the work described in this paper is to see whether a feature describing the characteristic of the boundary can be extracted and combined with the Nakagami parameter to further improve the performance of classification. The combination of the features has been performed using a weighted summation. Results indicate a 10% improvement in specificity at a sensitivity of 96% after combining the information at the site and at the boundary. Moreover, the technique requires minimal clinical intervention and has a performance that reaches that of the trained radiologist. It is hence suggested that this technique may be utilized in practice to characterize breast masses
[ "computer aided classification", "ultrasonic mammography", "frequency compounding", "ultrasonic B-mode images", "benign", "malignant", "normalized parameters", "Nakagami distribution", "single region of interest", "normalized Nakagami parameters", "receiver operating characteristic", "ROC curve", "weighted summation", "specificity", "sensitivity", "minimal clinical intervention", "breast masses" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P" ]
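The weighted summation used to fuse the Nakagami parameter with the boundary feature is simply a convex combination of the two (normalized) scores; the weight would be tuned, for example against the area under the ROC curve. Function names, the weight, and the threshold below are hypothetical:

```python
def combine_features(nakagami_score, boundary_score, w=0.6):
    """Weighted summation of two normalized features in [0, 1];
    w is a tunable trade-off between site and boundary information."""
    return w * nakagami_score + (1 - w) * boundary_score

def classify(score, threshold=0.5):
    """Threshold the combined score into a benign/malignant call."""
    return "malignant" if score >= threshold else "benign"
```

The point of the fusion is that a mass scoring ambiguously on one feature can still be classified correctly when the other feature is decisive.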
96
OMS battle heating up as Chicago Equity ousts LongView for Macgregor
Chicago Equity Partners LLC has gone into full production with Macgregor's Financial Trading Platform. This marks a concentrated effort to achieve straight-through processing
[ "LongView", "Macgregor", "Chicago Equity Partners", "Financial Trading Platform", "straight-through processing" ]
[ "P", "P", "P", "P", "P" ]
614
An on-line distributed intelligent fault section estimation system for large-scale power networks
In this paper, a novel distributed intelligent system is suggested for on-line fault section estimation (FSE) of large-scale power networks. As the first step, a multi-way graph partitioning method based on weighted minimum degree reordering is proposed for effectively partitioning the original large-scale power network into the desired number of connected sub-networks with quasi-balanced FSE burdens and minimum frontier elements. After partitioning, a distributed intelligent system based on radial basis function neural network (RBF NN) and companion fuzzy system is suggested for FSE. The relevant theoretical analysis and procedure are presented in the paper. The proposed distributed intelligent FSE method has been implemented with sparse storage technique and tested on the IEEE 14, 30 and 118-bus systems, respectively. Computer simulation results show that the proposed FSE method works successfully for large-scale power networks
[ "on-line distributed intelligent fault section estimation system", "large-scale power networks", "distributed intelligent system", "on-line fault section estimation", "multi-way graph partitioning method based", "weighted minimum degree reordering", "connected sub-networks", "quasi-balanced FSE burdens", "minimum frontier elements", "radial basis function neural network", "fuzzy system", "sparse storage technique", "computer simulation", "IEEE 14-bus systems", "IEEE 30-bus systems", "IEEE 118-bus systems" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "M", "M", "R" ]
651
Application-layer multicasting with Delaunay triangulation overlays
Application-layer multicast supports group applications without the need for a network-layer multicast protocol. Here, applications arrange themselves in a logical overlay network and transfer data within the overlay. We present an application-layer multicast solution that uses a Delaunay triangulation as an overlay network topology. An advantage of using a Delaunay triangulation is that it allows each application to locally derive next-hop routing information without requiring a routing protocol in the overlay. A disadvantage of using a Delaunay triangulation is that the mapping of the overlay to the network topology at the network and data link layer may be suboptimal. We present a protocol, called Delaunay triangulation (DT protocol), which constructs Delaunay triangulation overlay networks. We present measurement experiments of the DT protocol for overlay networks with up to 10 000 members, that are running on a local PC cluster with 100 Linux PCs. The results show that the protocol stabilizes quickly, e.g., an overlay network with 10 000 nodes can be built in just over 30 s. The traffic measurements indicate that the average overhead of a node is only a few kilobits per second if the overlay network is in a steady state. Results of throughput experiments of multicast transmissions (using TCP unicast connections between neighbors in the overlay network) show an achievable throughput of approximately 15 Mb/s in an overlay with 100 nodes and 2 Mb/s in an overlay with 1000 nodes
[ "application-layer multicasting", "Delaunay triangulation overlays", "group applications", "network-layer multicast protocol", "logical overlay network", "overlay networks", "overlay network topology", "next-hop routing information", "data link layer", "DT protocol", "measurement experiments", "local PC cluster", "Linux PC", "traffic measurements", "average overhead", "throughput experiments", "multicast transmissions", "TCP unicast connections", "data transfer", "Delaunay triangulation protocol", "network nodes", "15 Mbit/s", "2 Mbit/s" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "R", "R", "R", "M", "M" ]
1204
Design and prototype of a performance tool interface for OpenMP
This paper proposes a performance tools interface for OpenMP, similar in spirit to the MPI profiling interface in its intent to define a clear and portable API that makes OpenMP execution events visible to runtime performance tools. We present our design using a source-level instrumentation approach based on OpenMP directive rewriting. Rules to instrument each directive and their combination are applied to generate calls to the interface consistent with directive semantics and to pass context information (e.g., source code locations) in a portable and efficient way. Our proposed OpenMP performance API further allows user functions and arbitrary code regions to be marked and performance measurement to be controlled using new OpenMP directives. To prototype the proposed OpenMP performance interface, we have developed compatible performance libraries for the EXPERT automatic event trace analyzer [17, 18] and the TAU performance analysis framework [13]. The directive instrumentation transformations we define are implemented in a source-to-source translation tool called OPARI. Application examples are presented for both EXPERT and TAU to show the OpenMP performance interface and OPARI instrumentation tool in operation. When used together with the MPI profiling interface (as the examples also demonstrate), our proposed approach provides a portable and robust solution to performance analysis of OpenMP and mixed-mode (OpenMP + MPI) applications
[ "performance tool interface", "MPI profiling interface", "API", "source-level instrumentation approach", "OpenMP directive rewriting", "directive semantics", "arbitrary code regions", "performance libraries", "EXPERT automatic event trace analyzer", "TAU performance analysis framework", "source-to-source translation tool", "OPARI", "parallel programming" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "U" ]
1241
Code generator for the HPF Library and Fortran 95 transformational functions
One of the language features of the core language of HPF 2.0 (High Performance Fortran) is the HPF Library. The HPF Library consists of 55 generic functions. The implementation of this library presents the challenge that all data types, data kinds, array ranks and input distributions need to be supported. For instance, more than 2 billion separate functions are required to support COPY_SCATTER fully. The efficient support of these billions of specific functions is one of the outstanding problems of HPF. We have solved this problem by developing a library generator which utilizes the mechanism of parameterized templates. This mechanism allows the procedures to be instantiated at compile time for arguments with a specific type, kind, rank and distribution over a specific processor array. We describe the algorithms used in the different library functions. The implementation makes it easy to generate a large number of library routines from a single template. The templates can be extended with special code for specific combinations of the input arguments. We describe in detail the implementation and performance of the matrix multiplication template for the Fujitsu VPP5000 platform
[ "code generation", "HPF", "HPF Library", "High Performance Fortran", "generic functions", "data types", "library generator", "parameterized templates", "library functions", "matrix multiplication", "parallel computing", "parallel languages" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "U", "M" ]
139
Equilibrium swelling and kinetics of pH-responsive hydrogels: models, experiments, and simulations
The widespread application of ionic hydrogels in a number of applications like control of microfluidic flow, development of muscle-like actuators, filtration/separation and drug delivery makes it important to properly understand these materials. Understanding hydrogel properties is also important from the standpoint of their similarity to many biological tissues. Typically, gel size is sensitive to outer solution pH and salt concentration. In this paper, we develop models to predict the swelling/deswelling of hydrogels in buffered pH solutions. An equilibrium model has been developed to predict the degree of swelling of the hydrogel at a given pH and salt concentration in the solution. A kinetic model has been developed to predict the rate of swelling of the hydrogel when the solution pH is changed. Experiments are performed to characterize the mechanical properties of the hydrogel in different pH solutions. The degree of swelling as well as the rate of swelling of the hydrogel are also studied through experiments. The simulations are compared with experimental results and the models are found to predict the swelling/deswelling processes accurately
[ "pH-responsive hydrogels", "ionic hydrogels", "microfluidic flow", "muscle-like actuators", "filtration/separation", "drug delivery", "gel size", "swelling/deswelling", "buffered pH solutions", "equilibrium model", "mechanical properties" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P" ]
1365
Deadlock-free scheduling in flexible manufacturing systems using Petri nets
This paper addresses the deadlock-free scheduling problem in Flexible Manufacturing Systems. An efficient deadlock-free scheduling algorithm was developed, using timed Petri nets, for a class of FMSs called Systems of Sequential Systems with Shared Resources (S/sup 4/ R). The algorithm generates a partial reachability graph to find the optimal or near-optimal deadlock-free schedule in terms of the firing sequence of the transitions of the Petri net model. The objective is to minimize the mean flow time (MFT). An efficient truncation technique, based on the siphon concept, has been developed and used to generate the minimum necessary portion of the reachability graph to be searched. It has been shown experimentally that the developed siphon truncation technique enhances the ability to develop deadlock-free schedules of systems with a high number of deadlocks, which cannot be achieved using standard Petri net scheduling approaches. It may be necessary, in some cases, to relax the optimality condition for large FMSs in order to make the search effort reasonable. Hence, a User Control Factor (UCF) was defined and used in the scheduling algorithm. The objective of using the UCF is to achieve an acceptable trade-off between the solution quality and the search effort. Its effect on the MFT and the CPU time has been investigated. Randomly generated examples are used for illustration and comparison. Although increasing the UCF did not affect the mean flow time, it was shown to reduce the search effort (CPU time) significantly
[ "deadlock-free scheduling", "flexible manufacturing systems", "Petri nets", "systems of sequential systems with shared resources", "partial reachability graph", "near-optimal deadlock-free schedule", "siphon truncation technique", "user control factor", "CPU time", "randomly generated examples", "optimal deadlock-free schedule", "Petri net model transitions firing sequence", "mean flow time minimization", "optimality condition relaxation" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "R", "R", "R", "R" ]
1320
Securing the Internet routing infrastructure
The unprecedented growth of the Internet over recent years, and the expectation of an even faster increase in the numbers of users and networked systems, resulted in the Internet assuming its position as a mass communication medium. At the same time, the emergence of an increasingly large number of application areas and the evolution of the networking technology suggest that in the near future the Internet may become the single integrated communication infrastructure. However, as the dependence on the networking infrastructure grows, its security becomes a major concern, in light of the increased attempts to compromise the infrastructure. In particular, the routing operation is a highly visible target that must be shielded against a wide range of attacks. The injection of false routing information can easily degrade network performance, or even cause denial of service for a large number of hosts and networks over a long period of time. Different approaches have been proposed to secure the routing protocols, with a variety of countermeasures, which, nonetheless, have not eradicated the vulnerability of the routing infrastructure. In this article, we survey the up-to-date secure routing schemes that appeared over the last few years. Our critical point of view and thorough review of the literature are an attempt to identify directions for future research on an indeed difficult and still largely open problem
[ "routing infrastructure", "networked systems", "networking technology", "integrated communication infrastructure", "networking infrastructure", "false routing information", "network performance", "routing protocols", "countermeasures", "secure routing schemes", "research", "Internet routing infrastructure security", "preventive security mechanisms", "link state protocols" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "R", "M", "M" ]
775
Disability-related special libraries
One of the ways that the federal government works to improve services to people with disabilities is to fund disability-related information centers and clearinghouses that provide information resources and referrals to disabled individuals, their family members, service providers, and the general public. The Teaching Research Division of Western Oregon University operates two federally funded information centers for people with disabilities: OBIRN (the Oregon Brain Injury Resource Network) and DB-LINK (the National Information Clearinghouse on Children who are Deaf-Blind). Both have developed in-depth library collections and services in addition to typical clearinghouse services. The authors describe how OBIRN and DB-LINK were designed and developed, and how they are currently structured and maintained. Both information centers use many of the same strategies and tools in day-to-day operations, but differ in a number of ways, including materials and clientele
[ "disability-related special libraries", "federal government", "disability-related information centers", "information resources", "Western Oregon University", "OBIRN", "Oregon Brain Injury Resource Network", "DB-LINK", "National Information Clearinghouse on Children who are Deaf-Blind", "library collections", "disability-related clearinghouses", "information referrals" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "R", "R" ]
730
Multi-hour design of survivable classical IP networks
Most of Internet intra-domain routing protocols (OSPF, RIP, and IS-IS) are based on shortest path routing. The path length is defined as the sum of metrics associated with the path links. These metrics are often managed by the network administrator. In this context, the design of an Internet backbone network consists in dimensioning the network (routers and transmission links) and establishing the metric. Many requirements have to be satisfied. First, Internet traffic is not static as significant variations can be observed during the day. Second, many failures can occur (cable cuts, hardware failures, software failures, etc.). We present algorithms (meta-heuristics and greedy heuristic) to design Internet backbone networks, taking into account the multi-hour behaviour of traffic and some survivability requirements. Many multi-hour and protection strategies are studied and numerically compared. Our algorithms can be extended to integrate other quality of service constraints
[ "multi-hour design", "survivable classical IP networks", "Internet intra-domain routing protocols", "OSPF", "RIP", "IS-IS", "shortest path routing", "path length", "path links", "network administrator", "Internet backbone network", "transmission links", "Internet traffic", "survivability requirements", "quality of service constraints", "network dimensioning", "network routers", "network failures", "meta-heuristics algorithm", "greedy heuristic algorithm", "network protection", "QoS constraints" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "R", "R", "R", "R", "R", "R", "M" ]
1448
Implementing equals for mixed-type comparison
The idea of comparing objects of different types is not entirely off base, in particular for classes from the same class hierarchy. After all, objects from the same class hierarchy (and by class hierarchy we mean all classes derived from a common superclass other than Object) have something in common, namely at least the superclass part. As we demonstrated in a previous paper (2002), providing a correct implementation of a mixed-type comparison is a non-trivial task. In this article, we will show one way of implementing a mixed-type comparison of objects from the same class hierarchy that meets the requirements of the equals contract
[ "mixed-type comparison", "superclass", "equals contract", "Java", "transitivity requirement" ]
[ "P", "P", "P", "U", "M" ]
1099
WebCAD: A computer aided design tool constrained with explicit 'design for manufacturability' rules for computer numerical control milling
A key element in the overall efficiency of a manufacturing enterprise is the compatibility between the features that have been created in a newly designed part, and the capabilities of the downstream manufacturing processes. With this in mind, a process-aware computer aided design (CAD) system called WebCAD has been developed. The system restricts the freedom of the designer in such a way that the designed parts can be manufactured on a three-axis computer numerical control milling machine. This paper discusses the vision of WebCAD and explains the rationale for its development in comparison with commercial CAD/CAM (computer aided design/manufacture) systems. The paper then goes on to describe the implementation issues that enforce the manufacturability rules. Finally, certain design tools are described that aid a user during the design process. Some examples are given of the parts designed and manufactured with WebCAD
[ "WebCAD", "computer aided design tool", "design tools", "computer numerical control milling", "manufacturability rules", "design for manufacturability rules", "manufacturing enterprise efficiency", "process-aware CAD system", "three-axis CNC milling machine", "CAD/CAM systems", "Internet-based CAD/CAM" ]
[ "P", "P", "P", "P", "P", "R", "R", "R", "M", "R", "M" ]
1064
Quantum-controlled measurement device for quantum-state discrimination
We propose a "programmable" quantum device that is able to perform a specific generalized measurement from a certain set of measurements depending on a quantum state of a "program register." In particular, we study a situation when the programmable measurement device serves for the unambiguous discrimination between nonorthogonal states. The particular pair of states that can be unambiguously discriminated is specified by the state of a program qubit. The probability of successful discrimination is not optimal for all admissible pairs. However, for some subsets it can be very close to the optimal value
[ "quantum-controlled measurement device", "quantum-state discrimination", "quantum state", "program register", "nonorthogonal states", "program qubit", "programmable quantum device" ]
[ "P", "P", "P", "P", "P", "P", "R" ]
1021
Error-probability analysis of MIL-STD-1773 optical fiber data buses
We have analyzed the error probabilities of MIL-STD-1773 optical fiber data buses with three modulation schemes, namely, original Manchester II bi-phase coding, PTMBC, and EMBC-BSF. Using these derived expressions of error probabilities, we can also compare the receiver sensitivities of such optical fiber data buses
[ "optical fiber data buses", "error probabilities", "modulation schemes", "receiver sensitivities", "Manchester bi-phase coding" ]
[ "P", "P", "P", "P", "R" ]
788
Rise of the supercompany [CRM]
All the thoughts, conversations and notes of employees help the firm create a wider picture of the business. Customer relationship management (CRM) feeds on data, and it is hungry
[ "customer relationship management", "central data repository", "database", "staff trained" ]
[ "P", "M", "U", "U" ]
1398
Swamped by data [storage]
While the cost of storage has plummeted, the demand continued to climb and there are plenty of players out there offering solutions to a company's burgeoning storage needs
[ "cost of storage", "IT personnel", "resource management", "disk capacity management", "disk optimisation", "file system automation", "storage virtualisation", "storage area networks", "network attached storage" ]
[ "P", "U", "U", "U", "U", "U", "M", "M", "M" ]
9
Achieving competitive capabilities in e-services
What implications does the Internet have for service operations strategy? How can business performance of e-service companies be improved in today's knowledge-based economy? These research questions are the subject of the paper. We propose a model that links the e-service company's knowledge-based competencies with their competitive capabilities. Drawing from the current literature, our analysis suggests that services that strategically build a portfolio of knowledge-based competencies, namely human capital, structural capital, and absorptive capacity have more operations-based options, than their counterparts who are less apt to invest. We assume that the combinative capabilities of service quality, delivery, flexibility, and cost are determined by the investment in intellectual capital. Arguably, with the advent of the Internet, different operating models (e.g., bricks-and-mortar, clicks-and-mortar, or pure dot-com) have different strategic imperatives in terms of knowledge-based competencies. Thus, the new e-operations paradigm can be viewed as a configuration of knowledge-based competencies and capabilities
[ "competitive capabilities", "e-services", "Internet", "service operations strategy", "business performance", "knowledge-based economy", "knowledge-based competencies", "human capital", "structural capital", "absorptive capacity", "operations-based options", "investment", "combinative capabilities", "service quality", "delivery", "flexibility", "cost", "intellectual capital", "bricks-and-mortar", "clicks-and-mortar", "dot-com", "strategic imperatives" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P" ]
569
Application of an internally consistent material model to determine the effect of tool edge geometry in orthogonal machining
It is well known that the edge geometry of a cutting tool affects the forces measured in metal cutting. Two experimental methods have been suggested in the past to extract the ploughing (non-cutting) component from the total measured force: (1) the extrapolation approach, and (2) the dwell force technique. This study reports the behavior of zinc during orthogonal machining using tools of controlled edge radius. Applications of both the extrapolation and dwell approaches show that neither produces an analysis that yields a material response consistent with the known behavior of zinc. Further analysis shows that the edge geometry modifies the shear zone of the material and thereby modifies the forces. When analyzed this way, the measured force data yield the expected material response without requiring recourse to an additional ploughing component
[ "tool edge geometry", "edge geometry", "orthogonal machining", "cutting tool", "metal cutting", "extrapolation", "dwell force", "zinc", "ploughing component" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P" ]
1179
Evolution complexity of the elementary cellular automaton rule 18
Cellular automata are classes of mathematical systems characterized by discreteness (in space, time, and state values), determinism, and local interaction. Using symbolic dynamical theory, we coarse-grain the temporal evolution orbits of cellular automata. By means of formal languages and automata theory, we study the evolution complexity of the elementary cellular automaton with local rule number 18 and prove that its width 1-evolution language is regular, but for every n >= 2 its width n-evolution language is not context free but context sensitive
[ "evolution complexity", "complexity", "elementary cellular automaton", "cellular automata", "symbolic dynamical theory", "formal languages" ]
[ "P", "P", "P", "P", "P", "P" ]
1285
On fractal dimension in information systems. Toward exact sets in infinite information systems
The notions of an exact as well as a rough set are well-grounded as basic notions in rough set theory. They are, however, defined in the setting of a finite information system, i.e. an information system having finite numbers of objects as well as attributes. In theoretical studies, e.g. of topological properties of rough sets, one has to go beyond this limitation and consider information systems with a potentially unbounded number of attributes. In such a setting, the notions of rough and exact sets may be defined in terms of the topological operators of interior and closure with respect to an appropriate topology, following the ideas from the finite case, where it is noticed that the rough-set-theoretic operators of lower and upper approximation are identical with, respectively, the interior and closure operators in the topology induced by equivalence classes of the indiscernibility relation. Extensions of finite information systems are also desirable from the application point of view in the area of knowledge discovery and data mining, when demands of e.g. mass collaboration and/or huge experimental data call for the need to work with large data tables. The sound theoretical generalization of these cases is an information system with the number of attributes not bound in advance by a fixed integer, i.e. an information system with countably but infinitely many attributes. In large information systems, a need arises for parameter-free qualitative measures of the complexity of the concepts involved, cf. e.g. applications of the Vapnik-Chervonenkis dimension. We study here, in the theoretical setting of infinite information systems, a proposal to apply suitably modified fractal dimensions as measures of concept complexity
[ "fractal dimension", "information systems", "exact sets", "infinite information systems", "rough set", "topological properties", "closure operators", "equivalence classes", "knowledge discovery", "data mining", "qualitative measures", "complexity" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P" ]
695
Design of high-performance wavelets for image coding using a perceptual time domain criterion
This paper presents a new biorthogonal linear-phase wavelet design for image compression. Instead of calculating the prototype filters as spectral factors of a half-band filter, the design is based on the direct optimization of the low pass analysis filter using an objective function directly related to a perceptual criterion for image compression. This function is defined as the product of the theoretical coding gain and an index called the peak-to-peak ratio, which was shown to have high correlation with perceptual quality. A distinctive feature of the proposed technique is a procedure by which, given a "good" starting filter, "good" filters of longer lengths are generated. The results are excellent, showing a clear improvement in perceptual image quality. Also, we devised a criterion for constraining the coefficients of the filters in order to design wavelets with minimum ringing
[ "high-performance wavelets", "image coding", "perceptual time domain criterion", "biorthogonal linear-phase wavelet design", "image compression", "prototype filters", "half-band filter", "analysis filter", "objective function", "coding gain", "peak-to-peak ratio", "perceptual image quality", "low pass filter", "filter banks" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "R", "M" ]
1278
Verification of timed automata based on similarity
The paper presents a modification of the standard partitioning technique to generate abstract state spaces preserving similarity for Timed Automata. Since this relation is weaker than bisimilarity, most of the obtained models (state spaces) are smaller than bisimilar ones, but still preserve the universal fragments of branching time temporal logics. The theoretical results are exemplified for strong, delay, and observational simulation relations
[ "partitioning technique", "abstract state spaces", "bisimilarity", "universal fragments", "branching time temporal logics", "observational simulation relations", "timed automata verification" ]
[ "P", "P", "P", "P", "P", "P", "R" ]
1184
Measuring return: revealing ROI
The most critical part of the return-on-investment odyssey is to develop metrics that matter to the business and to measure systems in terms of their ability to help achieve those business goals. Everything must flow from those key metrics. And don't forget to revisit those every now and then, too. Since all systems wind down over time, it's important to keep tabs on how well your automation investment is meeting the metrics established by your company. Manufacturers are clamoring for a tool to help quantify returns and analyze the results
[ "ROI", "return-on-investment", "key metrics", "automation investment", "technology purchases" ]
[ "P", "P", "P", "P", "U" ]
100
Separate accounts go mainstream [investment]
New entrants are shaking up the separate-account industry by supplying Web-based platforms that give advisers the tools to pick independent money managers
[ "investment", "separate-account industry", "Web-based platforms", "independent money managers", "financial advisors" ]
[ "P", "P", "P", "P", "U" ]
943
Implementation of universal quantum gates based on nonadiabatic geometric phases
We propose an experimentally feasible scheme to achieve quantum computation based on nonadiabatic geometric phase shifts, in which a cyclic geometric phase is used to realize a set of universal quantum gates. Physical implementation of this set of gates is designed for Josephson junctions and for NMR systems. Interestingly, we find that the nonadiabatic phase shift may be independent of the operation time under appropriate controllable conditions. A remarkable feature of the present nonadiabatic geometric gates is that there is no intrinsic limitation on the operation time
[ "universal quantum gates", "quantum computation", "nonadiabatic geometric phase shifts", "cyclic geometric phase", "Josephson junctions", "NMR systems", "nonadiabatic phase shift", "operation time", "nonadiabatic geometric gates" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P" ]
145
If the RedBoot fits [open-source ROM monitor]
Many embedded developers today use a ROM- or flash-resident software program that provides functionality such as loading and running application software, scripting, read/write access to processor registers, and memory dumps. A ROM monitor, as it is often called, can be a useful and far less expensive debugging tool than an in-circuit emulator. This article describes the RedBoot ROM monitor. It takes a look at the features offered by the RedBoot ROM monitor and sees how it can be configured. It also walks through the steps of rebuilding and installing a new RedBoot image on a target platform. Finally, it looks at future enhancements that are coming in new releases and how to get support and additional information when using RedBoot. Although RedBoot uses software modules from the eCos real-time operating system (RTOS) and is often used in systems running embedded Linux, it is completely independent of both operating systems. RedBoot can be used with any operating system or RTOS, or even without one
[ "RedBoot", "open-source ROM monitor", "flash-resident software program", "scripting", "memory dumps", "debugging tool", "eCos", "real-time operating system", "embedded Linux", "embedded systems", "processor register access", "bootstrapping" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "R", "R", "U" ]
906
High-performance servo systems based on multirate sampling control
In this paper, novel multirate two-degree-of-freedom controllers are proposed for digital control systems, in which the sampling period of plant output is restricted to be relatively longer than the control period of plant input. The proposed feedforward controller assures perfect tracking at M inter-sample points. On the other hand, the proposed feedback controller assures perfect disturbance rejection at M inter-sample points in the steady state. Illustrative examples of position control for hard disk drives are presented, and advantages of these approaches are demonstrated
[ "servo system", "multirate sampling control", "digital control systems", "feedforward", "tracking", "feedback", "disturbance rejection", "position control", "hard disk drive" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P" ]
594
Improved analysis for the nonlinear performance of CMOS current mirrors with device mismatch
The nonlinear performance of the simple and complementary MOSFET current mirrors are analyzed. Closed-form expressions are obtained for the harmonic and intermodulation components resulting from a multisinusoidal input current. These expressions can be used for predicting the limiting values of the input current under prespecified conditions of threshold-voltage mismatches and/or transconductance mismatches. The case of a single input sinusoid is discussed in detail and the results are compared with SPICE simulations
[ "nonlinear performance", "CMOS current mirrors", "device mismatch", "complementary MOSFET current mirrors", "closed-form expressions", "intermodulation components", "multisinusoidal input current", "input current", "threshold-voltage mismatch", "transconductance mismatch", "SPICE simulations", "harmonic components", "simulation results" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "R", "R" ]
610
AGC for autonomous power system using combined intelligent techniques
In the present work two intelligent load frequency controllers have been developed to regulate the power output and system frequency by controlling the speed of the generator with the help of fuel rack position control. The first controller is obtained using fuzzy logic (FL) only, whereas the second one by using a combination of FL, genetic algorithms and neural networks. The aim of the proposed controller(s) is to restore in a very smooth way the frequency to its nominal value in the shortest time possible whenever the load demand changes. The action of these controller(s) provides a satisfactory balance between frequency overshoot and transient oscillations with zero steady-state error. The design and performance evaluation of the proposed controller(s) structure are illustrated with the help of case studies applied (without loss of generality) to a typical single-area power system. It is found that the proposed controllers exhibit satisfactory overall dynamic performance and overcome the possible drawbacks associated with other competing techniques
[ "autonomous power system", "combined intelligent techniques", "frequency control", "fuel rack position control", "fuzzy logic", "genetic algorithms", "neural networks", "load demand", "frequency overshoot", "transient oscillations", "zero steady-state error", "performance evaluation", "single-area power system", "overall dynamic performance", "competing techniques", "power output regulation", "generator speed control", "controller design" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "R", "R", "R" ]
655
Mapping CCF to MARC21: an experimental approach
The purpose of this article is to raise and address a number of issues pertaining to the conversion of the Common Communication Format (CCF) into MARC21. In this era of global resource sharing, the exchange of bibliographic records from one system to another is imperative for library communities. Instead of using a single standard to create machine-readable catalogue records, more than 20 standards have emerged and are being used by different institutions. Because of these variations in standards, sharing of resources and transfer of data from one system to another among institutions, locally and globally, has become a significant problem. Addressing this problem requires keeping in mind that countries such as India and others in southeast Asia are using the CCF as a standard for creating bibliographic cataloguing records. This paper describes a way to map bibliographic catalogue records from CCF to MARC21, although 100% mapping is not possible. In addition, the paper describes an experimental approach that enumerates problems that may occur during the mapping and exchange of records and how these problems can be overcome
[ "MARC21", "global resource sharing", "library communities", "standards", "machine-readable catalogue records", "India", "southeast Asia", "Common Communication Format conversion", "bibliographic records exchange", "data transfer", "CCF to MARC21 mapping" ]
[ "P", "P", "P", "P", "P", "P", "P", "R", "R", "R", "R" ]
1200
From continuous recovery to discrete filtering in numerical approximations of conservation laws
Modern numerical approximations of conservation laws rely on numerical dissipation as a means of stabilization. The older, alternative approach is the use of central differencing with a dose of artificial dissipation. In this paper we review the successful class of weighted essentially non-oscillatory finite volume schemes which comprise sophisticated methods of the first kind. New developments in image processing have made new devices possible which can serve as highly nonlinear artificial dissipation terms. We view artificial dissipation as discrete filter operation and introduce several new algorithms inspired by image processing
[ "continuous recovery", "discrete filtering", "numerical approximations", "conservation laws", "numerical dissipation", "central differencing", "artificial dissipation", "finite volume schemes", "image processing", "highly nonlinear artificial dissipation terms", "discrete filter operation" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P" ]
1245
A brief guide to competitive intelligence: how to gather and use information on competitors
The author outlines the processes involved in competitive intelligence, and discusses what it is, how to do it and gives examples of what happens when companies fail to monitor their competitive environment effectively. The author presents a case study, showing how the company that produced the precursor to the Barbie doll failed to look at their business environment and how this led to the firm's failure. The author discusses what competitive intelligence is, and what it is not, and why it is important for businesses, and presents three models used to describe the competitive intelligence process, going through the various steps involved in defining intelligence requirements and collecting, analyzing, communicating and utilizing competitive intelligence
[ "competitive intelligence", "Barbie doll", "business environment", "competitor information", "intelligence collection", "intelligence analysis", "intelligence communication", "intelligence utilization" ]
[ "P", "P", "P", "R", "R", "M", "R", "R" ]
983
Limitations of delayed state feedback: a numerical study
Stabilization of a class of linear time-delay systems can be achieved by a numerical procedure, called the continuous pole placement method [Michiels et al., 2000]. This method can be seen as an extension of the classical pole placement algorithm for ordinary differential equations to a class of delay differential equations. In [Michiels et al., 2000] it was applied to the stabilization of a linear time-invariant system with an input delay using static state feedback. In this paper we study the limitations of such delayed state feedback laws. More precisely we completely characterize the class of stabilizable plants in the 2D-case. For that purpose we make use of numerical continuation techniques. The use of delayed state feedback in various control applications and the effect of its limitations are briefly discussed
[ "delayed state feedback", "linear time-delay systems", "continuous pole placement method", "delay differential equations", "static state feedback", "numerical continuation" ]
[ "P", "P", "P", "P", "P", "P" ]
554
A scalable and lightweight QoS monitoring technique combining passive and active approaches: on the mathematical formulation of CoMPACT monitor
To make a scalable and lightweight QoS monitoring system, we (2002) have proposed a new QoS monitoring technique, called the change-of-measure based passive/active monitoring (CoMPACT Monitor), which is based on the change-of-measure framework and is an active measurement transformed by using passively monitored data. This technique enables us to measure detailed QoS information for individual users, applications and organizations, in a scalable and lightweight manner. In this paper, we present the mathematical foundation of CoMPACT Monitor. In addition, we show its characteristics through simulations in terms of typical implementation issues for inferring the delay distributions. The results show that CoMPACT Monitor gives accurate QoS estimations with only a small amount of extra traffic for active measurement
[ "QoS monitoring", "CoMPACT Monitor", "change-of-measure", "active monitoring", "passive monitoring", "delay distributions", "quality of service", "Internet", "network performance" ]
[ "P", "P", "P", "P", "P", "P", "M", "U", "U" ]
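The change-of-measure framework underlying CoMPACT Monitor can be illustrated with a generic likelihood-ratio estimator: samples observed under one probability measure are reweighted to estimate a tail probability under another. This is only a minimal sketch of the general idea, not the paper's actual CoMPACT construction; the exponential example and all names are invented.

```python
import math
import random

def reweighted_tail_estimate(samples, weights, t):
    """Estimate P(X > t) under a target measure P from samples drawn under
    another measure Q, using likelihood-ratio weights w = dP/dQ."""
    num = sum(w for x, w in zip(samples, weights) if x > t)
    return num / sum(weights)

# Draw delays under Q = Exp(1); the target measure is P = Exp(2),
# so the likelihood ratio is dP/dQ(x) = 2*exp(-2x)/exp(-x) = 2*exp(-x).
random.seed(0)
xs = [random.expovariate(1.0) for _ in range(200_000)]
ws = [2.0 * math.exp(-x) for x in xs]
est = reweighted_tail_estimate(xs, ws, 0.5)   # true tail: exp(-1) ~ 0.368
```

The self-normalized form (dividing by the sum of weights) keeps the estimate valid even when the weights are only known up to a constant.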
1101
Evaluation of existing and new feature recognition algorithms. 1. Theory and implementation
This is the first of two papers evaluating the performance of general-purpose feature detection techniques for geometric models. In this paper, six different methods are described to identify sets of faces that bound depression and protrusion faces. Each algorithm has been implemented and tested on eight components from the National Design Repository. The algorithms studied include previously published general-purpose feature detection algorithms such as the single-face inner-loop and concavity techniques. Others are improvements to existing algorithms such as extensions of the two-dimensional convex hull method to handle curved faces as well as protrusions. Lastly, new algorithms based on the three-dimensional convex hull, minimum concave, visible and multiple-face inner-loop face sets are described
[ "feature recognition algorithms", "general-purpose feature detection techniques", "geometric models", "sets of faces", "protrusion faces", "National Design Repository", "concavity technique", "two-dimensional convex hull method", "curved faces", "three-dimensional convex hull", "minimum concave", "multiple-face inner-loop face sets", "depression faces", "single-face inner-loop technique", "CAD/CAM software", "geometric reasoning algorithms", "visible inner-loop face sets" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "R", "R", "U", "M", "R" ]
1144
Simultaneous iterative reconstruction of emission and attenuation images in positron emission tomography from emission data only
For quantitative image reconstruction in positron emission tomography attenuation correction is mandatory. In the case that no data are available for the calculation of the attenuation correction factors one can try to determine them from the emission data alone. However, it is not clear if the information content is sufficient to yield an adequate attenuation correction together with a satisfactory activity distribution. Therefore, we determined the log likelihood distribution for a thorax phantom depending on the choice of attenuation and activity pixel values to measure the crosstalk between both. In addition an iterative image reconstruction (one-dimensional Newton-type algorithm with a maximum likelihood estimator), which simultaneously reconstructs the images of the activity distribution and the attenuation coefficients is used to demonstrate the problems and possibilities of such a reconstruction. As a result we show that for a change of the log likelihood in the range of statistical noise, the associated change in the activity value of a structure is between 6% and 263%. In addition, we show that it is not possible to choose the best maximum on the basis of the log likelihood when a regularization is used, because the coupling between different structures mediated by the (smoothing) regularization prevents an adequate solution due to crosstalk. We conclude that taking into account the attenuation information in the emission data improves the performance of image reconstruction with respect to the bias of the activities; however, the reconstruction still is not quantitative
[ "image reconstruction", "positron emission tomography attenuation correction", "attenuation correction factors", "activity distribution", "log likelihood distribution", "thorax phantom", "activity pixel values", "crosstalk", "iterative image reconstruction", "one-dimensional Newton-type algorithm", "maximum likelihood estimator", "attenuation coefficients", "statistical noise", "smoothing", "attenuation information" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P" ]
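The paper's simultaneous Newton-type reconstruction is specific to that work, but the family of multiplicative maximum-likelihood updates it belongs to can be sketched for a toy emission-only system. The system matrix and activities below are made up, and attenuation is not modeled; this is standard MLEM, not the paper's algorithm.

```python
def mlem(A, y, n_iter=500):
    """Multiplicative MLEM update for y ~ Poisson(A x), activity only."""
    m, n = len(A), len(A[0])
    x = [1.0] * n                                              # flat start
    sens = [sum(A[i][j] for i in range(m)) for j in range(n)]  # sensitivities
    for _ in range(n_iter):
        proj = [sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
        ratio = [y[i] / max(proj[i], 1e-12) for i in range(m)]
        back = [sum(A[i][j] * ratio[i] for i in range(m)) for j in range(n)]
        x = [x[j] * back[j] / sens[j] for j in range(n)]       # EM step
    return x

A = [[1.0, 0.2], [0.3, 1.0], [0.5, 0.5]]   # toy system matrix (made up)
x_true = [2.0, 4.0]
y = [sum(a * b for a, b in zip(row, x_true)) for row in A]  # noiseless data
x_hat = mlem(A, y)
```

With noiseless data and a full-column-rank system, the update converges to the true activities; the crosstalk issues the abstract discusses arise when attenuation is estimated jointly.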
92
Wireless-retail financial services: adoption can't justify the cost
Slow adoption by retail investors, costly services and bankrupt vendors have prompted banks and brokerage firms to turn off their wireless applications
[ "banks", "brokerage firms", "wireless applications" ]
[ "P", "P", "P" ]
1059
Mustering motivation to enact decisions: how decision process characteristics influence goal realization
Decision scientists tend to focus mainly on decision antecedents, studying how people make decisions. Action psychologists, in contrast, study post-decision issues, investigating how decisions, once formed, are maintained, protected, and enacted. Through the research presented here, we seek to bridge these two disciplines, proposing that the process by which decisions are reached motivates subsequent pursuit and benefits eventual realization. We identify three characteristics of the decision process (DP) as having motivation-mustering potential: DP effort investment, DP importance, and DP confidence. Through two field studies tracking participants' decision processes, pursuit and realization, we find that after controlling for the influence of the motivational mechanisms of goal intention and implementation intention, the three decision process characteristics significantly influence the successful enactment of the chosen decision directly. The theoretical and practical implications of these findings are considered and future research opportunities are identified
[ "motivation", "decision process characteristics", "goal realization", "decision scientists", "action psychologists", "post-decision issues", "motivation-mustering potential", "goal intention", "research opportunities", "decision enactment", "decision process investment", "decision process importance", "decision process confidence" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "R", "R", "R", "R" ]
1358
Analysis of the surface roughness and dimensional accuracy capability of fused deposition modelling processes
Building up materials in layers poses significant challenges from the viewpoint of material science, heat transfer and applied mechanics. However, numerous aspects of the use of these technologies have yet to be studied. One of these aspects is the characterization of the surface roughness and dimensional precision obtainable in layered manufacturing processes. In this paper, a study of the roughness parameters obtained through these manufacturing processes was made. Prototype parts were manufactured using FDM techniques, and an experimental analysis of the resulting roughness average (R_a) and rms roughness (R_q) was carried out. Dimensional parameters were also studied in order to determine the capability of the Fused Deposition Modelling process for manufacturing parts
[ "surface roughness", "dimensional accuracy capability", "fused deposition modelling processes", "dimensional precision", "layered manufacturing processes", "prototype parts", "roughness average", "rms roughness", "rapid prototyping", "three-dimensional solid objects", "CAD model", "CNC-controlled robot", "extrusion head" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "M", "U", "M", "U", "U" ]
748
Simulation study of the cardiovascular functional status in hypertensive situation
An extended cardiovascular model was established based on our previous work to study the consequences of physiological or pathological changes to the homeostatic functions of the cardiovascular system. To study hemodynamic changes in hypertensive situations, the impacts of cardiovascular parameter variations (peripheral vascular resistance, arterial vessel wall stiffness and baroreflex gain) upon hemodynamics and the short-term regulation of the cardiovascular system were investigated. For the purpose of analyzing baroregulation function, the short-term regulation of arterial pressure in response to moderate dynamic exercise for normotensive and hypertensive cases was studied through computer simulation and clinical experiments. The simulation results agree well with clinical data. The results of this work suggest that the model presented in this paper provides a useful tool to investigate the functional status of the cardiovascular system in normal or pathological conditions
[ "cardiovascular functional status", "hypertensive situation", "extended cardiovascular model", "pathological changes", "homeostatic functions", "hemodynamics", "cardiovascular parameter variations", "peripheral vascular resistance", "arterial vessel wall stiffness", "baroreflex gain", "short-term regulation", "arterial pressure", "moderate dynamic exercise", "computer simulation", "clinical experiments", "physiological changes", "normotensive cases" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "R", "R" ]
826
A round of cash, a pound of flesh [telecom]
Despite the upheaval across telecom, venture capital firms are still investing in start-ups. But while a promising idea and a catchy name were enough to guarantee millions in funding at the peak of the dotcom frenzy, now start-ups must prove their long-term viability, and be willing to concede control of their business to their VC suitors
[ "telecom", "venture capital firms", "viability" ]
[ "P", "P", "P" ]
863
A scanline-based algorithm for the 2D free-form bin packing problem
This paper describes a heuristic algorithm for the 2D free-form bin packing (2D-FBP) problem. Given a set of 2D free-form bins and a set of 2D free-form items, the 2D-FBP problem is to lay out items inside one or more bins in such a way that the number of bins used is minimized, and for each bin, the yield is maximized. The proposed algorithm handles the problem as a variant of the 1D problem; i.e., items and bins are approximated as sets of scanlines, and scanlines are packed. The details of the algorithm are given, and its application to a nesting problem in a shipbuilding company is reported. The proposed algorithm consists of the basic and the group placement algorithms. The basic placement algorithm is a variant of the first-fit decreasing algorithm which is simply extended from the 1D case to the 2D case by a novel scanline approximation. A numerical study with real instances shows that the basic placement algorithm has sufficient performance for most of the instances, however, the group placement algorithm is required when items must be aligned in columns. The qualities of the resulting layouts are good enough for practical use, and the processing times are good
[ "scanline-based algorithm", "2D free-form bin packing problem", "heuristic algorithm", "2D-FBP problem", "minimization", "nesting problem", "shipbuilding company", "group placement algorithm", "first-fit decreasing algorithm", "irregular cutting", "irregular packing", "yield maximization" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "U", "M", "R" ]
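The basic placement algorithm is described as a 2D extension of first-fit decreasing via scanline approximation. The underlying 1D first-fit-decreasing heuristic can be sketched as follows; the item sizes and capacity are invented, and the scanline approximation of 2D shapes is omitted.

```python
def first_fit_decreasing(items, capacity):
    """Pack 1D item sizes into bins of the given capacity: sort items in
    decreasing order, then place each into the first bin it fits in."""
    bins = []  # each bin is a list of item sizes
    for size in sorted(items, reverse=True):
        for b in bins:
            if sum(b) + size <= capacity:
                b.append(size)
                break
        else:
            bins.append([size])  # no existing bin fits: open a new one
    return bins

bins = first_fit_decreasing([5, 7, 5, 2, 4, 2, 5], capacity=10)
# → [[7, 2], [5, 5], [5, 4], [2]]: 4 bins for a total size of 30
```

The 2D-FBP algorithm replaces "sum of sizes fits in capacity" with a scanline-by-scanline fit test against free-form bin boundaries.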
1430
The free lunch is over: online content subscriptions on the rise
High need, rather than high use, may be what really determines a user's willingness to pay. Retooling and targeting content may be a sharper strategy than trying to re-educate users that it is time to pay up for material that has been free. Waiting for a paradigm shift in general user attitudes about paying for online content could be a fool's errand
[ "online content subscriptions", "content retooling", "content targeting", "pay-to-play business models" ]
[ "P", "R", "R", "U" ]
570
Prediction and compensation of dynamic errors for coordinate measuring machines
Coordinate measuring machines (CMMs) are already widely utilized as measuring tools in the modern manufacturing industry. Rapidly approaching now is the trend for next-generation CMMs. However, the increases in measuring velocity of CMM applications are limited by dynamic errors that occur in CMMs. In this paper a systematic approach for modeling the dynamic errors of a touch-trigger probe CMM is developed through theoretical analysis and experimental study. An overall analysis of the dynamic errors of CMMs is conducted, with weak components of the CMM identified by a laser interferometer. The probing process, as conducted with a touch-trigger probe, is analyzed. The dynamic errors are measured, modeled, and predicted using neural networks. The results indicate that, using this model, it is possible to compensate for the dynamic errors of CMMs
[ "compensation", "dynamic errors", "coordinate measuring machines", "manufacturing industry", "touch-trigger probe", "laser interferometer", "neural networks", "inertial forces" ]
[ "P", "P", "P", "P", "P", "P", "P", "U" ]
1125
Structure of weakly invertible semi-input-memory finite automata with delay 1
Semi-input-memory finite automata, a kind of finite automata introduced by the first author of this paper for studying error propagation, are a generalization of input-memory finite automata obtained by appending an autonomous finite automaton component. In this paper, we give a characterization of the structure of weakly invertible semi-input-memory finite automata with delay 1, in which the state graph of each autonomous finite automaton is a cycle. Combined with a result on the mutual invertibility of finite automata obtained recently by the authors, this leads to a characterization of the structure of feedforward inverse finite automata with delay 1
[ "weakly invertible", "invertibility", "semi-input-memory", "semi-input-memory finite automata", "finite automata", "delay 1", "state graph", "feedforward inverse finite automata" ]
[ "P", "P", "P", "P", "P", "P", "P", "P" ]
1160
Monoids all polygons over which are omega-stable: proof of the Mustafin-Poizat conjecture
A monoid S is called an omega-stabilizer (superstabilizer, or stabilizer) if every S-polygon has an omega-stable (superstable, or stable) theory. It is proved that every omega-stabilizer is a regular monoid. This confirms the Mustafin-Poizat conjecture and allows us to complete the description of omega-stabilizers
[ "monoids all polygons", "Mustafin-Poizat conjecture", "omega-stabilizer", "S-polygon", "regular monoid" ]
[ "P", "P", "P", "P", "P" ]
119
JPEG2000: standard for interactive imaging
JPEG2000 is the latest image compression standard to emerge from the Joint Photographic Experts Group (JPEG) working under the auspices of the International Standards Organization. Although the new standard does offer superior compression performance to JPEG, JPEG2000 provides a whole new way of interacting with compressed imagery in a scalable and interoperable fashion. This paper provides a tutorial-style review of the new standard, explaining the technology on which it is based and drawing comparisons with JPEG and other compression standards. The paper also describes new work, exploiting the capabilities of JPEG2000 in client-server systems for efficient interactive browsing of images over the Internet
[ "JPEG2000", "interactive imaging", "image compression", "Joint Photographic Experts Group", "International Standards Organization", "review", "client-server systems", "scalable compression", "interoperable compression" ]
[ "P", "P", "P", "P", "P", "P", "P", "R", "R" ]
634
An approximation to the F distribution using the chi-square distribution
For the cumulative distribution function (c.d.f.) of the F distribution, F(x; k, n), with associated degrees of freedom k and n, a shrinking factor approximation (SFA), G(lambda kx; k), is proposed for large n and any fixed k, where G(x; k) is the chi-square c.d.f. with k degrees of freedom and lambda = lambda(kx; n) is the shrinking factor. Numerical analysis indicates that for n/k >= 3, the approximation accuracy of the SFA is to the fourth decimal place for most small values of k. This is a substantial improvement on the accuracy that is achievable using the normal, ordinary chi-square, and Scheffe-Tukey approximations. In addition, it is shown that the theoretical approximation error of the SFA, |F(x; k, n) - G(lambda kx; k)|, is O(1/n^2) uniformly over x
[ "F distribution", "chi-square distribution", "cumulative distribution function", "degrees of freedom", "shrinking factor approximation", "numerical analysis" ]
[ "P", "P", "P", "P", "P", "P" ]
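The paper's fitted shrinking factor lambda(kx; n) is not reproduced here, but the limiting case lambda = 1 (the plain chi-square approximation that the SFA improves on) can be checked numerically. The F c.d.f. is integrated directly by Simpson's rule, and the chi-square c.d.f. for 3 degrees of freedom has a closed form; the test point x = 1.5 is arbitrary.

```python
import math

def f_cdf(x, k, n, steps=4000):
    """F(k, n) c.d.f. by Simpson integration of the density (sketch)."""
    logc = (math.lgamma((k + n) / 2) - math.lgamma(k / 2)
            - math.lgamma(n / 2) + (k / 2) * math.log(k / n))
    c = math.exp(logc)
    def pdf(t):
        if t <= 0:
            return 0.0
        return c * t ** (k / 2 - 1) * (1 + k * t / n) ** (-(k + n) / 2)
    h = x / steps
    return sum((pdf(i * h) + 4 * pdf((i + 0.5) * h) + pdf((i + 1) * h)) * h / 6
               for i in range(steps))

def chi2_cdf_df3(x):
    """Closed-form chi-square c.d.f. for 3 degrees of freedom."""
    return (math.erf(math.sqrt(x / 2))
            - math.sqrt(2 / math.pi) * math.sqrt(x) * math.exp(-x / 2))

k, x = 3, 1.5
naive = chi2_cdf_df3(k * x)                # lambda = 1, i.e. no shrinking
err_n300 = abs(f_cdf(x, k, 300) - naive)   # large n: close agreement
err_n30 = abs(f_cdf(x, k, 30) - naive)     # smaller n: noticeably worse
```

The error of this naive approximation is O(1/n); the abstract's point is that the shrinking factor improves this to O(1/n^2).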
671
Expert advice - how can my organisation take advantage of reverse auctions without jeopardising existing supplier relationships?
In a recent survey, AMR Research found that companies that use reverse auctions to negotiate prices with suppliers typically achieve savings of between 10% and 15% on direct goods and between 20% and 25% on indirect goods, and can slash sourcing cycle times from months to weeks. Suppliers, however, are less enthusiastic. They believe that these savings are achieved only by stripping the human element out of negotiations and evaluating bids on price alone, which drives down their profit margins. As a result, reverse auctions carry the risk of jeopardising long-term and trusted relationships. Suppliers that have not been involved in a reverse auction before typically fear the bidding event itself - arguably the most theatrical and, therefore, most hyped-up part of the process. Although it may only last one hour, weeks of preparation go into setting up a successful bidding event
[ "reverse auctions", "supplier relationships", "preparation", "Request For Quotation" ]
[ "P", "P", "P", "U" ]
1224
Formalization of weighted factors analysis
Weighted factors analysis (WeFA) has been proposed as a new approach for elicitation, representation, and manipulation of knowledge about a given problem, generally at a high and strategic level. Central to this proposal is that a group of experts in the area of the problem can identify a hierarchy of factors with positive or negative influences on the problem outcome. The tangible output of WeFA is a directed weighted graph called a WeFA graph. This is a set of nodes denoting factors that can directly or indirectly influence an overall aim of the graph. The aim is also represented by a node. Each directed arc is a direct influence of one factor on another. A chain of directed arcs indicates an indirect influence. The influences may be identified as either positive or negative. For example, sales and costs are two factors that influence the aim of profitability in an organization. Sales has a positive influence on profitability and costs has a negative influence on profitability. In addition, the relative significance of each influence is represented by a weight. We develop Binary WeFA which is a variant of WeFA where the factors in the graph are restricted to being either true or false. Imposing this restriction on a WeFA graph allows us to be more precise about the meaning of the graph and of reasoning in it. Binary WeFA is a new proposal that provides a formal yet sufficiently simple language for logic-based argumentation for use by business people in decision-support and knowledge management. Whilst Binary WeFA is expressively simpler than other logic-based argumentation formalisms, it does incorporate a novel formalization of the notion of significance
[ "weighted factors analysis", "directed weighted graph", "WeFA graph", "directed arc", "profitability", "organization", "significance", "Binary WeFA", "reasoning", "logic-based argumentation", "decision-support", "knowledge management", "knowledge elicitation", "knowledge representation", "knowledge manipulation" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "R", "R", "R" ]
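A WeFA graph, as described above, can be sketched as a dictionary of signed, weighted arcs. The abstract does not fully specify how influences combine along a chain; multiplying signed weights along each directed path and summing over paths is our own assumption, and the factor names are invented (only "sales", "costs" and "profitability" come from the abstract's example).

```python
# (source, target) -> signed weight; combination rule is an assumption
arcs = {
    ("sales", "profitability"): +0.8,
    ("costs", "profitability"): -0.6,
    ("marketing", "sales"): +0.5,   # hypothetical indirect factor
}

def influence(graph, src, aim):
    """Total influence of src on aim: sum over directed paths of the
    product of signed arc weights along each path (graph must be acyclic)."""
    total = 0.0
    for (a, b), w in graph.items():
        if a != src:
            continue
        if b == aim:
            total += w
        else:
            total += w * influence(graph, b, aim)
    return total
```

Under this reading, marketing has an indirect positive influence of 0.5 * 0.8 = 0.4 on profitability, while costs keeps its direct negative influence of -0.6.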
1261
Topology-adaptive modeling of objects using surface evolutions based on 3D mathematical morphology
Level set methods were proposed mainly by mathematicians for constructing a model of a 3D object of arbitrary topology. However, those methods are computationally inefficient due to repeated distance transformations and increased dimensions. In the paper, we propose a new method for fast modeling of objects of arbitrary topology by using a surface evolution approach based on mathematical morphology. Given sensor data covering the whole object surface, the method begins with an initial approximation of the object, evolving a closed surface into a model topologically equivalent to the real object. The refined approximation is then performed using energy minimization. The method has been applied in several experiments using range data, and the results are reported in the paper
[ "topology-adaptive modeling", "surface evolutions", "3D mathematical morphology", "level set methods", "3D object", "arbitrary topology", "repeated distance transformations", "initial approximation", "refined approximation", "energy minimization", "range data", "pseudo curvature flow" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "U" ]
802
A brief history of electronic reserves
Electronic reserves has existed as a library service for barely ten years, yet its history, however brief, is important as an indicator of the direction being taken by the profession of Librarianship as a whole. Recent improvements in technology and a desire to provide better service to students and faculty have resulted in the implementation of e-reserves by ever greater numbers of academic libraries. Yet a great deal of confusion still surrounds the issue of copyright compliance. Negotiation, litigation, and legislation in particular have framed the debate over the application of fair use to an e-reserves environment, and the question of whether or not permission fees should be paid to rights holders, but as of yet no definitive answers or standards have emerged
[ "electronic reserves", "library service", "librarianship", "students", "faculty", "academic libraries", "copyright compliance", "negotiation", "litigation", "legislation", "e-reserves environment", "permission fees" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P" ]
847
A gendered view of computer professionals: preliminary results of a survey
The under-representation of women in the computing profession in many parts of the western world has received our attention through numerous publications, the noticeably low representation of women at computer science conferences and in the lecture halls. Over the past two decades, the situation has become worse. This paper seeks to add to the dialogue by presenting preliminary findings from a research project conducted in four countries. The aim of this research was to gain an insight into the perceptions future computer professionals hold of the category of employment loosely defined under the term of "a computer professional." One goal was to gain insight into whether or not there is a difference between female and male students regarding their view of computer professionals. Other goals were to determine if there was any difference between female and male students in different parts of the world, as well as who or what most influences the students to undertake their courses in computing
[ "computing profession", "employment", "male students", "women under-representation", "future computer professional perceptions", "female students", "computing courses" ]
[ "P", "P", "P", "R", "R", "R", "R" ]
1080
Car-caravan snaking. 2 Active caravan braking
For part 1, see ibid., p.707-22. Founded on the review and results of Part 1, Part 2 contains a description of the virtual design of an active braking system for caravans or other types of trailer, to suppress snaking vibrations, while being simple from a practical viewpoint. The design process and the design itself are explained. The performance is examined by simulations and it is concluded that the system is effective, robust and realizable with modest and available components
[ "car-caravan snaking", "active caravan braking", "virtual design", "trailer", "snaking vibrations suppression", "dynamics" ]
[ "P", "P", "P", "P", "R", "U" ]
1451
From information gateway to digital library management system: a case analysis
This paper discusses the design, implementation and evolution of the Cornell University Library Gateway using the case analysis method. It diagnoses the Gateway within the conceptual framework of definitions and best practices associated with information gateways, portals, and emerging digital library management systems, in particular the product ENCompass
[ "information gateways", "digital library management system", "Cornell University Library Gateway", "portals", "ENCompass", "metadata" ]
[ "P", "P", "P", "P", "P", "U" ]
1414
Survey says! [online world of polls and surveys]
Many content managers miss the fundamental interactivity of the Web by not using polls and surveys. Using interactive features, like a poll or quiz, offers your readers an opportunity to become more engaged in your content. Using a survey to gather feedback about your content provides cost-effective data to help make modifications or plot the appropriate course of action. The Web has allowed us to take traditional market research and turn it on its ear. Surveys and polls can be conducted faster and cheaper than with telephone and mail. But if you are running a Web site, should you care about polls and surveys? Do you know the difference between the two in Web-speak?
[ "surveys", "polls", "content managers", "site owners", "World Wide Web", "site feedback" ]
[ "P", "P", "P", "M", "M", "R" ]
1339
Edge-colorings with no large polychromatic stars
Given a graph G and a positive integer r, let f_r(G) denote the largest number of colors that can be used in a coloring of E(G) such that each vertex is incident to at most r colors. For all positive integers n and r, we determine f_r(K_{n,n}) exactly and f_r(K_n) within 1. In doing so, we disprove a conjecture by Y. Manoussakis et al. (1996)
[ "polychromatic stars", "positive integer", "positive integer", "edge colorings", "positive integers" ]
[ "P", "P", "P", "M", "P" ]
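The quantity f_r(G) can be computed by brute force for very small graphs, which makes the definition concrete. This exhaustive search is feasible only for a handful of edges and is not the paper's method, which determines f_r for K_{n,n} and K_n analytically.

```python
from itertools import product

def f_r(edges, vertices, r):
    """Largest number of colors usable on `edges` so that every vertex is
    incident to at most r colors (exhaustive search over colorings)."""
    m = len(edges)
    best = 0
    for coloring in product(range(m), repeat=m):
        ok = all(
            len({c for e, c in zip(edges, coloring) if v in e}) <= r
            for v in vertices
        )
        if ok:
            best = max(best, len(set(coloring)))
    return best

K3 = [("a", "b"), ("b", "c"), ("a", "c")]
```

For K_3, r = 1 forces all three edges to share one color (f_1 = 1), while r = 2 permits a rainbow coloring, since each vertex meets only two edges (f_2 = 3).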
791
The rise and fall and rise again of customer care
Taking care of customers has never gone out of style, but as the recession fades, interest is picking up in a significant retooling of the CRM solutions banks have been using. The goal: usable knowledge to help improve service
[ "banks", "usable knowledge", "customer relationship management" ]
[ "P", "P", "M" ]
1381
An augmented spatial digital tree algorithm for contact detection in computational mechanics
Based on the understanding of existing spatial digital tree-based contact detection approaches, and the alternating digital tree (ADT) algorithm in particular, a more efficient algorithm, termed the augmented spatial digital tree (ASDT) algorithm, is proposed in the present work. The ASDT algorithm adopts a different point representation scheme that uses only the lower corner vertex to represent a (hyper-)rectangle, with the upper corner vertex serving as the augmented information. Consequently, the ASDT algorithm can keep the working space the same as the original n-dimensional space and, in general, a much better balanced tree can be expected. This, together with the introduction of an additional bounding subregion for the rectangles associated with each tree node, makes it possible to significantly reduce the number of node visits in the region search, although each node visit may be slightly more expensive. Three examples arising in computational mechanics are presented to provide an assessment of the performance of the ASDT. The numerical results indicate that the ASDT is, at least, over 3.9 times faster than the ADT
[ "augmented spatial digital tree algorithm", "contact detection", "computational mechanics", "upper corner vertex", "alternating digital tree algorithm", "augmented data structure", "spatial binary tree-based contact detection approaches" ]
[ "P", "P", "P", "P", "R", "M", "M" ]
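The ASDT point-representation idea, storing each (hyper-)rectangle by its lower corner with the upper corner kept as augmented information, can be sketched with a flat list standing in for the digital tree. The names and coordinates below are invented, and the tree structure and bounding subregions that give the ASDT its speed are omitted; only the representation and the region-search predicate are shown.

```python
# Each rectangle: (lower corner, upper corner); the lower corner would be
# the indexed point, the upper corner the augmented information.
rects = {
    "A": ((0.0, 0.0), (2.0, 1.0)),
    "B": ((1.5, 0.5), (3.0, 2.0)),
    "C": ((4.0, 4.0), (5.0, 5.0)),
}

def overlaps(r1, r2):
    """Axis-aligned (hyper-)rectangle overlap test, any dimension."""
    (lo1, hi1), (lo2, hi2) = r1, r2
    return all(l1 <= h2 and l2 <= h1
               for l1, h1, l2, h2 in zip(lo1, hi1, lo2, hi2))

def region_search(rects, query):
    """Naive region search: names of all rectangles overlapping `query`."""
    return sorted(name for name, r in rects.items() if overlaps(r, query))
```

In the actual ASDT, the tree prunes whole subtrees whose bounding subregion cannot overlap the query, instead of testing every rectangle as here.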
1038
The analysis and control of longitudinal vibrations from wave viewpoint
The analysis and control of longitudinal vibrations in a rod are synthesized from the feedback wave viewpoint. Both collocated and noncollocated feedback wave control strategies are explored. The control design is based on the local properties of wave transmission and reflection in the vicinity of the area where the control force is applied, hence no complex closed form solution is involved. The controller is designed to achieve various goals, such as absorbing the incoming vibration energy, creating a vibration free zone and eliminating standing waves in the structure. The findings appear to be very useful in practice due to the simplicity of implementing the controllers
[ "feedback waves", "noncollocated feedback wave control", "control design", "wave transmission", "control force", "complex closed form solution", "vibration energy", "vibration free zone", "standing waves", "longitudinal vibration control", "collocated feedback wave control", "wave reflection" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "R", "R", "R" ]
1040
CRONE control: principles and extension to time-variant plants with asymptotically constant coefficients
The principles of CRONE control, a frequency-domain robust control design methodology based on fractional differentiation, are presented. Continuous time-variant plants with asymptotically constant coefficients are analysed in the frequency domain, through their representation using time-variant frequency responses. A stability theorem for feedback systems including time-variant plants with asymptotically constant coefficients is proposed. Finally, CRONE control is extended to robust control of these plants
[ "CRONE control", "time-variant plants", "asymptotically constant coefficients", "frequency-domain robust control design", "robust control", "fractional differentiation", "time-variant frequency responses", "stability theorem", "feedback systems", "automatic control" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "M" ]
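Fractional differentiation, on which CRONE control is based, can be approximated numerically with the Grunwald-Letnikov definition. This is a minimal generic sketch, not the CRONE design procedure; the known half-derivative of f(t) = t serves as a check, and the step size h is an arbitrary choice.

```python
import math

def gl_fractional_derivative(f, alpha, t, h=1e-4):
    """Grunwald-Letnikov estimate of the order-alpha derivative of f at t,
    assuming f is defined on [0, t] with f(0) = 0."""
    n = int(t / h)
    w, total = 1.0, f(t)          # w_j = (-1)^j * binom(alpha, j), w_0 = 1
    for j in range(1, n + 1):
        w *= 1.0 - (alpha + 1.0) / j   # recurrence for the GL weights
        total += w * f(t - j * h)
    return total / h ** alpha

half_deriv = gl_fractional_derivative(lambda t: t, 0.5, 1.0)
exact = 2.0 / math.sqrt(math.pi)   # known half-derivative of t at t = 1
```

CRONE works with such non-integer-order operators in the frequency domain rather than in this time-domain form, but the numerical check illustrates what "fractional differentiation" computes.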
1005
The average-case identifiability and controllability of large scale systems
Needs for increased product quality, reduced pollution, and reduced energy and material consumption are driving enhanced process integration. This increases the number of manipulated and measured variables required by the control system to achieve its objectives. This paper addresses the question of whether processes tend to become increasingly more difficult to identify and control as the process dimension increases. Tools and results of multivariable statistics are used to show that, under a variety of assumed distributions on the elements, square processes of higher dimension tend to be more difficult to identify and control, whereas the expected controllability and identifiability of nonsquare processes depends on the relative numbers of measured and manipulated variables. These results suggest that the procedure of simplifying the control problem so that only a square process is considered is a poor practice for large scale systems
[ "average-case identifiability", "large scale systems", "enhanced process integration", "measured variables", "multivariable statistics", "nonsquare processes", "manipulated variables", "average-case controllability", "process control", "high dimension square processes", "process identification", "Monte Carlo simulations", "chemical engineering" ]
[ "P", "P", "P", "P", "P", "P", "P", "R", "R", "M", "M", "U", "U" ]
887
Towards strong stability of concurrent repetitive processes sharing resources
The paper presents a method for the design of stability conditions of concurrent, repetitive processes sharing common resources. Steady-state behaviour of a system with m cyclic processes utilising a resource with mutual exclusion is considered. Based on a recurrent equations framework, necessary and sufficient conditions for the existence of a maximal performance steady-state are presented. It is shown that if the conditions hold then the m-process system is marginally stable, i.e., a steady-state of the system depends on the perturbations. The problem of finding the relative positions of the processes leading to waiting-free (maximal efficiency) steady-states of the system is formulated as a constraint logic programming problem. An example illustrating the solving of the problem for a 3-process system using the object-oriented, constraint logic programming language Oz is presented. A condition sufficient for strong stability of the m-process system is given. When the condition holds, for any initial phases of the processes a waiting-free steady-state will be reached
[ "strong stability", "concurrent repetitive processes", "common resources", "steady-state behaviour", "cyclic processes", "mutual exclusion", "recurrent equations framework", "necessary and sufficient conditions", "maximal performance steady-state", "constraint logic programming", "3-process system", "waiting-free steady-states", "Oz language" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "R" ]
1429
Online coverage of the Olympic Games
In 1956 a new medium was evolving which helped shape not only the presentation of the Games to a worldwide audience, but created entirely new avenues for marketing and sponsorship which changed the entire economic relevance of the Games. The medium in 1956 was television, and the medium now, of course, is the Internet. Not since 1956 has Olympic coverage been so impacted by the onset of new technology as the current Olympiad has been. But now the IOC finds itself in another set of circumstances not altogether different from 1956
[ "online coverage", "Olympic Games", "marketing", "sponsorship", "economic relevance", "Olympiad", "IOC", "online rights", "e-broadcast" ]
[ "P", "P", "P", "P", "P", "P", "P", "M", "U" ]
1341
STEM: Secure Telephony Enabled Middlebox
Dynamic applications, including IP telephony, have not seen wide acceptance within enterprises because of problems caused by the existing network infrastructure. Static elements, including firewalls and network address translation devices, are not capable of allowing dynamic applications to operate properly. The Secure Telephony Enabled Middlebox (STEM) architecture is an enhancement of the existing network design to remove the issues surrounding static devices. The architecture incorporates an improved firewall that can interpret and utilize information in the application layer of packets to ensure proper functionality. In addition to allowing dynamic applications to function normally, the STEM architecture also incorporates several detection and response mechanisms for well-known network-based vulnerabilities. This article describes the key components of the architecture with respect to the SIP protocol
[ "STEM", "Secure Telephony Enabled Middlebox", "dynamic applications", "IP telephony", "network infrastructure", "firewalls", "network address translation devices", "network design", "static devices", "application layer", "STEM architecture", "response mechanisms", "network-based vulnerabilities", "SIP protocol", "detection mechanisms" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "R" ]
1304
Center-crossing recurrent neural networks for the evolution of rhythmic behavior
A center-crossing recurrent neural network is one in which the null(hyper)surfaces of each neuron intersect at their exact centers of symmetry, ensuring that each neuron's activation function is centered over the range of net inputs that it receives. We demonstrate that relative to a random initial population, seeding the initial population of an evolutionary search with center-crossing networks significantly improves both the frequency and the speed with which high-fitness oscillatory circuits evolve on a simple walking task. The improvement is especially striking at low mutation variances. Our results suggest that seeding with center-crossing networks may often be beneficial, since a wider range of dynamics is more likely to be easily accessible from a population of center-crossing networks than from a population of random networks
[ "center-crossing recurrent neural networks", "symmetry", "activation function", "random initial population", "evolutionary search", "high-fitness oscillatory circuits", "low mutation variance", "random networks", "rhythmic behavior evolution", "null surfaces", "evolutionary algorithm", "learning" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "R", "U", "M", "U" ]
751
A new method of regression on latent variables. Application to spectral data
Several applications are based on the assessment of a linear model linking a set of variables Y to a set of predictors X. In the presence of strong colinearity among predictors, as in the case with spectral data, several alternative procedures to ordinary least squares (OLS) are proposed. We discuss a new alternative approach which we refer to as regression models through constrained principal components analysis (RM-CPCA). This method basically shares certain common characteristics with PLS regression, as the dependent variables play a central role in determining the latent variables to be used as predictors. Unlike PLS, however, the approach discussed leads to straightforward models. This method also bears some similarity to latent root regression analysis (LRR) that was discussed by several authors. Moreover, a tuning parameter that ranges between 0 and 1 is introduced, and the family of models thus formed includes several other methods as particular cases
[ "latent variables", "spectral data", "linear model", "predictors", "strong colinearity", "regression models through constrained principal components analysis", "dependent variables", "latent root regression analysis", "tuning parameter", "near-IR spectroscopy" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "U" ]
714
Embeddings of planar graphs that minimize the number of long-face cycles
We consider the problem of finding embeddings of planar graphs that minimize the number of long-face cycles. We prove that for any k >= 4, it is NP-complete to find an embedding that minimizes the number of face cycles of length at least k
[ "embeddings", "planar graphs", "long-face cycles", "NP-complete problem", "graph drawing" ]
[ "P", "P", "P", "R", "M" ]
124
High-speed CMOS circuits with parallel dynamic logic and speed-enhanced skewed static logic
In this paper, we describe parallel dynamic logic (PDL), which exhibits high speed without the charge sharing problem. PDL uses only parallel-connected transistors for fast logic evaluation and is a good candidate for high-speed low-voltage operation. It has less back-bias effect compared to other logic styles, which use stacked transistors. Furthermore, PDL needs no signal ordering or tapering. PDL with speed-enhanced skewed static logic renders straightforward logic synthesis without the usual area penalty due to logic duplication. Our experimental results on two 32-bit carry lookahead adders using 0.25-μm CMOS technology show that PDL with speed-enhanced skewed static (SSS) logic reduces the delay over clock-delayed (CD) domino by 15%-27% and the power-delay product by 20%-37%
[ "high-speed CMOS circuits", "parallel dynamic logic", "speed-enhanced skewed static logic", "parallel-connected transistors", "low-voltage operation", "back-bias effect", "stacked transistors", "logic synthesis", "carry lookahead adders", "delay", "power-delay product", "32 bit", "0.25 micron" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "U", "U" ]
967
On the relationship between parametric variation and state feedback in chaos control
In this Letter, we study the popular parametric variation chaos control and state-feedback methodologies in chaos control, and point out for the first time that they are actually equivalent in the sense that there exist diffeomorphisms that can convert one to the other for most smooth chaotic systems. Detailed conversions are worked out for typical discrete chaotic maps (logistic, Henon) and continuous flows (Rossler, Lorenz) for illustration. This unifies the two seemingly different approaches from the physics and the engineering communities on chaos control. This new perspective reveals some new potential applications such as chaos synchronization and normal form analysis from a unified mathematical point of view
[ "parametric variation", "chaos control", "state-feedback", "diffeomorphisms", "logistic", "continuous flows", "Henon map", "Rossler system", "Lorenz system" ]
[ "P", "P", "P", "P", "P", "P", "R", "R", "R" ]
922
Smart collision information processing sensors for fast moving objects
In this technical note we survey the area of smart collision information processing sensors. We review the existing technologies to detect collision or overlap between fast moving physical objects or objects in virtual environments, physical environments or a combination of physical and virtual objects. We report developments in the collision detection of fast moving objects at discrete time steps such as two consecutive time frames, as well as continuous time intervals such as in an interframe collision detection system. Our discussion of computational techniques in this paper is limited to convex objects. Techniques exist however to efficiently decompose non-convex objects into convex objects. We also discuss the tracking technologies for objects from the standpoint of collision detection or avoidance
[ "collision information processing", "fast moving objects", "virtual environments", "physical environments", "collision detection", "discrete time steps", "consecutive time frames", "continuous time intervals", "interframe collision detection", "convex objects", "tracking", "nonconvex objects", "air traffic control", "smart sensors", "military training", "high speed machining" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "M", "U", "R", "U", "U" ]
76
Reaching strong consensus in a general network
The strong consensus (SC) problem is a variant of the conventional distributed consensus problem (also known as the Byzantine agreement problem). The SC problem requires that the agreed value among fault-free processors be one of the fault-free processor's initial values. Originally, the problem was studied in a fully connected network with malicious faulty processors. In this paper, the SC problem is re-examined in a general network, in which the components (processors and communication links) may be subjected to different faulty types simultaneously (also called the hybrid fault model or mixed faulty types) and the network topology does not have to be fully connected. The proposed protocol can tolerate the maximum number of tolerable faulty components such that each fault-free processor obtains a common value for the SC problem in a general network
[ "strong consensus", "distributed consensus problem", "Byzantine agreement", "fault-free processors", "fully connected network", "hybrid fault model", "strong consensus problem", "fault-tolerant distributed system" ]
[ "P", "P", "P", "P", "P", "P", "R", "M" ]
609
Chemical production in the superlative [formaldehyde plant process control system and remote I/O system]
In December 2000, BASF commissioned the largest formaldehyde production plant in the world, with an annual capacity of 180000 t. The new plant, built to meet the growing demand for formaldehyde, sets new standards. Its size, technology and above all its cost-effectiveness give it a leading position internationally. To maintain such high standards in automation technology, in addition to the trail-blazing Simatic PCS 7 process control system from Siemens, BASF selected the innovative remote I/O system I.S.1 from R. STAHL Schaltgerate GmbH to record and output field signals in hazardous areas Zone 1 and 2. This combination completely satisfied all technical requirements and also had the best price-performance ratio of all the solutions. 25 remote I/O field stations were designed and matched to the needs of the formaldehyde plant
[ "chemical production", "superlative", "process control system", "BASF", "automation technology", "trail-blazing Simatic PCS 7", "Siemens", "remote I/O system I.S.1", "R. STAHL Schaltgerate GmbH", "price-performance ratio", "formaldehyde production plant construction", "cost-effective plant", "signal recording", "Zone 1 hazardous area", "Zone 2 hazardous area", "remote I/O field station design" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "M", "R", "R", "R", "R", "R" ]
1219
Knowledge organisation of product design blackboard systems via graph decomposition
Knowledge organisation plays an important role in building a knowledge-based product design blackboard system. Well-organised knowledge sources will facilitate the effectiveness and efficiency of communication and data exchange in a blackboard system. In a previous investigation, an approach for constructing blackboard systems for product design using a non-directed graph decomposition algorithm was proposed. In this paper, the relationship between graph decomposition and the resultant blackboard system is further studied. A case study of a number of hypothetical blackboard systems that comprise different knowledge organisations is provided
[ "knowledge organisation", "product design blackboard systems", "graph decomposition", "knowledge-based product design", "data exchange", "case study" ]
[ "P", "P", "P", "P", "P", "P" ]
1118
Run-time data-flow analysis
Parallelizing compilers have made great progress in recent years. However, there still remains a gap between the current ability of parallelizing compilers and their final goals. In order to achieve the maximum parallelism, run-time techniques have been used in parallelizing compilers during the last few years. First, this paper presents a basic run-time privatization method. The definition of run-time dead code is given and its side effect is discussed. To eliminate the imprecision caused by the run-time dead code, backward data-flow information must be used. Proteus Test, which can use backward information at run-time, is then presented to exploit more dynamic parallelism. Also, a variation of Proteus Test, the Advanced Proteus Test, is offered to achieve partial parallelism. Proteus Test was implemented on the parallelizing compiler AFT. At the end of this paper, the program fpppp.f of the Spec95fp benchmark is taken as an example to show the effectiveness of Proteus Test
[ "parallelizing compilers", "run-time privatization method", "run-time dead code", "backward data-flow information", "Proteus Test", "dynamic parallelism", "run-time data flow analysis" ]
[ "P", "P", "P", "P", "P", "P", "M" ]
1343
Estimating the intrinsic dimension of data with a fractal-based method
In this paper, the problem of estimating the intrinsic dimension of a data set is investigated. A fractal-based approach using the Grassberger-Procaccia algorithm is proposed. Since the Grassberger-Procaccia algorithm (1983) performs badly on sets of high dimensionality, an empirical procedure that improves the original algorithm has been developed. The procedure has been tested on data sets of known dimensionality and on time series of the Santa Fe competition
[ "fractal-based method", "time series", "Santa Fe competition", "data intrinsic dimension estimation", "pattern recognition" ]
[ "P", "P", "P", "R", "U" ]
1306
Scalable hybrid computation with spikes
We outline a hybrid analog-digital scheme for computing with three important features that enable it to scale to systems of large complexity: First, like digital computation, which uses several one-bit precise logical units to collectively compute a precise answer to a computation, the hybrid scheme uses several moderate-precision analog units to collectively compute a precise answer to a computation. Second, frequent discrete signal restoration of the analog information prevents analog noise and offset from degrading the computation. Third, a state machine enables complex computations to be created using a sequence of elementary computations. A natural choice for implementing this hybrid scheme is one based on spikes because spike-count codes are digital, while spike-time codes are analog. We illustrate how spikes afford easy ways to implement all three components of scalable hybrid computation. First, as an important example of distributed analog computation, we show how spikes can create a distributed modular representation of an analog number by implementing digital carry interactions between spiking analog neurons. Second, we show how signal restoration may be performed by recursive spike-count quantization of spike-time codes. Third, we use spikes from an analog dynamical system to trigger state transitions in a digital dynamical system, which reconfigures the analog dynamical system using a binary control vector; such feedback interactions between analog and digital dynamical systems create a hybrid state machine (HSM). The HSM extends and expands the concept of a digital finite-state-machine to the hybrid domain. We present experimental data from a two-neuron HSM on a chip that implements error-correcting analog-to-digital conversion with the concurrent use of spike-time and spike-count codes. We also present experimental data from silicon circuits that implement HSM-based pattern recognition using spike-time synchrony. We outline how HSMs may be used to perform learning, vector quantization, spike pattern recognition and generation, and how they may be reconfigured
[ "scalable hybrid computation", "spikes", "hybrid analog-digital scheme", "moderate-precision analog units", "frequent discrete signal restoration", "analog noise", "spike-count codes", "spike-time codes", "distributed analog computation", "digital carry interactions", "binary control vector", "feedback interactions", "finite-state-machine", "error-correcting analog-to-digital conversion", "silicon circuits", "pattern recognition", "learning", "vector quantization", "two neuron hybrid state machine" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "M" ]
753
In medias res [DVD formats]
Four years in the making, the DVD format war rages on, no winner in sight. Meanwhile, the spoils of war abound, and DVD media manufacturers stand poised to profit
[ "DVD format war", "DVD media manufacturers", "DVD-RAM", "DVD+RW", "DVD+R", "DVD-RW", "DVD-R", "compatibility", "writable DVD" ]
[ "P", "P", "U", "U", "U", "U", "U", "U", "M" ]
716
Algorithmic results for ordered median problems
In a series of papers a new type of objective function in location theory, called ordered median function, has been introduced and analyzed. This objective function unifies and generalizes most common objective functions used in location theory. In this paper we identify finite dominating sets for these models and develop polynomial time algorithms together with a detailed complexity analysis
[ "algorithmic results", "ordered median problems", "objective function", "location theory", "ordered median function", "finite dominating sets", "polynomial time algorithms", "detailed complexity analysis" ]
[ "P", "P", "P", "P", "P", "P", "P", "P" ]