{"name": "train_100", "title": "Separate accounts go mainstream [investment]", "abstract": "New entrants are shaking up the separate-account industry by supplying Web-based platforms that give advisers the tools to pick independent money managers", "fulltext": "", "keywords": "separate-account industry;web-based platforms;investment;financial advisors;independent money managers"} {"name": "train_1000", "title": "Does classicism explain universality? Arguments against a pure classical", "abstract": "component of mind One of the hallmarks of human cognition is the capacity to generalize over arbitrary constituents. Marcus (Cognition 66, p.153; Cognitive Psychology 37, p. 243, 1998) argued that this capacity, called \"universal generalization\" (universality), is not supported by connectionist models. Instead, universality is best explained by classical symbol systems, with connectionism as its implementation. Here it is argued that universality is also a problem for classicism in that the syntax-sensitive rules that are supposed to provide causal explanations of mental processes are either too strict, precluding possible generalizations; or too lax, providing no information as to the appropriate alternative. Consequently, universality is not explained by a classical theory", "fulltext": "", "keywords": "human cognition;connectionist models;classicism;universal generalization;mental processes;universality;syntax-sensitive rules;classical component of mind;causal explanations;classical symbol systems"} {"name": "train_1001", "title": "A conflict between language and atomistic information", "abstract": "Fred Dretske and Jerry Fodor are responsible for popularizing three well-known theses in contemporary philosophy of mind: the thesis of Information-Based Semantics (IBS), the thesis of Content Atomism (Atomism) and the thesis of the Language of Thought (LOT). LOT concerns the semantically relevant structure of representations involved in cognitive states such as beliefs and desires. It maintains that all such representations must have syntactic structures mirroring the structure of their contents. IBS is a thesis about the nature of the relations that connect cognitive representations and their parts to their contents (semantic relations). It holds that these relations supervene solely on relations of the kind that support information content, perhaps with some help from logical principles of combination. Atomism is a thesis about the nature of the content of simple symbols. It holds that each substantive simple symbol possesses its content independently of all other symbols in the representational system. I argue that Dretske's and Fodor's theories are false and that their falsehood results from a conflict IBS and Atomism, on the one hand, and LOT, on the other", "fulltext": "", "keywords": "desires;ibs;cognitive states;beliefs;information-based semantics;language of thought;lot;philosophy of mind;content atomism"} {"name": "train_1002", "title": "Selective representing and world-making", "abstract": "We discuss the thesis of selective representing-the idea that the contents of the mental representations had by organisms are highly constrained by the biological niches within which the organisms evolved. While such a thesis has been defended by several authors elsewhere, our primary concern here is to take up the issue of the compatibility of selective representing and realism. We hope to show three things. 
First, that the notion of selective representing is fully consistent with the realist idea of a mind-independent world. Second, that not only are these two consistent, but that the latter (the realist conception of a mind-independent world) provides the most powerful perspective from which to motivate and understand the differing perceptual and cognitive profiles themselves. Third, that the (genuine and important) sense in which organism and environment may together constitute an integrated system of scientific interest poses no additional threat to the realist conception", "fulltext": "", "keywords": "organisms;mind-independent world;selective representing;realism;cognitive profiles;mental representations;world-making"} {"name": "train_1003", "title": "Lob's theorem as a limitation on mechanism", "abstract": "We argue that Lob's Theorem implies a limitation on mechanism. Specifically, we argue, via an application of a generalized version of Lob's Theorem, that any particular device known by an observer to be mechanical cannot be used as an epistemic authority (of a particular type) by that observer: either the belief-set of such an authority is not mechanizable or, if it is, there is no identifiable formal system of which the observer can know (or truly believe) it to be the theorem-set. This gives, we believe, an important and hitherto unnoticed connection between mechanism and the use of authorities by human-like epistemic agents", "fulltext": "", "keywords": "theorem-set;human-like epistemic agents;lob theorem;belief-set;limitation on mechanism;formal system;epistemic authority"} {"name": "train_1004", "title": "Games machines play", "abstract": "Individual rationality, or doing what is best for oneself, is a standard model used to explain and predict human behavior, and von Neumann-Morgenstern game theory is the classical mathematical formalization of this theory in multiple-agent settings. Individual rationality, however, is an inadequate model for the synthesis of artificial social systems where cooperation is essential, since it does not permit the accommodation of group interests other than as aggregations of individual interests. Satisficing game theory is based upon a well-defined notion of being good enough, and does accommodate group as well as individual interests through the use of conditional preference relationships, whereby a decision maker is able to adjust its preferences as a function of the preferences, and not just the options, of others. This new theory is offered as an alternative paradigm to construct artificial societies that are capable of complex behavior that goes beyond exclusive self interest", "fulltext": "", "keywords": "cooperation;game theory;conditional preference relationships;human behavior;multiple-agent;decision theory;group rationality;artificial social systems;individual rationality;self interest;artificial societies"} {"name": "train_1005", "title": "The average-case identifiability and controllability of large scale systems", "abstract": "Needs for increased product quality, reduced pollution, and reduced energy and material consumption are driving enhanced process integration. This increases the number of manipulated and measured variables required by the control system to achieve its objectives. This paper addresses the question of whether processes tend to become increasingly more difficult to identify and control as the process dimension increases. 
Tools and results of multivariable statistics are used to show that, under a variety of assumed distributions on the elements, square processes of higher dimension tend to be more difficult to identify and control, whereas the expected controllability and identifiability of nonsquare processes depends on the relative numbers of measured and manipulated variables. These results suggest that the procedure of simplifying the control problem so that only a square process is considered is a poor practice for large scale systems", "fulltext": "", "keywords": "process control;chemical engineering;large scale systems;process identification;average-case controllability;high dimension square processes;multivariable statistics;manipulated variables;monte carlo simulations;measured variables;average-case identifiability;enhanced process integration;nonsquare processes"} {"name": "train_1006", "title": "Robust model-order reduction of complex biological processes", "abstract": "This paper addresses robust model-order reduction of a high dimensional nonlinear partial differential equation (PDE) model of a complex biological process. Based on a nonlinear, distributed parameter model of the same process which was validated against experimental data of an existing, pilot-scale biological nutrient removal (BNR) activated sludge plant, we developed a state-space model with 154 state variables. A general algorithm for robustly reducing the nonlinear PDE model is presented and, based on an investigation of five state-of-the-art model-order reduction techniques, we are able to reduce the original model to a model with only 30 states without incurring pronounced modelling errors. The singular perturbation approximation balanced truncating technique is found to give the lowest modelling errors in low frequency ranges and hence is deemed most suitable for controller design and other real-time applications", "fulltext": "", "keywords": "state-space model;nonlinear distributed parameter model;modelling errors;complex biological processes;pilot-scale bnr activated sludge plant;biological nutrient removal activated sludge processes;hankel singular values;high dimensional nonlinear partial differential equation model;singular perturbation approximation balanced truncating technique;controller design;robust model-order reduction"} {"name": "train_1007", "title": "Conditions for decentralized integral controllability", "abstract": "The term decentralized integral controllability (DIC) pertains to the existence of stable decentralized controllers with integral action that have closed-loop properties such as stable independent detuning. It is especially useful to select control structures systematically at the early stage of control system design because the only information needed for DIC is the steady-state process gain matrix. Here, a necessary and sufficient condition conjectured in the literature is proved. The real structured singular value which can exploit realness of the controller gain is used to describe computable conditions for DIC. The primary usage of DIC is to eliminate unworkable pairings. For this, two other simple necessary conditions are proposed. 
Examples are given to illustrate the effectiveness of the proposed conditions for DIC", "fulltext": "", "keywords": "systematic control structure selection;unworkable pairing elimination;stable independent detuning;real structured singular value;controller gain realness;necessary sufficient conditions;steady-state process gain matrix;closed-loop properties;integral action;stable decentralized controllers;control system design;schur complement;decentralized integral controllability"} {"name": "train_1008", "title": "Quadratic programming algorithms for large-scale model predictive control", "abstract": "Quadratic programming (QP) methods are an important element in the application of model predictive control (MPC). As larger and more challenging MPC applications are considered, more attention needs to be focused on the construction and tailoring of efficient QP algorithms. In this study, we tailor and apply a new QP method, called QPSchur, to large MPC applications, such as cross directional control problems in paper machines. Written in C++, QPSchur is an object oriented implementation of a novel dual space, Schur complement algorithm. We compare this approach to three widely applied QP algorithms and show that QPSchur is significantly more efficient (up to two orders of magnitude) than the other algorithms. In addition, detailed simulations are considered that demonstrate the importance of the flexible, object oriented construction of QPSchur, along with additional features for constraint handling, warm starts and partial solution", "fulltext": "", "keywords": "cross directional control problems;quadratic programming algorithms;dual space schur complement algorithm;constraint handling;partial solution;flexible object oriented construction;warm starts;simulations;large-scale model predictive control;qpschur;object oriented implementation;paper machines"} {"name": "train_1009", "title": "Robust output feedback model predictive control using off-line linear matrix inequalities", "abstract": "A fundamental question about model predictive control (MPC) is its robustness to model uncertainty. In this paper, we present a robust constrained output feedback MPC algorithm that can stabilize plants with both polytopic uncertainty and norm-bound uncertainty. The design procedure involves off-line design of a robust constrained state feedback MPC law and a state estimator using linear matrix inequalities (LMIs). Since we employ an off-line approach for the controller design which gives a sequence of explicit control laws, we are able to analyze the robust stabilizability of the combined control laws and estimator, and by adjusting the design parameters, guarantee robust stability of the closed-loop system in the presence of constraints. The algorithm is illustrated with two examples", "fulltext": "", "keywords": "model uncertainty robustness;explicit control law sequence;closed-loop system;robust constrained state feedback mpc law;robust constrained output feedback mpc algorithm;off-line linear matrix inequalities;robust output feedback model predictive control;asymptotically stable invariant ellipsoid;polytopic uncertainty;norm-bound uncertainty;controller design procedure;state estimator"} {"name": "train_1010", "title": "Robust self-tuning PID controller for nonlinear systems", "abstract": "In this paper, we propose a robust self-tuning PID controller suitable for nonlinear systems. The control system employs a preload relay (P_Relay) in series with a PID controller. 
The P_Relay ensures a high gain to yield a robust performance. However, it also incurs a chattering phenomenon. In this paper, instead of viewing the chattering as an undesirable yet inevitable feature, we use it as a naturally occurring signal for tuning and re-tuning the PID controller as the operating regime digresses. No other explicit input signal is required. Once the PID controller is tuned for a particular operating point, the relay may be disabled and chattering ceases correspondingly. However, it is invoked when there is a change in setpoint to another operating regime. In this way, the approach is also applicable to time-varying systems as the PID tuning can be continuous, based on the latest set of chattering characteristics. Analysis is provided on the stability properties of the control scheme. Simulation results for the level control of fluid in a spherical tank using the scheme are also presented", "fulltext": "", "keywords": "robust performance;stability properties;relay disabling;controller re-tuning;robust self-tuning pid controller;simulation results;fluid level control;operating regime;time-varying systems;continuous tuning;controller tuning;chattering phenomenon;naturally occurring signal;nonlinear systems;spherical tank;preload relay"} {"name": "train_1011", "title": "A self-organizing context-based approach to the tracking of multiple robot trajectories", "abstract": "We have combined competitive and Hebbian learning in a neural network designed to learn and recall complex spatiotemporal sequences. In such sequences, a particular item may occur more than once or the sequence may share states with another sequence. Processing of repeated/shared states is a hard problem that occurs very often in the domain of robotics. The proposed model consists of two groups of synaptic weights: competitive interlayer and Hebbian intralayer connections, which are responsible for encoding respectively the spatial and temporal features of the input sequence. Three additional mechanisms allow the network to deal with shared states: context units, neurons disabled from learning, and redundancy used to encode sequence states. The network operates by determining the current and the next state of the learned sequences. The model is simulated over various sets of robot trajectories in order to evaluate its storage and retrieval abilities; its sequence sampling effects; its robustness to noise and its fault tolerance", "fulltext": "", "keywords": "hebbian intralayer connections;context units;self-organizing context-based approach;competitive learning;sequence sampling effects;shared states;unsupervised learning;sequence states;trajectories tracking;storage abilities;robot trajectories;fault tolerance;synaptic weights;retrieval abilities;complex spatiotemporal sequences;competitive interlayer connections;hebbian learning"} {"name": "train_1012", "title": "Evolving receptive-field controllers for mobile robots", "abstract": "The use of evolutionary methods to generate controllers for real-world autonomous agents has attracted attention. Most of the pertinent research has employed genetic algorithms or variations thereof. Research has applied an alternative evolutionary method, evolution strategies, to the generation of simple Braitenberg vehicles. This application accelerates the development of such controllers by more than an order of magnitude (a few hours compared to more than two days). 
Motivated by this useful speedup, the paper investigates the evolution of more complex architectures, receptive-field controllers, that can employ nonlinear interactions and, therefore, can yield more complex behavior. It is interesting to note that the evolution strategy yields the same efficacy in terms of function evaluations, even though the second class of controllers requires up to 10 times more parameters than the simple Braitenberg architecture. In addition to the speedup, there is an important theoretical reason for preferring an evolution strategy over a genetic algorithm for this problem, namely the presence of epistasis", "fulltext": "", "keywords": "evolutionary methods;simple braitenberg vehicles;scalability;nonlinear interactions;complex behavior;evolution strategies;real-world autonomous agents;mobile robots;radial basis functions;receptive-field controllers"} {"name": "train_1013", "title": "A scalable intelligent takeoff controller for a simulated running jointed leg", "abstract": "Running with jointed legs poses a difficult control problem in robotics. Neural controllers are attractive because they allow the robot to adapt to changing environmental conditions. However, scalability is an issue with many neural controllers. The paper describes the development of a scalable neurofuzzy controller for the takeoff phase of the running stride. Scalability is achieved by selecting a controller whose size does not grow with the dimensionality of the problem. Empirical results show that with proper design the takeoff controller scales from a leg with a single movable link to one with three movable links without a corresponding growth in size and without a loss of accuracy", "fulltext": "", "keywords": "neural controllers;scalability;simulated running jointed leg;intelligent robotic control;running stride;scalable neurofuzzy controller;changing environmental conditions;takeoff phase;scalable intelligent takeoff controller"} {"name": "train_1014", "title": "Modelling of complete robot dynamics based on a multi-dimensional, RBF-like neural architecture", "abstract": "A neural network based identification approach of manipulator dynamics is presented. For a structured modelling, RBF-like static neural networks are used in order to represent and adapt all model parameters with their non-linear dependences on the joint positions. The neural architecture is hierarchically organised to reach optimal adjustment to structural a priori knowledge about the identification problem. The model structure is substantially simplified by general system analysis independent of robot type, but many specific features of the utilised experimental robot are also taken into account. A fixed, grid based neuron placement together with application of B-spline polynomial basis functions is utilised favourably for a very effective recursive implementation of the neural architecture. 
Thus, an online identification of a dynamic model is presented for a complete 6 joint industrial robot", "fulltext": "", "keywords": "complete 6 joint industrial robot;online identification;fixed grid based neuron placement;online learning;multi-dimensional rbf-like neural architecture;recursive implementation;manipulator dynamics;general system analysis;neural architecture;complete robot dynamics;static neural networks;dynamic model;b-spline polynomial basis functions"} {"name": "train_1015", "title": "Scalable techniques from nonparametric statistics for real time robot learning", "abstract": "Locally weighted learning (LWL) is a class of techniques from nonparametric statistics that provides useful representations and training algorithms for learning about complex phenomena during autonomous adaptive control of robotic systems. The paper introduces several LWL algorithms that have been tested successfully in real-time learning of complex robot tasks. We discuss two major classes of LWL, memory-based LWL and purely incremental LWL that does not need to remember any data explicitly. In contrast to the traditional belief that LWL methods cannot work well in high-dimensional spaces, we provide new algorithms that have been tested on up to 90 dimensional learning problems. The applicability of our LWL algorithms is demonstrated in various robot learning examples, including the learning of devil-sticking, pole-balancing by a humanoid robot arm, and inverse-dynamics learning for a seven and a 30 degree-of-freedom robot. In all these examples, the application of our statistical neural networks techniques allowed either faster or more accurate acquisition of motor control than classical control engineering", "fulltext": "", "keywords": "real time robot learning;purely incremental learning;autonomous adaptive control;scalable techniques;memory-based learning;inverse-dynamics learning;nonparametric regression;locally weighted learning;humanoid robot arm;pole-balancing;devil-sticking;nonparametric statistics;complex phenomena;statistical neural networks techniques;training algorithms"} {"name": "train_1016", "title": "A scalable model of cerebellar adaptive timing and sequencing: the recurrent slide and latch (RSL) model", "abstract": "From the dawn of modern neural network theory, the mammalian cerebellum has been a favored object of mathematical modeling studies. Early studies focused on the fanout, convergence, thresholding, and learned weighting of perceptual-motor signals within the cerebellar cortex. This led to the still viable idea that the granule cell stage in the cerebellar cortex performs a sparse expansive recoding of the time-varying input vector. This recoding reveals and emphasizes combinations in a distributed representation that serves as a basis for the learned, state-dependent control actions engendered by cerebellar outputs to movement related centers. To make optimal use of available signals, the cerebellum must be able to sift the evolving state representation for the most reliable predictors of the need for control actions, and to use those predictors even if they appear only transiently and well in advance of the optimal time for initiating the control action. The paper proposes a modification to prior population models for cerebellar adaptive timing and sequencing. 
Since it replaces a population with a single element, the proposed RSL model is in one sense maximally efficient, and therefore optimal from the perspective of scalability", "fulltext": "", "keywords": "recurrent slide and latch model;recurrent network;sparse expansive recoding;mammalian cerebellum;cerebellar sequencing;time-varying input vector;cerebellar adaptive timing;scalable model;neural network theory;granule cell stage;distributed representation"} {"name": "train_1017", "title": "Searching a scalable approach to cerebellar based control", "abstract": "Decades of research into the structure and function of the cerebellum have led to a clear understanding of many of its cells, as well as how learning might take place. Furthermore, there are many theories on what signals the cerebellum operates on, and how it works in concert with other parts of the nervous system. Nevertheless, the application of computational cerebellar models to the control of robot dynamics remains in its infant state. To date, few applications have been realized. The currently emerging family of light-weight robots poses a new challenge to robot control: due to their complex dynamics, traditional methods, depending on a full analysis of the dynamics of the system, are no longer applicable since the joints influence each other's dynamics during movement. Can artificial cerebellar models compete here?", "fulltext": "", "keywords": "nervous system;light-weight robots;cerebellar based control;robot control;computational cerebellar models;scalable approach"} {"name": "train_1018", "title": "Fabrication of polymeric microlens of hemispherical shape using micromolding", "abstract": "Polymeric microlenses play an important role in reducing the size, weight, and cost of optical data storage and optical communication systems. We fabricate polymeric microlenses using the microcompression molding process. The design and fabrication procedures for mold insertion are simplified using silicon instead of metal. PMMA powder is used as the molding material. Governed by process parameters such as temperature and pressure histories, the micromolding process is controlled to minimize various defects that develop during the molding process. The radius of curvature and magnification ratio of fabricated microlens are measured as 150 mu m and over 3.0, respectively", "fulltext": "", "keywords": "microcompression molding process;weight;optical communication systems;size;fabrication procedures;optical data storage;molding material;design procedures;temperature;polymeric microlens fabrication;micromolding;mold insertion;cost;micromolding process;pressure;process parameters;300 micron;magnification ratio;polymeric microlenses;pmma powder;silicon;hemispherical shape microlens"} {"name": "train_1019", "title": "Optical setup and analysis of disk-type photopolymer high-density holographic storage", "abstract": "A relatively simple scheme for disk-type photopolymer high-density holographic storage based on angular and spatial multiplexing is described. The effects of the optical setup on the recording capacity and density are studied. Calculations and analysis show that this scheme is more effective than a scheme based on the spatioangular multiplexing for disk-type photopolymer high-density holographic storage, which has a limited medium thickness. 
An optimal beam recording angle also exists to achieve maximum recording capacity and density", "fulltext": "", "keywords": "optimal beam recording angle;spatial multiplexing;spatio-angular multiplexing;disk-type photopolymer high-density holographic storage;angular multiplexing;recording capacity;recording density;optical setup;limited medium thickness;maximum recording capacity;maximum density"} {"name": "train_102", "title": "Harmless delays in Cohen-Grossberg neural networks", "abstract": "Without assuming monotonicity and differentiability of the activation functions and any symmetry of interconnections, we establish some sufficient conditions for the globally asymptotic stability of a unique equilibrium for the Cohen-Grossberg (1983) neural network with multiple delays. Lyapunov functionals and functions combined with the Razumikhin technique are employed. The criteria are all independent of the magnitudes of the delays, and thus the delays under these conditions are harmless", "fulltext": "", "keywords": "multiple delays;activation functions;monotonicity;differentiability;razumikhin technique;harmless delays;cohen-grossberg neural networks;lyapunov functionals;interconnections;globally asymptotic stability"} {"name": "train_1020", "title": "Supersampling multiframe blind deconvolution resolution enhancement of adaptive optics compensated imagery of low earth orbit satellites", "abstract": "We describe a postprocessing methodology for reconstructing undersampled image sequences with randomly varying blur that can provide image enhancement beyond the sampling resolution of the sensor. This method is demonstrated on simulated imagery and on adaptive-optics-(AO)-compensated imagery taken by the Starfire Optical Range 3.5-m telescope that has been artificially undersampled. Also shown are the results of multiframe blind deconvolution of some of the highest quality optical imagery of low earth orbit satellites collected with a ground-based telescope to date. The algorithm used is a generalization of multiframe blind deconvolution techniques that include a representation of spatial sampling by the focal plane array elements based on a forward stochastic model. This generalization enables the random shifts and shape of the AO-compensated point spread function (PSF) to be used to partially eliminate the aliasing effects associated with sub-Nyquist sampling of the image by the focal plane array. The method could be used to reduce resolution loss that occurs when imaging in wide-field-of-view (FOV) modes", "fulltext": "", "keywords": "randomly varying blur;starfire optical range telescope;sub-nyquist sampling;wide-field-of-view modes;spatial sampling;random shifts;image enhancement;postprocessing methodology;multiframe blind deconvolution;sensor sampling resolution;simulated imagery;ground-based telescope;forward stochastic model;undersampled image sequence reconstruction;focal plane array elements;resolution loss;3.5 m;adaptive optics compensated imagery;aliasing effects;low earth orbit satellites;ao-compensated point spread function;supersampling multiframe blind deconvolution resolution enhancement"} {"name": "train_1021", "title": "Error-probability analysis of MIL-STD-1773 optical fiber data buses", "abstract": "We have analyzed the error probabilities of MIL-STD-1773 optical fiber data buses with three modulation schemes, namely, original Manchester II bi-phase coding, PTMBC, and EMBC-BSF. 
Using these derived expressions of error probabilities, we can also compare the receiver sensitivities of such optical fiber data buses", "fulltext": "", "keywords": "receiver sensitivities;manchester bi-phase coding;optical fiber data buses;modulation schemes;error probabilities"} {"name": "train_1022", "title": "Bad pixel identification by means of principal components analysis", "abstract": "Bad pixels are defined as those pixels showing a temporal evolution of the signal different from the rest of the pixels of a given array. Principal component analysis helps us to understand the definition of a statistical distance associated with each pixel, and using this distance it is possible to identify those pixels labeled as bad pixels. The spatiality of a pixel is also calculated. An assumption about the normality of the distribution of the distances of the pixels is revised. Although the influence on the robustness of the identification algorithm is negligible, the definition of a parameter related with this nonnormality helps to identify those principal components and eigenimages responsible for the departure from a multinormal distribution. The method for identifying the bad pixels is successfully applied to a set of frames obtained from a CCD visible and a focal plane array (FPA) IR camera", "fulltext": "", "keywords": "principal components analysis;robustness;bad pixel identification;multinormal distribution;ccd visible camera;temporal evolution;statistical distance;focal plane array;ir camera;identification algorithm;eigenimages"} {"name": "train_1023", "title": "Simple nonlinear dual-window operator for edge detection", "abstract": "We propose a nonlinear edge detection technique based on a two-concentric-circular-window operator. We perform a preliminary selection of edge candidates using a standard gradient and use the dual-window operator to reveal edges as zero-crossing points of a simple difference function depending only on the minimum and maximum values in the two windows. Comparisons with other well-established techniques are reported in terms of visual appearance and computational efficiency. They show that detected edges are surely comparable with Canny's and Laplacian of Gaussian algorithms, with a noteworthy reduction in terms of computational load", "fulltext": "", "keywords": "canny's algorithms;difference function;edge detection;nonlinear dual-window operator;gaussian algorithms;two-concentric-circular-window operator;nonlinear processing;maximum values;standard gradient;computational load;minimum values;dual window operator;zero-crossing points;laplacian algorithms;nonlinear edge detection technique;detected edges;computational efficiency"} {"name": "train_1024", "title": "Rational systems exhibit moderate risk aversion with respect to \"gambles\" on variable-resolution compression", "abstract": "In an embedded wavelet scheme for progressive transmission, a tree structure naturally defines the spatial relationship on the hierarchical pyramid. Transform coefficients over each tree correspond to a unique local spatial region of the original image, and they can be coded bit-plane by bit-plane through successive-approximation quantization. After receiving the approximate value of some coefficients, the decoder can obtain a reconstructed image. 
We show a rational system for progressive transmission that, in the absence of a priori knowledge about regions of interest, chooses at any truncation time among alternative trees for further transmission in such a way as to avoid certain forms of behavioral inconsistency. We prove that some rational transmission systems might exhibit aversion to risk involving \"gambles\" on tree-dependent quality of encoding while others favor taking such risks. Based on an acceptable predictor for visual distinctness from digital imagery, we demonstrate that, without any outside knowledge, risk-prone systems as well as those with strong risk aversion appear incapable of attaining the quality of reconstructions that can be achieved with moderate risk-averse behavior", "fulltext": "", "keywords": "tree structure;hierarchical pyramid spatial relationship;progressive transmission;digital imagery;decision problem;progressive transmission utility functions;moderate risk aversion;behavioral inconsistency avoidance;gambles;visual distinctness;variable-resolution compression;image encoding;reconstructed image;transform coefficients;truncation time;rational system;embedded wavelet scheme;local spatial region;successive-approximation quantization;information theoretic measure;rate control optimization;acceptable predictor;embedded coding"} {"name": "train_1025", "title": "Watermarking techniques for electronic delivery of remote sensing images", "abstract": "Earth observation missions have recently attracted a growing interest, mainly due to the large number of possible applications capable of exploiting remotely sensed data and images. Along with the increase of market potential, the need arises for the protection of the image products. Such a need is a very crucial one, because the Internet and other public/private networks have become preferred means of data exchange. A critical issue arising when dealing with digital image distribution is copyright protection. Such a problem has been largely addressed by resorting to watermarking technology. A question that obviously arises is whether the requirements imposed by remote sensing imagery are compatible with existing watermarking techniques. On the basis of these motivations, the contribution of this work is twofold: assessment of the requirements imposed by remote sensing applications on watermark-based copyright protection, and modification of two well-established digital watermarking techniques to meet such constraints. More specifically, the concept of near-lossless watermarking is introduced and two possible algorithms matching such a requirement are presented. Experimental results are shown to measure the impact of watermark introduction on a typical remote sensing application, i.e., unsupervised image classification", "fulltext": "", "keywords": "near-lossless watermarking;remote sensing images;electronic delivery;digital image distribution;digital watermarking;watermarking techniques;copyright protection;earth observation missions;unsupervised image classification"} {"name": "train_1026", "title": "Use of SPOT images as a tool for coastal zone management and monitoring of environmental impacts in the coastal zone", "abstract": "Modern techniques such as remote sensing have been one of the main factors leading toward the achievement of serious plans regarding coastal management. A multitemporal analysis of land use in certain areas of the Colombian Caribbean Coast is described. 
It mainly focuses on environmental impacts caused by anthropogenic activities, such as deforestation of mangroves due to shrimp farming. Selection of sensitive areas, percentage of destroyed mangroves, possible endangered areas, etc., are some of the results of this analysis. Recommendations for a coastal management plan in the area have also resulted from this analysis. Some other consequences of the deforestation of mangroves in the coastal zone and the construction of shrimp ponds are also analyzed, such as the increase of erosion problems in these areas and water pollution, among others. The increase of erosion in these areas has also changed part of their morphology, which has been studied by the analysis of SPOT images in previous years. A serious concern exists about the future of these areas. For this reason, new techniques like satellite images (SPOT) have been applied with good results, leading to more effective control and coastal management in the area. The use of SPOT images to study changes of the land use of the area is a useful technique to determine patterns of human activities and suggest solutions for severe problems in these areas", "fulltext": "", "keywords": "colombian caribbean coast;satellite images;remote sensing;erosion problems;shrimp ponds;endangered areas;shrimp farming;vector overlay;anthropogenic activities;sedimentation;water pollution;human activities;land use;coastal zone management;mangrove deforestation;supervised classification;spot images;environmental impact monitoring;multitemporal analysis;vectorization"} {"name": "train_1027", "title": "Extracting straight road structure in urban environments using IKONOS satellite imagery", "abstract": "We discuss a fully automatic technique for extracting roads in urban environments. The method has its bases in a vegetation mask derived from multispectral IKONOS data and in texture derived from panchromatic IKONOS data. These two techniques together are used to distinguish road pixels. We then move from individual pixels to an object-based representation that allows reasoning on a higher level. Recognition of individual segments and intersections and the relationships among them are used to determine underlying road structure and to then logically hypothesize the existence of additional road network components. We show results on an image of San Diego, California. The object-based processing component may be adapted to utilize other basis techniques as well, and could be used to build a road network in any scene having a straight-line structured topology", "fulltext": "", "keywords": "object-based processing component;vegetation mask;san diego;panchromatic ikonos data;straight road structure;individual segment recognition;fully automatic technique;vectorized road network;high-resolution imagery;road network components;large-scale feature extraction;urban environments;ikonos satellite imagery;object-based representation;texture;higher level reasoning;straight-line structured topology;road pixels"} {"name": "train_1028", "title": "Novel approach to super-resolution pits readout", "abstract": "We propose a novel method to realize the readout of super-resolution pits by using a super-resolution reflective film to replace the reflective layer of the conventional ROM. At the same time, by using Sb as the super-resolution reflective layer and SiN as a dielectric layer, the super-resolution pits with diameters of 380 nm were read out by a setup whose laser wavelength is 632.8 nm and numerical aperture is 0.40. 
In addition, the influence of the Sb thin film thickness on the readout signal was investigated; the results showed that the optimum Sb thin film thickness is 28 to 30 nm, and the maximum CNR is 38 to 40 dB", "fulltext": "", "keywords": "super-resolution reflective film;sb super-resolution reflective layer;numerical aperture;632.8 nm;sb-sin;380 nm;readout signal;sin dielectric layer;sb thin film thickness;super-resolution pits readout;maximum cnr;28 to 30 nm"} {"name": "train_1029", "title": "Effect of insulation layer on transcribability and birefringence distribution in optical disk substrate", "abstract": "As the need for information storage media with high storage density increases, digital video disks (DVDs) with smaller recording marks and thinner optical disk substrates than those of conventional DVDs are being required. Therefore, improving the replication quality of land-groove or pit structure and reducing the birefringence distribution are emerging as important criteria in the fabrication of high-density optical disk substrates. We control the transcribability and distribution of birefringence by inserting an insulation layer under the stamper during injection-compression molding of DVD RAM substrates. The effects of the insulation layer on the geometrical and optical properties, such as transcribability and birefringence distribution, are examined experimentally. The inserted insulation layer is found to be very effective in improving the quality of replication and leveling out the first peak of the gapwise birefringence distribution near the mold wall and reducing the average birefringence value, because the insulation layer retarded the growth of the solidified layer", "fulltext": "", "keywords": "land-groove;information storage media;optical properties;mold wall;dvd ram substrates;stamper;injection-compression molding;thinner optical disk substrates;insulation layer;gapwise birefringence distribution;polyimide thermal insulation layer;solidified layer growth retardation;transcribability;smaller recording marks;pit structure;birefringence distribution;high storage density;digital video disks;optical disk substrate;fabrication;geometrical properties;replication quality"} {"name": "train_1030", "title": "Comparison of automated digital elevation model extraction results using along-track ASTER and across-track SPOT stereo images", "abstract": "A digital elevation model (DEM) can be extracted automatically from stereo satellite images. During the past decade, the most common satellite data used to extract DEM was the across-track SPOT. Recently, the addition of along-track ASTER data, which can be downloaded freely, provides another attractive alternative to extract DEM data. This work compares the automated DEM extraction results using an ASTER stereo pair and a SPOT stereo pair over an area of hilly mountains in Drum Mountain, Utah, when compared to a USGS 7.5-min DEM standard product. The result shows that SPOT produces better DEM results in terms of accuracy and details, if the radiometric variations between the images, taken on subsequent satellite revolutions, are small. Otherwise, the ASTER stereo pair is a better choice because of simultaneous along-track acquisition during a single pass. 
Compared to the USGS 7.5-min DEM, the ASTER and the SPOT extracted DEMs have a standard deviation of 11.6 and 4.6 m, respectively", "fulltext": "", "keywords": "automated digital elevation model extraction;radiometric variations;across-track spot stereo images;stereo satellite images;along-track aster data;spot stereo image pair;aster stereo pair;simultaneous along-track acquisition"} {"name": "train_1031", "title": "Noise-constrained hyperspectral data compression", "abstract": "Storage and transmission requirements for hyperspectral data sets are significant. To reduce hardware costs, well-designed compression techniques are needed to preserve information content while maximizing compression ratios. Lossless compression techniques maintain data integrity, but yield small compression ratios. We present a slightly lossy compression algorithm that uses the noise statistics of the data to preserve information content while maximizing compression ratios. The adaptive principal components analysis (APCA) algorithm uses noise statistics to determine the number of significant principal components and selects only those that are required to represent each pixel to within the noise level. We demonstrate the effectiveness of these methods with airborne visible/infrared spectrometer (AVIRIS), hyperspectral digital imagery collection experiment (HYDICE), hyperspectral mapper (HYMAP), and Hyperion datasets", "fulltext": "", "keywords": "airborne visible/infrared spectrometer hyperspectral digital imagery collection experiment;transmission requirements;hardware costs;slightly lossy compression algorithm;hymap;storage requirements;noise-constrained hyperspectral data compression;data integrity;aviris hydice;hyperion datasets;adaptive principal components analysis algorithm;information content;hyperspectral mapper;noise statistics;lossless compression techniques;gaussian statistics;hyperspectral data sets;noise level;compression ratios"} {"name": "train_1032", "title": "Satellite image collection optimization", "abstract": "Imaging satellite systems represent a high capital cost. Optimizing the collection of images is critical for both satisfying customer orders and building a sustainable satellite operations business. We describe the functions of an operational, multivariable, time dynamic optimization system that maximizes the daily collection of satellite images. A graphical user interface allows the operator to quickly see the results of what if adjustments to an image collection plan. Used for both long range planning and daily collection scheduling of Space Imaging's IKONOS satellite, the satellite control and tasking (SCT) software allows collection commands to be altered up to 10 min before upload to the satellite", "fulltext": "", "keywords": "satellite control tasking software;long range planning;collection commands;graphical user interface;satellite image collection optimization;imaging satellite systems;daily collection scheduling;space imaging ikonos satellite;image collection plan;multivariable time dynamic optimization system"} {"name": "train_1033", "title": "Optical two-step modified signed-digit addition based on binary logic gates", "abstract": "A new modified signed-digit (MSD) addition algorithm based on binary logic gates is proposed for parallel computing. It is shown that by encoding each of the input MSD digits and flag digits into a pair of binary bits, the number of addition steps can be reduced to two. 
The flag digit is introduced to characterize the next low order pair (NLOP) of the input digits in order to suppress carry propagation. The rules for two-step addition of binary coded MSD (BCMSD) numbers are formulated and can be implemented using an optical shadow-casting logic system", "fulltext": "", "keywords": "binary bits;two-step addition;input msd digits;modified signed-digit addition algorithm;optical two-step modified signed-digit addition;flag digits;optical shadow-casting logic system;addition steps;low order pair;carry propagation suppression;binary logic gates;binary coded msd;parallel computing"} {"name": "train_1034", "title": "Vibration control of the rotating flexible-shaft/multi-flexible-disk system with the eddy-current damper", "abstract": "In this paper, the rotating flexible-Timoshenko-shaft/flexible-disk coupling system is formulated by applying the assumed-mode method into the kinetic and strain energies, and the virtual work done by the eddy-current damper. From Lagrange's equations, the resulting discretized equations of motion can be simplified as a bilinear system (BLS). Introducing the control laws, including the quadratic, nonlinear and optimal feedback control laws, into the BLS, it is found that the eddy-current damper can be used to suppress flexible and shear vibrations simultaneously, and the system is globally asymptotically stable. Numerical results are provided to validate the theoretical analysis", "fulltext": "", "keywords": "shear vibrations;quadratic feedback control laws;rotating flexible-shaft/multi-flexible-disk system;flexible vibrations;bilinear system;nonlinear feedback control laws;discretized equations of motion;assumed-mode method;lagrange's equations;virtual work;optimal feedback control laws;eddy-current damper;rotating flexible-timoshenko-shaft/flexible-disk coupling system"} {"name": "train_1035", "title": "H/sub 2/ optimization of the three-element type dynamic vibration absorbers", "abstract": "The dynamic vibration absorber (DVA) is a passive vibration control device which is attached to a vibrating body (called a primary system) subjected to exciting force or motion. In this paper, we will discuss an optimization problem of the three-element type DVA on the basis of the H/sub 2/ optimization criterion. The objective of the H/sub 2/ optimization is to reduce the total vibration energy of the system for overall frequencies; the total area under the power spectrum response curve is minimized in this criterion. If the system is subjected to random excitation instead of sinusoidal excitation, then the H/sub 2/ optimization is probably more desirable than the popular H/sub infinity / optimization. In the past decade there has been increasing interest in the three-element type DVA. However, most previous studies on this type of DVA were based on the H/sub infinity / optimization design, and no one has been able to find the algebraic solution as of yet. We found a closed-form exact solution for a special case where the primary system has no damping. Furthermore, the general case solution including the damped primary system is presented in the form of a numerical solution. The optimum parameters obtained here are compared to those of the conventional Voigt type DVA. 
They are also compared to other optimum parameters based on the H/sub infinity / criterion", "fulltext": "", "keywords": "voigt type dynamic vibration absorber;h/sub 2/ optimization;power spectrum response;three-element type dynamic vibration absorbers;passive vibration control"} {"name": "train_1036", "title": "Nonlinear control of a shape memory alloy actuated manipulator", "abstract": "This paper presents a nonlinear, robust control algorithm for accurate positioning of a single degree of freedom rotary manipulator actuated by Shape Memory Alloy (SMA). A model for an SMA actuated manipulator is presented. The model includes nonlinear dynamics of the manipulator, a constitutive model of Shape Memory Alloy, and electrical and heat transfer behavior of SMA wire. This model is used for open and closed loop motion simulations of the manipulator. Experiments are presented that show results similar to both closed and open loop simulation results. Due to modeling uncertainty and nonlinear behavior of the system, classic control methods such as Proportional-Integral-Derivative control are not able to provide fast and accurate performance. Hence a nonlinear, robust control algorithm is presented based on Variable Structure Control. This algorithm is a control gain switching technique based on the weighted average of position and velocity feedbacks. This method has been designed through simulation and tested experimentally. Results show fast, accurate, and robust performance of the control system. Computer simulation and experimental results for different stabilization and tracking situations are also presented", "fulltext": "", "keywords": "feedback;control gain switching;manipulator;stabilization;shape memory alloy;variable structure control;positioning;tracking;open loop;nonlinear dynamics;nonlinear control;closed loop"} {"name": "train_1037", "title": "A stochastic averaging approach for feedback control design of nonlinear systems under random excitations", "abstract": "This paper presents a method for designing and quantifying the performance of feedback stochastic controls for nonlinear systems. The design makes use of the method of stochastic averaging to reduce the dimension of the state space and to derive the Ito stochastic differential equation for the response amplitude process. The moment equation of the amplitude process closed by the Rayleigh approximation is used as a means to characterize the transient performance of the feedback control. The steady state and transient response of the amplitude process are used as the design criteria for choosing the feedback control gains. Numerical examples are studied to demonstrate the performance of the control", "fulltext": "", "keywords": "steady state;rayleigh approximation;stochastic averaging;transient response;random excitations;ito stochastic differential equation;feedback control;nonlinear systems;feedback stochastic controls"} {"name": "train_1038", "title": "The analysis and control of longitudinal vibrations from wave viewpoint", "abstract": "The analysis and control of longitudinal vibrations in a rod from feedback wave viewpoint are synthesized. Both collocated and noncollocated feedback wave control strategies are explored. The control design is based on the local properties of wave transmission and reflection in the vicinity of the control force applied area, hence there is no complex closed form solution involved. 
The controller is designed to achieve various goals, such as absorbing the incoming vibration energy, creating a vibration free zone and eliminating standing waves in the structure. The findings appear to be very useful in practice due to the simplicity in the implementation of the controllers", "fulltext": "", "keywords": "control force;feedback waves;vibration free zone;vibration energy;control design;collocated feedback wave control;noncollocated feedback wave control;standing waves;complex closed form solution;longitudinal vibration control;wave transmission;wave reflection"} {"name": "train_1039", "title": "Design of an adaptive vibration absorber to reduce electrical transformer structural vibration", "abstract": "This paper considers the design of a vibration absorber to reduce structural vibration at multiple frequencies, with an enlarged bandwidth control at these target frequencies. While the basic absorber is a passive device, a control system has been added to facilitate tuning, effectively giving the combination of a passive and active device, which leads to far greater stability and robustness. Experimental results demonstrating the effectiveness of the absorber are also described", "fulltext": "", "keywords": "bandwidth control;adaptive vibration absorber;structural vibration;electrical transformer"} {"name": "train_1040", "title": "CRONE control: principles and extension to time-variant plants with asymptotically constant coefficients", "abstract": "The principles of CRONE control, a frequency-domain robust control design methodology based on fractional differentiation, are presented. Continuous time-variant plants with asymptotically constant coefficients are analysed in the frequency domain, through their representation using time-variant frequency responses. A stability theorem for feedback systems including time-variant plants with asymptotically constant coefficients is proposed. Finally, CRONE control is extended to robust control of these plants", "fulltext": "", "keywords": "feedback systems;frequency-domain robust control design;crone control;stability theorem;asymptotically constant coefficients;time-variant frequency responses;automatic control;time-variant plants;robust control;fractional differentiation"} {"name": "train_1041", "title": "Fractional differentiation in passive vibration control", "abstract": "From a single-degree-of-freedom model used to illustrate the concept of vibration isolation, a method to transform the design for a suspension into a design for a robust controller is presented. Fractional differentiation is used to model the viscoelastic behaviour of the suspension. The use of fractional differentiation not only permits optimisation of just four suspension parameters, showing the 'compactness' of the fractional derivative operator, but also leads to robustness of the suspension's performance to uncertainty of the sprung mass. As an example, an engine suspension is studied", "fulltext": "", "keywords": "vibration isolation;sprung mass;engine suspension;suspension;passive vibration control;robust controller;viscoelastic behaviour;fractional differentiation"} {"name": "train_1042", "title": "Chaotic phenomena and fractional-order dynamics in the trajectory control of redundant manipulators", "abstract": "Redundant manipulators have some advantages when compared with classical arms because they allow the trajectory optimization, both in free space and in the presence of obstacles, and the resolution of singularities. 
For this type of arm, the proposed kinematic control algorithms adopt generalized inverse matrices but, in general, the corresponding trajectory planning schemes show important limitations. Motivated by these problems, this paper studies the chaos revealed by the pseudoinverse-based trajectory planning algorithms, using the theory of fractional calculus", "fulltext": "", "keywords": "trajectory planning schemes;classical arms;kinematic control algorithms;trajectory control;generalized inverse matrices;fractional-order dynamics;chaotic phenomena;redundant manipulators;fractional calculus;trajectory optimization"} {"name": "train_1043", "title": "Fractional motion control: application to an XY cutting table", "abstract": "In path tracking design, the dynamics of actuators must be taken into account in order to reduce overshoots appearing for small displacements. A new approach to path tracking using fractional differentiation is proposed with its application to an XY cutting table. It permits the generation of an optimal movement reference-input leading to a minimum path completion time, taking into account the maximum velocity, acceleration and torque as well as the bandwidth of the closed-loop system. Fractional differentiation is used here through a Davidson-Cole filter. A methodology aimed at improving the accuracy, especially at checkpoints, is presented. The reference-input obtained is compared with a spline function. Both are applied to an XY cutting table model and the actuator outputs compared", "fulltext": "", "keywords": "xy cutting table;closed-loop system;davidson-cole filter;minimum path completion time;spline function;optimization;actuators;fractional motion control;path tracking design;fractional differentiation"} {"name": "train_1044", "title": "Analogue realizations of fractional-order controllers", "abstract": "An approach to the design of analogue circuits, implementing fractional-order controllers, is presented. The suggested approach is based on the use of continued fraction expansions; in the case of negative coefficients in a continued fraction expansion, the use of negative impedance converters is proposed. Several possible methods for obtaining suitable rational approximations and continued fraction expansions are discussed. An example of realization of a fractional-order I/sup lambda / controller is presented and illustrated by the obtained measurements. The suggested approach can be used for the control of very fast processes, where the use of digital controllers is difficult or impossible", "fulltext": "", "keywords": "rational approximations;negative impedance converters;negative coefficients;fractional-order controllers;continued fraction expansions;analogue realizations;fast processes;fractional integration;digital controllers;fraction expansion;fractional differentiation"} {"name": "train_1045", "title": "Using fractional order adjustment rules and fractional order reference models", "abstract": "in model-reference adaptive control This paper investigates the use of Fractional Order Calculus (FOC) in conventional Model Reference Adaptive Control (MRAC) systems. Two modifications to the conventional MRAC are presented, i.e., the use of a fractional order parameter adjustment rule and the employment of a fractional order reference model. 
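The fractional-order operators used by the CRONE, fractional motion control and fractional MRAC entries above all need a computable D^alpha. A common discretisation (only a sketch, not any of these papers' specific algorithms) is the Grünwald-Letnikov sum:

```python
import numpy as np

def gl_fractional_derivative(f, alpha, h):
    """Grunwald-Letnikov approximation of the order-alpha derivative of
    samples f taken with step h (a standard discretisation, sketched here)."""
    n = len(f)
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):              # recurrence for (-1)^k * C(alpha, k)
        w[k] = w[k - 1] * (k - 1 - alpha) / k
    d = np.array([np.dot(w[:j + 1], f[j::-1]) for j in range(n)])
    return d / h ** alpha

t = np.linspace(0, 1, 201)
# Half-derivative of f(t) = t is 2*sqrt(t/pi), i.e. about 1.128 at t = 1:
print(gl_fractional_derivative(t, 0.5, t[1] - t[0])[-1])
```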
Through examples, benefits from the use of FOC are illustrated together with some remarks for further research", "fulltext": "", "keywords": "mrac;fractional order reference models;fractional order adjustment rules;model-reference adaptive control;foc;fractional calculus"} {"name": "train_1046", "title": "A suggestion of fractional-order controller for flexible spacecraft attitude", "abstract": "control A controller design method for flexible spacecraft attitude control is proposed. The system is first described by a partial differential equation with internal damping. Then the frequency response is analyzed, and the three basic characteristics of the flexible system, namely, average function, lower bound and upper bound are defined. On this basis, a fractional-order controller is proposed, which functions as phase stabilization control for lower frequency and smoothly enters to amplitude stabilization at higher frequency by proper amplitude attenuation. It is shown that the equivalent damping ratio increases in proportion to the square of frequency", "fulltext": "", "keywords": "frequency response;fractional-order controller;partial differential equation;amplitude stabilization;internal damping;flexible spacecraft attitude control;phase stabilization control;damping ratio"} {"name": "train_1047", "title": "Dynamics and control of initialized fractional-order systems", "abstract": "Due to the importance of historical effects in fractional-order systems, this paper presents a general fractional-order system and control theory that includes the time-varying initialization response. Previous studies have not properly accounted for these historical effects. The initialization response, along with the forced response, for fractional-order systems is determined. The scalar fractional-order impulse response is determined, and is a generalization of the exponential function. Stability properties of fractional-order systems are presented in the complex w-plane, which is a transformation of the s-plane. Time responses are discussed with respect to pole positions in the complex w-plane and frequency response behavior is included. A fractional-order vector space representation, which is a generalization of the state space concept, is presented including the initialization response. Control methods for vector representations of initialized fractional-order systems are shown. Finally, the fractional-order differintegral is generalized to continuous order-distributions which have the possibility of including all fractional orders in a transfer function", "fulltext": "", "keywords": "forced response;vector space representation;impulse response;exponential function;fractional-order differintegral;dynamics;control;state space concept;initialization response;transfer function;initialized fractional-order systems"} {"name": "train_1048", "title": "Parallel and distributed Haskells", "abstract": "Parallel and distributed languages specify computations on multiple processors and have a computation language to describe the algorithm, i.e. what to compute, and a coordination language to describe how to organise the computations across the processors. Haskell has been used as the computation language for a wide variety of parallel and distributed languages, and this paper is a comprehensive survey of implemented languages. It outlines parallel and distributed language concepts and classifies Haskell extensions using them. 
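The computation/coordination split described in the train_1048 entry can be mimicked in Python (used here only to keep all sketches in one language; the surveyed systems are Haskell dialects): the mapped function is the computation language's job, while the choice between plain map and a process pool is pure coordination.

```python
from multiprocessing import Pool

def mandel_escape(c, limit=1000):      # the computation: WHAT to compute
    z = 0j
    for n in range(limit):
        z = z * z + c
        if abs(z) > 2.0:
            return n
    return limit

points = [complex(r / 100.0, 0.5) for r in range(-200, 100)]

if __name__ == "__main__":
    serial = list(map(mandel_escape, points))   # coordination: one process
    with Pool(4) as pool:                       # coordination: four workers
        parallel = pool.map(mandel_escape, points)
    assert serial == parallel                   # same computation either way
```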
Similar example programs are used to illustrate and contrast the coordination languages, and the comparison is facilitated by the common computation language. A lazy language is not an obvious choice for parallel or distributed computation, and we address the question of why Haskell is a common functional computation language", "fulltext": "", "keywords": "parallel languages;parallel haskell;functional programming;multiple processors;distributed languages;coordination language;lazy language;functional computation language;distributed haskell"} {"name": "train_1049", "title": "A typed representation for HTML and XML documents in Haskell", "abstract": "We define a family of embedded domain specific languages for generating HTML and XML documents. Each language is implemented as a combinator library in Haskell. The generated HTML/XML documents are guaranteed to be well-formed. In addition, each library can guarantee that the generated documents are valid XML documents to a certain extent (for HTML only a weaker guarantee is possible). On top of the libraries, Haskell serves as a meta language to define parameterized documents, to map structured documents to HTML/XML, to define conditional content, or to define entire Web sites. The combinator libraries support element-transforming style, a programming style that allows programs to have a visual appearance similar to HTML/XML documents, without modifying the syntax of Haskell", "fulltext": "", "keywords": "combinator library;element-transforming style;html documents;functional programming;parameterized documents;software libraries;embedded domain specific languages;conditional content;web sites;typed representation;xml documents;haskell;meta language;syntax"} {"name": "train_105", "title": "Greenberger-Horne-Zeilinger paradoxes for many qubits", "abstract": "We construct Greenberger-Horne-Zeilinger (GHZ) contradictions for three or more parties sharing an entangled state, the dimension of each subsystem being an even integer d. The simplest example that goes beyond the standard GHZ paradox (three qubits) involves five ququats (d = 4). We then examine the criteria that a GHZ paradox must satisfy in order to be genuinely M partite and d dimensional", "fulltext": "", "keywords": "entangled state;many qubits;greenberger-horne-zeilinger paradoxes;ghz paradox;ghz contradictions"} {"name": "train_1050", "title": "Secrets of the Glasgow Haskell compiler inliner", "abstract": "Higher-order languages such as Haskell encourage the programmer to build abstractions by composing functions. A good compiler must inline many of these calls to recover an efficiently executable program. In principle, inlining is dead simple: just replace the call of a function by an instance of its body. But any compiler-writer will tell you that inlining is a black art, full of delicate compromises that work together to give good performance without unnecessary code bloat. The purpose of this paper is, therefore, to articulate the key lessons we learned from a full-scale \"production\" inliner, the one used in the Glasgow Haskell compiler. 
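A toy rendering of the basic move described in the train_1050 entry, "replace the call of a function by an instance of its body", using Python's ast module (a sketch, in no way GHC's inliner). It also shows the compromise the entry alludes to: substituting an argument expression for a parameter that occurs twice duplicates work, which is why real inliners weigh code bloat and let-bind arguments.

```python
import ast, copy

class Inline(ast.NodeTransformer):
    """Toy inliner: replace calls to one single-expression function by its
    body, substituting argument expressions for parameters."""
    def __init__(self, name, params, body):
        self.name, self.params, self.body = name, params, body

    def visit_Call(self, node):
        self.generic_visit(node)
        if isinstance(node.func, ast.Name) and node.func.id == self.name:
            env = dict(zip(self.params, node.args))
            class Subst(ast.NodeTransformer):
                def visit_Name(self, n):
                    return copy.deepcopy(env.get(n.id, n))
            return Subst().visit(copy.deepcopy(self.body))
        return node

fn = ast.parse("def sq(x):\n    return x * x").body[0]
call_site = ast.parse("print(sq(2 + 3))")
inlined = Inline("sq", [a.arg for a in fn.args.args], fn.body[0].value).visit(call_site)
print(ast.unparse(ast.fix_missing_locations(inlined)))
# -> print((2 + 3) * (2 + 3))   note the duplicated argument work
```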
We focus mainly on the algorithmic aspects, but we also provide some indicative measurements to substantiate the importance of various aspects of the inliner", "fulltext": "", "keywords": "glasgow haskell compiler inliner;performance;functional programming;executable program;higher-order languages;algorithmic aspects;abstractions;functional language;optimising compiler"} {"name": "train_1051", "title": "Faking it: simulating dependent types in Haskell", "abstract": "Dependent types reflect the fact that validity of data is often a relative notion by allowing prior data to affect the types of subsequent data. Not only does this make for a precise type system, but also a highly generic one: both the type and the program for each instance of a family of operations can be computed from the data which codes for that instance. Recent experimental extensions to the Haskell type class mechanism give us strong tools to relativize types to other types. We may simulate some aspects of dependent typing by making counterfeit type-level copies of data, with type constructors simulating data constructors and type classes simulating datatypes. This paper gives examples of the technique and discusses its potential", "fulltext": "", "keywords": "counterfeit type-level copies;dependent types;type class mechanism;dependent typing;functional programming;precise type system;datatypes;data validity;haskell;data constructors;type constructors"} {"name": "train_1052", "title": "Developing a high-performance web server in Concurrent Haskell", "abstract": "Server applications, and in particular network-based server applications, place a unique combination of demands on a programming language: lightweight concurrency, high I/O throughput, and fault tolerance are all important. This paper describes a prototype Web server written in Concurrent Haskell (with extensions), and presents two useful results: firstly, a conforming server could be written with minimal effort, leading to an implementation in less than 1500 lines of code, and secondly, the naive implementation produced reasonable performance. Furthermore, making minor modifications to a few time-critical components improved performance to a level acceptable for anything but the most heavily loaded Web servers", "fulltext": "", "keywords": "high i/o throughput;network-based server applications;high-performance web server;conforming server;concurrent haskell;fault tolerance;time-critical components;lightweight concurrency"} {"name": "train_1053", "title": "A static semantics for Haskell", "abstract": "This paper gives a static semantics for Haskell 98, a non-strict purely functional programming language. The semantics formally specifies nearly all the details of the Haskell 98 type system, including the resolution of overloading, kind inference (including defaulting) and polymorphic recursion, the only major omission being a proper treatment of ambiguous overloading and its resolution. Overloading is translated into explicit dictionary passing, as in all current implementations of Haskell. The target language of this translation is a variant of the Girard-Reynolds polymorphic lambda calculus featuring higher order polymorphism and explicit type abstraction and application in the term language. Translated programs can thus still be type checked, although the implicit version of this system is impredicative. 
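The "explicit dictionary passing" translation mentioned in the train_1053 entry, rendered as a small hypothetical Python example: a class constraint becomes an extra record-of-methods argument, and instance construction becomes a function on such records.

```python
# Overloading via explicit dictionary passing, in the spirit of the
# translation described above (hypothetical mini-example, not Haskell's
# actual dictionary representation).
eq_int = {"eq": lambda a, b: a == b}

def eq_list(d):              # instance Eq a => Eq [a], as a function on dicts
    return {"eq": lambda xs, ys: len(xs) == len(ys) and
            all(d["eq"](x, y) for x, y in zip(xs, ys))}

def elem(d, x, xs):          # elem :: Eq a => a -> [a] -> Bool
    return any(d["eq"](x, y) for y in xs)

print(elem(eq_int, 3, [1, 2, 3]))                    # True
print(elem(eq_list(eq_int), [1, 2], [[0], [1, 2]]))  # True
```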
A surprising result of this formalization effort is that the monomorphism restriction, when rendered in a system of inference rules, compromises the principal type property", "fulltext": "", "keywords": "inference rules;polymorphic lambda calculus;kind inference;type system;static semantics;type checking;higher order polymorphism;polymorphic recursion;monomorphism restriction;term language;explicit type abstraction;nonstrict purely functional programming language;explicit dictionary passing;overloading;formal specification;haskell 98"} {"name": "train_1054", "title": "Choice preferences without inferences: subconscious priming of risk attitudes", "abstract": "We present a procedure for subconscious priming of risk attitudes. In Experiment 1, we were reliably able to induce risk-seeking or risk-averse preferences across a range of decision scenarios using this priming procedure. In Experiment 2, we showed that these priming effects can be reversed by drawing participants' attention to the priming event. Our results support claims that the formation of risk preferences can be based on preconscious processing, as for example postulated by the affective primacy hypothesis, rather than rely on deliberative mental operations, as posited by several current models of judgment and decision making", "fulltext": "", "keywords": "preconscious processing;risk-averse preferences;affective primacy hypothesis;deliberative mental operations;choice preferences;decision scenarios;risk attitudes;risk-seeking preferences;subconscious priming"} {"name": "train_1055", "title": "A re-examination of probability matching and rational choice", "abstract": "In a typical probability learning task participants are presented with a repeated choice between two response alternatives, one of which has a higher payoff probability than the other. Rational choice theory requires that participants should eventually allocate all their responses to the high-payoff alternative, but previous research has found that people fail to maximize their payoffs. Instead, it is commonly observed that people match their response probabilities to the payoff probabilities. We report three experiments on this choice anomaly using a simple probability learning task in which participants were provided with (i) large financial incentives, (ii) meaningful and regular feedback, and (iii) extensive training. In each experiment large proportions of participants adopted the optimal response strategy and all three of the factors mentioned above contributed to this. The results are supportive of rational choice theory", "fulltext": "", "keywords": "feedback;meaningful regular feedback;choice anomaly;rationality;optimal response strategy;rational choice theory;response probabilities;probability matching;payoff probability;extensive training;large financial incentives;probability learning task"} {"name": "train_1056", "title": "Eliminating recency with self-review: the case of auditors' 'going concern'", "abstract": "judgments This paper examines the use of self-review to debias recency. Recency is found in the 'going concern' judgments of staff auditors, but is successfully eliminated by the auditor's use of a simple self-review technique that would be extremely easy to implement in audit practice. Auditors who self-review are also less inclined to make audit report choices that are inconsistent with their going concern judgments. 
These results are important because the judgments of staff auditors often determine the type and extent of documentation in audit workpapers and serve as preliminary inputs for senior auditors' judgments and choices. If staff auditors' judgments are affected by recency, the impact of this bias may be impounded in the ultimate judgments and choices of senior auditors. Since biased judgments can expose auditors to significant costs involving extended audit procedures, legal liability and diminished reputation, simple debiasing techniques that reduce this exposure are valuable. The paper also explores some future research needs and other important issues concerning judgment debiasing in applied professional settings", "fulltext": "", "keywords": "judgment debiasing;recency debiasing;audit report choices;senior auditors;self-review;applied professional settings;documentation;accountability;legal liability;diminished reputation;audit workpapers;staff auditors;extended audit procedures;probability judgments;auditor going concern judgments"} {"name": "train_1057", "title": "Acceptance of a price discount: the role of the semantic relatedness between", "abstract": "purchases and the comparative price format Two studies are reported where people are asked to accept or not a price reduction on a target product. In the high (low) relative saving version, the regular price of the target product is low (high). In both versions, the absolute value of the price reduction is the same as well as the total of regular prices of planned purchases. As first reported by Tversky and Kahneman (1981), findings show that the majority of people accept the price discount in the high-relative saving version whereas the minority do it in the low one. In Study 1, findings show that the previous preference reversal disappears when planned purchases are strongly related. Also, a previously unreported preference reversal is found. The majority of people accept the price discount when the products are weakly related whereas the minority accept when the products are strongly related. In Study 2, findings show that the classic preference reversal disappears as a function of the comparative price format. Also, another previously unreported preference reversal is found. When the offered price reduction relates to a low-priced product, people are more inclined to accept it with a control than a minimal comparative price format. Findings reported in Studies 1 and 2 are interpreted in terms of mental accounting shifts", "fulltext": "", "keywords": "high relative saving version;planned purchases;semantic relatedness hypothesis;mental accounting shifts;low-priced product;preference reversal;comparative price format;low relative saving version;price discount acceptance"} {"name": "train_1058", "title": "Bigger is better: the influence of physical size on aesthetic preference", "abstract": "judgments The hypothesis that the physical size of an object can influence aesthetic preferences was investigated. In a series of four experiments, participants were presented with pairs of abstract stimuli and asked to indicate which member of each pair they preferred. A preference for larger stimuli was found on the majority of trials using various types of stimuli, stimuli of various sizes, and with both adult and 3-year-old participants. 
This preference pattern was disrupted only when participants had both stimuli that provided a readily accessible alternative source of preference-evoking information and sufficient attentional resources to make their preference judgments", "fulltext": "", "keywords": "decision making;attentional resources;adult participants;preference formation;abstract stimuli;physical size influence;preference-evoking information;preference pattern;aesthetic preference judgments;child participants;judgment cues"} {"name": "train_1059", "title": "Mustering motivation to enact decisions: how decision process characteristics", "abstract": "influence goal realization Decision scientists tend to focus mainly on decision antecedents, studying how people make decisions. Action psychologists, in contrast, study post-decision issues, investigating how decisions, once formed, are maintained, protected, and enacted. Through the research presented here, we seek to bridge these two disciplines, proposing that the process by which decisions are reached motivates subsequent pursuit and benefits eventual realization. We identify three characteristics of the decision process (DP) as having motivation-mustering potential: DP effort investment, DP importance, and DP confidence. Through two field studies tracking participants' decision processes, pursuit and realization, we find that after controlling for the influence of the motivational mechanisms of goal intention and implementation intention, the three decision process characteristics significantly influence the successful enactment of the chosen decision directly. The theoretical and practical implications of these findings are considered and future research opportunities are identified", "fulltext": "", "keywords": "post-decision issues;decision process characteristics;research opportunities;motivation-mustering potential;goal realization;goal intention;decision enactment;decision process importance;decision process confidence;decision scientists;decision process investment;motivation;action psychologists"} {"name": "train_106", "title": "Quantum Zeno subspaces", "abstract": "The quantum Zeno effect is recast in terms of an adiabatic theorem when the measurement is described as the dynamical coupling to another quantum system that plays the role of apparatus. A few significant examples are proposed and their practical relevance discussed. We also focus on decoherence-free subspaces", "fulltext": "", "keywords": "dynamical coupling;adiabatic theorem;quantum zeno subspaces;measurement;decoherence-free subspaces"} {"name": "train_1060", "title": "Variety identification of wheat using mass spectrometry with neural networks", "abstract": "and the influence of mass spectra processing prior to neural network analysis The performance of matrix-assisted laser desorption/ionisation time-of-flight mass spectrometry with neural networks in wheat variety classification is further evaluated. Two principal issues were studied: (a) the number of varieties that could be classified correctly; and (b) various means of preprocessing mass spectrometric data. The number of wheat varieties tested was increased from 10 to 30. The main pre-processing method investigated was based on Gaussian smoothing of the spectra, but other methods based on normalisation procedures and multiplicative scatter correction of data were also used. 
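The two pre-processing steps just named for the train_1060 entry, Gaussian smoothing and multiplicative scatter correction, are easy to sketch on synthetic data (the array shape, the sigma and the random "spectra" are assumptions, not the paper's data):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

# Synthetic stand-in data: 30 "varieties" x 1000 m/z bins (assumed shape).
rng = np.random.default_rng(0)
spectra = rng.gamma(2.0, 1.0, size=(30, 1000))

smoothed = gaussian_filter1d(spectra, sigma=3.0, axis=1)  # Gaussian smoothing

def msc(X):
    """Multiplicative scatter correction: regress each spectrum on the mean
    spectrum and remove the fitted offset and slope."""
    ref = X.mean(axis=0)
    out = np.empty_like(X)
    for i, row in enumerate(X):
        slope, offset = np.polyfit(ref, row, 1)
        out[i] = (row - offset) / slope
    return out

corrected = msc(smoothed)
print(corrected.shape)
```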
With the final method, it was possible to classify 30 wheat varieties with 87% correctly classified mass spectra and a correlation coefficient of 0.90", "fulltext": "", "keywords": "correctly classified mass spectra;normalisation procedures;correlation coefficient;mass spectrometric data;mass spectra processing;multiplicative scatter correction;gaussian smoothing;neural network analysis;wheat variety classification;pre-processing method;matrix-assisted laser desorption/ionisation time-of-flight mass spectrometry;variety identification"} {"name": "train_1061", "title": "Abacus, EFI and anti-virus", "abstract": "The Extensible Firmware Interface (EFI) standard emerged as a logical step to provide flexibility and extensibility to boot sequence processes, enabling the complete abstraction of a system's BIOS interface from the system's hardware. In doing so, this provided the means of standardizing a boot-up sequence, extending device drivers and boot time applications' portability to non PC-AT-based architectures, including embedded systems like Internet appliances, TV Internet set-top boxes and 64-bit Itanium platforms", "fulltext": "", "keywords": "embedded systems;anti-virus;extensible firmware interface standard"} {"name": "train_1062", "title": "Fidelity of quantum teleportation through noisy channels", "abstract": "We investigate quantum teleportation through noisy quantum channels by solving analytically and numerically a master equation in the Lindblad form. We calculate the fidelity as a function of decoherence rates and angles of a state to be teleported. It is found that the average fidelity and the range of states to be accurately teleported depend on the types of noise acting on the quantum channels. If the quantum channels are subject to isotropic noise, the average fidelity decays to 1/2, which is smaller than the best possible value of 2/3 obtained only by classical communication. On the other hand, if the noisy quantum channel is modeled by a single Lindblad operator, the average fidelity is always greater than 2/3", "fulltext": "", "keywords": "analytical solution;noisy quantum channels;recipient;classical communication;dual classical channels;quantum channels;alice;numerical solution;bob;quantum teleportation;fidelity;isotropic noise;lindblad operator;eigenstate;sender"} {"name": "train_1063", "title": "Operations that do not disturb partially known quantum states", "abstract": "Consider a situation in which a quantum system is secretly prepared in a state chosen from the known set of states. We present a principle that gives a definite distinction between the operations that preserve the states of the system and those that disturb the states. The principle is derived by alternately applying a fundamental property of classical signals and a fundamental property of quantum ones. The principle can be cast into a simple form by using a decomposition of the relevant Hilbert space, which is uniquely determined by the set of possible states. The decomposition implies the classification of the degrees of freedom of the system into three parts depending on how they store the information on the initially chosen state: one storing it classically, one storing it nonclassically, and the other one storing no information. Then the principle states that the nonclassical part is inaccessible and the classical part is read-only if we are to preserve the state of the system. 
From this principle, many types of no-cloning, no-broadcasting, and no-imprinting conditions can easily be derived in general forms including mixed states. It also gives a unified view of how various schemes of quantum cryptography work. The principle helps one to derive the optimum amount of resources (bits, qubits, and ebits) required in data compression or in quantum teleportation of mixed-state ensembles", "fulltext": "", "keywords": "quantum system;hilbert space;nonclassical part;degrees of freedom;partially known quantum states;quantum teleportation;secretly prepared quantum state;ebits;bits;quantum cryptography;classical signals;mixed-state ensembles;qubits"} {"name": "train_1064", "title": "Quantum-controlled measurement device for quantum-state discrimination", "abstract": "We propose a "programmable" quantum device that is able to perform a specific generalized measurement from a certain set of measurements depending on a quantum state of a "program register." In particular, we study a situation in which the programmable measurement device serves for the unambiguous discrimination between nonorthogonal states. The particular pair of states that can be unambiguously discriminated is specified by the state of a program qubit. The probability of successful discrimination is not optimal for all admissible pairs. However, for some subsets it can be very close to the optimal value", "fulltext": "", "keywords": "quantum-state discrimination;quantum-controlled measurement device;quantum state;nonorthogonal states;programmable quantum device;program register;program qubit"} {"name": "train_1065", "title": "Quantum universal variable-length source coding", "abstract": "We construct an optimal quantum universal variable-length code that achieves the admissible minimum rate, i.e., our code can be used for any probability distribution of quantum states. Its probability of exceeding the admissible minimum rate goes to 0 exponentially. Our code is optimal in the sense of its exponent. In addition, its average error asymptotically tends to 0", "fulltext": "", "keywords": "quantum information theory;quantum universal variable-length source coding;admissible minimum rate;optimal code;quantum states;optimal quantum universal variable-length code;probability distribution;average error;exponent;quantum cryptography"} {"name": "train_1066", "title": "Application of artificial intelligence to search ground-state geometry of", "abstract": "clusters We introduce a global optimization procedure, the neural-assisted genetic algorithm (NAGA). It combines the power of an artificial neural network (ANN) with the versatility of the genetic algorithm. This method is suitable for solving optimization problems that depend on some kind of heuristics to limit the search space. If a reasonable amount of data is available, the ANN can "understand" the problem and provide the genetic algorithm with a selected population of elements that will speed up the search for the optimum solution. We tested the method in a search for the ground-state geometry of silicon clusters. We trained the ANN with information about the geometry and energetics of small silicon clusters. Next, the ANN learned how to restrict the configurational space for larger silicon clusters. For Si/sub 10/ and Si/sub 20/, we noticed that the NAGA is at least three times faster than the "pure" genetic algorithm. 
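The train_1066 scheme in miniature: a cheap surrogate (standing in here for the trained ANN) screens random candidates so that the genetic algorithm starts from a promising population. The objective, the surrogate and the GA settings below are toy assumptions, not cluster energetics.

```python
import random

def energy(x):                     # expensive "true" objective (toy stand-in)
    return sum((xi - 0.7) ** 2 for xi in x)

def surrogate(x):                  # assumed pre-trained, cheap approximation
    return sum(abs(xi - 0.7) for xi in x)

def random_candidate(n=5):
    return [random.uniform(-1, 1) for _ in range(n)]

pool = [random_candidate() for _ in range(500)]
population = sorted(pool, key=surrogate)[:20]   # surrogate-selected population

for generation in range(50):                    # plain GA on the true objective
    population.sort(key=energy)
    parents = population[:10]
    children = []
    for _ in range(10):
        a, b = random.sample(parents, 2)
        child = [random.choice(pair) for pair in zip(a, b)]   # crossover
        j = random.randrange(len(child))
        child[j] += random.gauss(0.0, 0.05)                   # mutation
        children.append(child)
    population = parents + children

print(min(energy(x) for x in population))
```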
As the size of the cluster increases, it is expected that the gain in terms of time will increase as well", "fulltext": "", "keywords": "neural-assisted genetic algorithm;si/sub 20/;cluster size;ground-state geometry;atomic clusters;artificial intelligence;optimum solution;si/sub 10/;artificial neural network;population;silicon clusters;global optimization procedure"} {"name": "train_1067", "title": "Quantum-information processing by nuclear magnetic resonance: Experimental", "abstract": "implementation of half-adder and subtractor operations using an oriented spin-7/2 system The advantages of using quantum systems for performing many computational tasks have already been established. Several quantum algorithms have been developed which exploit the inherent property of quantum systems such as superposition of states and entanglement for efficiently performing certain tasks. The experimental implementation has been achieved on many quantum systems, of which nuclear magnetic resonance has shown the largest progress in terms of number of qubits. This paper describes the use of a spin-7/2 as a three-qubit system and experimentally implements the half-adder and subtractor operations. The required qubits are realized by partially orienting /sup 133/Cs nuclei in a liquid-crystalline medium, yielding a quadrupolar split well-resolved septet. Another feature of this paper is the proposal that labeling of quantum states of system can be suitably chosen to increase the efficiency of a computational task", "fulltext": "", "keywords": "quadrupolar split well-resolved septet;/sup 133/cs;quantum-information processing;computational tasks;quantum systems;quantum states;/sup 133/cs nuclei;nuclear magnetic resonance;state superposition;computational task;subtractor operations;half-adder operations;quantum algorithms;entanglement;oriented spin-7/2 system;three-qubit system;liquid-crystalline medium;qubits"} {"name": "train_1068", "title": "Quantum phase gate for photonic qubits using only beam splitters and", "abstract": "postselection We show that a beam splitter of reflectivity one-third can be used to realize a quantum phase gate operation if only the outputs conserving the number of photons on each side are postselected", "fulltext": "", "keywords": "postselection;photonic qubits;quantum computation;quantum phase gate;photon number conservation;postselected quantum phase gate;quantum phase gate operation;multiqubit networks;postselected quantum gate;outputs;postselected photon number conserving outputs;quantum information processing;optical quantum gate operations;polarization beam splitters;reflectivity"} {"name": "train_1069", "title": "Entangling atoms in bad cavities", "abstract": "We propose a method to produce entangled spin squeezed states of a large number of atoms inside an optical cavity. By illuminating the atoms with bichromatic light, the coupling to the cavity induces pairwise exchange of excitations which entangles the atoms. Unlike most proposals for entangling atoms by cavity QED, our proposal does not require the strong coupling regime g/sup 2// kappa Gamma >>1, where g is the atom cavity coupling strength, kappa is the cavity decay rate, and Gamma is the decay rate of the atoms. 
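A quick arithmetic check of the point just made in the train_1069 entry, with assumed illustrative rates: the single-atom figure of merit g^2/kappa Gamma can be far below one (a "bad" cavity) while the collective parameter Ng^2/kappa Gamma is large.

```python
# Illustrative numbers (assumed, not from the paper).
g, kappa, Gamma, N = 0.5e6, 50.0e6, 3.0e6, 1.0e6   # rates in rad/s, N atoms

single_atom = g**2 / (kappa * Gamma)
collective = N * single_atom
print(f"g^2/(kappa*Gamma)   = {single_atom:.2e}")  # ~1.7e-03: bad cavity
print(f"N g^2/(kappa*Gamma) = {collective:.2e}")   # ~1.7e+03: strong collective coupling
```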
In this work the important parameter is Ng/sup 2// kappa Gamma , where N is the number of atoms, and our proposal permits the production of entanglement in bad cavities as long as they contain a large number of atoms", "fulltext": "", "keywords": "bichromatic light illumination;strong coupling regime;pairwise exchange;atom cavity coupling strength;excitations;coupling;entangled spin squeezed states;optical cavity;cavity qed;bad cavities;atom entanglement;cavity decay rate"} {"name": "train_107", "title": "Deterministic single-photon source for distributed quantum networking", "abstract": "A sequence of single photons is emitted on demand from a single three-level atom strongly coupled to a high-finesse optical cavity. The photons are generated by an adiabatically driven stimulated Raman transition between two atomic ground states, with the vacuum field of the cavity stimulating one branch of the transition, and laser pulses deterministically driving the other branch. This process is unitary and therefore intrinsically reversible, which is essential for quantum communication and networking, and the photons should be appropriate for all-optical quantum information processing", "fulltext": "", "keywords": "adiabatically driven stimulated raman transition;deterministic single-photon source;all-optical quantum information processing;single three-level atom;quantum communication;high-finesse optical cavity;distributed quantum networking;vacuum field"} {"name": "train_1070", "title": "Universal simulation of Hamiltonian dynamics for quantum systems with", "abstract": "finite-dimensional state spaces What interactions are sufficient to simulate arbitrary quantum dynamics in a composite quantum system? Dodd et al. [Phys. Rev. A 65, 040301(R) (2002)] provided a partial solution to this problem in the form of an efficient algorithm to simulate any desired two-body Hamiltonian evolution using any fixed two-body entangling N-qubit Hamiltonian, and local unitaries. We extend this result to the case where the component systems are qudits, that is, have D dimensions. As a consequence we explain how universal quantum computation can be performed with any fixed two-body entangling N-qudit Hamiltonian, and local unitaries", "fulltext": "", "keywords": "d-dimensional component systems;two-body hamiltonian evolution;composite quantum system;quantum dynamics;hamiltonian dynamics;fixed two-body entangling n-qubit hamiltonian;quantum systems;universal quantum computation;local unitaries;fixed two-body entangling n-qudit hamiltonian;universal simulation;finite- dimensional state spaces"} {"name": "train_1071", "title": "Dense coding in entangled states", "abstract": "We consider the dense coding of entangled qubits shared between two parties, Alice and Bob. The efficiency of classical information gain through quantum entangled qubits is also considered for the case of pairwise entangled qubits and maximally entangled qubits. 
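The standard dense-coding protocol underlying the train_1071 entry can be verified numerically in a few lines (the textbook two-qubit protocol, not the paper's N-party analysis): each of Alice's four local operations steers the shared Bell pair to a distinct, perfectly distinguishable Bell state, so two classical bits ride on one transmitted qubit.

```python
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

phi_plus = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)  # (|00>+|11>)/sqrt 2

bell_basis = [phi_plus,
              np.array([0, 1, 1, 0], dtype=complex) / np.sqrt(2),
              np.array([1, 0, 0, -1], dtype=complex) / np.sqrt(2),
              np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)]

for bits, op in {(0, 0): I, (0, 1): X, (1, 0): Z, (1, 1): Z @ X}.items():
    sent = np.kron(op, I) @ phi_plus          # Alice acts on her qubit only
    probs = [abs(np.vdot(b, sent)) ** 2 for b in bell_basis]
    print(bits, "-> Bell outcome", int(np.argmax(probs)))   # deterministic
```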
We conclude that using pairwise entangled qubits can be more efficient when two parties communicate, whereas using maximally entangled qubits can be more efficient when N parties communicate", "fulltext": "", "keywords": "entangled states;classical information gain efficiency;dense coding;pairwise entangled qubits;alice;bob;quantum communication;maximally entangled qubits;quantum information processing"} {"name": "train_1072", "title": "Quantum-state information retrieval in a Rydberg-atom data register", "abstract": "We analyze a quantum search protocol to retrieve phase information from a Rydberg-atom data register using a subpicosecond half-cycle electric field pulse. Calculations show that the half-cycle pulse can perform the phase retrieval only within a range of peak field values. By varying the phases of the constituent orbitals of the Rydberg wave packet register, we demonstrate coherent control of the phase retrieval process. By specially programming the phases of the orbitals comprising the initial wave packet, we show that it is possible to use the search method as a way to synthesize single energy eigenstates", "fulltext": "", "keywords": "half-cycle pulse;coherent control;phase retrieval;subpicosecond half-cycle electric field pulse;phase information;constituent orbitals;rydberg-atom data register;rydberg wave packet register;initial wave packet;quantum search protocol;quantum-state information retrieval;peak field values;search method;single energy eigenstates"} {"name": "train_1073", "title": "Quantum retrodiction in open systems", "abstract": "Quantum retrodiction involves finding the probabilities for various preparation events given a measurement event. This theory has been studied for some time but mainly as an interesting concept associated with time asymmetry in quantum mechanics. Recent interest in quantum communications and cryptography, however, has provided retrodiction with a potential practical application. For this purpose, quantum retrodiction in open systems should be more relevant than in closed systems isolated from the environment. In this paper we study retrodiction in open systems and develop a general master equation for the backward time evolution of the measured state, which can be used for calculating preparation probabilities. We solve the master equation, by way of example, for the driven two-level atom coupled to the electromagnetic field", "fulltext": "", "keywords": "time asymmetry;probabilities;quantum retrodiction;measurement event;open systems;retrodictive master equation;cryptography;driven two level atom-electromagnetic field coupling;preparation events;quantum communications;quantum mechanics;preparation probabilities;backward time evolution"} {"name": "train_1074", "title": "Inhibiting decoherence via ancilla processes", "abstract": "General conditions are derived for preventing the decoherence of a single two-state quantum system (qubit) in a thermal bath. The employed auxiliary systems required for this purpose are merely assumed to be weak for the general condition while various examples such as extra qubits and extra classical fields are studied for applications in quantum information processing. The general condition is confirmed by well known approaches toward inhibiting decoherence. 
An approach to decoherence-free quantum memories and quantum operations is presented by placing the qubit into the center of a sphere with extra qubits on its surface", "fulltext": "", "keywords": "general condition;single two-state quantum system;quantum operations;decoherence-free quantum memories;extra classical fields;qubit;extra qubits;ancilla processes;thermal bath;decoherence;auxiliary systems;quantum information processing;sphere surface;decoherence inhibition"} {"name": "train_1075", "title": "Numerical simulation of information recovery in quantum computers", "abstract": "Decoherence is the main problem to be solved before quantum computers can be built. To control decoherence, it is possible to use error correction methods, but these methods are themselves noisy quantum computation processes. In this work, we study the ability of Steane's and Shor's fault-tolerant recovering methods, as well as a modification of Steane's ancilla network, to correct errors in qubits. We test a way to measure correctly ancilla's fidelity for these methods, and state the possibility of carrying out an effective error correction through a noisy quantum channel, even using noisy error correction methods", "fulltext": "", "keywords": "ancilla fidelity;numerical simulation;error correction methods;quantum computers;noisy quantum computation processes;fault-tolerant recovering methods;noisy quantum channel;noisy error correction methods;decoherence control;ancilla network;information recovery;qubits"} {"name": "train_1076", "title": "Delayed-choice entanglement swapping with vacuum-one-photon quantum states", "abstract": "We report the experimental realization of a recently discovered quantum-information protocol by Peres implying an apparent nonlocal quantum mechanical retrodiction effect. The demonstration is carried out by a quantum optical method by which each singlet entangled state is physically implemented by a two-dimensional subspace of Fock states of a mode of the electromagnetic field, specifically the space spanned by the vacuum and the one-photon state, along lines suggested recently by E. Knill et al. [Nature (London) 409, 46 (2001)] and by M. Duan et al. [ibid. 414, 413 (2001)]", "fulltext": "", "keywords": "fock states;state entanglement;two-dimensional subspace;quantum optical method;one-photon state;quantum-information;singlet entangled state;delayed-choice entanglement;nonlocal quantum mechanical retrodiction effect;electromagnetic field mode;vacuum-one-photon quantum states;vacuum state"} {"name": "train_1077", "title": "Quantum learning and universal quantum matching machine", "abstract": "Suppose that three kinds of quantum systems are given in some unknown states |f>/sup (X)N/, |g/sub 1/>/sup (X)K/, and |g/sub 2/>/sup (X)K/, and we want to decide which template state |g/sub 1/> or |g/sub 2/>, each representing the feature of the pattern class C/sub 1/ or C/sub 2/, respectively, is closest to the input feature state |f>. This is an extension of the pattern matching problem into the quantum domain. Assuming that these states are known a priori to belong to a certain parametric family of pure qubit systems, we derive two kinds of matching strategies. 
The first one is a semiclassical strategy that is obtained by the natural extension of conventional matching strategies and consists of a two-stage procedure: identification (estimation) of the unknown template states to design the classifier (learning process to train the classifier) and classification of the input system into the appropriate pattern class based on the estimated results. The other is a fully quantum strategy without any intermediate measurement, which we might call as the universal quantum matching machine. We present the Bayes optimal solutions for both strategies in the case of K=1, showing that there certainly exists a fully quantum matching procedure that is strictly superior to the straightforward semiclassical extension of the conventional matching strategy based on the learning process", "fulltext": "", "keywords": "learning process;two-stage procedure;semiclassical strategy;quantum domain;quantum strategy;quantum learning;matching strategies;universal quantum matching machine;qubit systems;matching strategy;bayes optimal solutions;semiclassical extension;pattern matching problem;quantum matching procedure;pattern class"} {"name": "train_1078", "title": "Action aggregation and defuzzification in Mamdani-type fuzzy systems", "abstract": "Discusses the issues of action aggregation and defuzzification in Mamdani-type fuzzy systems. The paper highlights the shortcomings of defuzzification techniques associated with the customary interpretation of the sentence connective 'and' by means of the set union operation. These include loss of smoothness of the output characteristic and inaccurate mapping of the fuzzy response. The most appropriate procedure for aggregating the outputs of different fuzzy rules and converting them into crisp signals is then suggested. The advantages in terms of increased transparency and mapping accuracy of the fuzzy response are demonstrated", "fulltext": "", "keywords": "transparency;set union operation;crisp signals;mapping accuracy;sentence connective;mamdani-type fuzzy systems;fuzzy response;defuzzification;fuzzy rules;action aggregation"} {"name": "train_1079", "title": "A novel robot hand with embedded shape memory alloy actuators", "abstract": "Describes the development of an active robot hand, which allows smooth and lifelike motions for anthropomorphic grasping and fine manipulations. An active robot finger 10 mm in outer diameter with a shape memory alloy (SMA) wire actuator embedded in the finger with a constant distance from the geometric centre of the finger was designed and fabricated. The practical specifications of the SMA wire and the flexible rod were determined on the basis of a series of formulae. The active finger consists of two bending parts, the SMA actuators and a connecting part. The mechanical properties of the bending part are investigated. The control system on the basis of resistance feedback is also presented. Finally, a robot hand with three fingers was designed and the grasping experiment was carried out to demonstrate its performance", "fulltext": "", "keywords": "lifelike motions;active finger;flexible rod;embedded shape memory alloy actuators;resistance feedback;anthropomorphic grasping;fine manipulations;active robot hand"} {"name": "train_108", "title": "Exploiting randomness in quantum information processing", "abstract": "We consider how randomness can be made to play a useful role in quantum information processing-in particular, for decoherence control and the implementation of quantum algorithms. 
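Returning to the Mamdani-type entry (train_1078) above: the aggregation-plus-defuzzification step it analyses can be sketched directly, contrasting union (max) aggregation with an additive alternative. The membership functions and rule firing strengths below are illustrative assumptions, not the paper's design.

```python
import numpy as np

# Rule aggregation + centroid defuzzification for a two-rule Mamdani system.
y = np.linspace(0.0, 10.0, 501)

def tri(y, a, b, c):                  # triangular membership function
    return np.maximum(np.minimum((y - a) / (b - a), (c - y) / (c - b)), 0.0)

low, high = tri(y, 0, 2, 5), tri(y, 5, 8, 10)
w_low, w_high = 0.4, 0.7              # assumed firing strengths from antecedents

clipped = [np.minimum(w_low, low), np.minimum(w_high, high)]  # Mamdani implication

for name, agg in [("max (union)", np.maximum(*clipped)),
                  ("sum (additive)", clipped[0] + clipped[1])]:
    crisp = np.trapz(agg * y, y) / np.trapz(agg, y)           # centroid
    print(f"{name:15s} -> {crisp:.3f}")
```

The two aggregation choices yield different crisp outputs for the same rules, which is the sensitivity the entry's discussion of the sentence connective 'and' turns on.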
For a two-level system in which the decoherence channel is non-dissipative, we show that decoherence suppression is possible if memory is present in the channel. Random switching between two potentially harmful noise sources can then provide a source of stochastic control. Such random switching can also be used in an advantageous way for the implementation of quantum algorithms", "fulltext": "", "keywords": "stochastic control;noise;randomness;two-level system;quantum algorithms;random switching;decoherence control;quantum information processing"} {"name": "train_1080", "title": "Car-caravan snaking. 2 Active caravan braking", "abstract": "For part 1, see ibid., p.707-22. Founded on the review and results of Part 1, Part 2 contains a description of the virtual design of an active braking system for caravans or other types of trailer, to suppress snaking vibrations, while being simple from a practical viewpoint. The design process and the design itself are explained. The performance is examined by simulations and it is concluded that the system is effective, robust and realizable with modest and available components", "fulltext": "", "keywords": "active caravan braking;virtual design;dynamics;car-caravan snaking;trailer;snaking vibrations suppression"} {"name": "train_1081", "title": "Stability of W-methods with applications to operator splitting and to geometric", "abstract": "theory We analyze the stability properties of W-methods applied to the parabolic initial value problem u' + Au = Bu. We work in an abstract Banach space setting, assuming that A is the generator of an analytic semigroup and that B is relatively bounded with respect to A. Since W-methods treat the term with A implicitly, whereas the term involving B is discretized in an explicit way, they can be regarded as splitting methods. As an application of our stability results, convergence for nonsmooth initial data is shown. Moreover, the layout of a geometric theory for discretizations of semilinear parabolic problems u' + Au = f (u) is presented", "fulltext": "", "keywords": "nonsmooth initial data;geometric theory;analytic semigroup;linearly implicit runge-kutta methods;abstract banach space;w-methods stability;parabolic initial value problem;operator splitting"} {"name": "train_1082", "title": "Numerical approximation of nonlinear BVPs by means of BVMs", "abstract": "Boundary Value Methods (BVMs) would seem to be suitable candidates for the solution of nonlinear Boundary Value Problems (BVPs). They have been successfully used for solving linear BVPs together with a mesh selection strategy based on the conditioning of the linear systems. Our aim is to extend this approach so as to use them for the numerical approximation of nonlinear problems. For this reason, we consider the quasi-linearization technique that is an application of the Newton method to the nonlinear differential equation. Consequently, each iteration requires the solution of a linear BVP. In order to guarantee the convergence to the solution of the continuous nonlinear problem, it is necessary to determine how accurately the linear BVPs must be solved. For this goal, suitable stopping criteria on the residual and on the error for each linear BVP are given. 
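The quasi-linearization loop described in the train_1082 entry, Newton's method turning a nonlinear BVP into a sequence of linear BVPs solved to a tolerance, in a minimal form. The finite-difference inner solver and the test problem are illustrative stand-ins for the paper's BVM-based TOM code.

```python
import numpy as np

# Quasi-linearization sketch for the nonlinear BVP
#   u'' = u^3,  u(0) = 1,  u(1) = 2.
n, tol = 101, 1e-10
h = 1.0 / (n - 1)
u = np.linspace(1.0, 2.0, n)          # initial guess satisfying the BCs

for it in range(30):
    f, fprime = u**3, 3 * u**2
    # Linear BVP for the Newton correction v:  v'' - f'(u) v = f(u) - u''
    A = np.zeros((n, n))
    rhs = np.zeros(n)
    A[0, 0] = A[-1, -1] = 1.0         # homogeneous BCs for the correction
    for i in range(1, n - 1):
        A[i, i - 1] = A[i, i + 1] = 1.0 / h**2
        A[i, i] = -2.0 / h**2 - fprime[i]
        rhs[i] = f[i] - (u[i - 1] - 2 * u[i] + u[i + 1]) / h**2
    v = np.linalg.solve(A, rhs)
    u += v
    if np.max(np.abs(v)) < tol:       # stopping criterion on the correction
        break

print(f"converged in {it + 1} Newton iterations, u(0.5) ~ {u[n // 2]:.6f}")
```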
Numerical experiments on stiff problems give rather satisfactory results, showing that the experimental code, called TOM, that uses a class of BVMs and the quasi-linearization technique, may be competitive with well known solvers for BVPs", "fulltext": "", "keywords": "stopping criteria;mesh selection strategy;quasi-linearization technique;boundary value methods;bvms;newton method;stiff problems;nonlinear differential equation;numerical approximation;nonlinear boundary value problems"} {"name": "train_1083", "title": "Differential algebraic systems anew", "abstract": "It is proposed to figure out the leading term in differential algebraic systems more precisely. Low index linear systems with those properly stated leading terms are considered in detail. In particular, it is asked whether a numerical integration method applied to the original system reaches the inherent regular ODE without conservation, i.e., whether the discretization and the decoupling commute in some sense. In general one cannot expect this commutativity so that additional difficulties like strong stepsize restrictions may arise. Moreover, abstract differential algebraic equations in infinite-dimensional Hilbert spaces are introduced, and the index notion is generalized to those equations. In particular, partial differential algebraic equations are considered in this abstract formulation", "fulltext": "", "keywords": "stepsize restrictions;numerical integration method;low index linear systems;abstract differential algebraic equations;commutativity;differential algebraic systems;inherent regular ode"} {"name": "train_1084", "title": "On quasi-linear PDAEs with convection: applications, indices, numerical", "abstract": "solution For a class of partial differential algebraic equations (PDAEs) of quasi-linear type which include nonlinear terms of convection type, a possibility to determine a time and spatial index is considered. As a typical example we investigate an application from plasma physics. Especially we discuss the numerical solution of initial boundary value problems by means of a corresponding finite difference splitting procedure which is a modification of a well-known fractional step method coupled with a matrix factorization. The convergence of the numerical solution towards the exact solution of the corresponding initial boundary value problem is investigated. Some results of a numerical solution of the plasma PDAE are given", "fulltext": "", "keywords": "indices;initial boundary value problems;plasma physics;quasi-linear partial differential algebraic equations;numerical solution;finite difference splitting procedure;spatial index;convection;fractional step method;matrix factorization"} {"name": "train_1085", "title": "A variable-stepsize variable-order multistep method for the integration of", "abstract": "perturbed linear problems G. Scheifele (1971) wrote the solution of a perturbed oscillator as an expansion in terms of a new set of functions, which extends the monomials in the Taylor series of the solution. Recently, P. Martin and J.M. Ferrandiz (1997) constructed a multistep code based on the Scheifele technique, and it was generalized by D.J. Lopez and P. Martin (1998) for perturbed linear problems. However, the remarked codes are constant steplength methods, and efficient integrators must be able to change the steplength. In this paper we extend the ideas of F.T. 
Krogh (1974) from Adams methods to the algorithm proposed by Lopez and Martin, and we show the advantages of the new code in perturbed problems", "fulltext": "", "keywords": "multistep code;adams methods;perturbed linear problems integration;variable-stepsize variable-order multistep method;taylor series;constant steplength methods;monomials;perturbed oscillator"} {"name": "train_1086", "title": "Some recent advances in validated methods for IVPs for ODEs", "abstract": "Compared to standard numerical methods for initial value problems (IVPs) for ordinary differential equations (ODEs), validated methods (often called interval methods) for IVPs for ODEs have two important advantages: if they return a solution to a problem, then (1) the problem is guaranteed to have a unique solution, and (2) an enclosure of the true solution is produced. We present a brief overview of interval Taylor series (ITS) methods for IVPs for ODEs and discuss some recent advances in the theory of validated methods for IVPs for ODEs. In particular, we discuss an interval Hermite-Obreschkoff (IHO) scheme for computing rigorous bounds on the solution of an IVP for an ODE, the stability of ITS and IHO methods, and a new perspective on the wrapping effect, where we interpret the problem of reducing the wrapping effect as one of finding a more stable scheme for advancing the solution", "fulltext": "", "keywords": "wrapping effect;interval hermite-obreschkoff scheme;interval methods;validated methods;ordinary differential equations;interval taylor series;qr algorithm;initial value problems"} {"name": "train_1087", "title": "Implementation of DIMSIMs for stiff differential systems", "abstract": "Some issues related to the implementation of diagonally implicit multistage integration methods for stiff differential systems are discussed. They include reliable estimation of the local discretization error, construction of continuous interpolants, solution of nonlinear systems of equations by simplified Newton iterations, choice of initial stepsize and order, and step and order changing strategy. Numerical results are presented which indicate that an experimental Matlab code based on type 2 methods of order one, two and three outperforms ode15s code from Matlab ODE suite on problems whose Jacobian has eigenvalues which are close to the imaginary axis", "fulltext": "", "keywords": "stiff differential systems;simplified newton iterations;local discretization error;diagonally implicit multistage integration methods;reliable estimation;dimsims;nonlinear systems of equations;interpolants;experimental matlab code"} {"name": "train_1088", "title": "Parallel implicit predictor corrector methods", "abstract": "The performance of parallel codes for the solution of initial value problems is usually strongly sensitive to the dimension of the continuous problem. This is due to the overhead related to the exchange of information among the processors and motivates the problem of minimizing the amount of communications. According to this principle, we define the so called Parallel Implicit Predictor Corrector Methods and in this class we derive A-stable, L-stable and numerically zero-stable formulas. The latter property refers to the zero-stability condition of a given formula when roundoff errors are introduced in its coefficients due to their representation in finite precision arithmetic. 
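For reference alongside the train_1088 entry, here is the serial predictor-corrector skeleton that such methods parallelise: a plain scalar PECE sketch with an assumed test problem (the paper's parallel, communication-minimizing formulas are not reproduced).

```python
import numpy as np

def f(t, y):
    return -y + np.sin(t)             # illustrative test problem

h = 0.01
t, y = 0.0, np.array([1.0])
y_next = y + h * f(t, y)              # startup: one Euler step for history
t, y_hist = h, [y, y_next]

for _ in range(999):
    y0, y1 = y_hist[-2], y_hist[-1]
    f0, f1 = f(t - h, y0), f(t, y1)
    y_pred = y1 + h * (1.5 * f1 - 0.5 * f0)   # P: Adams-Bashforth 2
    f_pred = f(t + h, y_pred)                 # E: evaluate at the prediction
    y_corr = y1 + 0.5 * h * (f1 + f_pred)     # C: trapezoidal corrector
    y_hist.append(y_corr)                     # (final E happens next loop)
    t += h

print(f"y(10) ~ {y_hist[-1][0]:.6f}")
```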
Some numerical experiments show the potential of this approach", "fulltext": "", "keywords": "roundoff errors;parallel implicit predictor corrector methods;finite precision arithmetic;numerically zero-stable formulas;initial value problems;zero-stability condition"} {"name": "train_1089", "title": "Accuracy and stability of splitting with Stabilizing Corrections", "abstract": "This paper contains a convergence analysis for the method of stabilizing corrections, which is an internally consistent splitting scheme for initial-boundary value problems. To obtain more accuracy and a better treatment of explicit terms, several extensions are considered and analyzed. The relevance of the theoretical results is tested for convection-diffusion-reaction equations", "fulltext": "", "keywords": "stabilizing corrections;convection-diffusion-reaction equations;splitting scheme;convergence analysis;stability;initial-boundary value problems"} {"name": "train_109", "title": "An entanglement measure based on the capacity of dense coding", "abstract": "An asymptotic entanglement measure for any bipartite state is derived in the light of the dense coding capacity optimized with respect to local quantum operations and classical communications. General properties and some examples with explicit forms of this entanglement measure are investigated", "fulltext": "", "keywords": "dense coding capacity;local quantum operations;optimization;classical communications;entanglement measure;asymptotic entanglement measure;bipartite states"} {"name": "train_1090", "title": "On the contractivity of implicit-explicit linear multistep methods", "abstract": "This paper is concerned with the class of implicit-explicit linear multistep methods for the numerical solution of initial value problems for ordinary differential equations which are composed of stiff and nonstiff parts. We study the contractivity of such methods, with regard to linear autonomous systems of ordinary differential equations and a (scaled) Euclidean norm. In addition, we derive a strong stability result based on the stability regions of these methods", "fulltext": "", "keywords": "linear autonomous systems;ordinary differential equations;numerical solution;stability result;euclidean norm;contractivity;implicit-explicit linear multistep methods;initial value problems"} {"name": "train_1091", "title": "Car-caravan snaking. 1. The influence of pintle pin friction", "abstract": "A brief review of knowledge of car-caravan snaking is carried out. Against the background described, a fairly detailed mathematical model of a contemporary car-trailer system is constructed and a baseline set of parameter values is given. In reduced form, the model is shown to give results in accordance with the literature. The properties of the baseline combination are explored using both linear and non-linear versions of the model. The influences of damping at the pintle joint and of several other design parameters on the stability of the linear system in the neighbourhood of the critical snaking speed are calculated and discussed. Coulomb friction damping at the pintle pin is then included and simulations are used to indicate the consequent amplitude-dependent behaviour. The friction damping, especially when its level has to be chosen by the user, is shown to give dangerous characteristics, despite having some capacity for stabilization of the snaking motions. It is concluded that pintle pin friction damping does not represent a satisfactory solution to the snaking problem. 
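The notion of a critical snaking speed in the train_1091 entry amounts to an eigenvalue crossing of a linearized model. Purely schematically (the 2x2 matrix below is a toy stand-in, not the paper's car-trailer model), the crossing can be located by bisection on the damping margin:

```python
import numpy as np

# Toy speed-dependent linearized model: damping fades as speed v grows.
def A(v):
    return np.array([[0.0, 1.0], [-40.0, -(3.0 - 0.05 * v)]])

def max_real(v):
    return np.max(np.linalg.eigvals(A(v)).real)

lo, hi = 10.0, 100.0                 # bracket: stable at lo, unstable at hi
for _ in range(60):                  # bisection on the damping margin
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if max_real(mid) < 0 else (lo, mid)
print(f"critical speed ~ {0.5 * (lo + hi):.2f}")   # -> 60.00 for this toy model
```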
The paper sets the scene for the development of an improved solution", "fulltext": "", "keywords": "mathematical model;pintle pin friction;amplitude-dependent behaviour;coulomb friction damping;car-caravan snaking;linear system;car-trailer system;critical snaking speed"} {"name": "train_1092", "title": "Ride quality evaluation of an actively-controlled stretcher for an ambulance", "abstract": "This study considers the subjective evaluation of ride quality during ambulance transportation using an actively-controlled stretcher (ACS). The ride quality of a conventional stretcher and an assistant driver's seat is also compared. Braking during ambulance transportation generates negative foot-to-head acceleration in patients and causes blood pressure to rise in the patient's head. The ACS absorbs the foot-to-head acceleration by changing the angle of the stretcher, thus reducing the blood pressure variation. However, the ride quality of the ACS should be investigated further because the movement of the ACS may cause motion sickness and nausea. Experiments on ambulance transportation, including rapid acceleration and deceleration, are performed to evaluate the effect of differences in posture of the transported subject on the ride quality; the semantic differential method and factor analysis are used in the investigations. Subjects are transported using a conventional stretcher with head forward, a conventional stretcher with head backward, the ACS, and an assistant driver's seat for comparison with transportation using a stretcher. Experimental results show that the ACS gives the most comfortable transportation when using a stretcher. Moreover, the reduction of the negative foot-to-head acceleration at frequencies below 0.2 Hz and the small variation of the foot-to-head acceleration result in more comfortable transportation. Conventional transportation with the head forward gives the worst ride, although the characteristics of the vibration of the conventional stretcher seem to be superior to those of the ACS", "fulltext": "", "keywords": "braking;motion sickness;factor analysis;head backward;assistant driver seat;conventional stretcher;ambulance;negative foot-to-head acceleration;rapid acceleration;stretcher angle;comfortable transportation;vibration;nausea;patient head;posture differences;head forward;subjective evaluation;rapid deceleration;blood pressure variation;actively-controlled stretcher;transported subject;ambulance transportation;ride quality evaluation;semantic differential method"} {"name": "train_1093", "title": "A fuzzy logic approach to accommodate thermal stress and improve the start-up", "abstract": "phase in combined cycle power plants Use of combined cycle power generation plant has increased dramatically over the last decade. A supervisory control approach based on a dynamic model is developed, which makes use of proportional-integral-derivative (PID), fuzzy logic and fuzzy PID schemes. The aim is to minimize the steam turbine plant start-up time, without violating maximum thermal stress limits. An existing start-up schedule provides the benchmark by which the performance of candidate controllers is assessed. 
The proposed control scheme achieves reduced start-up times while satisfying the maximum thermal stress restrictions", "fulltext": "", "keywords": "pid control;combined cycle power plants;start-up schedule;steam turbine plant start-up time minimization;fuzzy pid schemes;fuzzy logic approach;supervisory control;maximum thermal stress limits;dynamic model"} {"name": "train_1094", "title": "Efficient allocation of knowledge in distributed business structures", "abstract": "Accelerated business processes demand new concepts and realizations of information systems and knowledge databases. This paper presents the concept of the collaborative information space (CIS), which supplies the necessary tools to transform individual knowledge into collective useful information. The creation of 'information objects' in the CIS allows an efficient allocation of information in all business process steps at any time. Furthermore, the specific availability of heterogeneous, distributed data is realized by a Web-based user interface, which enables effective search by a multidimensionally hierarchical composition", "fulltext": "", "keywords": "multidimensionally hierarchical composition;knowledge databases;collaborative information space;distributed business structures;business process steps;information objects;web-based user interface;heterogeneous distributed data;information systems;accelerated business processes;interactive system;efficient knowledge allocation"} {"name": "train_1095", "title": "Development of a real-time monitoring system", "abstract": "This paper describes a pattern recognition (PR) technique, which uses learning vector quantization (LVQ). This method is adapted for practical application to solve problems in the area of condition monitoring and fault diagnosis where a number of fault signatures are involved. In these situations, the aim is health monitoring, including identification of deterioration of the healthy condition and identification of causes of the failure in real-time. For this reason a fault database is developed which contains the collected information about various states of operation of the system in the form of pattern vectors. The task of the real-time monitoring system is to correlate patterns of unknown faults with the known fault signatures in the fault database. This will determine the cause of failure and degree of deterioration of the system under test. The problem of fault diagnosis may involve a large number of patterns and large sampling time, which affects the learning stage of neural networks. The study here also aims to find a fast learning model of neural networks for instances when a high number of patterns and numerous processing elements are involved. It begins by searching for an appropriate solution. The study is extended to the enforcement learning models and considers LVQ as a network that emerged from the competitive learning model through enforcement training. 
Finally, tests show that the technique achieves a fault diagnosis accuracy of 92.3 per cent", "fulltext": "", "keywords": "fault diagnosis;coolant system;pattern recognition technique;enforcement training;learning vector quantization;large sampling time;competitive learning model;fault database;health monitoring;pattern correlation;fault diagnostic capability;pattern vectors;fast learning model;condition monitoring;real-time failure cause identification;fault signatures;lvq;cnc machine centre;real-time monitoring system;deterioration identification;neural networks"} {"name": "train_1096", "title": "Evaluating alternative manufacturing control strategies using a benchmark", "abstract": "system This paper describes an investigation of the effects of dynamic job routing and job sequencing decisions on the performance of a distributed control system and its adaptability to disturbances. This experimental work was carried out to compare the performance of alternative control strategies in various manufacturing environments and to investigate the relationship between the 'control' and 'controlled' systems. The experimental test-bed presented in this paper consists of an agent-based control system (implemented in C++) and a discrete-event simulation model. Using this test-bed, various control strategies were tested on a benchmark manufacturing system by varying production volumes (to model the production system with looser/tighter schedules) and disturbance frequencies. It was found that hybrid strategies that combine reactive agent mechanisms (and allocation strategies such as the contract net) with appropriate job sequencing heuristics provide the best performance, particularly when job congestion increases on a shop-floor", "fulltext": "", "keywords": "discrete-event simulation model;hybrid strategies;disturbance adaptability;production volumes;benchmark system;dynamic job routing;experimental test-bed;agent-based control system;benchmark manufacturing system;reactive agent mechanisms;job congestion;allocation strategies;contract net;alternative manufacturing control strategies;job sequencing decisions;distributed control system;disturbance frequencies"} {"name": "train_1097", "title": "A study on an automatic seam tracking system by using an electromagnetic sensor", "abstract": "for sheet metal arc welding of butt joints Many sensors, such as the vision sensor and the laser displacement sensor, have been developed to automate the arc welding process. However, these sensors have some problems due to the effects of arc light, fumes and spatter. An electromagnetic sensor, which utilizes the generation of an eddy current, was developed for detecting the weld line of a butt joint in which the root gap size was zero. An automatic seam tracking system designed for sheet metal arc welding was constructed with the sensor. Through experiments, it was revealed that the system had an excellent seam tracking accuracy of the order of ±0.2 mm", "fulltext": "", "keywords": "automatic seam tracking system;seam tracking accuracy;weld line detection;root gap size;sheet metal arc welding;electromagnetic sensor;butt joints;eddy current generation"} {"name": "train_1098", "title": "Instability phenomena in the gas-metal arc welding self-regulation process", "abstract": "Arc instability is a very important determinant of weld quality. The instability behaviour of the gas-metal arc welding (GMAW) process is characterized by strong oscillations in arc length and current. 
In the paper, a model of the GMAW process is developed using an exact arc voltage characteristic. This model is used to study the stability of the self-regulation process and to develop a simulation program that helps to understand the transient or dynamic nature of the GMAW process and relationships among current, electrode extension and contact tube-work distance. The process is shown to exhibit instabilities at both long electrode extension and normal extension. Results obtained from simulation runs of the model were also experimentally confirmed by the present author, as reported in this study. In order to explain the concept of the instability phenomena, the metal transfer mode and the arc voltage-current characteristic were examined. Based on this examination, the conclusion of this study is that their combined effects lead to the oscillations in arc current and length", "fulltext": "", "keywords": "gas-metal arc welding;exact arc voltage characteristic;weld quality;instability phenomena;metal transfer mode;self-regulation process;gmaw process;arc instability"} {"name": "train_1099", "title": "WebCAD: A computer aided design tool constrained with explicit 'design for", "abstract": "manufacturability' rules for computer numerical control milling A key element in the overall efficiency of a manufacturing enterprise is the compatibility between the features that have been created in a newly designed part, and the capabilities of the downstream manufacturing processes. With this in mind, a process-aware computer aided design (CAD) system called WebCAD has been developed. The system restricts the freedom of the designer in such a way that the designed parts can be manufactured on a three-axis computer numerical control milling machine. This paper discusses the vision of WebCAD and explains the rationale for its development in comparison with commercial CAD/CAM (computer aided design/manufacture) systems. The paper then goes on to describe the implementation issues that enforce the manufacturability rules. Finally, certain design tools are described that aid a user during the design process. Some examples are given of the parts designed and manufactured with WebCAD", "fulltext": "", "keywords": "internet-based cad/cam;webcad;design for manufacturability rules;computer aided design tool;computer numerical control milling;design tools;three-axis cnc milling machine;process-aware cad system;cad/cam systems;manufacturability rules;manufacturing enterprise efficiency"} {"name": "train_11", "title": "Does social capital determine innovation? To what extent?", "abstract": "This paper deals with two questions: Does social capital determine innovation in manufacturing firms? If so, to what extent? To deal with these questions, we review the literature on innovation in order to see how social capital came to be added to the other forms of capital as an explanatory variable of innovation. In doing so, we have been led to follow the dominant view of the literature on social capital and innovation which claims that social capital cannot be captured through a single indicator, but that it actually takes many different forms that must be accounted for. Therefore, to the traditional explanatory variables of innovation, we have added five forms of structural social capital (business network assets, information network assets, research network assets, participation assets, and relational assets) and one form of cognitive social capital (reciprocal trust). 
In a context where empirical investigations regarding the relations between social capital and innovation are still scanty, this paper contributes to the advancement of knowledge by providing new evidence regarding the impact and the extent of social capital on innovation at the two decision-making stages considered in this study", "fulltext": "", "keywords": "participation assets;two-stage decision-making process;research network assets;cognitive social capital;degree of radicalness;innovation;reciprocal trust;business network assets;structural social capital;information network assets;manufacturing firms;relational assets"} {"name": "train_110", "title": "A switching synchronization scheme for a class of chaotic systems", "abstract": "In this Letter, we propose an observer-based synchronization scheme for a class of chaotic systems. This class of systems is given by piecewise-linear dynamics. By using some properties of such systems, we give a procedure to construct the gain of the observer. We prove various stability results and comment on the robustness of the proposed scheme. We also present some simulation results", "fulltext": "", "keywords": "chaotic systems;robustness;switching synchronization scheme;state observers;piecewise-linear dynamics"} {"name": "train_1100", "title": "Evaluation of existing and new feature recognition algorithms. 2. Experimental", "abstract": "results For pt.1 see ibid., p.839-851. This is the second of two papers investigating the performance of general-purpose feature detection techniques. The first paper describes the development of a methodology to synthesize possible general feature detection face sets. Six algorithms resulting from the synthesis have been designed and implemented on a SUN Workstation in C++ using ACIS as the geometric modelling system. In this paper, extensive tests and comparative analysis are conducted on the feature detection algorithms, using carefully selected components from the public domain, mostly from the National Design Repository. The results show that the new and enhanced algorithms identify face sets that previously published algorithms cannot detect. The tests also show that each algorithm can detect, among other types, a certain type of feature that is unique to it. Hence, most of the algorithms discussed in this paper would have to be combined to obtain complete coverage", "fulltext": "", "keywords": "concavity;national design repository;face sets;general-purpose feature detection techniques;feature recognition algorithms;convex hull"} {"name": "train_1101", "title": "Evaluation of existing and new feature recognition algorithms. 1. Theory and", "abstract": "implementation This is the first of two papers evaluating the performance of general-purpose feature detection techniques for geometric models. In this paper, six different methods are described to identify sets of faces that bound depression and protrusion faces. Each algorithm has been implemented and tested on eight components from the National Design Repository. The algorithms studied include previously published general-purpose feature detection algorithms such as the single-face inner-loop and concavity techniques. Others are improvements to existing algorithms such as extensions of the two-dimensional convex hull method to handle curved faces as well as protrusions. 
Lastly, new algorithms based on the three-dimensional convex hull, minimum concave, visible and multiple-face inner-loop face sets are described", "fulltext": "", "keywords": "minimum concave;national design repository;cad/cam software;two-dimensional convex hull method;geometric models;sets of faces;geometric reasoning algorithms;visible inner-loop face sets;three-dimensional convex hull;curved faces;feature recognition algorithms;general-purpose feature detection techniques;single-face inner-loop technique;depression faces;protrusion faces;multiple-face inner-loop face sets;concavity technique"} {"name": "train_1102", "title": "Design and implementation of a reusable and extensible HL7 encoding/decoding", "abstract": "framework The Health Level Seven (HL7), an international standard for electronic data exchange in all health care environments, enables disparate computer applications to exchange key sets of clinical and administrative information. Above all, it defines the standard HL7 message formats prescribed by the standard encoding rules. In this paper, we propose a flexible, reusable, and extensible HL7 encoding and decoding framework using a message object model (MOM) and message definition repository (MDR). The MOM provides an abstract HL7 message form represented by a group of objects and their relationships. It reflects logical relationships among the standard HL7 message elements such as segments, fields, and components, while enforcing the key structural constraints imposed by the standard. Since the MOM completely eliminates the dependency of the HL7 encoder and decoder on platform-specific data formats, it makes it possible to build the encoder and decoder as reusable standalone software components, enabling the interconnection of arbitrary heterogeneous hospital information systems (HIS) with little effort. Moreover, the MDR, an external database of key definitions for HL7 messages, helps make the encoder and decoder as resilient as possible to future modifications of the standard HL7 message formats. It is also used by the encoder and decoder to perform a well-formedness check for their respective inputs (i.e., HL7 message objects expressed in the MOM and encoded HL7 message strings). Although we implemented a prototype version of the encoder and decoder using JAVA, they can be easily packaged and delivered as standalone components using the standard component frameworks", "fulltext": "", "keywords": "abstract message form;structural constraints;health level seven;corba;international standard;message definition repository;mom;clinical information;health care environments;logical relationships;standalone software components;electronic data exchange;message object model;java;extensible encoding/decoding framework;mdr;his;activex;administrative information;reusable framework;javabean;hl7 message formats;external database;key definitions;heterogeneous hospital information systems"} {"name": "train_1103", "title": "New age computing [autonomic computing]", "abstract": "Autonomic computing (AC), sometimes called self-managed computing, is the name chosen by IBM to describe the company's new initiative aimed at making computing more reliable and problem-free. It is a response to a growing realization that the problem today with computers is not that they need more speed or have too little memory, but that they crash all too often. 
This article reviews current initiatives being carried out in the AC field by the IT industry, followed by key challenges that need to be addressed in its development and implementation", "fulltext": "", "keywords": "computing reliability;new age computing;adaptive algorithms;it industry initiatives;problem-free computing;ac implementation;autonomic computing;ac requirements;ac development;computer crash;self-managed computing;open standards;ac;self-healing computing;ibm initiative;computer memory;computer speed"} {"name": "train_1104", "title": "A 3-stage pipelined architecture for multi-view images decoder", "abstract": "In this paper, we propose the architecture of a decoder which implements the multi-view images decoding algorithm. The hardware structure of multi-view image processing has not previously been studied. The proposed multi-view images decoder operates in a three-stage pipelined manner and extracts the depth of the pixels of the decoded image every clock cycle. The multi-view images decoder consists of three modules: the Node selector, which transfers the values of the nodes repeatedly; the Depth Extractor, which extracts the depth of each pixel from the four values of the nodes; and the Affine Transformer, which generates the projecting position on the image plane from the values of the pixels and the specified viewpoint. The proposed architecture is designed and simulated by the Max+PlusII design tool and the operating frequency is 30 MHz. The image can be constructed in real time by the decoder with the proposed architecture", "fulltext": "", "keywords": "three-stage pipelined architecture;depth extractor;operating frequency;30 mhz;hardware structure;viewpoint;node selector;pixel depth;max+plusii design tool;multi-view images decoder;affine transformer"} {"name": "train_1105", "title": "Fuzzy business [Halden Reactor Project]", "abstract": "The Halden Reactor Project has developed two systems to investigate how signal validation and thermal performance monitoring techniques can be improved. PEANO is an online calibration monitoring system that makes use of artificial intelligence techniques. The system has been tested in cooperation with EPRI and Edan Engineering, using real data from a US PWR plant. These tests showed that PEANO could reliably assess the performance of the process instrumentation under different plant conditions. Real cases of zero and span drifts were successfully detected by the system. TEMPO is a system for thermal performance monitoring and optimisation, which relies on plant-wide first principle models. The system has been installed on a Swedish BWR plant. Results obtained show an overall rms deviation from measured values of a few tenths of a percent, giving goodness-of-fit values on the order of 95%. The high accuracy demonstrated is a good basis for detecting possible faults and efficiency losses in steam turbine cycles", "fulltext": "", "keywords": "feedwater flow;calibration;thermal performance monitoring;steam generators;pwr;bwr;steam turbine cycles;peano;artificial intelligence;tempo;halden reactor project;fuzzy logic"} {"name": "train_1106", "title": "Virtual projects at Halden [Reactor Project]", "abstract": "The Halden man-machine systems (MMS) programme for 2002 is intended to address issues related to human factors, control room design, computer-based support system areas and system safety and reliability. 
The Halden MMS programme involves extensive experimental work in the human factors, control room design and computer-based support system areas. The work is based on experiments and demonstrations carried out in the experimental facility HAMMLAB. Pilot versions of several operator aids are adopted, integrated into the HAMMLAB simulators, and demonstrated in a full dynamic setting. The Halden virtual reality laboratory has recently become an integral and important part of the programme", "fulltext": "", "keywords": "computer-based support system;virtual reality;man-machine systems programme;human factors;control room design;safety;halden reactor project;reliability"} {"name": "train_1107", "title": "A knowledge-navigation system for dimensional metrology", "abstract": "Geometric dimensioning and tolerancing (GD&T) is a method to specify the dimensions and form of a part so that it will meet its design intent. GD&T is difficult to master for two main reasons. First, it is based on complex 3D geometric entities and relationships. Second, the geometry is associated with a large, diverse knowledge base of dimensional metrology with many interconnections. This paper describes an approach to create a dimensional metrology knowledge base that is organized around a set of key concepts and to represent those concepts as virtual objects that can be navigated with interactive, computer visualization techniques to access the associated knowledge. The approach can enable several applications. First is the application to convey the definition and meaning of GD&T over a broad range of tolerance types. Second is the application to provide a visualization of dimensional metrology knowledge within a control hierarchy of the inspection process. Third is the application to show the coverage of interoperability standards to enable industry to make decisions on standards development and harmonization efforts. A prototype system has been implemented to demonstrate the principles involved in the approach", "fulltext": "", "keywords": "knowledge navigation;geometric dimensioning;web;dimensional metrology;visualization;manufacturing training;vrml;inspection;tolerancing;interoperability standards"} {"name": "train_1108", "title": "The visible cement data set", "abstract": "With advances in x-ray microtomography, it is now possible to obtain three-dimensional representations of a material's microstructure with a voxel size of less than one micrometer. The Visible Cement Data Set represents a collection of 3-D data sets obtained using the European Synchrotron Radiation Facility in Grenoble, France in September 2000. Most of the images obtained are for hydrating portland cement pastes, with a few data sets representing hydrating Plaster of Paris and a common building brick. All of these data sets are being made available on the Visible Cement Data Set website at http://visiblecement.nist.gov. The website includes the raw 3-D datafiles, a description of the material imaged for each data set, example two-dimensional images and visualizations for each data set, and a collection of C language computer programs that will be of use in processing and analyzing the 3-D microstructural images. 
This paper provides the details of the experiments performed at the ESRF, the analysis procedures utilized in obtaining the data set files, and a few representative example images for each of the three materials investigated", "fulltext": "", "keywords": "x-ray microtomography;3d representations;plaster of paris;european synchrotron radiation facility;microstructural images;microstructure;cement hydration;two-dimensional images;building brick;voxel size;esrf;hydrating portland cement pastes"} {"name": "train_1109", "title": "The existence condition of gamma-acyclic database schemes with MVDs", "abstract": "constraints It is very important to use database technology for a large-scale system such as ERP and MIS. A good database design may improve the performance of the system. Some research shows that a gamma-acyclic database scheme has many good properties, e.g., each connected join expression is monotone, which helps to improve query performance of the database system. Thus what conditions are needed to generate a gamma-acyclic database scheme for a given relational scheme? In this paper, the sufficient and necessary condition of the existence of gamma-acyclic, join-lossless and dependencies-preserved database schemes meeting 4NF is given", "fulltext": "", "keywords": "query performance;connected join expression;mvds constraints;gamma-acyclic database schemes;large-scale system;existence condition;sufficient and necessary condition;database technology"} {"name": "train_111", "title": "Modification for synchronization of Rossler and Chen chaotic systems", "abstract": "Active control is an effective method for making two identical Rossler and Chen systems synchronize. However, this method works only for a certain class of chaotic systems with known parameters in both drive and response systems. Modification based on Lyapunov stability theory is proposed in order to overcome this limitation. An adaptive synchronization controller, which can make the states of two identical Rossler and Chen systems globally asymptotically synchronized in the presence of the system's unknown constant parameters, is derived. In particular, when some unknown parameters are positive, the controller can be made simpler; moreover, the controller is independent of those positive uncertain parameters. Finally, when the condition that arbitrary unknown parameters in the two systems are identical constants is removed, we demonstrate that it is possible to synchronize two chaotic systems. All results are proved using a well-known Lyapunov stability theorem. Numerical simulations are given to validate the proposed synchronization approach", "fulltext": "", "keywords": "adaptive synchronization controller;active control;global asymptotic synchronization;lyapunov stability theory;response systems;chen chaotic systems;rossler chaotic systems;synchronization"} {"name": "train_1110", "title": "A hybrid model for smoke simulation", "abstract": "A smoke simulation approach based on the integration of traditional particle systems and density functions is presented in this paper. By attaching a density function to each particle as its attribute, the diffusion of smoke can be described by the variation of particles' density functions, along with the effect on airflow by controlling particles' movement and fragmentation. In addition, a continuous density field for realistic rendering can be generated quickly through the look-up tables of particle's density functions. 
Compared with traditional particle systems, this approach can describe smoke diffusion and provide a continuous density field for realistic rendering with much less computation. A quick rendering scheme is also presented in this paper as a useful preview tool for tuning appropriate parameters in the smoke model", "fulltext": "", "keywords": "look-up tables;rendering;smoke simulation;hybrid model;continuous density field;density functions"} {"name": "train_1111", "title": "The contiguity in R/M", "abstract": "An r.e. degree c is contiguous if deg_wtt(A) = deg_wtt(B) for any r.e. sets A, B in c. In this paper, we generalize the notion of contiguity to the structure R/M, the upper semilattice of the r.e. degree set R modulo the cappable r.e. degree set M. An element [c] in R/M is contiguous if [deg_wtt(A)] = [deg_wtt(B)] for any r.e. sets A, B such that deg_T(A), deg_T(B) in [c]. It is proved in this paper that every nonzero element in R/M is not contiguous, i.e., for every element [c] in R/M, if [c] ≠ [0] then there exist at least two r.e. sets A, B such that deg_T(A), deg_T(B) in [c] and [deg_wtt(A)] ≠ [deg_wtt(B)]", "fulltext": "", "keywords": "upper semilattice;turing degree;recursion theory;contiguity;nonzero element;recursively enumerable set"} {"name": "train_1112", "title": "Blending parametric patches with subdivision surfaces", "abstract": "In this paper the problem of blending parametric surfaces using subdivision patches is discussed. A new approach, named removing-boundary, is presented to generate piecewise-smooth subdivision surfaces through discarding the outermost quadrilaterals of the open meshes derived by each subdivision step. Then the approach is employed both to blend parametric bicubic B-spline surfaces and to fill n-sided holes. It is easy to produce piecewise-smooth subdivision surfaces with both convex and concave corners on the boundary, and limit surfaces are guaranteed to be C^2 continuous on the boundaries except for a few singular points by the removing-boundary approach. Thus the blending method is very efficient and the generated blending surface is of good quality", "fulltext": "", "keywords": "subdivision surfaces;piecewise-smooth subdivision surfaces;piecewise smooth subdivision surfaces;parametric bicubic b-spline surfaces;quadrilaterals;parametric surfaces blending;subdivision patches"} {"name": "train_1113", "title": "Word spotting based on a posterior measure of keyword confidence", "abstract": "In this paper, an approach to keyword confidence estimation is developed that effectively combines acoustic layer scores and syllable-based statistical language model (LM) scores. An a posteriori (AP) confidence measure and its forward-backward calculating algorithm are deduced. A zero false alarm (ZFA) assumption is proposed for evaluating relative confidence measures in the word spotting task. In a word spotting experiment with a vocabulary of 240 keywords, the keyword accuracy under the AP measure is above 94%, which closely approaches its theoretical upper limit. In addition, a syllable lattice Hidden Markov Model (SLHMM) is formulated and a unified view of confidence estimation, word spotting, optimal path search, and N-best syllable re-scoring is presented. 
The proposed AP measure can be easily applied to various speech recognition systems as well", "fulltext": "", "keywords": "confidence estimation;optimal path search;acoustic layer scores;speech recognition systems;syllable-based statistical language model scores;a posterior measure;syllable lattice hidden markov model;n-best syllable re-scoring;relative confidence measures;forward-backward calculating algorithm;zero false alarm assumption;keyword confidence;word spotting task;word spotting;a posteriori confidence measure"} {"name": "train_1114", "title": "A new algebraic modelling approach to distributed problem-solving in MAS", "abstract": "This paper is devoted to a new algebraic modelling approach to distributed problem-solving in multi-agent systems (MAS), featuring a unified framework for describing and treating social behaviors, social dynamics and social intelligence. A conceptual architecture of algebraic modelling is presented. The algebraic modelling of typical social behaviors, social situation and social dynamics is discussed in the context of distributed problem-solving in MAS. The comparison and simulation on distributed task allocations and resource assignments in MAS show the advantages of the algebraic approach over other conventional methods", "fulltext": "", "keywords": "social behaviors;multi-agent systems;distributed task allocations;resource assignments;unified framework;social dynamics;social intelligence;algebraic modelling approach;distributed problem-solving"} {"name": "train_1115", "title": "Four-point wavelets and their applications", "abstract": "Multiresolution analysis (MRA) and wavelets provide useful and efficient tools for representing functions at multiple levels of detail. Wavelet representations have been used in a broad range of applications, including image compression, physical simulation and numerical analysis. In this paper, the authors construct a new class of wavelets, called four-point wavelets, based on an interpolatory four-point subdivision scheme. They are of local support, symmetric and stable. The analysis and synthesis algorithms have linear time complexity. Depending on different weight parameters w, the scaling functions and wavelets generated by the four-point subdivision scheme are of different degrees of smoothness. Therefore the user can select, among the classes of wavelets, those best suited to the application. The authors apply the four-point wavelets to signal compression. The results show that the four-point wavelets behave much better than B-spline wavelets in many situations", "fulltext": "", "keywords": "weight parameters;image compression;four-point wavelets;b-spline wavelets;linear time complexity;physical simulation;interpolatory four-point subdivision scheme;wavelet representations;numerical analysis;multiresolution analysis;scaling functions"} {"name": "train_1116", "title": "An interlingua-based Chinese-English MT system", "abstract": "Chinese-English machine translation is a significant and challenging problem in information processing. The paper presents an interlingua-based Chinese-English natural language translation system (ICENT). It introduces the realization mechanism of Chinese language analysis, which contains syntactic parsing and semantic analysis, and gives the design of the interlingua in detail. Experimental results and system evaluation are given. 
The results are satisfactory", "fulltext": "", "keywords": "syntactic parsing;semantic analyzing;interlingua-based chinese-english machine translation system;information processing;natural language translation system"} {"name": "train_1117", "title": "An attack-finding algorithm for security protocols", "abstract": "This paper proposes an automatic attack construction algorithm in order to find potential attacks on security protocols. It is based on a dynamic strand space model, which enhances the original strand space model by introducing active nodes on strands so as to characterize the dynamic procedure of protocol execution. With exact causal dependency relations between messages considered in the model, this algorithm can avoid state space explosion caused by asynchronous composition. In order to get a finite state space, a new method called strand-added on demand is exploited, which extends a bundle in an incremental manner without requiring explicit configuration of protocol execution parameters. A finer granularity model of term structure is also introduced, in which subterms are divided into check subterms and data subterms. Moreover, data subterms can be further classified based on the compatible data subterm relation to obtain automatically the finite set of valid acceptable terms for an honest principal. In this algorithm, terms core is designed to represent the intruder's knowledge compactly, and forward search technology is used to simulate attack patterns easily. Using this algorithm, a new attack on the Dolev-Yao protocol can be found, which is even more harmful because the secret is revealed before the session terminates", "fulltext": "", "keywords": "dolev-yao protocol;attack-finding algorithm;state space explosion;asynchronous composition;data subterms;dynamic strand space model;security protocols;strand-added on demand;strand space model;check subterms"} {"name": "train_1118", "title": "Run-time data-flow analysis", "abstract": "Parallelizing compilers have made great progress in recent years. However, there still remains a gap between the current ability of parallelizing compilers and their final goals. In order to achieve the maximum parallelism, run-time techniques were used in parallelizing compilers during the last few years. First, this paper presents a basic run-time privatization method. The definition of run-time dead code is given and its side effect is discussed. To eliminate the imprecision caused by the run-time dead code, backward data-flow information must be used. Proteus Test, which can use backward information at run time, is then presented to exploit more dynamic parallelism. Also, a variation of Proteus Test, the Advanced Proteus Test, is offered to achieve partial parallelism. Proteus Test was implemented on the parallelizing compiler AFT. At the end of this paper the program fpppp.f of Spec95fp Benchmark is taken as an example to show the effectiveness of Proteus Test", "fulltext": "", "keywords": "dynamic parallelism;run-time data flow analysis;run-time dead code;parallelizing compilers;proteus test;backward data-flow information;run-time privatization method"} {"name": "train_1119", "title": "A component-based software configuration management model and its supporting", "abstract": "system Software configuration management (SCM) is an important key technology in software development. Component-based software development (CBSD) is an emerging paradigm in software development. 
However, to apply CBSD effectively in real-world practice, supporting SCM in CBSD needs to be further investigated. In this paper, the objects that need to be managed in CBSD are analyzed and a component-based SCM model is presented. In this model, components, as the integral logical constituents in a system, are managed as the basic configuration items in SCM, and the relationships between/among components are defined and maintained. Based on this model, a configuration management system is implemented", "fulltext": "", "keywords": "integral logical constituents;software development;version control;component-based software configuration management model;software reuse"} {"name": "train_112", "title": "Revisiting Hardy's paradox: Counterfactual statements, real measurements,", "abstract": "entanglement and weak values Hardy's (1992) paradox is revisited. Usually the paradox is dismissed on grounds of counterfactuality, i.e., because the paradoxical effects appear only when one considers results of experiments which do not actually take place. We suggest a new set of measurements in connection with Hardy's scheme, and show that when they are actually performed, they yield strange and surprising outcomes. More generally, we claim that counterfactual paradoxes point to a deeper structure inherent to quantum mechanics", "fulltext": "", "keywords": "real measurements;gedanken-experiments;hardy paradox;entanglement;quantum mechanics;counterfactual statements;weak values;paradoxical effects"} {"name": "train_1120", "title": "An effective feedback control mechanism for DiffServ architecture", "abstract": "As a scalable QoS (Quality of Service) architecture, Diffserv (Differentiated Service) mainly consists of two components: traffic conditioning at the edge of the Diffserv domain and simple packet forwarding inside the DiffServ domain. DiffServ has many advantages such as flexibility, scalability and simplicity. But when providing AF (Assured Forwarding) services, DiffServ has some problems such as unfairness among aggregated flows or among micro-flows belonging to an aggregated flow. In this paper, a feedback mechanism for AF aggregated flows is proposed to solve this problem. Simulation results show that this mechanism does improve the performance of DiffServ. First, it can improve the fairness among aggregated flows and make DiffServ more friendly toward TCP (Transmission Control Protocol) flows. Second, it can decrease the buffer requirements at the congested router and thus obtain lower delay and packet loss rate. Third, it also keeps almost the same link utility as in normal DiffServ. Finally, it is simple and easy to implement", "fulltext": "", "keywords": "diffserv;traffic conditioning;fairness;qos architecture;qos;tcp;af;feedback control;feedback mechanism;packet forwarding"} {"name": "train_1121", "title": "Optimal bandwidth utilization of all-optical ring with a converter of degree 4", "abstract": "In many models of all-optical routing, a set of communication paths in a network is given, and a wavelength is to be assigned to each path so that paths sharing an edge receive different wavelengths. The goal is to assign as few wavelengths as possible, in order to use the optical bandwidth efficiently. If a node of a network contains a wavelength converter, any path that passes through this node may change its wavelength. Having converters at some of the nodes can reduce the number of wavelengths required for routing. 
This paper presents a wavelength converter of degree 4 and gives a routing algorithm which shows that any routing with load L can be realized with L wavelengths when a node of an all-optical ring hosts such a wavelength converter. It is also proved that 4 is the minimum degree of the converter to reach the full utilization of the available wavelengths if only one node of an all-optical ring hosts a converter", "fulltext": "", "keywords": "wavelength converter;all-optical network;all-optical ring;all-optical routing;wavelength translation;wavelength assignment;communication paths"} {"name": "train_1122", "title": "Hybrid broadcast for the video-on-demand service", "abstract": "Multicast offers an efficient means of distributing video contents/programs to multiple clients by batching their requests and then having them share a server's video stream. Batching customers' requests is either client-initiated or server-initiated. Most advanced client-initiated video multicasts are implemented by patching. Periodic broadcast, a typical server-initiated approach, can be entirety-based or segment-based. This paper focuses on the performance of the VoD service for popular videos. First, we analyze the limitation of conventional patching when the customer request rate is high. Then, by combining the advantages of each of the two broadcast schemes, we propose a hybrid broadcast scheme for popular videos, which not only lowers the service latency but also improves clients' interactivity by using an active buffering technique. This is shown to be a good compromise for both lowering service latency and improving the VCR-like interactivity", "fulltext": "", "keywords": "video-on-demand;hybrid broadcast scheme;conventional patching;quality-of-service;interactivity;scheduling;multicast;customer request rate"} {"name": "train_1123", "title": "A transactional asynchronous replication scheme for mobile database systems", "abstract": "In mobile database systems, mobility of users has a significant impact on data replication. As a result, the various replica control protocols that exist today in traditional distributed and multidatabase environments are no longer suitable. To solve this problem, a new mobile database replication scheme, the Transaction-Level Result-Set Propagation (TLRSP) model, is put forward in this paper. The conflict detection and resolution strategy based on TLRSP is discussed in detail, and the implementation algorithm is proposed. In order to compare the performance of the TLRSP model with that of other mobile replication schemes, we have developed a detailed simulation model. Experimental results show that the TLRSP model provides efficient support for replicated mobile database systems by reducing reprocessing overhead and maintaining database consistency", "fulltext": "", "keywords": "mobile database replication;mobile computing;mobile database;conflict reconciliation;multidatabase;data replication;distributed database;transaction;transaction-level result-set propagation"} {"name": "train_1124", "title": "Data extraction from the Web based on pre-defined schema", "abstract": "With the development of the Internet, the World Wide Web has become an invaluable information source for most organizations. However, most documents available from the Web are in HTML form which was originally designed for document formatting with little consideration of its contents. Effectively extracting data from such documents remains a nontrivial task. 
In this paper, we present a schema-guided approach to extracting data from HTML pages. Under the approach, the user defines a schema specifying what is to be extracted and provides sample mappings between the schema and the HTML page. The system will induce the mapping rules and generate a wrapper that takes the HTML page as input and produces the required data in the form of XML conforming to the user-defined schema. A prototype system implementing the approach has been developed. The preliminary experiments indicate that the proposed semi-automatic approach is not only easy to use but also able to produce a wrapper that extracts required data from input pages with high accuracy", "fulltext": "", "keywords": "internet;schema;information source;html;distributed database;queries;wrapper generation;data integration;data extraction"} {"name": "train_1125", "title": "Structure of weakly invertible semi-input-memory finite automata with delay 1", "abstract": "Semi-input-memory finite automata, a kind of finite automata introduced by the first author of this paper for studying error propagation, are a generalization of input memory finite automata by appending an autonomous finite automaton component. In this paper, we give a characterization of the structure of weakly invertible semi-input-memory finite automata with delay 1, in which the state graph of each autonomous finite automaton is a cycle. A result on mutual invertibility of finite automata recently obtained by the authors then leads to a characterization of the structure of feedforward inverse finite automata with delay 1", "fulltext": "", "keywords": "semi-input-memory;delay 1;weakly invertible;invertibility;feedforward inverse finite automata;semi-input-memory finite automata;state graph;finite automata"} {"name": "train_1126", "title": "A note on an axiomatization of the core of market games", "abstract": "As shown by Peleg (1993), the core of market games is characterized by nonemptiness, individual rationality, superadditivity, the weak reduced game property, the converse reduced game property, and weak symmetry. It was not known whether weak symmetry was logically independent. With the help of a certain transitive 4-person TU game, it is shown that weak symmetry is redundant in this result. Hence, the core of market games is axiomatized by the remaining five properties, if the universe of players contains at least four members", "fulltext": "", "keywords": "market game core axiomatization;nonempty games;converse reduced game property;weak reduced game property;weak symmetry;individual rationality;redundant;superadditive games;transitive 4-person tu game"} {"name": "train_1127", "title": "Repeated games with lack of information on one side: the dual differential", "abstract": "approach We introduce the dual differential game of a repeated game with lack of information on one side as the natural continuous time version of the dual game introduced by De Meyer (1996). A traditional way to study the value of differential games is through discrete time approximations. Here, we follow the opposite approach: We identify the limit value of a repeated game in discrete time as the value of a differential game. Namely, we use the recursive structure for the finitely repeated version of the dual game to construct a differential game for which the upper values of the uniform discretization satisfy precisely the same property. 
The value of the dual differential game exists and is the unique viscosity solution of a first-order derivative equation with a limit condition. We identify the solution by translating viscosity properties in the primal", "fulltext": "", "keywords": "repeated games;discrete time;repeated game;viscosity solution;discrete time approximations;limit condition;limit value;dual differential game"} {"name": "train_1128", "title": "The semi-algebraic theory of stochastic games", "abstract": "The asymptotic behavior of the min-max value of a finite-state zero-sum discounted stochastic game, as the discount rate approaches 0, has been studied in the past using the theory of real-closed fields. We use the theory of semi-algebraic sets and mappings to prove some asymptotic properties of the min-max value, which hold uniformly for all stochastic games in which the number of states and players' actions are predetermined to some fixed values. As a corollary, we prove a uniform polynomial convergence rate of the value of the N-stage game to the value of the nondiscount game, over a bounded set of payoffs", "fulltext": "", "keywords": "min-max value;semi-algebraic set theory;finite-state zero-sum discounted stochastic game;n-stage game;asymptotic behavior;discount rate;two-player zero-sum finite-state stochastic games;uniform polynomial convergence rate"} {"name": "train_1129", "title": "Computing stationary Nash equilibria of undiscounted single-controller", "abstract": "stochastic games Given a two-person, nonzero-sum stochastic game where the second player controls the transitions, we formulate a linear complementarity problem LCP(q, M) whose solution gives a Nash equilibrium pair of stationary strategies under the limiting average payoff criterion. The matrix M constructed is of the copositive class so that Lemke's algorithm will process it. We will also do the same for a special class of N-person stochastic games called polymatrix stochastic games", "fulltext": "", "keywords": "polymatrix stochastic games;stationary strategies;stationary nash equilibria;n-person stochastic games;undiscounted single-controller stochastic games;two-person nonzero-sum stochastic game;lemke algorithm;linear complementarity problem;copositive class matrix;limiting average payoff criterion"} {"name": "train_113", "title": "Quantum limit on computational time and speed", "abstract": "We investigate if physical laws can impose limits on computational time and speed of a quantum computer built from elementary particles. We show that the product of the speed and the running time of a quantum computer is limited by the type of fundamental interactions present inside the system. This will help us to decide as to what type of interaction should be allowed in building quantum computers in achieving the desired speed", "fulltext": "", "keywords": "quantum computer;fundamental interactions;computational speed;quantum limit;computational time"} {"name": "train_1130", "title": "Node-capacitated ring routing", "abstract": "We consider the node-capacitated routing problem in an undirected ring network along with its fractional relaxation, the node-capacitated multicommodity flow problem. For the feasibility problem, Farkas' lemma provides a characterization for general undirected graphs, asserting roughly that there exists such a flow if and only if the so-called distance inequality holds for every choice of distance functions arising from nonnegative node weights. For rings, this (straightforward) result will be improved in two ways. 
We prove that, independent of the integrality of node capacities, it suffices to require the distance inequality only for distances arising from (0-1-2)-valued node weights, a requirement that will be called the double-cut condition. Moreover, for integer-valued node capacities, the double-cut condition implies the existence of a half-integral multicommodity flow. In this case there is even an integer-valued multicommodity flow that violates each node capacity by at most one. Our approach gives rise to a combinatorial, strongly polynomial algorithm to compute either a violating double-cut or a node-capacitated multicommodity flow. A relation of the problem to its edge-capacitated counterpart will also be explained", "fulltext": "", "keywords": "double-cut condition;distance inequality;half-integral multicommodity flow;undirected graphs;node-capacitated multicommodity flow problem;violating double-cut;undirected ring network;integer-valued multicommodity flow;edge-cut criterion;fractional relaxation;node-capacitated routing problem;nonnegative node weights;feasibility problem;node capacity integrality;node-capacitated ring routing;distance functions;integer-valued node capacities;farkas lemma;combinatorial strongly polynomial algorithm"} {"name": "train_1131", "title": "A min-max theorem on feedback vertex sets", "abstract": "We establish a necessary and sufficient condition for the linear system {x : Hx ≥ e, x ≥ 0} associated with a bipartite tournament to be totally dual integral, where H is the cycle-vertex incidence matrix and e is the all-one vector. The consequence is a min-max relation on packing and covering cycles, together with strongly polynomial time algorithms for the feedback vertex set problem and the cycle packing problem on the corresponding bipartite tournaments. In addition, we show that the feedback vertex set problem on general bipartite tournaments is NP-complete and approximable within 3.5 based on the min-max theorem", "fulltext": "", "keywords": "combinatorial optimization problems;np-complete problem;min-max theorem;linear programming duality theory;feedback vertex sets;feedback vertex set problem;cycle-vertex incidence matrix;strongly polynomial time algorithms;all-one vector;totally dual integral system;linear system;cycle packing problem;graphs;bipartite tournament;necessary sufficient condition;covering cycles"} {"name": "train_1132", "title": "Semidefinite programming vs. LP relaxations for polynomial programming", "abstract": "We consider the global minimization of a multivariate polynomial on a semi-algebraic set Omega defined with polynomial inequalities. We then compare two hierarchies of relaxations, namely, LP relaxations based on products of the original constraints, in the spirit of the RLT procedure of Sherali and Adams (1990), and recent semidefinite programming (SDP) relaxations introduced by the author. 
The comparison is analyzed in light of recent results in real algebraic geometry on various representations of polynomials, positive on a compact semi-algebraic set", "fulltext": "", "keywords": "real algebraic geometry;multivariate polynomial;polynomial inequalities;global minimization;semidefinite programming relaxations;rlt procedure;semi-algebraic set;reformulation linearization technique;polynomial programming;constraint products;lp relaxations"} {"name": "train_1133", "title": "An analytic center cutting plane method for semidefinite feasibility problems", "abstract": "Semidefinite feasibility problems arise in many areas of operations research. The abstract form of these problems can be described as finding a point in a nonempty bounded convex body Gamma in the cone of symmetric positive semidefinite matrices. Assume that Gamma is defined by an oracle, which for any given m * m symmetric positive semidefinite matrix Gamma either confirms that Y epsilon Gamma or returns a cut, i.e., a symmetric matrix A such that Gamma is in the half-space {Y : A . Y 1000 cm/sup 3/), and in the posterior part (toward the pectoral muscle) of both small and large breasts. The application of a breast immobilization cast reduces the tissue shifts in large breasts. A reproducibility margin on the order of 5 mm will take the internal tissue shifts into account that occur between repeat setups. Conclusion: The results demonstrate a high reproducibility of mammary gland structure during repeat setups in a supine position", "fulltext": "", "keywords": "internal tissue shifts;mammary gland structure reproducibility;breast immobilization device;accurate tumor localization;localization methods;breast conserving therapy;repeat setups;reproducibility margins;contrast-enhanced magnetic resonance imaging;supine position"} {"name": "train_1142", "title": "Fast and accurate leaf verification for dynamic multileaf collimation using an", "abstract": "electronic portal imaging device A prerequisite for accurate dose delivery of IMRT profiles produced with dynamic multileaf collimation (DMLC) is highly accurate leaf positioning. In our institution, leaf verification for DMLC was initially done with film and ionization chamber. To overcome the limitations of these methods, a fast, accurate and two-dimensional method for daily leaf verification, using our CCD-camera based electronic portal imaging device (EPID), has been developed. This method is based on a flat field produced with a 0.5 cm wide sliding gap for each leaf pair. Deviations in gap widths are detected as deviations in gray scale value profiles derived from the EPID images, and not by directly assessing leaf positions in the images. Dedicated software was developed to reduce the noise level in the low signal images produced with the narrow gaps. The accuracy of this quality assurance procedure was tested by introducing known leaf position errors. It was shown that errors in leaf gap as small as 0.01-0.02 cm could be detected, which is certainly adequate to guarantee accurate dose delivery of DMLC treatments, even for strongly modulated beam profiles. 
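An aside on the train_1142 method above, which reads gap-width deviations off gray scale value profiles rather than estimating leaf positions directly: the numpy sketch below illustrates the idea on synthetic data; the band layout, tolerance and image model are assumptions, not the authors' implementation.

```python
import numpy as np

def leaf_gap_deviations(image, baseline, bands, rel_tol=0.02):
    """Flag leaf pairs whose mean gray value in a sliding-gap flat field
    deviates from a reference acquisition by more than rel_tol; gap-width
    errors show up as a signal excess or deficit, not as edge positions."""
    flagged = []
    for i, (r0, r1) in enumerate(bands):
        dev = image[r0:r1].mean() / baseline[r0:r1].mean() - 1.0
        if abs(dev) > rel_tol:
            flagged.append(i)
    return flagged

rng = np.random.default_rng(0)
baseline = 100.0 + rng.normal(0.0, 0.5, (80, 100))   # reference flat field
image = baseline.copy()
image[30:40] *= 1.04                                 # leaf pair 3: wider gap
bands = [(r, r + 10) for r in range(0, 80, 10)]      # one band per leaf pair
print(leaf_gap_deviations(image, baseline, bands))   # [3]
```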
Using this method, it was demonstrated that both short- and long-term reproducibility in leaf positioning were within 0.01 cm (1 sigma ) for all gantry angles, and that the effect of gravity was negligible", "fulltext": "", "keywords": "gray scale value profiles;electronic portal imaging device;intensity modulated radiation therapy profiles;two-dimensional method;ionization chamber;accurate leaf verification;leaf pair;ccd-camera based electronic portal imaging device;signal images;accurate dose delivery;leaf position errors;gantry angles;dynamic multileaf collimation;leaf positioning;electronic portal imaging device images;modulated beam profiles;gap widths;noise level;sliding gap"} {"name": "train_1143", "title": "A three-source model for the calculation of head scatter factors", "abstract": "Accurate determination of the head scatter factor S/sub c/ is an important issue, especially for intensity modulated radiation therapy, where the segmented fields are often very irregular and much smaller than the collimator jaw settings. In this work, we report an S/sub c/ calculation algorithm for symmetric, asymmetric, and irregular open fields shaped by the tertiary collimator (a multileaf collimator or blocks) at different source-to-chamber distances. The algorithm was based on a three-source model, in which the photon radiation to the point of calculation was treated as if it originated from three effective sources: one source for the primary photons from the target and two extra-focal photon sources for the scattered photons from the primary collimator and the flattening filter, respectively. The field mapping method proposed by Kim et al. [Phys. Med. Biol. 43, 1593-1604 (1998)] was extended to two extra-focal source planes and the scatter contributions were integrated over the projected areas (determined by the detector's eye view) in the three source planes considering the source intensity distributions. The algorithm was implemented using Microsoft Visual C/C++ in the MS Windows environment. The only input data required were head scatter factors for symmetric square fields, which are normally acquired during machine commissioning. A large number of different fields were used to evaluate the algorithm and the results were compared with measurements. We found that most of the calculated S/sub c/'s agreed with the measured values to within 0.4%. The algorithm can also be easily applied to deal with irregular fields shaped by a multileaf collimator that replaces the upper or lower collimator jaws", "fulltext": "", "keywords": "target;intensity modulated radiation therapy;extra-focal photon sources;blocks;irregular open fields;segmented fields;head scatter factors;fields;source-to-chamber distance;collimator jaw settings;input data;primary collimator;calculation algorithm;asymmetric;source intensity distributions;lower collimator jaws;flattening filter;multileaf collimator;extra-focal source planes;scattered photons;field mapping method;three-source model;upper collimator jaws;ms windows environment;machine commissioning;symmetric;photon radiation;tertiary collimator;symmetric square fields"} {"name": "train_1144", "title": "Simultaneous iterative reconstruction of emission and attenuation images in", "abstract": "positron emission tomography from emission data only For quantitative image reconstruction in positron emission tomography, attenuation correction is mandatory.
If no data are available for the calculation of the attenuation correction factors, one can try to determine them from the emission data alone. However, it is not clear if the information content is sufficient to yield an adequate attenuation correction together with a satisfactory activity distribution. Therefore, we determined the log likelihood distribution for a thorax phantom depending on the choice of attenuation and activity pixel values to measure the crosstalk between the two. In addition, an iterative image reconstruction (a one-dimensional Newton-type algorithm with a maximum likelihood estimator), which simultaneously reconstructs the images of the activity distribution and the attenuation coefficients, is used to demonstrate the problems and possibilities of such a reconstruction. As a result, we show that for a change of the log likelihood in the range of statistical noise, the associated change in the activity value of a structure is between 6% and 263%. In addition, we show that it is not possible to choose the best maximum on the basis of the log likelihood when a regularization is used, because the coupling between different structures mediated by the (smoothing) regularization prevents an adequate solution due to crosstalk. We conclude that taking into account the attenuation information in the emission data improves the performance of image reconstruction with respect to the bias of the activities; however, the reconstruction is still not quantitative", "fulltext": "", "keywords": "thorax phantom;positron emission tomography attenuation correction;attenuation correction factors;crosstalk;activity distribution;statistical noise;iterative image reconstruction;one-dimensional newton-type algorithm;maximum likelihood estimator;image reconstruction;attenuation information;attenuation coefficients;smoothing;activity pixel values;log likelihood distribution"} {"name": "train_1145", "title": "Mammogram synthesis using a 3D simulation. II. Evaluation of synthetic", "abstract": "mammogram texture We have evaluated a method for synthesizing mammograms by comparing the texture of clinical and synthetic mammograms. The synthesis algorithm is based upon simulations of breast tissue and the mammographic imaging process. Mammogram texture was synthesized by projections of simulated adipose tissue compartments. It was hypothesized that the synthetic and clinical texture have similar properties, assuming that the mammogram texture reflects the 3D tissue distribution. The size of the projected compartments was computed by mathematical morphology. The texture energy and fractal dimension were also computed and analyzed in terms of the distribution of texture features within four different tissue regions in clinical and synthetic mammograms. Comparison of the cumulative distributions of the mean features computed from 95 mammograms showed that the synthetic images simulate the mean features of the texture of clinical mammograms. Correlation of clinical and synthetic texture feature histograms, averaged over all images, showed that the synthetic images can simulate the range of features seen over a large group of mammograms.
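An aside on the train_1144 study above: the log likelihood surface it maps can be written down compactly. The sketch below, with toy system matrices rather than the thorax phantom, evaluates the Poisson log likelihood of emission data with the forward projection attenuated along each line of response, and illustrates the crosstalk: jointly rescaling activity and attenuation leaves log L nearly unchanged.

```python
import numpy as np

def poisson_loglik(y, P, lam, A, mu):
    """log L = sum_i (y_i log ybar_i - ybar_i), constant term dropped, with
    ybar = exp(-A @ mu) * (P @ lam): attenuated emission forward projection."""
    ybar = np.exp(-A @ mu) * (P @ lam)
    return float(np.sum(y * np.log(ybar) - ybar))

rng = np.random.default_rng(1)
P = rng.uniform(0.0, 1.0, (50, 10))    # toy system matrix: 50 LORs, 10 pixels
A = rng.uniform(0.0, 0.1, (50, 10))    # toy intersection lengths
lam = rng.uniform(5.0, 10.0, 10)       # activity pixel values
mu = rng.uniform(0.05, 0.1, 10)        # attenuation coefficients
y = rng.poisson(np.exp(-A @ mu) * (P @ lam))

# Crosstalk: scale activity up by 1.3 while raising attenuation just enough
# to cancel it on an average LOR; log L barely moves despite a 30% activity bias.
print(poisson_loglik(y, P, lam, A, mu))
print(poisson_loglik(y, P, 1.3 * lam, A, mu + np.log(1.3) / A.sum(1).mean()))
```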
The best agreement with clinical texture was achieved for simulated compartments with radii of 4-13.3 mm in predominantly adipose tissue regions, and radii of 2.7-5.33 and 1.3-2.7 mm in retroareolar and dense fibroglandular tissue regions, respectively", "fulltext": "", "keywords": "cumulative distributions;x-ray image acquisition;computationally compressed phantom;synthetic images;retroareolar tissue regions;synthetic mammogram texture;adipose tissue compartments;breast tissue simulation;dense fibroglandular tissue regions;3d simulation;mammogram synthesis;fractal dimension;3d tissue distribution;mathematical morphology"} {"name": "train_1146", "title": "Mammogram synthesis using a 3D simulation. I. Breast tissue model and image", "abstract": "acquisition simulation A method is proposed for generating synthetic mammograms based upon simulations of breast tissue and the mammographic imaging process. A computer breast model has been designed with a realistic distribution of large and medium scale tissue structures. Parameters controlling the size and placement of simulated structures (adipose compartments and ducts) provide a method for consistently modeling images of the same simulated breast with modified position or acquisition parameters. The mammographic imaging process is simulated using a compression model and a model of the X-ray image acquisition process. The compression model estimates breast deformation using tissue elasticity parameters found in the literature and clinical force values. The synthetic mammograms were generated by a mammogram acquisition model using a monoenergetic parallel beam approximation applied to the synthetically compressed breast phantom", "fulltext": "", "keywords": "computer breast model;force values;rectangular slice approximation;linear young's moduli;composite beam model;tissue elasticity parameters;image acquisition simulation;breast lesions;monoenergetic parallel beam approximation;3d simulation;mammogram synthesis;breast tissue model;ducts;adipose compartments;mammographic compression;x-ray image acquisition"} {"name": "train_1147", "title": "Angular disparity in ETACT scintimammography", "abstract": "Emission tuned aperture computed tomography (ETACT) has been previously shown to have the potential for the detection of small tumors (<1 cm) in scintimammography. However, the optimal approach to the application of ETACT in the clinic has yet to be determined. Therefore, we sought to determine the effect of the angular disparity between the ETACT projections on image quality through the use of a computer simulation. A small, spherical tumor of variable size (5, 7.5 or 10 mm) was placed at the center of a hemispherical breast (15 cm diameter). The tumor to nontumor ratio was either 5:1 or 10:1. The detector was modeled to be a gamma camera fitted with a 4-mm-diam pinhole collimator. The pinhole-to-detector and the pinhole-to-tumor distances were 25 and 15 cm, respectively. A ray tracing technique was used to generate three sets of projections (10 degrees , 15 degrees , and 20 degrees angular disparity). These data were blurred to a resolution consistent with the 4 mm pinhole. The TACT reconstruction method was used to reconstruct these three image sets. The tumor contrast and the axial spatial resolution were measured. Smaller angular disparity led to an improvement in image contrast but at a cost of degraded axial spatial resolution. The improvement in contrast is due to a slight improvement in the in-plane spatial resolution.
Since improved contrast should lead to better tumor detectability, smaller angular disparity should be used. However, the difference in contrast between 10 degrees and 15 degrees was very slight and therefore a reasonable clinical choice for angular disparity is 15 degrees", "fulltext": "", "keywords": "pinhole-to-tumor distances;computer simulation;image sets;hemispherical breast;angular disparity;spherical tumor;pinhole collimator;clinical choice;emission tuned aperture computed tomography scintimammography;ray tracing technique;pinhole-to-detector distances;tuned aperture computed tomography reconstruction method;image quality;axial spatial resolution;in-plane spatial resolution;gamma camera;small tumors"} {"name": "train_1148", "title": "Benchmarking of the Dose Planning Method (DPM) Monte Carlo code using electron", "abstract": "beams from a racetrack microtron A comprehensive set of measurements and calculations has been conducted to investigate the accuracy of the Dose Planning Method (DPM) Monte Carlo code for dose calculations from 10 and 50 MeV scanned electron beams produced from a racetrack microtron. Central axis depth dose measurements and a series of profile scans at various depths were acquired in a water phantom using a Scanditronix type RK ion chamber. Source spatial distributions for the Monte Carlo calculations were reconstructed from in-air ion chamber measurements carried out across the two-dimensional beam profile at 100 cm downstream from the source. The in-air spatial distributions were found to have full width at half maximum of 4.7 and 1.3 cm, at 100 cm from the source, for the 10 and 50 MeV beams, respectively. Energy spectra for the 10 and 50 MeV beams were determined by simulating the components of the microtron treatment head using the code MCNP4B. DPM calculations are on average within +or-2% agreement with measurement for all depth dose and profile comparisons conducted in this study. The accuracy of the DPM code illustrated in this work suggests that DPM may be used as a valuable tool for electron beam dose calculations", "fulltext": "", "keywords": "two-dimensional beam profile;water phantom;50 mev;profile scans;racetrack microtron;10 mev;dose planning method monte carlo code;mcnp4b;ion chamber;source spatial distributions;scoring parameters;electron beam dose calculations;electron transport;in-air spatial distributions;scanned electron beams;benchmarking;central axis depth dose measurements;radiotherapy treatment planning"} {"name": "train_1149", "title": "Deterministic calculations of photon spectra for clinical accelerator targets", "abstract": "A method is proposed to compute photon energy spectra produced in clinical electron accelerator targets, based on the deterministic solution of the Boltzmann equation for coupled electron-photon transport in one-dimensional (1-D) slab geometry. It is shown that the deterministic method gives similar results as Monte Carlo calculations over the angular range of interest for therapy applications. Relative energy spectra computed by deterministic and 3-D Monte Carlo methods, respectively, are compared for several realistic target materials and different electron beams, and are found to give similar photon energy distributions and mean energies. 
The deterministic calculations typically require 1-2 min of execution time on a Sun workstation, compared to 2-36 h for the Monte Carlo runs", "fulltext": "", "keywords": "angular range of interest;coupled electron-photon transport;therapy applications;deterministic calculations;one-dimensional slab geometry;integrodifferential equation;3-d monte carlo methods;pencil beam source representations;linear accelerator;boltzmann equation;relative energy spectra;therapy planning;clinical electron accelerator targets;photon energy spectra"} {"name": "train_115", "title": "Non-optimal universal quantum deleting machine", "abstract": "We verify the non-existence of some standard universal quantum deleting machine. Then a non-optimal universal quantum deleting machine is constructed and we emphasize the difficulty of improving its fidelity. In a way, our results complement the universal quantum cloning machine established by Buzek and Hillery (1996), and manifest some of their distinctions", "fulltext": "", "keywords": "universal quantum cloning machine;nuqdm;fidelity;nonoptimal universal quantum deleting machine"} {"name": "train_1150", "title": "Effect of multileaf collimator leaf width on physical dose distributions in the", "abstract": "treatment of CNS and head and neck neoplasms with intensity modulated radiation therapy The purpose of this work is to examine physical radiation dose differences between two multileaf collimator (MLC) leaf widths (5 and 10 mm) in the treatment of CNS and head and neck neoplasms with intensity modulated radiation therapy (IMRT). Three clinical patients with CNS tumors were planned with two different MLC leaf sizes, 5 and 10 mm, representing Varian-120 and Varian-80 Millennium multileaf collimators, respectively. Two sets of IMRT treatment plans were developed. The goal of the first set was radiation dose conformality in three dimensions. The goal for the second set was organ avoidance of a nearby critical structure while maintaining adequate coverage of the target volume. Treatment planning utilized the CadPlan/Helios system (Varian Medical Systems, Milpitas CA) for dynamic MLC treatment delivery. All beam parameters and optimization (cost function) parameters were identical for the 5 and 10 mm plans. For all cases the number of beams, gantry positions, and table positions were taken from clinically treated three-dimensional conformal radiotherapy plans. Conformality was measured by the ratio of the planning isodose volume to the target volume. Organ avoidance was measured by the volume of the critical structure receiving greater than 90% of the prescription dose (V/sub 90/). For three patients with squamous cell carcinoma of the head and neck (T2-T4 N0-N2c M0) 5 and 10 mm leaf widths were compared for parotid preservation utilizing nine coplanar equally spaced beams delivering a simultaneous integrated boost. Because modest differences in physical dose to the parotid were detected, an NTCP model based upon the clinical parameters of Eisbruch et al. was then used for comparisons. The conformality improved in all three CNS cases for the 5 mm plans compared to the 10 mm plans. For the organ avoidance plans, V/sub 90/ also improved in two of the three cases when the 5 mm leaf width was utilized for IMRT treatment delivery.
In the third case, both the 5 and 10 mm plans were able to spare the critical structure with none of the structure receiving more than 90% of the prescription dose, but in the moderate dose range, less dose was delivered to the critical structure with the 5 mm plan. For the head and neck cases both the 5*2.5 and 10*2.5 mm beamlet dMLC sliding window techniques spared the contralateral parotid gland while maintaining target volume coverage. The mean parotid dose was modestly lower with the smaller beamlet size (21.04 Gy vs 22.36 Gy). The resulting average NTCP values were 13.72% for 10 mm dMLC and 8.24% for 5 mm dMLC. In conclusion, a 5 mm leaf width results in an improvement in physical dose distribution over a 10 mm leaf width that may be clinically relevant in some cases. These differences may be most pronounced for single fraction radiosurgery or in cases where the tolerance of the sensitive organ is less than or close to the target volume prescription", "fulltext": "", "keywords": "physical dose distributions;conformal radiotherapy;optimization parameters;head and neck neoplasms;collimator rotation;acceptable tumor coverage;multileaf collimator leaf width;10 mm;intensity modulated radiation therapy;treatment planning;21.04 gy;cns neoplasms;5 mm;22.36 gy;cns tumors;beamlet size;parotid preservation;minimal toxicity;single fraction radiosurgery"} {"name": "train_1151", "title": "A method for geometrical verification of dynamic intensity modulated", "abstract": "radiotherapy using a scanning electronic portal imaging device In order to guarantee the safe delivery of dynamic intensity modulated radiotherapy (IMRT), verification of the leaf trajectories during the treatment is necessary. Our aim in this study is to develop a method for on-line verification of leaf trajectories using an electronic portal imaging device with scanning read-out, independent of the multileaf collimator. Examples of such scanning imagers are electronic portal imaging devices (EPIDs) based on liquid-filled ionization chambers and those based on amorphous silicon. Portal images were acquired continuously with a liquid-filled ionization chamber EPID during the delivery, together with the signal of treatment progress that is generated by the accelerator. For each portal image, the prescribed leaf and diaphragm positions were computed from the dynamic prescription and the progress information. Motion distortion effects of the leaves are corrected based on the treatment progress that is recorded for each image row. The aperture formed by the prescribed leaves and diaphragms is used as the reference field edge, while the actual field edge is found using a maximum-gradient edge detector. The errors in leaf and diaphragm position are found from the deviations between the reference field edge and the detected field edge. Earlier measurements of the dynamic EPID response show that the accuracy of the detected field edge is better than 1 mm. To ensure that the verification is independent of inaccuracies in the acquired progress signal, the signal was checked with diode measurements beforehand. The method was tested on three different dynamic prescriptions. Using the described method, we correctly reproduced the distorted field edges. Verifying a single portal image took 0.1 s on an 866 MHz personal computer. Two flaws in the control system of our experimental dynamic multileaf collimator were correctly revealed with our method.
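An aside on the NTCP comparison in train_1150 above: a mean-dose-driven Lyman model with commonly quoted Eisbruch parotid parameters (TD50 of about 28.4 Gy, m of about 0.18; assumptions here, since the abstract does not state the exact parameter set) yields roughly 7.5% and 11.9% for the two mean doses, in the same range as the reported DVH-based values of 8.24% and 13.72%.

```python
from math import erf, sqrt

def ntcp_mean_dose(d_mean, td50=28.4, m=0.18):
    """Lyman NTCP driven by mean organ dose: Phi((D - TD50) / (m TD50)).
    td50 and m are commonly quoted Eisbruch parotid values -- assumptions
    here, not necessarily the parameter set behind the figures above."""
    t = (d_mean - td50) / (m * td50)
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))

for d in (21.04, 22.36):  # mean parotid doses for the 5 mm and 10 mm plans
    print(f"mean dose {d:.2f} Gy -> NTCP ~ {ntcp_mean_dose(d):.1%}")
```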
First, the errors in leaf position increase with leaf speed, indicating a delay of approximately 0.8 s in the control system. Second, the accuracy of the leaves and diaphragms depends on the direction of motion. In conclusion, the described verification method is suitable for detailed verification of leaf trajectories during dynamic IMRT", "fulltext": "", "keywords": "scanning read-out;liquid-filled ionization chambers;geometrical verification method;reference field edge;leaf trajectories;treatment planning;safe delivery;diaphragm positions;control system;dynamic multileaf collimator;dynamic intensity modulated radiotherapy;distorted field edges;motion distortion effects;dose distributions;leaf positions;on-line verification"} {"name": "train_1152", "title": "Incorporating multi-leaf collimator leaf sequencing into iterative IMRT", "abstract": "optimization Intensity modulated radiation therapy (IMRT) treatment planning typically considers beam optimization and beam delivery as separate tasks. Following optimization, a multi-leaf collimator (MLC) or other beam delivery device is used to generate fluence patterns for patient treatment delivery. Due to limitations and characteristics of the MLC, the deliverable intensity distributions often differ from those produced by the optimizer, leading to differences between the delivered and the optimized doses. Objective function parameters are then adjusted empirically, and the plan is reoptimized to achieve a desired deliverable dose distribution. The resulting plan, though usually acceptable, may not be the best achievable. A method has been developed to incorporate the MLC restrictions into the optimization process. Our in-house IMRT system has been modified to include the calculation of the deliverable intensity into the optimizer. In this process, prior to dose calculation, the MLC leaf sequencer is used to convert intensities to dynamic MLC sequences, from which the deliverable intensities are then determined. All other optimization steps remain the same. To evaluate the effectiveness of deliverable-based optimization, 17 patient cases have been studied. Compared with standard optimization plus conversion to deliverable beams, deliverable-based optimization results show improved isodose coverage and a reduced dose to critical structures. Deliverable-based optimization results are close to the original nondeliverable optimization results, suggesting that IMRT can overcome the MLC limitations by adjusting individual beamlets. The use of deliverable-based optimization may reduce the need for empirical adjustment of objective function parameters and reoptimization of a plan to achieve desired results", "fulltext": "", "keywords": "fluence patterns;empirical adjustment;objective function parameters;intensity modulated radiation therapy;newton method;beam delivery;treatment planning;deliverable dose distribution;iterative optimization;beam optimization;optimized intensity;gradient-based search algorithm;dose-volume objective values;tumor dose;multileaf collimator leaf sequencing;beamlet ray intensities"} {"name": "train_1153", "title": "Direct aperture optimization: A turnkey solution for step-and-shoot IMRT", "abstract": "IMRT treatment plans for step-and-shoot delivery have traditionally been produced through the optimization of intensity distributions (or maps) for each beam angle. The optimization step is followed by the application of a leaf-sequencing algorithm that translates each intensity map into a set of deliverable aperture shapes. 
In this article, we introduce an automated planning system in which we bypass the traditional intensity optimization, and instead directly optimize the shapes and the weights of the apertures. We call this approach \"direct aperture optimization.\" This technique allows the user to specify the maximum number of apertures per beam direction, and hence provides significant control over the complexity of the treatment delivery. This is possible because the machine dependent delivery constraints imposed by the MLC are enforced within the aperture optimization algorithm rather than in a separate leaf-sequencing step. The leaf settings and the aperture intensities are optimized simultaneously using a simulated annealing algorithm. We have tested direct aperture optimization on a variety of patient cases using the EGS4/BEAM Monte Carlo package for our dose calculation engine. The results demonstrate that direct aperture optimization can produce highly conformal step-and-shoot treatment plans using only three to five apertures per beam direction. As compared with traditional optimization strategies, our studies demonstrate that direct aperture optimization can result in a significant reduction in both the number of beam segments and the number of monitor units. Direct aperture optimization therefore produces highly efficient treatment deliveries that maintain the full dosimetric benefits of IMRT", "fulltext": "", "keywords": "intensity distributions;leaf settings;monitor units;automated planning system;maps;beam segments;aperture intensities;optimization step;turnkey solution;leaf-sequencing algorithm;aperture optimization algorithm;direct aperture optimization;patient cases;aperture shapes;highly conformal step-and-shoot treatment plans;mlc;deliverable aperture shapes;aperture weights;egs4/beam monte carlo package;treatment delivery complexity;full dosimetric benefits;intensity map;imrt treatment plans;step-and-shoot imrt;machine dependent delivery constraints;dose calculation engine;highly efficient treatment deliveries;beam angle;simulated annealing algorithm"} {"name": "train_1154", "title": "The effect of voxel size on the accuracy of dose-volume histograms of prostate", "abstract": "/sup 125/I seed implants Cumulative dose-volume histograms (DVH) are crucial in evaluating the quality of radioactive seed prostate implants. When calculating DVHs, the choice of voxel size is a compromise between computational speed (larger voxels) and accuracy (smaller voxels). We quantified the effect of voxel size on the accuracy of DVHs using an in-house computer program. The program was validated by comparison with a hand-calculated DVH for a single 0.4-U iodine-125 model 6711 seed. We used the program to find the voxel size required to obtain accurate DVHs of five iodine-125 prostate implant patients at our institution. One-millimeter cubes were sufficient to obtain DVHs that are accurate within 5% up to 200% of the prescription dose. For the five patient plans, we obtained good agreement with the VariSeed (version 6.7, Varian, USA) treatment planning software's DVH algorithm by using voxels with a sup-inf dimension equal to the spacing between successive transverse seed implant planes (5 mm). The volume that receives at least 200% of the target dose, V/sub 200/, calculated by VariSeed was 30% to 43% larger than that calculated by our program with small voxels. 
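An aside on the voxel-size trade-off in train_1154 above: a cumulative DVH is simply the volume receiving at least each dose level, so the effect of voxel size is easy to demonstrate. The dose field and grids below are toy assumptions, not the VariSeed or in-house algorithms.

```python
import numpy as np

def cumulative_dvh(dose, voxel_volume, dose_levels):
    """Cumulative DVH: volume receiving at least each dose level (cm^3)."""
    dose = np.asarray(dose).ravel()
    return np.array([(dose >= d).sum() * voxel_volume for d in dose_levels])

def toy_dose(x, y, z):
    """Assumed inverse-square falloff (~145 at r = 0.5 cm), not a seed model."""
    return 36.25 / (x**2 + y**2 + z**2 + 1e-9)

for n, label in ((10, "5 mm voxels"), (50, "1 mm voxels")):
    c = (np.arange(n) + 0.5) * (5.0 / n) - 2.5    # voxel centers, 5 cm cube
    X, Y, Z = np.meshgrid(c, c, c, indexing="ij")
    v100, v200 = cumulative_dvh(toy_dose(X, Y, Z), (5.0 / n) ** 3, [145.0, 290.0])
    print(f"{label}: V100 = {v100:.2f} cm^3, V200 = {v200:.2f} cm^3")
```

On the toy field the coarse grid overestimates V100 by nearly a factor of two and misses V200 entirely, which is the qualitative effect the record quantifies.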
The single-seed DVH calculated by VariSeed fell below the hand calculation by up to 50% at low doses (<30 Gy), and above it by over 50% at high doses (>250 Gy)", "fulltext": "", "keywords": "in-house computer program;cumulative dose-volume histograms;computational speed;/sup 125/i model;single-seed dose-volume histograms;hand-calculated dose-volume histograms;prostate /sup 125/i seed implants;voxel size;radioactive seed prostate implants;i;/sup 125/i prostate implant patients;variseed treatment planning software's dose-volume histogram algorithm"} {"name": "train_1155", "title": "A leaf sequencing algorithm to enlarge treatment field length in IMRT", "abstract": "With MLC-based IMRT, the maximum usable field size is often smaller than the maximum field size for conventional treatments. This is due to the constraints of the overtravel distances of MLC leaves and/or jaws. Using a new leaf sequencing algorithm, the usable IMRT field length (perpendicular to the MLC motion) can be mostly made equal to the full length of the MLC field without violating the upper jaw overtravel limit. For any given intensity pattern, a criterion was proposed to assess whether an intensity pattern can be delivered without violation of the jaw position constraints. If the criterion is met, the new algorithm will consider the jaw position constraints during the segmentation for the step and shoot delivery method. The strategy employed by the algorithm is to connect the intensity elements outside the jaw overtravel limits with those inside the jaw overtravel limits. Several methods were used to establish these connections during segmentation by modifying a previously published algorithm (areal algorithm), including changing the intensity level, alternating the leaf-sequencing direction, or limiting the segment field size. The algorithm was tested with 1000 random intensity patterns with dimensions of 21*27 cm/sup 2/, 800 intensity patterns with higher intensity outside the jaw overtravel limit, and three different types of clinical treatment plans that were undeliverable using a segmentation method from a commercial treatment planning system. The new algorithm achieved a success rate of 100% with these test patterns. For the 1000 random patterns, the new algorithm yields a similar average number of segments of 36.9+or-2.9 in comparison to 36.6+or-1.3 when using the areal algorithm. For the 800 patterns with higher intensities outside the jaw overtravel limits, the new algorithm results in an increase of 25% in the average number of segments compared to the areal algorithm. However, the areal algorithm fails to create deliverable segments for 90% of these patterns.
Using a single isocenter, the new algorithm provides a solution to extend the usable IMRT field length from 21 to 27 cm for IMRT on a commercial linear accelerator using the step and shoot delivery method", "fulltext": "", "keywords": "intensity pattern;commercial treatment planning system;usable intensity modulated radiation therapy field length;multileaf collimators jaws;leaf-sequencing direction;overtravel distances;random patterns;jaw overtravel limits;leaf sequencing algorithm;upper jaw overtravel limit;step and shoot delivery method;intensity elements;segment field size;conformal radiation therapy;commercial linear accelerator;deliverable segments;single isocenter;random intensity patterns;jaw position constraints;segmentation method;multileaf collimators leaves;treatment field length;areal algorithm;multileaf-based collimators intensity modulated radiation therapy"} {"name": "train_1156", "title": "Favorable noise uniformity properties of Fourier-based interpolation and", "abstract": "reconstruction approaches in single-slice helical computed tomography Volumes reconstructed by standard methods from single-slice helical computed tomography (CT) data have been shown to have noise levels that are highly nonuniform relative to those in conventional CT. These noise nonuniformities can affect low-contrast object detectability and have also been identified as the cause of the zebra artifacts that plague maximum intensity projection (MIP) images of such volumes. While these spatially variant noise levels have their root in the peculiarities of the helical scan geometry, there is also a strong dependence on the interpolation and reconstruction algorithms employed. In this paper, we seek to develop image reconstruction strategies that eliminate or reduce, at its source, the nonuniformity of noise levels in helical CT relative to that in conventional CT. We pursue two approaches, independently and in concert. We argue, and verify, that Fourier-based longitudinal interpolation approaches lead to more uniform noise ratios than do the standard 360LI and 180LI approaches. We also demonstrate that a Fourier-based fan-to-parallel rebinning algorithm, used as an alternative to fanbeam filtered backprojection for slice reconstruction, also leads to more uniform noise ratios, even when making use of the 180LI and 360LI interpolation approaches", "fulltext": "", "keywords": "maximum intensity projection images;low-contrast object detectability;fourier-based fan-to-parallel rebinning algorithm;fourier-based interpolation;reconstruction approaches;helical scan geometry;single-slice helical computed tomography;noise uniformity properties;medical diagnostic imaging;more uniform noise ratios;zebra artifacts;conventional ct"} {"name": "train_1157", "title": "Portal dose image prediction for dosimetric treatment verification in", "abstract": "radiotherapy. II. An algorithm for wedged beams A method is presented for calculation of a two-dimensional function, T/sub wedge/(x,y), describing the transmission of a wedged photon beam through a patient. This is an extension of the method that we have published for open (nonwedged) fields [Med. Phys. 25, 830-840 (1998)]. Transmission functions for open fields are being used in our clinic for prediction of portal dose images (PDI, i.e., a dose distribution behind the patient in a plane normal to the beam axis), which are compared with PDIs measured with an electronic portal imaging device (EPID).
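An aside on the interpolation approaches compared in train_1156 above: a schematic of 360LI-style linear z-interpolation, under an assumed sampling model, also shows one root of the noise nonuniformity, since the weights vary with slice position.

```python
import numpy as np

def interp_360li(z_samples, values, z0):
    """Schematic 360LI: linear interpolation between the projections taken at
    the same gantry angle one rotation apart, bracketing the plane z0. The
    weights (hence the noise variance w**2 + (1 - w)**2 of the estimate)
    depend on z0 -- one source of the nonuniform noise discussed above."""
    i = np.searchsorted(z_samples, z0) - 1
    z1, z2 = z_samples[i], z_samples[i + 1]
    w = (z2 - z0) / (z2 - z1)
    return w * values[i] + (1.0 - w) * values[i + 1]

z = np.arange(0.0, 100.0, 10.0)   # table position at each rotation (mm)
p = np.cos(z / 15.0)              # toy projection value for one channel
print(interp_360li(z, p, 42.0), "exact:", np.cos(42.0 / 15.0))
```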
The calculations are based on the planning CT scan of the patient and on the irradiation geometry as determined in the treatment planning process. Input data for the developed algorithm for wedged beams are derived from the (already available) measured input data set for transmission prediction in open beams, which is extended with only a limited set of measurements in the wedged beam. The method has been tested for a PDI plane at 160 cm from the focus, in agreement with the applied focus-to-detector distance of our fluoroscopic EPIDs. For low and high energy photon beams (6 and 23 MV) good agreement (~1%) has been found between calculated and measured transmissions for a slab and a thorax phantom", "fulltext": "", "keywords": "virtual wedges;slab phantom;portal dose image prediction;irradiation geometry;thorax phantom;electronic portal imaging devices;dosimetric treatment verification;high energy photon beams;fluoroscopic ccd camera;transmission dosimetry;6 mv;wedged photon beam;radiotherapy;in vivo dosimetry;open beams;low energy photon beams;two-dimensional function;planning ct scan;cadplan planning system;wedged beams algorithm;23 mv;pencil beam algorithm"} {"name": "train_1158", "title": "From powder to perfect parts", "abstract": "GKN Sinter Metals has increased productivity and quality by automating the powder metal lines that produce its transmission parts", "fulltext": "", "keywords": "gkn sinter metals;automating;robotic systems;gentle transfer units;powder metal lines;conveyors"} {"name": "train_1159", "title": "Sigma -admissible families over linear orders", "abstract": "Admissible sets of the form HYP(M), where M is a recursively saturated system, are treated. We provide descriptions of subsets of M, which are Sigma /sub */-sets in HYP(M), and of families of subsets of M, which form Sigma -regular families in HYP(M), in terms of the concept of being fundamental couched in the article. Fundamental subsets and families are characterized for models of dense linear orderings", "fulltext": "", "keywords": "dense linear orderings;sigma -admissible families;linear orders;recursively saturated system;hyp(m);fundamental subsets"} {"name": "train_116", "title": "Frontier between separability and quantum entanglement in a many spin system", "abstract": "We discuss the critical point x/sub c/ separating the quantum entangled and separable states in two series of N spins S in the simple mixed state characterized by the matrix operator rho = x| phi >< phi |+((1-x)/D/sup N/)I/sub D/N, where x in [0, 1], D = 2S + 1, I/sub D/N is the D/sup N/ * D/sup N/ unity matrix and | phi > is a special entangled state. The cases x = 0 and x = 1 correspond respectively to fully random spins and to a fully entangled state. In the first of these series we consider special states | phi > invariant under charge conjugation, which generalize the N = 2 spin S = 1/2 Einstein-Podolsky-Rosen state, and in the second one we consider generalizations of the Werner (1989) density matrices. The evaluation of the critical point x/sub c/ was done through bounds coming from the partial transposition method of Peres (1996) and the conditional nonextensive entropy criterion. Our results suggest the conjecture that whenever the bounds coming from both methods coincide the result of x/sub c/ is the exact one.
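An aside on the partial transposition bound used in train_116 above: for the smallest case (N = 2, S = 1/2, | phi > a Bell state) the Peres criterion can be evaluated numerically, recovering the known critical point x/sub c/ = 1/3. This illustrates the method only; it is not the paper's computation for general N and S.

```python
import numpy as np

def min_eig_partial_transpose(x):
    """rho = x |phi+><phi+| + (1 - x) I/4 for two qubits (D = 2, N = 2);
    return the smallest eigenvalue of the partial transpose on qubit B."""
    phi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2.0)       # Bell state
    rho = x * np.outer(phi, phi) + (1.0 - x) * np.eye(4) / 4.0
    pt = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
    return np.linalg.eigvalsh(pt).min()

# Entangled iff the partial transpose has a negative eigenvalue (Peres);
# scanning x recovers the known critical point x_c = 1/3 for this case.
xs = np.linspace(0.0, 1.0, 10001)
first_negative = xs[np.argmax([min_eig_partial_transpose(x) < 0 for x in xs])]
print(first_negative)   # ~0.3334
```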
The results we present are relevant for the discussion of quantum computing, teleportation and cryptography", "fulltext": "", "keywords": "teleportation;critical point;entangled state;werner density matrices;many spin system;einstein-podolsky-rosen state;quantum entanglement;random spin;partial transposition method;quantum computing;cryptography;separable states;separability;charge conjugation;nonextensive entropy criterion;unity matrix;matrix operator"} {"name": "train_1160", "title": "Monoids all polygons over which are omega -stable: proof of the Mustafin-Poizat", "abstract": "conjecture A monoid S is called an omega -stabilizer (superstabilizer, or stabilizer) if every S-polygon has an omega -stable (superstable, or stable) theory. It is proved that every omega -stabilizer is a regular monoid. This confirms the Mustafin-Poizat conjecture and allows us to complete the description of omega -stabilizers", "fulltext": "", "keywords": "mustafin-poizat conjecture;regular monoid;s-polygon;monoids all polygons;omega -stabilizer"} {"name": "train_1161", "title": "Model theory for hereditarily finite superstructures", "abstract": "We study model-theoretic properties of hereditarily finite superstructures over models of not more than countable signatures. A question inquiring whether theories of hereditarily finite superstructures which have a unique (up to isomorphism) hereditarily finite superstructure can be described via definable functions is answered in the negative. Yet theories for such superstructures admit a description in terms of iterated families TF and SF. These are constructed using a definable union taken over countable ordinals in the subsets which are unions of finitely many complete subsets and of finite subsets, respectively. Simultaneously, we describe theories that share a unique (up to isomorphism) countable hereditarily finite superstructure", "fulltext": "", "keywords": "countable signatures;iterated families;countable hereditarily finite superstructure;finitely many complete subsets;definable union;model theory;model-theoretic properties"} {"name": "train_1162", "title": "Recognition of finite simple groups S/sub 4/(q) by their element orders", "abstract": "It is proved that among simple groups S/sub 4/(q) in the class of finite groups, only the groups S/sub 4/(3/sup n/), where n is an odd number greater than unity, are recognizable by a set of their element orders. It is also shown that simple groups U/sub 3/(9), /sup 3/D/sub 4/(2), G/sub 2/(4), S/sub 6/(3), F/sub 4/(2), and /sup 2/E/sub 6/(2) are recognizable, but L/sub 3/(3) is not", "fulltext": "", "keywords": "divisibility relation;element orders;finite simple groups recognition"} {"name": "train_1163", "title": "Evaluating the complexity of index sets for families of general recursive", "abstract": "functions in the arithmetic hierarchy The complexity of index sets of families of general recursive functions is evaluated in the Kleene-Mostowski arithmetic hierarchy", "fulltext": "", "keywords": "general recursive functions;kleene-mostowski arithmetic hierarchy;index sets complexity;arithmetic hierarchy"} {"name": "train_1164", "title": "Friedberg numberings of families of n-computably enumerable sets", "abstract": "We establish a number of results on numberings, in particular, on Friedberg numberings, of families of d.c.e. sets. First, it is proved that there exists a Friedberg numbering of the family of all d.c.e. sets. We also show that this result, patterned on Friedberg's famous theorem for the family of all c.e.
sets, holds for the family of all n-c.e. sets for any n > 2. Second, it is stated that there exists an infinite family of d.c.e. sets without a Friedberg numbering. Third, it is shown that there exists an infinite family of c.e. sets (treated as a family of d.c.e. sets) with a numbering which is unique up to equivalence. Fourth, it is proved that there exists a family of d.c.e. sets with a least numbering (under reducibility) which is Friedberg but is not the only numbering (modulo reducibility)", "fulltext": "", "keywords": "computability theory;infinite family;friedberg numberings;families of n-computably enumerable sets"} {"name": "train_1165", "title": "Recognizing groups G/sub 2/(3/sup n/) by their element orders", "abstract": "It is proved that a finite group that is isomorphic to a simple non-Abelian group G = G/sub 2/(3/sup n/) is, up to isomorphism, recognized by a set omega (G) of its element orders, that is, H approximately= G if omega (H) = omega (G) for some finite group H", "fulltext": "", "keywords": "element orders;isomorphism;finite group"} {"name": "train_1166", "title": "Embedding the outer automorphism group Out(F/sub n/) of a free group of rank n", "abstract": "in the group Out(F/sub m/) for m > n It is proved that for every n >or= 1, the group Out(F/sub n/) is embedded in the group Out(F/sub m/) with m = 1 + (n - 1)k/sup n/, where k is an arbitrary natural number coprime to n - 1", "fulltext": "", "keywords": "free group;outer automorphism group embedding;arbitrary natural number coprime"} {"name": "train_1167", "title": "A new approach to the d-MC problem", "abstract": "Many real-world systems are multi-state systems composed of multi-state components in which the reliability can be computed in terms of the lower bound points of level d, called d-Mincuts (d-MCs). Such systems (electric power, transportation, etc.) may be regarded as flow networks whose arcs have independent, discrete, limited and multi-valued random capacities. In this paper, all MCs are assumed to be known in advance, and the authors focused on how to verify each d-MC candidate before using d-MCs to calculate the network reliability. The proposed algorithm is more efficient than existing algorithms. The algorithm runs in O(p sigma mn) time, a significant improvement over the previous O(p sigma m/sup 2/) time bounds based on max-flow/min-cut, where p and sigma are the number of MCs and d-MC candidates, respectively. It is simple, intuitive and uses no complex data structures. An example is given to show how all d-MC candidates are found and verified by the proposed algorithm. Then the reliability of this example is computed", "fulltext": "", "keywords": "multi-state systems;time bounds;max-flow/min-cut;flow networks;multi-state components;d-mc problem;reliability computation;failure analysis algorithm;d-mincuts"} {"name": "train_1168", "title": "Computing failure probabilities. Applications to reliability analysis", "abstract": "The paper presents one method for calculating failure probabilities with applications to reliability analysis. The method is based on transforming the initial set of variables to an n-dimensional uniform random variable in the unit hypercube, together with the limit condition set and calculating the associated probability using a recursive method based on the Gauss-Legendre quadrature formulas to calculate the resulting multiple integrals.
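An aside on the train_1168 method above: after the transformation to uniforms on the unit hypercube, the failure probability becomes an integral that Gauss-Legendre quadrature evaluates once the nodes are mapped from [-1, 1] to [0, 1]. The limit state below is an assumed toy with a closed-form inner probability, not the paper's example.

```python
import numpy as np

def integral_01(f, n_nodes=64):
    """Gauss-Legendre quadrature of f over [0, 1] (nodes mapped from [-1, 1])."""
    t, w = np.polynomial.legendre.leggauss(n_nodes)
    return 0.5 * float(w @ f(0.5 * (t + 1.0)))

# Assumed toy limit state: failure = {U1 + U2 > 1.5} on the unit square.
# Conditioning on U1 = u gives the inner probability max(0, u - 0.5), so the
# failure probability is a one-dimensional integral; the exact value is 1/8.
p_f = integral_01(lambda u: np.maximum(0.0, u - 0.5))
print(p_f, "exact:", 0.125)
```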
An application example is used to illustrate the proposed method", "fulltext": "", "keywords": "multiple integrals calculation;gauss-legendre quadrature formulae;n-dimensional uniform random variable;recursive method;tail approximation;limit condition;reliability analysis applications;failure probabilities computation;unit hypercube"} {"name": "train_1169", "title": "An efficient algorithm for sequential generation of failure states in a network", "abstract": "with multi-mode components In this work, a new algorithm for the sequential generation of failure states in a network with multi-mode components is proposed. The algorithm presented in the paper transforms the state enumeration problem into a K-shortest paths problem. Taking advantage of the inherent efficiency of an algorithm for shortest paths enumeration and also of the characteristics of the reliability problem in which it will be used, an algorithm with lower complexity than the best algorithm in the literature for solving this problem was obtained. Computational results will be presented for comparing the efficiency of both algorithms in terms of CPU time and for problems of different sizes", "fulltext": "", "keywords": "multi-mode components reliability;sequential failure states generation algorithm;network failure states;cpu time;state enumeration problem;k-shortest paths problem"} {"name": "train_117", "title": "Multiresolution Markov models for signal and image processing", "abstract": "Reviews a significant component of the rich field of statistical multiresolution (MR) modeling and processing. These MR methods have found application and permeated the literature of a widely scattered set of disciplines, and one of our principal objectives is to present a single, coherent picture of this framework. A second goal is to describe how this topic fits into the even larger field of MR methods and concepts, in particular making ties to topics such as wavelets and multigrid methods. A third goal is to provide several alternate viewpoints for this body of work, as the methods and concepts we describe intersect with a number of other fields. The principal focus of our presentation is the class of MR Markov processes defined on pyramidally organized trees. The attractiveness of these models stems from both the very efficient algorithms they admit and their expressive power and broad applicability. We show how a variety of methods and models relate to this framework including models for self-similar and 1/f processes. We also illustrate how these methods have been used in practice", "fulltext": "", "keywords": "multigrid methods;1/f processes;multiresolution markov models;wavelets;statistical multiresolution modeling;pyramidally organized trees;self-similar processes"} {"name": "train_1170", "title": "Upper bound analysis of oblique cutting with nose radius tools", "abstract": "A generalized upper bound model for calculating the chip flow angle in oblique cutting using flat-faced nose radius tools is described. The projection of the uncut chip area on the rake face is divided into a number of elements parallel to an assumed chip flow direction. The length of each of these elements is used to find the length of the corresponding element on the shear surface using the ratio of the shear velocity to the chip velocity. The area of each element is found as the cross product of the length and its width along the cutting edge. Summing up the area of the elements along the shear surface, the total shear surface area is obtained.
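An aside on the train_1169 transformation above: recasting state enumeration as a K-shortest paths problem means generic path-enumeration machinery applies. The networkx sketch below is a stand-in for the specialized lower-complexity algorithm of the paper.

```python
from itertools import islice
import networkx as nx

# Toy network; each simple s-t path stands in for a system state, enumerated
# in increasing weight ("distance" from full operation) order.
G = nx.Graph()
G.add_weighted_edges_from([("s", "a", 1.0), ("a", "t", 2.0),
                           ("s", "b", 2.0), ("b", "t", 2.0),
                           ("a", "b", 1.0)])
for path in islice(nx.shortest_simple_paths(G, "s", "t", weight="weight"), 3):
    cost = sum(G[u][v]["weight"] for u, v in zip(path, path[1:]))
    print(path, cost)
```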
The friction area is calculated using the similarity between orthogonal and oblique cutting in the 'equivalent' plane that includes both the cutting velocity and chip velocity. The cutting power is obtained by summing the shear power and the friction power. The actual chip flow angle and chip velocity are obtained by minimizing the cutting power with respect to both these variables. The shape of the curved shear surface, the chip cross section and the cutting force obtained from this model are presented", "fulltext": "", "keywords": "friction area;uncut chip area;nose radius tools;chip velocity;chip flow angle;shear surface;upper bound analysis;oblique cutting;shear velocity"} {"name": "train_1171", "title": "Manufacturing data analysis of machine tool errors within a contemporary small", "abstract": "manufacturing enterprise The main focus of the paper is the determination of manufacturing errors within the contemporary smaller manufacturing enterprise sector. The manufacturing error diagnosis is achieved through the manufacturing data analysis of the results obtained from the inspection of the component on a co-ordinate measuring machine. This manufacturing data analysis activity adopts a feature-based approach and is conducted through the application of a forward chaining expert system, called the product data analysis distributed diagnostic expert system, which forms part of a larger prototype feedback system entitled the production data analysis framework. The paper introduces the manufacturing error categorisations that are associated with milling type operations, knowledge acquisition and representation, conceptual structure and operating procedure of the prototype manufacturing data analysis facility. The paper concludes with a brief evaluation of the logic employed through the simulation of manufacturing error scenarios. This prototype manufacturing data analysis expert system provides a valuable aid for the rapid diagnosis and elimination of manufacturing errors on a 3-axis vertical machining centre in an environment where operator expertise is limited", "fulltext": "", "keywords": "machine tool errors;milling type operations;fixturing errors;knowledge acquisition;feature-based approach;forward chaining expert system;conceptual structure;operating procedure;2 1/2d components;3-axis vertical machining centre;knowledge representation;manufacturing data analysis;co-ordinate measuring machine;inspection;product data analysis distributed diagnostic expert system;contemporary small manufacturing enterprise;programming errors"} {"name": "train_1172", "title": "Marble cutting with single point cutting tool and diamond segments", "abstract": "An investigation has been undertaken into frame sawing with diamond blades. The kinematic behaviour of the frame sawing process is discussed. Under different cutting conditions, cutting and indenting-cutting tests are carried out by single point cutting tools and single diamond segments. The results indicate that the depth of cut per diamond grit increases as the blades move forward. Only a few grits per segment can remove the material in the cutting process. When the direction of the stroke changes, the cutting forces do not decrease to zero because of the residual plastic deformation beneath the diamond grits.
The plastic deformation and fracture chipping of material are the dominant removal processes, which can be explained by the fracture theory of brittle material indentation", "fulltext": "", "keywords": "indenting-cutting tests;residual plastic deformation;cutting tests;diamond segments;fracture theory;kinematic behaviour;single point cutting tool;frame sawing;marble cutting;fracture chipping;removal processes;brittle material indentation"} {"name": "train_1173", "title": "A comprehensive chatter prediction model for face turning operation including", "abstract": "tool wear effect Presents a three-dimensional mechanistic frequency domain chatter model for face turning processes that can account for the effects of tool wear, including process damping. New formulations are presented to model the variation in process damping forces along nonlinear tool geometries such as the nose radius. The underlying dynamic force model simulates the variation in the chip cross-sectional area by accounting for the displacements in the axial and radial directions. The model can be used to determine stability boundaries under various cutting conditions and different states of flank wear. Experimental results for different amounts of wear are provided as a validation for the model", "fulltext": "", "keywords": "flank wear;tool wear effect;chatter prediction model;face turning operation;axial directions;process damping;three-dimensional mechanistic frequency domain chatter model;radial directions;stability boundaries"} {"name": "train_1174", "title": "Optimization of cutting conditions for single pass turning operations using a", "abstract": "deterministic approach An optimization analysis, strategy and CAM software for the selection of economic cutting conditions in single pass turning operations are presented using a deterministic approach. The optimization is based on criteria typified by the maximum production rate and includes a host of practical constraints. It is shown that the deterministic optimization approach involving mathematical analyses of constrained economic trends and graphical representation on the feed-speed domain provides a clearly defined strategy that not only yields a unique global optimum solution, but also software that is suitable for on-line CAM applications. A numerical study has verified the developed optimization strategies and software and has shown the economic benefits of using optimization", "fulltext": "", "keywords": "deterministic approach;economic cutting conditions;process planning;cam software;cutting conditions optimization;constrained economic trends;maximum production rate;mathematical analyses;single pass turning operations"} {"name": "train_1175", "title": "Prediction of tool and chip temperature in continuous and interrupted machining", "abstract": "A numerical model based on the finite difference method is presented to predict tool and chip temperature fields in continuous machining and time varying milling processes. Continuous or steady state machining operations like orthogonal cutting are studied by modeling the heat transfer between the tool and chip at the tool-rake face contact zone. The shear energy created in the primary zone, the friction energy produced at the rake face-chip contact zone and the heat balance between the moving chip and stationary tool are considered. The temperature distribution is solved using the finite difference method.
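An aside on the train_1175 model above: the core of such a solver is an explicit finite-difference update of the heat equation. A minimal one-dimensional FTCS sketch, with toy material and boundary data rather than the paper's tool-chip geometry:

```python
import numpy as np

# FTCS update T_i <- T_i + r (T_{i+1} - 2 T_i + T_{i-1}), r = alpha dt / dx^2;
# toy 1D rod with one end held at an assumed contact-zone temperature.
alpha, dx, dt = 1.2e-5, 1e-4, 2e-4   # diffusivity (m^2/s), grid (m), step (s)
r = alpha * dt / dx**2
assert r <= 0.5, "explicit scheme unstable"  # FTCS stability limit

T = np.full(101, 20.0)               # initial temperature (deg C)
T[0] = 600.0                         # heated boundary (contact zone)
for _ in range(5000):                # march 1 s of cutting time
    T[1:-1] += r * (T[2:] - 2.0 * T[1:-1] + T[:-2])
    T[-1] = T[-2]                    # insulated far end
print(T[:5].round(1))                # temperatures near the heat source
```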
The model is then extended to milling, where the cutting is interrupted and the chip thickness varies with time. The proposed model combines the steady-state temperature prediction in continuous machining with transient temperature evaluation in interrupted cutting operations, where the chip and the process change in a discontinuous manner. The mathematical models and simulation results are in satisfactory agreement with experimental temperature measurements reported in the literature", "fulltext": "", "keywords": "primary zone;heat transfer;finite difference method;tool temperature prediction;interrupted machining;thermal properties;tool-rake face contact zone;orthogonal cutting;continuous machining;temperature distribution;time varying milling processes;friction energy;shear energy;numerical model;chip temperature prediction;first-order dynamic system"} {"name": "train_1176", "title": "A summary of methods applied to tool condition monitoring in drilling", "abstract": "Presents a summary of the monitoring methods, signal analysis and diagnostic techniques for tool wear and failure monitoring in drilling that have been tested and reported in the literature. The paper covers only indirect monitoring methods such as force, vibration and current measurements. Signal analysis techniques cover all the methods that have been used with indirect measurements, including statistical parameters and the Fast Fourier and Wavelet Transforms. Only a limited number of automatic diagnostic tools have been developed for diagnosis of the condition of the tool in drilling. All of these rather diverse approaches that have been available are covered in this study. Only in a few of the papers have attempts been made to compare the chosen approach with other methods. Many of the papers present only one approach, and unfortunately the test material of the study is often limited, especially with regard to cutting process parameter variation and workpiece material", "fulltext": "", "keywords": "tool wear;force measurements;indirect monitoring methods;diagnostic techniques;failure monitoring;wavelet transform;current measurements;vibration measurements;automatic diagnostic tools;tool condition monitoring;fast fourier transform;drilling;signal analysis;monitoring methods;statistical parameters"} {"name": "train_1177", "title": "Comparative statistical analysis of hole taper and circularity in laser", "abstract": "percussion drilling Investigates the relationships and parameter interactions of six controllable variables and their effects on the hole taper and circularity in laser percussion drilling. Experiments have been conducted on stainless steel workpieces and a comparison was made between stainless steel and mild steel. The central composite design was employed to plan the experiments in order to obtain the required information with a reduced number of experiments. The process performance was evaluated. The ratio of minimum to maximum Feret's diameter was taken as the circularity characteristic of the hole. The models of the three process characteristics (hole taper, circularity and equivalent entrance diameter) were developed by the linear multiple regression technique. The significant coefficients were obtained by performing analysis of variance (ANOVA) at 1, 5 and 7% levels of significance. The final models were checked by complete residual analysis and finally were experimentally verified.
It was found that the pulse frequency had a significant effect on the hole entrance diameter and hole circularity in drilling stainless steel, unlike the drilling of mild steel, where the pulse frequency had no significant effect on the hole characteristics", "fulltext": "", "keywords": "pulse frequency;laser percussion drilling;analysis of variance;central composite design;anova;mild steel;equivalent entrance diameter;laser pulse width;ferets diameter;process performance;complete residual analysis;linear multiple regression technique;stepwise regression method;hole taper;least squares procedure;stainless steel workpieces;laser peak power;focal plane position;comparative statistical analysis;circularity;assist gas pressure"} {"name": "train_1178", "title": "Network-centric systems", "abstract": "The author describes a graduate-level course that addresses cutting-edge issues in network-centric systems while following a more traditional graduate seminar format", "fulltext": "", "keywords": "graduate level course;network-centric systems"} {"name": "train_1179", "title": "Evolution complexity of the elementary cellular automaton rule 18", "abstract": "Cellular automata are classes of mathematical systems characterized by discreteness (in space, time, and state values), determinism, and local interaction. Using symbolic dynamical theory, we coarse-grain the temporal evolution orbits of cellular automata. By means of formal languages and automata theory, we study the evolution complexity of the elementary cellular automaton with local rule number 18 and prove that its width 1-evolution language is regular, but for every n >or= 2 its width n-evolution language is not context free but context sensitive", "fulltext": "", "keywords": "elementary cellular automaton;formal languages;complexity;cellular automata;evolution complexity;symbolic dynamical theory"} {"name": "train_118", "title": "Sensorless control of induction motor drives", "abstract": "Controlled induction motor drives without mechanical speed sensors at the motor shaft have the attractions of low cost and high reliability. To replace the sensor, the information on the rotor speed is extracted from measured stator voltages and currents at the motor terminals. Vector-controlled drives require estimating the magnitude and spatial orientation of the fundamental magnetic flux waves in the stator or in the rotor. Open-loop estimators or closed-loop observers are used for this purpose. They differ with respect to accuracy, robustness, and sensitivity to model parameter variations. Dynamic performance and steady-state speed accuracy in the low-speed range can be achieved by exploiting parasitic effects of the machine. The overview in this paper uses signal flow graphs of complex space vector quantities to provide an insightful description of the systems used in sensorless control of induction motors", "fulltext": "", "keywords": "induction motor drives;model parameter variations;vector-controlled drives;closed-loop observers;stator voltages;space vector quantities;robustness;sensorless control;steady-state speed accuracy;signal flow graphs;stator currents;parasitic effects;open-loop estimators;spatial orientation;fundamental magnetic flux waves;magnitude;sensitivity;reliability"} {"name": "train_1180", "title": "Decomposition of additive cellular automata", "abstract": "Finite additive cellular automata with fixed and periodic boundary conditions are considered as endomorphisms over pattern spaces.
A characterization of the nilpotent and regular parts of these endomorphisms is given in terms of their minimal polynomials. Generalized eigenspace decomposition is determined and relevant cyclic subspaces are described in terms of symmetries. As an application, the lengths and frequencies of limit cycles in the transition diagram of the automaton are calculated", "fulltext": "", "keywords": "endomorphisms;cellular automata;finite cellular automaton;computational complexity;transition diagram"} {"name": "train_1181", "title": "Dynamic neighborhood structures in parallel evolution strategies", "abstract": "Parallelization is a straightforward approach to reducing the total computation time of evolutionary algorithms. Finding an appropriate communication network within spatially structured populations for improving convergence speed and convergence probability is a difficult task. A new method that uses a dynamic communication scheme in an evolution strategy will be compared with conventional static and dynamic approaches. The communication structure is based on a so-called diffusion model approach. The links between adjacent individuals are dynamically chosen according to deterministic or probabilistic rules. Due to self-organization effects, efficient and stable communication structures are established that perform robustly and quickly on a multimodal test function", "fulltext": "", "keywords": "multimodal test function;parallelizing;convergence speed;convergence probability;parallel evolutionary algorithms;evolutionary algorithms"} {"name": "train_1182", "title": "Optimization of the memory weighting function in stochastic functional", "abstract": "self-organized sorting performed by a team of autonomous mobile agents The activity of a team of autonomous mobile agents is modeled; the team is formed by identical "robot-like-ant" individuals that perform a random walk through an environment and are able to recognize and move different "objects". The emergent desired behavior is a distributed sorting and clustering based only on local information and a memory register that records the past objects encountered. An optimum weighting function for the memory registers is theoretically derived. The optimum time-dependent weighting function allows sorting and clustering of the randomly distributed objects in the shortest time. By maximizing the average speed of a texture feature (the contrast), we check the central assumption, the intermediate steady-states hypothesis, of our theoretical result. It is proved that the algorithm optimization based on maximum speed variation of the contrast feature gives relationships similar to the theoretically derived annealing law", "fulltext": "", "keywords": "autonomous mobile agents;algorithm optimization;random walk;sorting;memory weighting function;clustering"} {"name": "train_1183", "title": "Evolving robust asynchronous cellular automata for the density task", "abstract": "In this paper the evolution of three kinds of asynchronous cellular automata is studied for the density task. Results are compared with those obtained for synchronous automata, and the influence of various asynchronous update policies on the computational strategy is described. How synchronous and asynchronous cellular automata behave is investigated when the update policy is gradually changed, showing that asynchronous cellular automata are more adaptable.
The behavior of synchronous and asynchronous evolved automata is studied in the presence of two kinds of random noise, and it is shown that asynchronous cellular automata implicitly offer superior fault tolerance", "fulltext": "", "keywords": "asynchronous cellular automata;synchronous automata;discrete dynamical systems;random noise;cellular automata;fault tolerance"} {"name": "train_1184", "title": "Measuring return: revealing ROI", "abstract": "The most critical part of the return-on-investment odyssey is to develop metrics that matter to the business and to measure systems in terms of their ability to help achieve those business goals. Everything must flow from those key metrics. And don't forget to revisit those every now and then, too. Since all systems wind down over time, it's important to keep tabs on how well your automation investment is meeting the metrics established by your company. Manufacturers are clamoring for a tool to help quantify returns and analyze the results", "fulltext": "", "keywords": "key metrics;roi;automation investment;technology purchases;return-on-investment"} {"name": "train_1185", "title": "Trading exchanges: online marketplaces evolve", "abstract": "Looks at how trading exchanges are evolving rapidly to help manufacturers keep up with customer demand", "fulltext": "", "keywords": "supply chain management;manufacturers;online marketplaces;customer demand;xml standards;enterprise platforms;core software platform;private exchanges;integration technology;middleware;enterprise resource planning;trading exchanges;content management capabilities"} {"name": "train_1186", "title": "Implementing: it's all about processes", "abstract": "Looks at how the key to successful technology deployment can be found in a set of four basic disciplines", "fulltext": "", "keywords": "implementation;manufacturers;third-party integration;vendor-supplied hardware integration services;technology deployment;incremental targets;vendor-supplied software integration services"} {"name": "train_1187", "title": "Ethernet networks: getting down to business", "abstract": "While it seems pretty clear that Ethernet has won the battle for mindshare as the network of choice for the factory floor, there's still a war to be won in implementation as cutting-edge manufacturers begin to adopt the technology on a widespread basis", "fulltext": "", "keywords": "ethernet;supervisory level;cutting-edge manufacturers;factory floor"} {"name": "train_1188", "title": "It's time to buy", "abstract": "There is an upside to a down economy: over-zealous suppliers are willing to make deals that were unthinkable a few years ago. That's because vendors are experiencing the same money squeeze as manufacturers, which makes the year 2002 the perfect time to invest in new technology. The author states that when negotiating the deal, provisions for unexpected costs, an exit strategy, and even shared risk with the vendor should be on the table", "fulltext": "", "keywords": "exit strategy;money squeeze;bargaining power;vendor;buyers market;shared risk;negotiation;suppliers;unexpected costs"} {"name": "train_1189", "title": "CRM: approaching zenith", "abstract": "Looks at how manufacturers are starting to warm up to the concept of customer relationship management. CRM has matured into what is expected to be big business.
As CRM software evolves to its second, some say third, generation, it's likely to be more valuable to holdouts in manufacturing and other sectors", "fulltext": "", "keywords": "manufacturers;customer relationship management;manufacturing;crm"} {"name": "train_119", "title": "JPEG2000: standard for interactive imaging", "abstract": "JPEG2000 is the latest image compression standard to emerge from the Joint Photographic Experts Group (JPEG) working under the auspices of the International Standards Organization. Although the new standard does offer superior compression performance to JPEG, JPEG2000 provides a whole new way of interacting with compressed imagery in a scalable and interoperable fashion. This paper provides a tutorial-style review of the new standard, explaining the technology on which it is based and drawing comparisons with JPEG and other compression standards. The paper also describes new work, exploiting the capabilities of JPEG2000 in client-server systems for efficient interactive browsing of images over the Internet", "fulltext": "", "keywords": "image compression;client-server systems;international standards organization;joint photographic experts group;interoperable compression;review;interactive imaging;scalable compression;jpeg2000"} {"name": "train_1190", "title": "Buying into the relationship [business software]", "abstract": "Choosing the right software to improve business processes can have a huge impact on a company's efficiency and profitability. While it is sometimes hard to get beyond vendor hype about software features and functionality and know what to realistically expect, it is even more difficult to determine if the vendor is the right vendor to partner with. Thus, picking the right software is important, but companies have to realize that what they are really buying into is a relationship with the vendor", "fulltext": "", "keywords": "business software;software evaluation;management;functionality;vendor relationship"} {"name": "train_1191", "title": "On the monotonicity conservation in numerical solutions of the heat equation", "abstract": "In practice, it is important to choose numerical methods that mirror the characteristic properties of the described process beyond stability and convergence. The qualitative property investigated in this paper is the conservation of the monotonicity in space of the initial heat distribution. We prove some statements about the monotonicity conservation and total monotonicity of one-step vector-iterations. Then, applying these results, we consider the numerical solutions of the one-dimensional heat equation. Our main theorem formulates the necessary and sufficient condition of uniform monotonicity conservation. The sharpness of the conditions is demonstrated by numerical examples", "fulltext": "", "keywords": "monotonicity conservation;characteristic properties;qualitative property;necessary and sufficient condition;heat equation;one-step vector-iterations;numerical solutions"} {"name": "train_1192", "title": "Construction of two-sided bounds for initial-boundary value problems", "abstract": "This paper extends the bounding operator approach developed for boundary value problems to the case of initial-boundary value problems (IBVPs). Following the general principle of bounding operators, enclosing methods for the case of partial differential equations are discussed.
In particular, continuous discretization methods with an appropriate error-bound-controlled shift and monotone extensions of Rothe's method for parabolic problems are investigated", "fulltext": "", "keywords": "partial differential equations;two-sided bounds;bounding operators;parabolic problems;bounding operator approach;initial-boundary value problems"} {"name": "train_1193", "title": "Operator splitting and approximate factorization for taxis-diffusion-reaction", "abstract": "models In this paper we consider the numerical solution of 2D systems of certain types of taxis-diffusion-reaction equations from mathematical biology. By spatial discretization these PDE systems are approximated by systems of positive, nonlinear ODEs (Method of Lines). The aim of this paper is to examine the numerical integration of these ODE systems for low to moderate accuracy by means of splitting techniques. An important consideration is maintenance of positivity. We apply operator splitting and approximate matrix factorization using low-order explicit Runge-Kutta methods and linearly implicit Runge-Kutta-Rosenbrock methods. As a reference method, the general-purpose solver VODPK is applied", "fulltext": "", "keywords": "numerical integration;mathematical biology;approximate matrix factorization;taxis-diffusion-reaction models;spatial discretization;runge-kutta methods;numerical solution;nonlinear odes;approximate factorization;pde systems;operator splitting;linearly implicit runge-kutta-rosenbrock methods"} {"name": "train_1194", "title": "New methods for oscillatory problems based on classical codes", "abstract": "The numerical integration of differential equations with oscillatory solutions is a very common problem in many fields of the applied sciences. Some methods have been specially devised for this kind of problem. In most of them, the calculation of the coefficients needs more computational effort than in the classical codes because such coefficients depend on the step-size in a nontrivial manner. By contrast, in this work we present new algorithms specially designed for perturbed oscillators whose coefficients have a simple dependence on the step-size. The methods obtained are competitive when compared with classical and special codes", "fulltext": "", "keywords": "numerical integration;oscillatory problems;oscillatory solutions;perturbed oscillators;classical codes;differential equations"} {"name": "train_1195", "title": "Sharpening the estimate of the stability constant in the maximum-norm of the", "abstract": "Crank-Nicolson scheme for the one-dimensional heat equation This paper is concerned with the stability constant C/sub infinity / in the maximum-norm of the Crank-Nicolson scheme applied to the one-dimensional heat equation. A well-known result due to S.J. Serdyukova is that C/sub infinity / < 23. In the present paper, by using a sharp resolvent estimate for the discrete Laplacian together with the Cauchy formula, it is shown that 3 or= 3, with the single exception of P(9,3), whose crossing number is 2", "fulltext": "", "keywords": "crossing number;generalized petersen graph"} {"name": "train_1334", "title": "A shy invariant of graphs", "abstract": "Moving from a well-known result of P.L. Hammer et al. (1982), we introduce a new graph invariant, say lambda (G), referring to any graph G. It is a non-negative integer which is non-zero whenever G contains particular induced odd cycles or, equivalently, admits a particular minimum clique-partition.
We show that lambda (G) can be efficiently evaluated and that its determination allows one to reduce the hard problem of computing a minimum clique-cover of a graph to an identical problem of smaller size and special structure. Furthermore, one has alpha (G)