Schema: id (string, 1-4 chars); title (string, 13-200 chars); abstract (string, 67-2.93k chars); keyphrases (sequence); prmu (sequence)
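Each record pairs a `keyphrases` list with a parallel `prmu` list. A minimal sketch of working with such a record, assuming the usual convention for these labels (P = Present, R = Reordered, M = Mixed, U = Unseen with respect to the title/abstract text); the sample record is copied from entry 1061 below:

```python
def present_keyphrases(record):
    """Return only the keyphrases tagged 'P' (present verbatim in the text)."""
    # The two lists are parallel: one prmu tag per keyphrase.
    assert len(record["keyphrases"]) == len(record["prmu"])
    return [kp for kp, tag in zip(record["keyphrases"], record["prmu"]) if tag == "P"]

sample = {
    "id": "1061",
    "title": "Abacus, EFI and anti-virus",
    "keyphrases": ["anti-virus", "embedded systems",
                   "Extensible Firmware Interface standard"],
    "prmu": ["P", "P", "R"],
}
print(present_keyphrases(sample))  # -> ['anti-virus', 'embedded systems']
```

The same zip-and-filter pattern works for selecting any other label class (e.g. `tag == "U"` for keyphrases absent from the text).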
1435
Experimental investigations on monitoring and control of induction heating process for semi-solid alloys using the heating coil as sensor
A method of monitoring the state of metal alloys during induction heating and controlling the heating process using the heating coil itself as a sensor is proposed, and its usefulness and effectiveness were experimentally investigated using aluminium A357 billets for semi-solid metal (SSM) casting processes. The impedance of the coil containing the billet was continuously measured by the proposed method in the temperature range between room temperature and 700 degrees C. It was found that the reactance component of the impedance varied distinctively according to the billet state and could clearly monitor the deformation of the billet, while the resistance component increased with temperature, reflecting the variation of the resistivity of the billet, which correlates strongly with the solid/liquid fraction of the billet. The measured impedance is very sensitive to billet states such as temperature, deformation and solid/liquid fraction and could be used as a parameter to monitor and control the heating process for SSMs
[ "induction heating process", "reactance component", "billet state", "resistance component", "solid/liquid fraction", "process monitoring", "process control", "semisolid alloys", "semisolid metal casting", "heating coil sensor", "coil impedance", "billet deformation", "resistivity variation", "solenoid coil", "20 to 700 C" ]
[ "P", "P", "P", "P", "P", "R", "R", "M", "M", "R", "R", "R", "R", "M", "M" ]
1019
Optical setup and analysis of disk-type photopolymer high-density holographic storage
A relatively simple scheme for disk-type photopolymer high-density holographic storage based on angular and spatial multiplexing is described. The effects of the optical setup on the recording capacity and density are studied. Calculations and analysis show that this scheme is more effective than a scheme based on the spatioangular multiplexing for disk-type photopolymer high-density holographic storage, which has a limited medium thickness. Also an optimal beam recording angle exists to achieve maximum recording capacity and density
[ "optical setup", "disk-type photopolymer high-density holographic storage", "spatial multiplexing", "recording capacity", "limited medium thickness", "optimal beam recording angle", "maximum recording capacity", "angular multiplexing", "recording density", "spatio-angular multiplexing", "maximum density" ]
[ "P", "P", "P", "P", "P", "P", "P", "R", "R", "M", "R" ]
1024
Rational systems exhibit moderate risk aversion with respect to "gambles" on variable-resolution compression
In an embedded wavelet scheme for progressive transmission, a tree structure naturally defines the spatial relationship on the hierarchical pyramid. Transform coefficients over each tree correspond to a unique local spatial region of the original image, and they can be coded bit-plane by bit-plane through successive-approximation quantization. After receiving the approximate value of some coefficients, the decoder can obtain a reconstructed image. We show a rational system for progressive transmission that, in the absence of a priori knowledge about regions of interest, chooses at any truncation time among alternative trees for further transmission in such a way as to avoid certain forms of behavioral inconsistency. We prove that some rational transmission systems might exhibit aversion to risk involving "gambles" on tree-dependent quality of encoding while others favor taking such risks. Based on an acceptable predictor for visual distinctness from digital imagery, we demonstrate that, without any outside knowledge, risk-prone systems as well as those with strong risk aversion appear incapable of attaining the quality of reconstructions that can be achieved with moderate risk-averse behavior
[ "rational system", "moderate risk aversion", "gambles", "variable-resolution compression", "embedded wavelet scheme", "progressive transmission", "tree structure", "transform coefficients", "local spatial region", "successive-approximation quantization", "reconstructed image", "truncation time", "acceptable predictor", "visual distinctness", "digital imagery", "hierarchical pyramid spatial relationship", "behavioral inconsistency avoidance", "image encoding", "embedded coding", "rate control optimization", "decision problem", "progressive transmission utility functions", "information theoretic measure" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "R", "R", "R", "R", "U", "U", "M", "U" ]
1061
Abacus, EFI and anti-virus
The Extensible Firmware Interface (EFI) standard emerged as a logical step to provide flexibility and extensibility to boot sequence processes, enabling the complete abstraction of a system's BIOS interface from the system's hardware. In doing so, this provided the means of standardizing a boot-up sequence, extending device drivers and boot time applications' portability to non-PC-AT-based architectures, including embedded systems like Internet appliances, TV Internet set-top boxes and 64-bit Itanium platforms
[ "anti-virus", "embedded systems", "Extensible Firmware Interface standard" ]
[ "P", "P", "R" ]
1325
X-Rite: more than a graphic arts company
Although it is well known as a maker of densitometers and spectrophotometers, X-Rite is active in measuring light and shape in many industries. Among them are automobile finishes, paint and home improvements, scientific instruments, optical semiconductors and even cosmetic dentistry
[ "X-Rite", "graphic arts", "colour measurement" ]
[ "P", "P", "M" ]
1360
Automated post bonding inspection by using machine vision techniques
Inspection plays an important role in the semiconductor industry. In this paper, we focus on the inspection task after wire bonding in packaging. The purpose of wire bonding (W/B) is to connect the bond pads with the lead fingers. Two major types of defects are (1) bonding line missing and (2) bonding line breakage. The numbers of bonding lines and bonding balls are used as the features for defect classification. The proposed method consists of image preprocessing, orientation determination, connection detection, bonding line detection, bonding ball detection, and defect classification. The proposed method is simple and fast. The experimental results show that the proposed method can detect the defects effectively
[ "automated post bonding inspection", "machine vision", "semiconductor industry", "wire bonding", "packaging", "lead fingers", "bonding line missing", "bonding line breakage", "bonding balls", "defect classification", "image preprocessing", "orientation determination", "connection detection", "bonding line detection", "bonding ball detection", "IC manufacturing", "bond pad connection" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "U", "R" ]
735
IT at the heart of joined-up policing
Police IT is to shift from application-focused to component-based technology. The change of strategy, part of the Valiant Programme, will make information held by individual forces available on a national basis
[ "police IT", "Valiant Programme", "UK" ]
[ "P", "P", "U" ]
770
The 3D visibility complex
Visibility problems are central to many computer graphics applications. The most common examples include hidden-part removal for view computation, shadow boundaries, and mutual visibility of objects for lighting simulation. In this paper, we present a theoretical study of 3D visibility properties for scenes of smooth convex objects. We work in the space of light rays, or more precisely, of maximal free segments. We group segments that "see" the same object; this defines the 3D visibility complex. The boundaries of these groups of segments correspond to the visual events of the scene (limits of shadows, disappearance of an object when the viewpoint is moved, etc.). We provide a worst case analysis of the complexity of the visibility complex of 3D scenes, as well as a probabilistic study under a simple assumption for "normal" scenes. We extend the visibility complex to handle temporal visibility. We give an output-sensitive construction algorithm and present applications of our approach
[ "3D visibility complex", "computer graphics", "hidden-part removal", "view computation", "shadow boundaries", "lighting simulation", "smooth convex objects", "light rays", "maximal free segments", "visual events", "probabilistic study", "temporal visibility", "output-sensitive construction algorithm", "mutual object visibility", "worst case complexity analysis", "normal scenes" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "R", "R", "R" ]
1408
PKI: coming to an enterprise near you?
For many years public key infrastructure (PKI) deployments were the provenance of governments and large, security-conscious corporations and financial institutions. These organizations have the financial and human resources necessary to successfully manage the complexities of a public key system. Lately however, several forces have converged to encourage a broader base of enterprises to take a closer look at PKI. These forces are discussed. PKI vendors are now demonstrating to customers how they can make essential business applications faster and more efficient by moving them to the Internet without sacrificing security. Those applications usually include secure remote access, secure messaging, electronic document exchange, transaction validation, and network authentication. After a brief discussion of PKI basics the author reviews various products available on the market
[ "PKI", "public key infrastructure", "security", "PKI vendors", "secure remote access", "secure messaging", "electronic document exchange", "transaction validation", "network authentication", "business-critical applications", "e-commerce", "IPSec VPNs", "Baltimore Technologies", "Entrust", "GeoTrust", "RSA Security", "VeriSign" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "M", "U", "U", "U", "U", "U", "M", "U" ]
57
Speaker adaptive modeling by vocal tract normalization
This paper presents methods for speaker adaptive modeling using vocal tract normalization (VTN) along with experimental tests on three databases. We propose a new training method for VTN: By using single-density acoustic models per HMM state for selecting the scale factor of the frequency axis, we avoid the problem that a mixture-density tends to learn the scale factors of the training speakers and thus cannot be used for selecting the scale factor. We show that using single Gaussian densities for selecting the scale factor in training results in lower error rates than using mixture densities. For the recognition phase, we propose an improvement of the well-known two-pass strategy: by using a non-normalized acoustic model for the first recognition pass instead of a normalized model, lower error rates are obtained. In recognition tests, this method is compared with a fast variant of VTN. The two-pass strategy is an efficient method, but it is suboptimal because the scale factor and the word sequence are determined sequentially. We found that for telephone digit string recognition this suboptimality reduces the VTN gain in recognition performance by 30% relative. In summary, on the German spontaneous speech task Verbmobil, the WSJ task and the German telephone digit string corpus SieTill, the proposed methods for VTN reduce the error rates significantly
[ "speaker adaptive modeling", "vocal tract normalization", "databases", "training method", "single-density acoustic models", "HMM state", "training speakers", "single Gaussian densities", "training results", "two-pass strategy", "word sequence", "telephone digit string recognition", "German spontaneous speech task", "WSJ task", "German telephone digit string corpus", "SieTill", "frequency scale factor", "error rate reduction", "nonnormalized acoustic model", "Verlimobil" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "R", "M", "M", "U" ]
628
Rank tests of association for exchangeable paired data
We describe two rank tests of association for paired exchangeable data motivated by the study of lifespans in twins. The pooled sample is ranked. The nonparametric test of association is based on R⁺, the sum of the smaller within-pair ranks. A second measure L⁺ is the sum of within-pair rank products. Under the null hypothesis of within-pair independence, the two test statistics are approximately normally distributed. Expressions for the exact means and variances of R⁺ and L⁺ are given. We describe the power of these two statistics under a close alternative hypothesis to that of independence. Both the R⁺ and L⁺ tests indicate nonparametric statistical evidence of positive association of longevity in identical twins and a negligible relationship between the lifespans of fraternal twins listed in the Danish twin registry. The statistics are also applied to the analysis of a clinical trial studying the time to failure of ventilation tubes in children with bilateral otitis media
[ "rank tests", "association", "paired exchangeable data", "pooled sample", "nonparametric test", "within-pair ranks", "within-pair rank products", "null hypothesis", "within-pair independence", "test statistics", "exact means", "nonparametric statistical evidence", "longevity", "identical twins", "fraternal twins", "Danish twin registry", "clinical trial", "bilateral otitis media", "twin lifespans", "exact variances", "ventilation tube failure time" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "R", "R", "R" ]
12
National learning systems: a new approach on technological change in late industrializing economies and evidences from the cases of Brazil and South Korea
The paper has two intertwined parts. The first one is a proposal for a conceptual and theoretical framework to understand technical change in late industrializing economies. The second part develops a kind of empirical test of the usefulness of that new framework by means of a comparative study of the Brazilian and South Korean cases. All four types of macroevidences of the technical change processes of Brazil and Korea corroborate, directly or indirectly, the hypothesis of the existence of actual cases of national learning systems (NLSs) of passive and active nature, as is shown to be the case for Brazil and South Korea, respectively. The contrast between the two processes of technical change proves remarkable, despite both processes being essentially confined to learning. The concepts of passive and active NLSs show how useful they are to apprehend the diversity of those realities, and, consequently, to avoid, for instance, interpretations that misleadingly suppose (based on conventional economic theory) that those countries have a similar lack of technological dynamism
[ "national learning systems", "technological change", "late industrializing economies", "Brazil", "South Korea", "national innovation system" ]
[ "P", "P", "P", "P", "P", "M" ]
1238
Optimization of element-by-element FEM in HPF 1.1
In this study, Poisson's equation is numerically evaluated by the element-by-element (EBE) finite-element method in a parallel environment using HPF 1.1 (High-Performance Fortran). In order to achieve high parallel efficiency, the data structures have been altered to node-based data instead of mixtures of node- and element-based data, representing a node-based EBE finite-element scheme (nEBE). The parallel machine used in this study was the NEC SX-4, and experiments were performed on a single node having 32 processors sharing common memory. The HPF compiler used in the experiments is HPF/SX Rev 2.0 released in 1997 (unofficial), which supports HPF 1.1. Models containing approximately 200,000 and 1,500,000 degrees of freedom were analyzed in order to evaluate the method. The calculation time, parallel efficiency, and memory used were compared. The performance of HPF in the conjugate gradient solver for the large model, using the NEC SX-4 compiler option -noshrunk, was about 85% that of the message passing interface
[ "element-by-element", "HPF", "HPF compiler", "conjugate gradient solver", "message passing", "finite element method", "parallel programs", "Poisson equation" ]
[ "P", "P", "P", "P", "P", "M", "M", "R" ]
1181
Dynamic neighborhood structures in parallel evolution strategies
Parallelizing is a straightforward approach to reduce the total computation time of evolutionary algorithms. Finding an appropriate communication network within spatially structured populations for improving convergence speed and convergence probability is a difficult task. A new method that uses a dynamic communication scheme in an evolution strategy will be compared with conventional static and dynamic approaches. The communication structure is based on a so-called diffusion model approach. The links between adjacent individuals are dynamically chosen according to deterministic or probabilistic rules. Due to self-organization effects, efficient and stable communication structures are established that perform robustly and quickly on a multimodal test function
[ "parallelizing", "evolutionary algorithms", "convergence speed", "convergence probability", "multimodal test function", "parallel evolutionary algorithms" ]
[ "P", "P", "P", "P", "P", "R" ]
903
Modeling and simulation of an ABR flow control algorithm using a virtual source/virtual destination switch
The available bit rate (ABR) service class of asynchronous transfer mode networks uses a feedback control mechanism to adapt to varying link capacities. The virtual source/virtual destination (VS/VD) technique offers the possibility of segmenting the otherwise end-to-end ABR control loop into separate loops. The improved feedback delay and control of ABR traffic inside closed segments provide a better performance for ABR connections. This article presents the use of classical linear control theory to model and develop an ABR VS/VD flow control algorithm. Discrete event simulations are used to analyze the behavior of the algorithm with respect to transient behavior and correctness of the control model. Linear control theory offers the means to derive correct choices of parameters and to assess performance issues, such as stability of the system, during the design phase. The performance goals are high link utilization, fair bandwidth distribution, and robust operation in various environments, which are verified by discrete event simulations. The major contribution of this work is the use of analytic methods (linear control theory) to model and design an ABR flow control algorithm tailored for the special layout of a VS/VD switch, and the use of simulation techniques to verify the result
[ "modeling", "ABR flow control algorithm", "virtual source/virtual destination switch", "feedback control mechanism", "link capacities", "control loop", "feedback delay", "closed segments", "classical linear control theory", "discrete event simulations", "transient behavior", "control model", "performance issues", "stability", "high link utilization", "fair bandwidth distribution", "robust operation", "ATM networks", "available bit rate service class", "traffic control" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "M", "R", "R" ]
140
A high-resolution high-frequency monolithic top-shooting microinjector free of satellite drops - part II: fabrication, implementation, and characterization
For pt. I, see ibid., vol. 11, no. 5, p. 427-36 (2002). Describes the fabrication, implementation and characterization of a thermal driven microinjector, featuring a bubble check valve and monolithic fabrication. Microfabrication of this microinjector is based on bulk/surface-combined micromachining of the silicon wafer, free of the bonding process that is commonly used in the fabrication of commercial printing heads, so that even solvents and fuels can be ejected. Droplet ejection sequences of two microinjectors have been studied along with a commercial inkjet printhead for comparison. The droplet ejection of our microinjector with a 10 µm diameter nozzle has been characterized at a frequency over 35 kHz, at least 3 times higher than those of commercial counterparts. The droplet volume from this device is smaller than 1 pl, 10 times smaller than those of commercial inkjets employed in the consumer market at the time of testing. Visualization results have verified that our design, although far from being optimized, operates at frequencies several times higher than those of commercial products and reduces the crosstalk among neighboring chambers
[ "monolithic top-shooting microinjector", "satellite drops", "thermal driven microinjector", "bubble check valve", "bulk/surface-combined micromachining", "bonding process", "inkjet printhead", "nozzle", "35 kHz", "droplet volume", "consumer market", "crosstalk", "10 micron" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "M" ]
591
Approximation theory of fuzzy systems based upon genuine many-valued implications - MIMO cases
It is constructively proved that the multi-input-multi-output fuzzy systems based upon genuine many-valued implications are universal approximators (they are called Boolean type fuzzy systems in this paper). The general approach to construct such fuzzy systems is given, that is, through the partition of the output region (by the given accuracy). Two examples are provided to demonstrate the way in which fuzzy systems are designed to approximate given functions with a given required approximation accuracy
[ "fuzzy systems", "many-valued implication", "multi-input-multi-output fuzzy systems", "universal approximator", "Boolean type fuzzy systems" ]
[ "P", "P", "P", "P", "P" ]
946
Entanglement measures with asymptotic weak-monotonicity as lower (upper) bound for the entanglement of cost (distillation)
We propose entanglement measures with asymptotic weak-monotonicity. We show that a normalized form of entanglement measures with the asymptotic weak-monotonicity are lower (upper) bound for the entanglement of cost (distillation)
[ "entanglement measures", "asymptotic weak-monotonicity", "entanglement of cost", "distillation" ]
[ "P", "P", "P", "P" ]
105
Greenberger-Horne-Zeilinger paradoxes for many qubits
We construct Greenberger-Horne-Zeilinger (GHZ) contradictions for three or more parties sharing an entangled state, the dimension of each subsystem being an even integer d. The simplest example that goes beyond the standard GHZ paradox (three qubits) involves five ququats (d = 4). We then examine the criteria that a GHZ paradox must satisfy in order to be genuinely M partite and d dimensional
[ "Greenberger-Horne-Zeilinger paradoxes", "many qubits", "entangled state", "GHZ paradox", "GHZ contradictions" ]
[ "P", "P", "P", "P", "R" ]
1139
Development and evaluation of a case-based reasoning classifier for prediction of breast biopsy outcome with BI-RADS™ lexicon
Approximately 70-85% of breast biopsies are performed on benign lesions. To reduce this high number of biopsies performed on benign lesions, a case-based reasoning (CBR) classifier was developed to predict biopsy results from BI-RADS™ findings. We used 1433 (931 benign) biopsy-proven mammographic cases. CBR similarity was defined using either the Hamming or Euclidean distance measure over case features. Ten features represented each case: calcification distribution, calcification morphology, calcification number, mass margin, mass shape, mass density, mass size, associated findings, special cases, and age. Performance was evaluated using Round Robin sampling, Receiver Operating Characteristic (ROC) analysis, and bootstrap. To determine the most influential features for the CBR, an exhaustive feature search was performed over all possible feature combinations (1022) and similarity thresholds. Influential features were defined as the most frequently occurring features in the feature subsets with the highest partial ROC areas (0.90AUC). For CBR with Hamming distance, the most influential features were found to be mass margin, calcification morphology, age, calcification distribution, calcification number, and mass shape, resulting in a 0.90AUC of 0.33. At 95% sensitivity, the Hamming CBR would spare from biopsy 34% of the benign lesions. At 98% sensitivity, the Hamming CBR would spare 27% of benign lesions. For the CBR with Euclidean distance, the most influential feature subset consisted of mass margin, calcification morphology, age, mass density, and associated findings, resulting in a 0.90AUC of 0.37. At 95% sensitivity, the Euclidean CBR would spare from biopsy 41% of benign lesions. At 98% sensitivity, the Euclidean CBR would spare 27% of benign lesions. 
The profile of cases spared by both distance measures at 98% sensitivity indicates that the CBR is a potentially useful diagnostic tool for the classification of mammographic lesions, by recommending short-term follow-up for likely benign lesions that is in agreement with final biopsy results and mammographer's intuition
[ "case-based reasoning classifier", "breast biopsy outcome", "benign lesions", "biopsy-proven mammographic cases", "CBR similarity", "Euclidean distance measure", "calcification distribution", "calcification morphology", "calcification number", "mass margin", "mass shape", "mass density", "mass size", "associated findings", "special cases", "age", "Round Robin sampling", "bootstrap", "influential features", "feature combinations", "similarity thresholds", "feature subsets", "highest partial ROC areas", "diagnostic tool", "short-term follow-up", "BI-RADS lexicon", "Hamming distance measure", "Receiver Operating Characteristic analysis", "mammographic lesion classification" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "R", "R", "R", "R" ]
1280
Products and polymorphic subtypes
This paper is devoted to a comprehensive study of polymorphic subtypes with products. We first present a sound and complete Hilbert style axiomatization of the relation of being a subtype in the presence of the →, × type constructors and the ∀ quantifier, and we show that such axiomatization is not encodable in the system with →, ∀ only. In order to give a logical semantics to such a subtyping relation, we propose a new form of a sequent which plays a key role in a natural deduction and a Gentzen style calculi. Interestingly enough, the sequent must have the form E ⇒ T, where E is a non-commutative, non-empty sequence of typing assumptions and T is a finite binary tree of typing judgements, each of them behaving like a pushdown store. We study basic metamathematical properties of the two logical systems, such as subject reduction and cut elimination. Some decidability/undecidability issues related to the presented subtyping relation are also explored: as expected, the subtyping over →, ×, ∀ is undecidable, being already undecidable for the →, ∀ fragment (as proved in [15]), but for the ×, ∀ fragment it turns out to be decidable
[ "polymorphic subtypes", "Hilbert style axiomatization", "logical semantics", "Gentzen style calculi", "finite binary tree", "pushdown store", "decidability", "products subtypes", "metamathematical properties" ]
[ "P", "P", "P", "P", "P", "P", "P", "R", "M" ]
690
Robust Kalman filter design for discrete time-delay systems
The problem of finite- and infinite-horizon robust Kalman filtering for uncertain discrete-time systems with state delay is addressed. The system under consideration is subject to time-varying norm-bounded parameter uncertainty in both the state and output matrices. We develop a new methodology for designing a linear filter such that the error variance of the filter is guaranteed to be within a certain upper bound for any allowed uncertainty and time delay. The solution is given in terms of two Riccati equations. Multiple time-delay systems are also investigated
[ "robust Kalman filter", "discrete time-delay systems", "state delay", "norm-bounded parameter uncertainty", "output matrices", "linear filter", "Riccati equations", "uncertain systems", "time-varying parameter uncertainty", "state matrices", "robust state estimation" ]
[ "P", "P", "P", "P", "P", "P", "P", "R", "R", "R", "M" ]
859
Developing a hardware and programming curriculum for middle school girls
Techbridge provides experiences and resources that teach girls technology skills while exciting their curiosity and building their confidence. Funded by the National Science Foundation and sponsored by Chabot Space and Science Center in Oakland, California, Techbridge is a three-year program that serves approximately 200 girls annually. Techbridge is hosted at 8 middle and high schools in Oakland and at the California School for the Blind in Fremont, California, generally as an after-school program meeting once a week. Techbridge comes at a critical time in girls' development, when girls have many important decisions to make regarding classes and careers but often lack the confidence and guidance to make the best choices. Techbridge helps girls plan for the next steps to high school and college with its role models and guidance. Techbridge also provides training and resources for teachers, counselors, and families
[ "hardware and programming curriculum", "middle school girls", "Techbridge", "technology skills teaching" ]
[ "P", "P", "P", "R" ]
1362
Process planning for reliable high-speed machining of moulds
A method of generating NC programs for the high-speed milling of moulds is investigated. Forging dies and injection moulds, whether plastic or aluminium, have a complex surface geometry. In addition they are made of steels of hardness as much as 30 or even 50 HRC. Since 1995, high-speed machining has been much adopted by the die-making industry, which with this technology can reduce its use of Sinking Electrodischarge Machining (SEDM). EDM, in general, calls for longer machining times. The use of high-speed machining makes it necessary to redefine the preliminary stages of the process. In addition, it affects the methodology employed in the generation of NC programs, which requires the use of high-level CAM software. The aim is to generate error-free programs that make use of optimum cutting strategies in the interest of productivity and surface quality. The final result is a more reliable manufacturing process. There are two risks in the use of high-speed milling on hardened steels. One of these is tool breakage, which may be very costly and may furthermore entail marks on the workpiece. The other is collisions between the tool and the workpiece or fixtures, the result of which may be damage to the ceramic bearings in the spindles. In order to minimize these risks it is necessary that new control and optimization steps be included in the CAM methodology. There are three things that the firm adopting high-speed methods should do. It should redefine its process engineering, it should systematize access by its CAM programmers to high-speed know-how, and it should take up the use of process simulation tools. In the latter case, it will be very advantageous to use tools for the estimation of cutting forces. The new work methods proposed in this article have made it possible to introduce high-speed milling (HSM) into the die industry. 
Examples are given of how the technique has been applied with CAM programming re-engineered as here proposed, with an explanation of the novel features and the results
[ "process planning", "reliable high-speed machining", "moulds", "NC programs", "high-speed milling", "forging dies", "injection moulds", "complex surface geometry", "error-free programs", "optimum cutting strategies", "cutting strategies", "productivity", "surface quality", "hardened steels", "tool breakage", "ceramic bearings", "CAM methodology", "process simulation tools", "CAM programming re-engineering", "tool workpiece collisions", "process engineering redefinition" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "R", "M" ]
737
What's in a name? [mobile telephony branding]
Mobile operators are frantically consolidating businesses into single international brands
[ "mobile telephony", "branding", "consolidating businesses" ]
[ "P", "P", "P" ]
772
Meshed atlases for real-time procedural solid texturing
We describe an implementation of procedural solid texturing that uses the texture atlas, a one-to-one mapping from an object's surface into its texture space. The method uses the graphics hardware to rasterize the solid texture coordinates as colors directly into the atlas. A texturing procedure is applied per-pixel to the texture map, replacing each solid texture coordinate with its corresponding procedural solid texture result. The procedural solid texture is then mapped back onto the object surface using standard texture mapping. The implementation renders procedural solid textures in real time, and the user can design them interactively. The quality of this technique depends greatly on the layout of the texture atlas. A broad survey of texture atlas schemes is used to develop a set of general-purpose mesh atlases and tools for measuring their effectiveness at distributing the available texture samples as evenly as possible across the surface. The main contribution of this paper is a new multiresolution texture atlas. It distributes all available texture samples in a nearly uniform distribution. This multiresolution texture atlas also supports MIP-mapped minification antialiasing and linear magnification filtering
[ "meshed atlases", "real-time procedural solid texturing", "texture atlas", "one-to-one mapping", "texture space", "graphics hardware", "rasterization", "solid texture coordinates", "colors", "object surface", "rendering", "multiresolution texture atlas", "MIP-mapped minification antialiasing", "linear magnification filtering" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P" ]
1026
Use of SPOT images as a tool for coastal zone management and monitoring of environmental impacts in the coastal zone
Modern techniques such as remote sensing have been among the main factors leading to the achievement of serious coastal management plans. A multitemporal analysis of land use in certain areas of the Colombian Caribbean Coast is described. It mainly focuses on environmental impacts caused by anthropogenic activities, such as deforestation of mangroves due to shrimp farming. Selection of sensitive areas, percentage of destroyed mangroves, possible endangered areas, etc., are some of the results of this analysis. Recommendations for a coastal management plan in the area have also resulted from this analysis. Some other consequences of the deforestation of mangroves in the coastal zone and the construction of shrimp ponds are also analyzed, such as the increase of erosion problems in these areas and water pollution, among others. The increase of erosion in these areas has also changed part of their morphology, which has been studied by the analysis of SPOT images from previous years. A serious concern exists about the future of these areas. For this reason, new techniques like satellite images (SPOT) have been applied with good results, leading to more effective control and coastal management in the area. The use of SPOT images to study changes in the land use of the area is a useful technique to determine patterns of human activities and suggest solutions for severe problems in these areas
[ "SPOT images", "coastal zone management", "remote sensing", "multitemporal analysis", "land use", "Colombian Caribbean Coast", "anthropogenic activities", "shrimp farming", "endangered areas", "shrimp ponds", "erosion problems", "water pollution", "satellite images", "human activities", "environmental impact monitoring", "mangrove deforestation", "supervised classification", "sedimentation", "vectorization", "vector overlay" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "R", "R", "U", "U", "U", "U" ]
1063
Operations that do not disturb partially known quantum states
Consider a situation in which a quantum system is secretly prepared in a state chosen from the known set of states. We present a principle that gives a definite distinction between the operations that preserve the states of the system and those that disturb the states. The principle is derived by alternately applying a fundamental property of classical signals and a fundamental property of quantum ones. The principle can be cast into a simple form by using a decomposition of the relevant Hilbert space, which is uniquely determined by the set of possible states. The decomposition implies the classification of the degrees of freedom of the system into three parts depending on how they store the information on the initially chosen state: one storing it classically, one storing it nonclassically, and the other one storing no information. Then the principle states that the nonclassical part is inaccessible and the classical part is read-only if we are to preserve the state of the system. From this principle, many types of no-cloning, no-broadcasting, and no-imprinting conditions can easily be derived in general forms including mixed states. It also gives a unified view on how various schemes of quantum cryptography work. The principle helps one to derive optimum amount of resources (bits, qubits, and ebits) required in data compression or in quantum teleportation of mixed-state ensembles
[ "partially known quantum states", "quantum system", "classical signals", "Hilbert space", "degrees of freedom", "nonclassical part", "quantum cryptography", "bits", "qubits", "ebits", "quantum teleportation", "mixed-state ensembles", "secretly prepared quantum state" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "R" ]
1282
Completeness of timed mu CRL
Previously, a straightforward extension of the process algebra mu CRL was proposed to deal explicitly with time. The process algebra mu CRL has been especially designed to deal with data in a process algebraic context. Using the features for data, only a minor extension of the language was needed to obtain a very expressive variant of time. The earlier work provided syntax, operational semantics and axioms characterising timed mu CRL, but it did not contain an in-depth analysis of the theory of timed mu CRL. This paper fills this gap by providing soundness and completeness results. The main tool for establishing these is a mapping of timed to untimed mu CRL, employing the completeness results obtained for untimed mu CRL
[ "completeness", "timed mu CRL", "process algebra", "operational semantics" ]
[ "P", "P", "P", "P" ]
692
A partial converse to Hadamard's theorem on homeomorphisms
A theorem by Hadamard gives a two-part condition under which a map from one Banach space to another is a homeomorphism. The theorem, while often very useful, is incomplete in the sense that it does not explicitly specify the family of maps for which the condition is met. Here, under a typically weak additional assumption on the map, we show that Hadamard's condition is met if, and only if, the map is a homeomorphism with a Lipschitz continuous inverse. An application is given concerning the relation between the stability of a nonlinear system and the stability of related linear systems
[ "partial converse", "homeomorphisms", "Banach space", "Lipschitz continuous inverse", "linearization", "Hadamard theorem", "nonlinear system stability", "linear system stability", "nonlinear feedback systems", "nonlinear networks" ]
[ "P", "P", "P", "P", "P", "R", "R", "R", "M", "M" ]
1183
Evolving robust asynchronous cellular automata for the density task
In this paper the evolution of three kinds of asynchronous cellular automata are studied for the density task. Results are compared with those obtained for synchronous automata and the influence of various asynchronous update policies on the computational strategy is described. How synchronous and asynchronous cellular automata behave is investigated when the update policy is gradually changed, showing that asynchronous cellular automata are more adaptable. The behavior of synchronous and asynchronous evolved automata are studied under the presence of random noise of two kinds and it is shown that asynchronous cellular automata implicitly offer superior fault tolerance
[ "asynchronous cellular automata", "cellular automata", "synchronous automata", "random noise", "fault tolerance", "discrete dynamical systems" ]
[ "P", "P", "P", "P", "P", "U" ]
901
Estimation of the Poisson stream intensity in a multilinear queue with an exponential job queue decay
The times at which the busy queue periods start are found for a multilinear queue with an exponential job queue decay and uniform resource allocation to individual servers. The stream intensity and the average job are estimated from observations of the times at which the queue busy periods start
[ "Poisson stream intensity", "stream intensity", "multilinear queue", "exponential job queue decay", "busy queue periods start", "uniform resource allocation", "individual servers" ]
[ "P", "P", "P", "P", "P", "P", "P" ]
142
Surface micromachined paraffin-actuated microvalve
Normally-open microvalves have been fabricated and tested which use a paraffin microactuator as the active element. The entire structure with nominal dimension of phi 600 mu m * 30 mu m is batch-fabricated by surface micromachining the actuator and channel materials on top of a single substrate. Gas flow rates in the 0.01-0.1 sccm range have been measured for several devices with actuation powers ranging from 50 to 150 mW on glass substrates. Leak rates as low as 500 mu sccm have been measured. The normally-open blocking microvalve structure has been used to fabricate a precision flow control system of microvalves consisting of four blocking valve structures. The control valve is designed to operate over a 0.01-5.0 sccm flow range at a differential pressure of 800 torr. Flow rates ranging from 0.02 to 4.996 sccm have been measured. Leak rates as low as 3.2 msccm for the four valve system have been measured
[ "normally-open microvalves", "paraffin microactuator", "active element", "channel materials", "gas flow rates", "flow rates", "actuation powers", "50 to 150 mW", "leak rates", "blocking valve structures", "differential pressure", "800 torr", "surface micromachined microvalve", "600 micron", "30 micron" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "R", "M", "M" ]
593
Fuzzy systems with overlapping Gaussian concepts: Approximation properties in Sobolev norms
In this paper the approximating capabilities of fuzzy systems with overlapping Gaussian concepts are considered. The target function is assumed to be sampled either on a regular grid or according to a uniform probability density. By exploiting a connection with Radial Basis Functions approximators, a new method for the computation of the system coefficients is provided, showing that it guarantees uniform approximation of the derivatives of the target function
[ "fuzzy systems", "overlapping Gaussian concepts", "radial basis functions", "learning", "fuzzy system models", "reproducing kernel Hilbert spaces" ]
[ "P", "P", "P", "U", "M", "U" ]
944
Conditions for the local manipulation of Gaussian states
We present a general necessary and sufficient criterion for the possibility of a state transformation from one mixed Gaussian state to another of a bipartite continuous-variable system with two modes. The class of operations that will be considered is the set of local Gaussian completely positive trace-preserving maps
[ "local manipulation", "Gaussian states", "state transformation", "bipartite continuous-variable system", "trace-preserving maps", "quantum information theory" ]
[ "P", "P", "P", "P", "P", "U" ]
107
Deterministic single-photon source for distributed quantum networking
A sequence of single photons is emitted on demand from a single three-level atom strongly coupled to a high-finesse optical cavity. The photons are generated by an adiabatically driven stimulated Raman transition between two atomic ground states, with the vacuum field of the cavity stimulating one branch of the transition, and laser pulses deterministically driving the other branch. This process is unitary and therefore intrinsically reversible, which is essential for quantum communication and networking, and the photons should be appropriate for all-optical quantum information processing
[ "deterministic single-photon source", "distributed quantum networking", "single three-level atom", "high-finesse optical cavity", "adiabatically driven stimulated Raman transition", "vacuum field", "quantum communication", "all-optical quantum information processing" ]
[ "P", "P", "P", "P", "P", "P", "P", "P" ]
55
Self-testing chips take a load off ATE
Looks at how chipmakers get more life out of automatic test equipment by embedding innovative circuits in silicon
[ "self-testing chips", "ATE", "automatic test equipment", "innovative circuits", "design-for-test techniques", "embedded deterministic testing technique" ]
[ "P", "P", "P", "P", "U", "M" ]
979
Design, analysis and testing of some parallel two-step W-methods for stiff systems
Parallel two-step W-methods are linearly-implicit integration methods where the s stage values can be computed in parallel. We construct methods of stage order q = s and order p = s with favourable stability properties. Generalizations for the concepts of A- and L-stability are proposed and conditions for stiff accuracy are given. Numerical comparisons on a shared memory computer show the efficiency of the methods, especially in combination with Krylov-techniques for large stiff systems
[ "parallel two-step W-methods", "linearly-implicit integration methods", "stage order", "stability", "shared memory computer", "Krylov-techniques", "large stiff systems", "differential equations", "convergence analysis" ]
[ "P", "P", "P", "P", "P", "P", "P", "U", "M" ]
652
A case for end system multicast
The conventional wisdom has been that Internet protocol (IP) is the natural protocol layer for implementing multicast related functionality. However, more than a decade after its initial proposal, IP multicast is still plagued with concerns pertaining to scalability, network management, deployment, and support for higher layer functionality such as error, flow, and congestion control. We explore an alternative architecture that we term end system multicast, where end systems implement all multicast related functionality including membership management and packet replication. This shifting of multicast support from routers to end systems has the potential to address most problems associated with IP multicast. However, the key concern is the performance penalty associated with such a model. In particular, end system multicast introduces duplicate packets on physical links and incurs larger end-to-end delays than IP multicast. We study these performance concerns in the context of the Narada protocol. In Narada, end systems self-organize into an overlay structure using a fully distributed protocol. Further, end systems attempt to optimize the efficiency of the overlay by adapting to network dynamics and by considering application level performance. We present details of Narada and evaluate it using both simulation and Internet experiments. Our results indicate that the performance penalties are low both from the application and the network perspectives. We believe the potential benefits of transferring multicast functionality from routers to end systems significantly outweigh the performance penalty incurred
[ "end system multicast", "Internet protocol", "protocol layer", "IP multicast", "network management", "higher layer functionality", "congestion control", "membership management", "packet replication", "performance penalties", "end-to-end delays", "Narada protocol", "overlay structure", "distributed protocol", "network dynamics", "application level performance", "simulation", "Internet experiments", "network scalability", "network routers", "self-organizing protocol" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "R", "R", "R" ]
617
Estimation of trifocal tensor using GMM
A novel estimation of a trifocal tensor based on the Gaussian mixture model (GMM) is presented. The mixture model is built assuming that the residuals of inliers and outliers belong to different Gaussian distributions. The Bayesian rule is then employed to detect the inliers for re-estimation. Experiments show that the presented method is more precise and relatively unaffected by outliers
[ "GMM", "Gaussian mixture model", "inliers", "outliers", "Gaussian distributions", "Bayesian rule", "trifocal tensor estimation", "motion analysis", "image data", "image analysis" ]
[ "P", "P", "P", "P", "P", "P", "R", "U", "U", "U" ]
68
Human factors research on data modeling: a review of prior research, an extended framework and future research directions
This study reviews and synthesizes human factors research on conceptual data modeling. In addition to analyzing the variables used in earlier studies and summarizing the results of this stream of research, we propose a new framework to help with future efforts in this area. The study finds that prior research has focused on issues that are relevant when conceptual models are used for communication between systems analysts and developers (Analyst Developer models) whereas the issues important for models that are used to facilitate communication between analysts and users (User-Analyst models) have received little attention and, hence, require a significantly stronger role in future research. In addition, we emphasize the importance of building a strong theoretical foundation and using it to guide future empirical work in this area
[ "human factors", "conceptual data modeling", "future efforts", "Analyst Developer models", "User-Analyst models", "database" ]
[ "P", "P", "P", "P", "P", "U" ]
1242
VPP Fortran and the design of HPF/JA extensions
VPP Fortran is a data parallel language that has been designed for the VPP series of supercomputers. In addition to pure data parallelism, it contains certain low-level features that were designed to extract high performance from user programs. A comparison of VPP Fortran and High-Performance Fortran (HPF) 2.0 shows that these low-level features are not available in HPF 2.0. The features include asynchronous interprocessor communication, explicit shadow, and the LOCAL directive. They were shown in VPP Fortran to be very useful in handling real-world applications, and they have been included in the HPF/JA extensions. They are described in the paper. The HPF/JA Language Specification Version 1.0 is an extension of HPF 2.0 to achieve practical performance for real-world applications and is a result of collaboration in the Japan Association for HPF (JAHPF). Some practical programming and tuning procedures with the HPF/JA Language Specification are described, using the NAS Parallel Benchmark BT as an example
[ "VPP Fortran", "data parallel language", "data parallelism", "high performance", "asynchronous interprocessor communication", "explicit shadow", "benchmark", "asynchronous communication", "data locality" ]
[ "P", "P", "P", "P", "P", "P", "P", "R", "R" ]
1207
Packet spacing: an enabling mechanism for delivering multimedia content in computational grids
Streaming multimedia with UDP has become increasingly popular over distributed systems like the Internet. Scientific applications that stream multimedia include remote computational steering of visualization data and video-on-demand teleconferencing over the Access Grid. However, UDP does not possess a self-regulating, congestion-control mechanism; and most best-effort traffic is served by congestion-controlled TCP. Consequently, UDP steals bandwidth from TCP such that TCP flows starve for network resources. With the volume of Internet traffic continuing to increase, the perpetuation of UDP-based streaming will cause the Internet to collapse as it did in the mid-1980's due to the use of non-congestion-controlled TCP. To address this problem, we introduce the counter-intuitive notion of inter-packet spacing with control feedback to enable UDP-based applications to perform well in the next-generation Internet and computational grids. When compared with traditional UDP-based streaming, we illustrate that our approach can reduce packet loss over 50% without adversely affecting delivered throughput
[ "streaming multimedia", "UDP", "distributed systems", "Internet", "remote computational steering", "visualization data", "UDP-based streaming", "inter-packet spacing", "network protocol", "transport protocols" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "M", "U" ]
95
SIA shelves T+1 decision till 2004
The Securities Industry Association has decided that a move to T+1 is more than the industry can handle right now. STP, however, will remain a focus
[ "T+1", "Securities Industry Association", "straight-through-processing" ]
[ "P", "P", "U" ]
553
Application of traditional system design techniques to Web site design
After several decades of computer program construction there emerged a set of principles that provided guidance to produce more manageable programs. With the emergence of the plethora of Internet Web sites, one wonders if similar guidelines are followed in their construction. Since this is a new technology, no universally accepted methods have yet emerged to guide the designer in Web site construction. This paper reviews the traditional principles of structured programming and the preferred characteristics of Web sites. Finally, a mapping of how the traditional guidelines may be applied to Web site construction is presented. The application of the traditional principles of structured programming to the design of a Web site can provide a more usable site for its visitors. An additional benefit of using these time-honored techniques is the creation of a Web site that will be easier for the development staff to maintain
[ "system design techniques", "structured programming", "Internet Web site design" ]
[ "P", "P", "R" ]
984
Bistability of harmonically forced relaxation oscillations
Relaxation oscillations appear in processes which involve transitions between two states characterized by fast and slow time scales. When a relaxation oscillator is coupled to an external periodic force, its entrainment by the force results in a response which can include multiple periodicities and bistability. The prototype of these behaviors is the harmonically driven van der Pol equation, which displays regions in the parameter space of the driving force amplitude where stable orbits of periods 2n±1 coexist, flanked by regions of periods 2n+1 and 2n-1. The parameter regions of such bistable orbits are derived analytically for the closely related harmonically driven Stoker-Haag piecewise discontinuous equation. The results are valid over most of the control parameter space of the system. Also considered are the reasons for the more complicated dynamics featuring regions of high multiple periodicity which appear like noise between ordered periodic regions. Since this system mimics in detail the less analytically tractable forced van der Pol equation, the results suggest extensions to situations where forced relaxation oscillations are a component of the operating mechanisms
[ "bistability", "harmonically forced relaxation oscillations", "external periodic force", "entrainment", "van der Pol equation", "harmonically driven Stoker-Haag piecewise discontinuous equation", "control parameter space", "nonlinear dynamics" ]
[ "P", "P", "P", "P", "P", "P", "P", "M" ]
1143
A three-source model for the calculation of head scatter factors
Accurate determination of the head scatter factor S/sub c/ is an important issue, especially for intensity modulated radiation therapy, where the segmented fields are often very irregular and much less than the collimator jaw settings. In this work, we report an S/sub c/ calculation algorithm for symmetric, asymmetric, and irregular open fields shaped by the tertiary collimator (a multileaf collimator or blocks) at different source-to-chamber distance. The algorithm was based on a three-source model, in which the photon radiation to the point of calculation was treated as if it originated from three effective sources: one source for the primary photons from the target and two extra-focal photon sources for the scattered photons from the primary collimator and the flattening filter, respectively. The field mapping method proposed by Kim et al. [Phys. Med. Biol. 43, 1593-1604 (1998)] was extended to two extra-focal source planes and the scatter contributions were integrated over the projected areas (determined by the detector's eye view) in the three source planes considering the source intensity distributions. The algorithm was implemented using Microsoft Visual C/C++ in the MS Windows environment. The only input data required were head scatter factors for symmetric square fields, which are normally acquired during machine commissioning. A large number of different fields were used to evaluate the algorithm and the results were compared with measurements. We found that most of the calculated S/sub c/'s agreed with the measured values to within 0.4%. The algorithm can also be easily applied to deal with irregular fields shaped by a multileaf collimator that replaces the upper or lower collimator jaws
[ "three-source model", "head scatter factors", "intensity modulated radiation therapy", "segmented fields", "fields", "fields", "collimator jaw settings", "calculation algorithm", "symmetric", "asymmetric", "irregular open fields", "tertiary collimator", "multileaf collimator", "blocks", "source-to-chamber distance", "photon radiation", "target", "extra-focal photon sources", "scattered photons", "primary collimator", "flattening filter", "field mapping method", "extra-focal source planes", "source intensity distributions", "MS Windows environment", "input data", "symmetric square fields", "machine commissioning", "lower collimator jaws", "upper collimator jaws" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "R" ]
1106
Virtual projects at Halden [Reactor Project]
The Halden man-machine systems (MMS) programme for 2002 is intended to address issues related to human factors, control room design, computer-based support systems, and system safety and reliability. The programme involves extensive experimental work in the human factors, control room design and computer-based support system areas, based on experiments and demonstrations carried out in the experimental facility HAMMLAB. Pilot versions of several operator aids are adopted, integrated into the HAMMLAB simulators and demonstrated in a full dynamic setting. The Halden virtual reality laboratory has recently become an integral and important part of the programme
[ "human factors", "control room design", "computer-based support system", "safety", "reliability", "virtual reality", "Halden Reactor Project", "man-machine systems programme" ]
[ "P", "P", "P", "P", "P", "P", "R", "R" ]
899
Mathematical model of functioning of an insurance company with allowance for advertising expenses
A mathematical model of the functioning of an insurance company with allowance for advertising expenses is suggested. The basic characteristics of the capital of the company and the advertising efficiency are examined in the case in which the advertising expenses are proportional to the capital
[ "mathematical model", "capital", "insurance company functioning", "advertising expenses allowance" ]
[ "P", "P", "R", "R" ]
864
Valuing corporate debt: the effect of cross-holdings of stock and debt
We have developed a simple approach to valuing risky corporate debt when corporations own securities issued by other corporations. We assume that corporate debt can be valued as an option on corporate business asset value, and derive payoff functions when there exist cross-holdings of stock or debt between two firms. Next we show that payoff functions with multiple cross-holdings can be solved by the contraction principle. The payoff functions which we derive provide a number of insights about the risk structure of company cross-holdings. First, the Modigliani-Miller theorem can hold when there exist cross-holdings between firms. Second, by establishing cross-shareholdings, each of the stock holders distributes a part of its payoff value to the bond holder of the other firm, so that both firms can decrease credit risks by cross-shareholdings. In the numerical examples, we show, using Monte Carlo simulation, that the correlation between firms can be a critical condition for reducing credit risk by cross-holdings of stock. Moreover, we show that the default spread can be calculated easily even when complicated cross-holdings exist, and find which shares are beneficial or disadvantageous
[ "securities", "option", "corporate business asset value", "payoff functions", "multiple cross-holdings", "Modigliani-Miller theorem", "cross-shareholdings", "bond holder", "credit risks", "correlation", "Monte Carlo simulation", "risky corporate debt valuation", "stock holdings", "debt holdings" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "M", "M", "M" ]
821
Digital rights (and wrongs)
Attempting to grasp the many conflicts and proposed safeguards for intellectual property is extremely difficult. Legal, political, economic, and cultural issues-both domestic and international-loom large, almost dwarfing the daunting technological challenges. Solutions devised by courts and legislatures and regulatory agencies are always late out of the blocks and fall ever farther behind. Recently proposed legislation only illustrates the depth and complexity of the problem
[ "intellectual property", "cultural issues", "economic issues", "political issues", "legal issues" ]
[ "P", "M", "M", "M", "M" ]
1437
Improving the frequency stability of microwave oscillators by utilizing the dual-mode sapphire-loaded cavity resonator
The design and experimental testing of a novel control circuit to stabilize the temperature of a sapphire-loaded cavity whispering gallery resonator-oscillator and improve its medium-term frequency stability is presented. Finite-element software was used to predict the frequencies and quality factors of the WGE/sub 7,0,0/ and WGH/sub 9,0,0/ modes near 9 GHz, separated in frequency by approximately 80 MHz. Calculations show that the novel temperature control circuit, operating on the difference frequency, can result in a frequency stability of better than one part in 10/sup 13/ at 270 K. We also present details on the best way to couple orthogonally to two modes of similar frequency but different polarization
[ "frequency stability", "microwave oscillators", "dual-mode sapphire-loaded cavity resonator", "whispering gallery resonator-oscillator", "9 GHz", "temperature control circuit", "difference frequency", "frequency standard", "temperature stabilisation", "finite-element analysis", "whispering gallery modes", "high-quality factor", "270 K" ]
[ "P", "P", "P", "P", "P", "P", "P", "M", "M", "M", "R", "M", "M" ]
577
A robust H/sub infinity / control approach for induction motors
This paper deals with the robustness and stability of an induction motor control structure against internal and external disturbances. In the proposed control scheme, we have used an H/sub infinity / controller with field orientation and input-output linearization to achieve the above-specified features. Simulation results are included to illustrate the control approach performances
[ "robust H/sub infinity / control", "robustness", "stability", "induction motors control", "external disturbances", "field orientation", "input-output linearization", "internal disturbances" ]
[ "P", "P", "P", "P", "P", "P", "P", "R" ]
1167
A new approach to the d-MC problem
Many real-world systems are multi-state systems composed of multi-state components in which the reliability can be computed in terms of the lower bound points of level d, called d-Mincuts (d-MCs). Such systems (electric power, transportation, etc.) may be regarded as flow networks whose arcs have independent, discrete, limited and multi-valued random capacities. In this paper, all MCs are assumed to be known in advance, and the authors focused on how to verify each d-MC candidate before using d-MCs to calculate the network reliability. The proposed algorithm is more efficient than existing algorithms. The algorithm runs in O(p sigma mn) time, a significant improvement over the previous O(p sigma m/sup 2/) time bounds based on max-flow/min-cut, where p and sigma are the number of MCs and d-MC candidates, respectively. It is simple, intuitive and uses no complex data structures. An example is given to show how all d-MC candidates are found and verified by the proposed algorithm. Then the reliability of this example is computed
[ "d-MC problem", "multi-state systems", "multi-state components", "d-Mincuts", "flow networks", "time bounds", "max-flow/min-cut", "reliability computation", "failure analysis algorithm" ]
[ "P", "P", "P", "P", "P", "P", "P", "R", "M" ]
1122
Hybrid broadcast for the video-on-demand service
Multicast offers an efficient means of distributing video contents/programs to multiple clients by batching their requests and then having them share a server's video stream. Batching customers' requests is either client-initiated or server-initiated. Most advanced client-initiated video multicasts are implemented by patching. Periodic broadcast, a typical server-initiated approach, can be entirety-based or segment-based. This paper focuses on the performance of the VoD service for popular videos. First, we analyze the limitation of conventional patching when the customer request rate is high. Then, by combining the advantages of each of the two broadcast schemes, we propose a hybrid broadcast scheme for popular videos, which not only lowers the service latency but also improves clients' interactivity by using an active buffering technique. This is shown to be a good compromise for both lowering service latency and improving the VCR-like interactivity
[ "video-on-demand", "multicast", "conventional patching", "customer request rate", "hybrid broadcast scheme", "interactivity", "quality-of-service", "scheduling" ]
[ "P", "P", "P", "P", "P", "P", "U", "U" ]
676
Impossible choice [web hosting service provider]
Selecting a telecoms and web hosting service provider has become a high-stakes game of chance
[ "web hosting service provider", "selection", "IT managers", "customer service" ]
[ "P", "P", "U", "M" ]
633
Using k-nearest-neighbor classification in the leaves of a tree
We construct a hybrid (composite) classifier by combining two classifiers in common use - classification trees and k-nearest-neighbor (k-NN). In our scheme we divide the feature space up by a classification tree, and then classify test set items using the k-NN rule just among those training items in the same leaf as the test item. This reduces somewhat the computational load associated with k-NN, and it produces a classification rule that performs better than either trees or the usual k-NN in a number of well-known data sets
[ "k-nearest-neighbor classification", "classification trees", "k-NN rule", "computational load", "data sets", "tree leaves", "hybrid composite classifier", "feature space division" ]
[ "P", "P", "P", "P", "P", "R", "R", "M" ]
1266
An intelligent information gathering method for dynamic information mediators
The Internet is spreading into our society rapidly and is becoming one of the information infrastructures that are indispensable for our daily life. In particular, the WWW is widely used for various purposes such as sharing personal information, academic research, business work, and electronic commerce, and the amount of available information is increasing rapidly. We usually utilize information sources on the Internet as individual stand-alone sources, but if we can integrate them, we can add more value to each of them. Hence, information mediators, which integrate information distributed on the Internet, are drawing attention. In this paper, under the assumption that the information sources to be integrated are updated frequently and asynchronously, we propose an information gathering method that constructs an answer to a query from a user, accessing the information sources to be integrated properly within an allowable time period. The proposed method considers the reliability of data in the cache and the quality of the answer in order to efficiently access information sources and to provide appropriate answers to the user. For evaluation, we show the effectiveness of the proposed method using an artificial information integration problem, in which some parameters can be modified, and a real-world flight information service, compared with a conventional FIFO information gathering method
[ "intelligent information gathering method", "dynamic information mediators", "Internet", "information infrastructures", "WWW", "academic research", "business work", "electronic commerce", "artificial information integration problem", "real-world flight information service" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P" ]
1223
Formalising optimal feature weight setting in case based diagnosis as linear programming problems
Many approaches to case based reasoning (CBR) exploit feature weight setting algorithms to reduce the sensitivity to distance functions. We demonstrate that optimal feature weight setting in a special kind of CBR problems can be formalised as linear programming problems. Therefore, the optimal weight settings can be calculated in polynomial time instead of searching in exponential weight space using heuristics to get sub-optimal settings. We also demonstrate that our approach can be used to solve classification problems
[ "optimal feature weight setting", "case based diagnosis", "linear programming", "case based reasoning", "distance functions", "polynomial time", "searching", "exponential weight space", "heuristics", "classification" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P" ]
918
Schema evolution in data warehouses
We address the issues related to the evolution and maintenance of data warehousing systems, when underlying data sources change their schema capabilities. These changes can invalidate views at the data warehousing system. We present an approach for dynamically adapting views according to schema changes arising on source relations. This type of maintenance concerns both the schema and the data of the data warehouse. The main issue is to avoid the view recomputation from scratch especially when views are defined from multiple sources. The data of the data warehouse is used primarily in organizational decision-making and may be strategic. Therefore, the schema of the data warehouse can evolve for modeling new requirements resulting from analysis or data-mining processing. Our approach provides means to support schema evolution of the data warehouse independently of the data sources
[ "schema evolution", "data warehouses", "data sources", "source relations", "organizational decision-making", "system maintenance", "containment", "structural view maintenance", "view adaptation", "SQL query", "data analysis" ]
[ "P", "P", "P", "P", "P", "R", "U", "M", "R", "U", "R" ]
840
Gender benders [women in computing profession]
As a minority in the upper levels of the computing profession, women are sometimes mistreated through ignorance or malice. Some women have learned to respond with wit and panache
[ "women", "computing profession" ]
[ "P", "P" ]
805
Active pitch control in larger scale fixed speed horizontal axis wind turbine systems. I. linear controller design
This paper reviews and addresses the principles of linear controller design of the fixed speed wind turbine system at above-rated wind speed, using pitch angle control of the blades and applying modern control theory. First, the nonlinear equations of the system are built under some reasonable suppositions. Then, the nonlinear equations are linearised at a set operating point and digital simulation results are shown in this paper. Finally, a linear quadratic optimal feedback controller is designed and the dynamics of the closed circle system are simulated with digital calculation. The advantages and disadvantages of the assumptions and design method are also discussed. Because of the inherent characteristics of the linear system control theory, the performance of the linear controller is not sufficient for operating wind turbines, as is discussed
[ "active pitch control", "horizontal axis wind turbine systems", "wind turbines", "linear controller design", "fixed speed wind turbine system", "pitch angle control", "control theory", "nonlinear equations", "digital simulation", "linear quadratic optimal feedback controller", "closed circle system", "linear system control theory", "aerodynamics", "drive train dynamics" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "U", "M" ]
1413
Web content extraction. A WhizBang! approach
The extraction technology that Whizbang uses consists of a unique approach to scouring the Web for current, very specific forms of information. FlipDog, for example, checks company Web sites for hyperlinks to pages that list job opportunities. It then crawls to the deeper page and, using the WhizBang! Extraction Framework, extracts the key elements of the postings, such as job title, name of employer, job category, and job function. Click on a job and you are transferred to the company Web site to view the job description as it appears there
[ "Web content extraction", "FlipDog", "company Web sites", "WhizBang! Extraction Framework", "job description", "job-hunting site" ]
[ "P", "P", "P", "P", "P", "M" ]
1087
Implementation of DIMSIMs for stiff differential systems
Some issues related to the implementation of diagonally implicit multistage integration methods for stiff differential systems are discussed. They include reliable estimation of the local discretization error, construction of continuous interpolants, solution of nonlinear systems of equations by simplified Newton iterations, choice of initial stepsize and order, and step and order changing strategy. Numerical results are presented which indicate that an experimental Matlab code based on type 2 methods of order one, two and three outperforms ode15s code from Matlab ODE suite on problems whose Jacobian has eigenvalues which are close to the imaginary axis
[ "DIMSIMs", "stiff differential systems", "diagonally implicit multistage integration methods", "reliable estimation", "local discretization error", "interpolants", "nonlinear systems of equations", "simplified Newton iterations", "experimental Matlab code" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P" ]
1456
Look who's talking [voice recognition]
Voice recognition could be the answer to the problem of financial fraud, but in the world of biometric technology, money talks
[ "voice recognition", "financial fraud", "biometric", "cost" ]
[ "P", "P", "P", "U" ]
796
Quadratic Newton iteration for systems with multiplicity
Newton's iterator is one of the most popular components of polynomial equation system solvers, either from the numeric or symbolic point of view. This iterator usually handles smooth situations only (when the Jacobian matrix associated to the system is invertible). This is often a restrictive factor. Generalizing Newton's iterator is still an open problem: How to design an efficient iterator with a quadratic convergence even in degenerate cases? We propose an answer for an m-adic topology when the ideal m can be chosen generic enough: compared to a smooth case we prove quadratic convergence with a small overhead that grows with the square of the multiplicity of the root
[ "quadratic Newton iteration", "systems with multiplicity", "Newton's iterator", "polynomial equation system solvers", "Jacobian matrix", "quadratic convergence", "m-adic topology" ]
[ "P", "P", "P", "P", "P", "P", "P" ]
1386
When the unexpected happens [disaster planning in banks]
A business disruption can be as simple as a power failure or as complex as a terrorist attack. Regardless, you will need to have a plan to minimize interruptions to both your bank and your customers. Marketers have a role in this readiness process
[ "disaster planning", "planning", "banks", "recovery", "public relations", "emergency management" ]
[ "P", "P", "P", "U", "U", "U" ]
1002
Selective representing and world-making
We discuss the thesis of selective representing-the idea that the contents of the mental representations had by organisms are highly constrained by the biological niches within which the organisms evolved. While such a thesis has been defended by several authors elsewhere, our primary concern here is to take up the issue of the compatibility of selective representing and realism. We hope to show three things. First, that the notion of selective representing is fully consistent with the realist idea of a mind-independent world. Second, that not only are these two consistent, but that the latter (the realist conception of a mind-independent world) provides the most powerful perspective from which to motivate and understand the differing perceptual and cognitive profiles themselves. Third, that the (genuine and important) sense in which organism and environment may together constitute an integrated system of scientific interest poses no additional threat to the realist conception
[ "selective representing", "world-making", "mental representations", "organisms", "realism", "mind-independent world", "cognitive profiles" ]
[ "P", "P", "P", "P", "P", "P", "P" ]
1047
Dynamics and control of initialized fractional-order systems
Due to the importance of historical effects in fractional-order systems, this paper presents a general fractional-order system and control theory that includes the time-varying initialization response. Previous studies have not properly accounted for these historical effects. The initialization response, along with the forced response, for fractional-order systems is determined. The scalar fractional-order impulse response is determined, and is a generalization of the exponential function. Stability properties of fractional-order systems are presented in the complex w-plane, which is a transformation of the s-plane. Time responses are discussed with respect to pole positions in the complex w-plane and frequency response behavior is included. A fractional-order vector space representation, which is a generalization of the state space concept, is presented including the initialization response. Control methods for vector representations of initialized fractional-order systems are shown. Finally, the fractional-order differintegral is generalized to continuous order-distributions which have the possibility of including all fractional orders in a transfer function
[ "dynamics", "control", "initialized fractional-order systems", "initialization response", "forced response", "impulse response", "exponential function", "vector space representation", "state space concept", "fractional-order differintegral", "transfer function" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P" ]
880
Computing 2002: democracy, education, and the future
Computer scientists, computer engineers, information technologists, and their collective products have grown and changed in quantity, quality, and nature. In the first decade of this new century, it should become apparent to everyone that the computing and information fields, broadly defined, will have a profound impact on every element of every person's life. The author considers how women and girls of the world have been neither educated for computing nor served by computing. Globally, women's participation in computer science grew for a while, then dropped precipitously. Computing, science, engineering, and society will suffer if this decline continues, because women have different perspectives on technology, what it is important for, how it should be built, which projects should be funded, and so on. To create a positive future, to assure that women equally influence the future, computing education must change
[ "democracy", "future", "women", "girls", "society", "computer science education", "gender issues" ]
[ "P", "P", "P", "P", "P", "R", "U" ]
1303
Reply to Carreira-Perpinan and Goodhill [mathematics in biology]
In a paper by Carreira-Perpinan and Goodhill (see ibid., vol.14, no.7, p.1545-60, 2002) the authors apply mathematical arguments to biology. Swindale et al. think it is inappropriate to apply the standards of proof required in mathematics to the acceptance or rejection of scientific hypotheses. To give some examples, showing that data are well described by a linear model does not rule out an infinity of other possible models that might give better descriptions of the data. Proving in a mathematical sense that the linear model was correct would require ruling out all other possible models, a hopeless task. Similarly, to demonstrate that two DNA samples come from the same individual, it is sufficient to show a match between only a few regions of the genome, even though there remains a very large number of additional comparisons that could be done, any one of which might potentially disprove the match. This is unacceptable in mathematics, but in the real world, it is a perfectly reasonable basis for belief
[ "biology", "mathematical arguments", "scientific hypotheses", "linear model", "DNA", "genome", "hypothesis testing", "cortical maps", "neural nets" ]
[ "P", "P", "P", "P", "P", "P", "U", "U", "U" ]
1346
Automatic multilevel thresholding for image segmentation by the growing time adaptive self-organizing map
In this paper, a Growing TASOM (Time Adaptive Self-Organizing Map) network called "GTASOM" along with a peak finding process is proposed for automatic multilevel thresholding. The proposed GTASOM is tested for image segmentation. Experimental results demonstrate that the GTASOM is a reliable and accurate tool for image segmentation and its results outperform other thresholding methods
[ "automatic multilevel thresholding", "image segmentation", "growing time adaptive self-organizing map", "Growing TASOM", "GTASOM", "peak finding process" ]
[ "P", "P", "P", "P", "P", "P" ]
713
Efficient feasibility testing for dial-a-ride problems
Dial-a-ride systems involve dispatching a vehicle to satisfy demands from a set of customers who call a vehicle-operating agency requesting that an item be picked up from a specific location and delivered to a specific destination. Dial-a-ride problems differ from other routing and scheduling problems, in that they typically involve service-related constraints. It is common to have maximum wait time constraints and maximum ride time constraints. In the presence of maximum wait time and maximum ride time restrictions, it is not clear how to efficiently determine, given a sequence of pickups and deliveries, whether a feasible schedule exists. We demonstrate that this, in fact, can be done in linear time
[ "feasibility testing", "dial-a-ride problems", "dispatching", "vehicle-operating agency", "routing", "scheduling", "service-related constraints", "maximum wait time constraints", "maximum ride time constraints" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P" ]
756
A new high resolution color flow system using an eigendecomposition-based adaptive filter for clutter rejection
We present a new signal processing strategy for high frequency color flow mapping in moving tissue environments. A new application of an eigendecomposition-based clutter rejection filter is presented with modifications to deal with high blood-to-clutter ratios (BCR). Additionally, a new method for correcting blood velocity estimates with an estimated tissue motion profile is detailed. The performance of the clutter filter and velocity estimation strategies is quantified using a new swept-scan signal model. In vivo color flow images are presented to illustrate the potential of the system for mapping blood flow in the microcirculation with external tissue motion
[ "eigendecomposition-based adaptive filter", "signal processing strategy", "high frequency color flow mapping", "moving tissue environments", "clutter rejection filter", "high blood-to-clutter ratios", "estimated tissue motion profile", "swept-scan signal model", "in vivo color flow images", "microcirculation", "high resolution colour flow system", "HF colour flow mapping", "blood velocity estimates correction", "blood flow mapping", "echoes", "clutter suppression performance" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "M", "M", "R", "R", "U", "M" ]
838
Pool halls, chips, and war games: women in the culture of computing
Computers are becoming ubiquitous in our society and they offer superb opportunities for people in jobs and everyday life. But there is a noticeable sex difference in use of computers among children. This article asks why computers are more attractive to boys than to girls and offers a cultural framework for explaining the apparent sex differences. Although the data are fragmentary, the world of computing seems to be more consistent with male adolescent culture than with feminine values and goals. Furthermore, both arcade and educational software is designed with boys in mind. These observations lead us to speculate that computing is neither inherently difficult nor uninteresting to girls, but rather that computer games and other software might have to be designed differently for girls. Programs to help teachers instill computer efficacy in all children also need to be developed
[ "women", "culture of computing", "sex difference", "children", "male adolescent culture", "educational software", "computer games", "teachers" ]
[ "P", "P", "P", "P", "P", "P", "P", "P" ]
71
A study of computer attitudes of non-computing students of technical colleges in Brunei Darussalam
The study surveyed 268 non-computing students among three technical colleges in Brunei Darussalam. The study validated an existing instrument to measure computer attitudes of non-computing students, and identified factors that contributed to the formation of their attitudes. The findings show that computer experience and educational qualification are associated with students' computer attitudes. In contrast, variables such as gender, age, ownership of a personal computer (PC), geographical location of institution, and prior computer training appeared to have no impact on computer attitudes
[ "computer attitudes", "technical colleges", "survey", "computer experience", "educational qualification", "gender", "age", "computer training", "noncomputing students", "personal computer ownership", "educational computing", "end user computing" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "M", "R", "R", "M" ]
925
A fundamental investigation into large strain recovery of one-way shape memory alloy wires embedded in flexible polyurethanes
Shape memory alloys (SMAs) are being embedded in or externally attached to smart structures because of the large amount of actuation deformation and force that these materials are capable of producing when they are heated. Previous investigations have focused primarily on using single or opposing SMA wires exhibiting the two-way shape memory effect (SME) because of the simplicity with which the repeatable actuation behavior of the structure can be predicted. This repeatable actuation behavior is achieved at the expense of reduced levels of recoverable deformation. Alternatively, many potential smart structure applications will employ multiple SMA wires exhibiting a permanent one-way SME to simplify fabrication and increase the recoverable strains in the structure. To employ the one-way wires, it is necessary to investigate how they affect the recovery of large strains when they are embedded in a structure. In this investigation, the large strain recovery of a one-way SMA wire embedded in a flexible polyurethane is characterized using the novel deformation measurement technique known as digital image correlation. These results are compared with a simple actuation model and a three-dimensional finite element analysis of the structure using the Brinson model for describing the thermomechanical behavior of the SMA. Results indicate that the level of actuation strain in the structure is substantially reduced by the inelastic behavior of the one-way SMA wires, and there are significant differences between the deformations of the matrix material adjacent to the SMA wires and in the region surrounding it. The transformation behavior of the SMA wires was also determined to be volume preserving, which had a significant effect on the transverse strain fields
[ "strain recovery", "one-way shape memory", "alloy wires", "flexible polyurethanes", "smart structures", "actuation deformation", "deformations", "SMA wires", "two-way shape memory effect", "recoverable strains", "three-dimensional finite element analysis", "actuation strain", "matrix material", "transverse strain fields", "flexible polyurethane", "embedded sensor" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "M" ]
960
Bisimulation minimization and symbolic model checking
State space minimization techniques are crucial for combating state explosion. A variety of explicit-state verification tools use bisimulation minimization to check equivalence between systems, to minimize components before composition, or to reduce a state space prior to model checking. Experimental results on bisimulation minimization in symbolic model checking contexts, however, are mixed. We explore bisimulation minimization as an optimization in symbolic model checking of invariance properties. We consider three bisimulation minimization algorithms. From each, we produce a BDD-based model checker for invariant properties and compare this model checker to a conventional one based on backwards reachability. Our comparisons, both theoretical and experimental, suggest that bisimulation minimization is not viable in the context of invariance verification, because performing the minimization requires as many, if not more, computational resources as model checking the unminimized system through backwards reachability
[ "bisimulation minimization", "symbolic model checking", "state space minimization techniques", "state explosion", "explicit-state verification tools", "experimental results", "optimization", "invariance properties", "backwards reachability", "invariance verification", "BDD", "binary decision diagram" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "U", "U" ]
123
A new identification approach for FIR models
The identification of stochastic discrete systems disturbed by noise is discussed in this brief. The concept of the general prediction error (GPE) criterion is introduced for the time-domain estimate, with optimal frequency estimation (OFE) introduced for the frequency-domain estimate. The two estimation methods are combined to form a new identification algorithm, called the empirical frequency-domain optimal parameter (EFOP) estimate, for the finite impulse response (FIR) model disturbed by noise. The algorithm theoretically provides the global optimum of the model frequency-domain estimate. Some simulation examples are given to illustrate the new identification method
[ "identification approach", "FIR models", "stochastic discrete systems", "time-domain estimate", "optimal frequency estimation", "frequency-domain estimate", "general prediction error criterion", "empirical frequency-domain optimal parameter estimate" ]
[ "P", "P", "P", "P", "P", "P", "R", "R" ]
792
Remember e-commerce? Yeah, well, it's still here
Sandy Kemper, the always outspoken CEO of successful e-commerce company eScout, offers his views on the purported demise of "commerce" in e-commerce, and what opportunities lie ahead for those bankers bold enough to act in a market turned tentative by early excesses
[ "e-commerce", "eScout", "bankers" ]
[ "P", "P", "P" ]
1382
Loop restructuring for data I/O minimization on limited on-chip memory embedded processors
In this paper, we propose a framework for analyzing the flow of values and their reuse in loop nests to minimize data traffic under the constraints of limited on-chip memory capacity and dependences. Our analysis first undertakes fusion of possible loop nests intra-procedurally and then performs loop distribution. The analysis discovers the closeness factor of two statements which is a quantitative measure of data traffic saved per unit memory occupied if the statements were under the same loop nest over the case where they are under different loop nests. We then develop a greedy algorithm which traverses the program dependence graph to group statements together under the same loop nest legally to promote maximal reuse per unit of memory occupied. We implemented our framework in Petit, a tool for dependence analysis and loop transformations. We compared our method with one based on tiling of fused loop nest and one based on a greedy strategy to purely maximize reuse. We show that our methods work better than both of these strategies in most cases for processors such as TMS320Cxx, which have a very limited amount of on-chip memory. The improvements in data I/O range from 10 to 30 percent over tiling and from 10 to 40 percent over maximal reuse for JPEG loops
[ "loop restructuring", "data I/O minimization", "on-chip memory", "embedded processors", "data traffic", "closeness factor", "program dependence graph", "Petit", "fused loop nest", "loop fusion", "data locality", "DSP" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "R", "M", "U" ]
844
Women in computing history
Exciting inventions, innovative technology, human interaction, and intriguing politics fill computing history. However, the recorded history is mainly composed of male achievements and involvements, even though women have played substantial roles. This situation is not unusual. Most science fields are notorious for excluding, undervaluing, or overlooking the accomplishments of their female scientists. As Lee points out, it is up to the historians and others to remedy this imbalance. Steps have been taken towards this goal through publishing biographies on women in technology, and through honoring the pioneers with various awards such as the GHC'97 Pioneering Awards, the WITI Hall of Fame, and the AWC Lovelace Award. A few online sites contain biographies of women in technology. However, even with these resources, many women who have contributed significantly to computer science are still to be discovered
[ "women", "computing history" ]
[ "P", "P" ]
801
International customers, suppliers, and document delivery in a fee-based information service
The Purdue University Libraries' fee-based information service, the Technical Information Service (TIS), works with both international customers and international suppliers to meet its customers' needs for difficult and esoteric document requests. Successful completion of these orders requires the ability to verify fragmentary citations; ascertain documents' availability; obtain pricing information; calculate inclusive cost quotes; meet customers' deadlines; accept international payments; and ship across borders. While international orders make up a small percent of the total workload, these challenging and rewarding orders meet customers' needs and offer continuous improvement opportunities to the staff
[ "international customers", "document delivery", "Technical Information Service", "international suppliers", "document requests", "pricing information", "inclusive cost quotes", "international payments", "Purdue University Libraries fee-based information service", "fragmentary citation verification", "document availability", "customer deadline meeting", "continuous staff improvement" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "R", "M", "R", "R", "R" ]
1417
Craigslist: virtual community maintains human touch
If it works why change it? This might have been the thought on the minds of dot com executives back when Internet businesses were booming, and most of the Web content was free. Web sites were overflowing with advertisements of every kind and size. Now that dot com principals know better, Web ads are no longer the only path to revenue generation. Community portals, however, never seemed to have many ads to begin with, and their content stayed truer to who they served. Many of them started off as simple places for users to list announcements, local events, want ads, real estate, and mingle with other local users. The author saw the need for San Franciscans to have a place to do all of that for free, without any annoying advertising, and ended up offering much more to his community with the creation of craigslist. "[Polling users] was a good way for us to connect with our members, this is the way to operate successfully in situations like these - your members come first."
[ "craigslist", "virtual community", "Internet businesses", "Web content", "revenue generation", "community portals", "announcements", "local events", "want ads", "real estate", "San Francisco Bay community" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "M" ]
1083
Differential algebraic systems anew
It is proposed to figure out the leading term in differential algebraic systems more precisely. Low index linear systems with those properly stated leading terms are considered in detail. In particular, it is asked whether a numerical integration method applied to the original system reaches the inherent regular ODE without conservation, i.e., whether the discretization and the decoupling commute in some sense. In general one cannot expect this commutativity so that additional difficulties like strong stepsize restrictions may arise. Moreover, abstract differential algebraic equations in infinite-dimensional Hilbert spaces are introduced, and the index notion is generalized to those equations. In particular, partial differential algebraic equations are considered in this abstract formulation
[ "differential algebraic systems", "low index linear systems", "numerical integration method", "inherent regular ODE", "commutativity", "stepsize restrictions", "abstract differential algebraic equations" ]
[ "P", "P", "P", "P", "P", "P", "P" ]
1452
Creating Web-based listings of electronic journals without creating extra work
Creating up-to-date listings of electronic journals is challenging due to frequent changes in titles available and in URLs for electronic journal titles. However, many library users may want to browse Web pages which contain listings of electronic journals arranged by title and/or academic disciplines. This case study examines the development of a system which automatically exports data from the online catalog and incorporates it into dynamically-generated Web sites. These sites provide multiple access points for journals, include Web-based interfaces enabling subject specialists to manage the list of titles which appears in their subject area. Because data are automatically extracted from the catalog, overlap in updating titles and URLs is avoided. Following the creation of this system, usage of electronic journals dramatically increased and feedback has been positive. Future challenges include developing more frequent updates and motivating subject specialists to more regularly monitor new titles
[ "Web-based listings", "electronic journals", "URL", "library", "Web pages", "case study", "online catalog", "Web sites", "feedback", "technical services", "public services partnerships" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "U", "U" ]
637
A digital fountain approach to asynchronous reliable multicast
The proliferation of applications that must reliably distribute large, rich content to a vast number of autonomous receivers motivates the design of new multicast and broadcast protocols. We describe an ideal, fully scalable protocol for these applications that we call a digital fountain. A digital fountain allows any number of heterogeneous receivers to acquire content with optimal efficiency at times of their choosing. Moreover, no feedback channels are needed to ensure reliable delivery, even in the face of high loss rates. We develop a protocol that closely approximates a digital fountain using two new classes of erasure codes that for large block sizes are orders of magnitude faster than standard erasure codes. We provide performance measurements that demonstrate the feasibility of our approach and discuss the design, implementation, and performance of an experimental system
[ "digital fountain", "asynchronous reliable multicast", "autonomous receivers", "broadcast protocols", "scalable protocol", "heterogeneous receivers", "optimal efficiency", "high loss rates", "erasure codes", "large block size", "performance measurements", "multicast protocol", "experimental system performance", "Internet", "FEC codes", "forward error correction", "RS codes", "Tornado codes", "Luby transform codes", "bulk data distribution", "IP multicast", "simulation results", "interoperability", "content distribution methods", "Reed-Solomon codes", "decoder" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "R", "R", "U", "M", "U", "M", "M", "M", "M", "M", "U", "U", "M", "M", "U" ]
1262
The development and evaluation of SHOKE2000: the PCI-based FPGA card
This paper describes a PCI-based FPGA card, SHOKE2000, which was developed in order to study reconfigurable computing. Since the latest field programmable gate arrays (FPGA) consist of input/output (I/O) configurable blocks as well as internal configurable logic blocks, they not only realize various user logic circuits but also connect with popular I/O standards easily. These features enable FPGA to connect several devices with different interfaces, and thus new reconfigurable systems would be realizable by connecting the FPGA with devices such as digital signal processors (DSP) and analog devices. This paper describes the basic functions of SHOKE2000, which was developed for realizing hybrid reconfigurable systems consisting of FPGA, DSP, and analog devices. We also present application examples of SHOKE2000, including a simple image recognition application, a distributed shared memory computer cluster, and teaching materials for computer education
[ "SHOKE2000", "FPGA card", "FPGA", "reconfigurable computing", "field programmable gate arrays", "user logic circuits", "I/O standard", "interfaces", "digital signal processors", "DSP", "analog devices", "hybrid reconfigurable systems", "image recognition application", "distributed shared memory computer cluster", "teaching materials", "computer education", "PCI", "intellectual property" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "U", "U" ]
1227
Will new Palms win laurels?
PalmSource's latest operating system for mobile devices harnesses the ARM architecture to support more powerful business software, but there are concerns over compatibility with older applications
[ "PalmSource", "operating system", "mobile devices", "ARM architecture", "compatibility", "Palm OS 5.0" ]
[ "P", "P", "P", "P", "P", "M" ]
959
Silicon debug of a PowerPC™ microprocessor using model checking
When silicon is available, newly designed microprocessors are tested in specially equipped hardware laboratories, where real applications can be run at hardware speeds. However, the large volumes of code being run, plus the limited access to the internal nodes of the chip, make it very difficult to characterize the nature of any failures that occur. We describe how temporal logic model checking was used to quickly characterize a design error exhibited during hardware testing of a PowerPC microprocessor. We outline the conditions under which model checking can efficiently characterize such failures, and show how the particular error we detected could have been revealed early in the design cycle, by model checking a short and simple correctness specification. We discuss the implications of this for verification methodologies over the full design cycle
[ "model checking", "temporal logic", "hardware testing", "PowerPC microprocessor", "correctness specification", "verification methodologies", "circuit design error", "Computation Tree Logic", "circuit debugging" ]
[ "P", "P", "P", "P", "P", "P", "M", "M", "M" ]
573
ECG-gated /sup 18/F-FDG positron emission tomography. Single test evaluation of segmental metabolism, function and contractile reserve in patients with coronary artery disease and regional dysfunction
/sup 18/F-fluorodeoxyglucose (/sup 18/F-FDG)-positron emission tomography (PET) provides information about myocardial glucose metabolism to diagnose myocardial viability. Additional information about the functional status is necessary. Comparison of tomographic metabolic PET with data from other imaging techniques is always hampered by some transfer uncertainty and scatter. We wanted to evaluate a new Fourier-based ECG-gated PET technique using a high resolution scanner providing both metabolic and functional data with respect to feasibility in patients with diseased left ventricles. Forty-five patients with coronary artery disease and at least one left ventricular segment with severe hypokinesis or akinesis at biplane cineventriculography were included. A new Fourier-based ECG-gated metabolic /sup 18/F-FDG-PET was performed in these patients. Function at rest and /sup 18/F-FDG uptake were examined in the PET study using a 36-segment model. Segmental comparison with ventriculography revealed a high reliability in identifying dysfunctional segments (>96%). /sup 18/F-FDG uptake of normokinetic/hypokinetic/akinetic segments was 75.4±7.5, 65.3±10.5, and 35.9±15.2% (p<0.001). In segments with ≥70% /sup 18/F-FDG uptake no akinesia was observed. No residual function was found below 40% /sup 18/F-FDG uptake. An additional dobutamine test was performed and revealed inotropic reserve (viability) in 42 akinetic segments and 45 hypokinetic segments. ECG-gated metabolic PET with pixel-based Fourier smoothing provides reliable data on regional function. Assessment of metabolism and function makes complete judgement of segmental status feasible within a single study without any transfer artefacts or test-to-test variability. The results indicate the presence of considerable amounts of viable myocardium in regions with an uptake of 40-50% /sup 18/F-FDG
[ "functional", "patients", "coronary artery disease", "regional dysfunction", "myocardial glucose metabolism", "myocardial viability", "transfer uncertainty", "Fourier-based ECG-gated PET technique", "high resolution scanner", "diseased left ventricles", "left ventricular segment", "severe hypokinesis", "akinesis", "biplane cineventriculography", "ventriculography", "dysfunctional segments", "normokinetic/hypokinetic/akinetic segments", "akinetic segments", "residual function", "dobutamine test", "inotropic reserve", "hypokinetic segments", "pixel-based Fourier smoothing", "regional function", "segmental status", "transfer artefacts", "viable myocardium", "Fourier-based ECG-gated metabolic /sup 18/F-fluorodeoxyglucose-positron emission tomography", "/sup 18/F-fluorodeoxyglucose uptake", "thirty six-segment model" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "M", "R", "M" ]
1163
Evaluating the complexity of index sets for families of general recursive functions in the arithmetic hierarchy
The complexity of index sets of families of general recursive functions is evaluated in the Kleene-Mostowski arithmetic hierarchy
[ "general recursive functions", "arithmetic hierarchy", "Kleene-Mostowski arithmetic hierarchy", "index sets complexity" ]
[ "P", "P", "P", "R" ]
1126
A note on an axiomatization of the core of market games
As shown by Peleg (1993), the core of market games is characterized by nonemptiness, individual rationality, superadditivity, the weak reduced game property, the converse reduced game property, and weak symmetry. It was not known whether weak symmetry was logically independent. With the help of a certain transitive 4-person TU game, it is shown that weak symmetry is redundant in this result. Hence, the core of market games is axiomatized by the remaining five properties, if the universe of players contains at least four members
[ "individual rationality", "weak reduced game property", "converse reduced game property", "weak symmetry", "transitive 4-person TU game", "redundant", "market game core axiomatization", "nonempty games", "superadditive games" ]
[ "P", "P", "P", "P", "P", "P", "R", "R", "R" ]
999
The importance of continuity: a reply to Chris Eliasmith
In his reply to Eliasmith (see ibid., vol.11, p.417-26, 2001) Poznanski considers how the notion of continuity of dynamic representations serves as a beacon for an integrative neuroscience to emerge. He considers how the importance of continuity has come under attack from Eliasmith (2001), who claims: (i) the continuous nature of neurons is not relevant to the information they process, and (ii) continuity is not important for understanding cognition because the various sources of noise introduce uncertainty into spike arrival times, so encoding and decoding spike trains must be discrete at some level
[ "continuity", "dynamic representations", "integrative neuroscience", "neurons", "cognition", "uncertainty", "spike arrival times", "spike trains", "cognitive systems", "neural nets" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "M", "U" ]
88
Planning linear construction projects: automated method for the generation of earthwork activities
Earthworks planning for road construction projects is a complex operation and the planning rules used are usually intuitive and not well defined. An approach to automate the earthworks planning process is described and the basic techniques that are used are outlined. A computer-based system has been developed, initially to help planners use existing techniques more efficiently. With their input, the system has been extended to incorporate a knowledge base and a simulation of the earthworks processes. As well as creating activity sets in a much shorter time, the system has shown that for a real project, the model is able to generate activity sets that are comparable to those generated by a project planner
[ "linear construction projects", "earthwork activities", "road construction projects", "planning rules", "earthworks planning process", "computer-based system", "knowledge base" ]
[ "P", "P", "P", "P", "P", "P", "P" ]
75
A portable Auto Attendant System with sophisticated dialog structure
An attendant system connects the caller to the party he/she wants to talk to. Traditional systems require the caller to know the full name of the party. If the caller forgets the name, the system fails to provide service for the caller. In this paper we propose a portable Auto Attendant System (AAS) with sophisticated dialog structure that gives a caller more flexibility while calling. The caller may interact with the system to request a phone number by providing just a work area, specialty, surname, or title, etc. If the party is absent, the system may provide extra information such as where he went, when he will be back, and what he is doing. The system is built modularly, with components such as speech recognizer, language model, dialog manager and text-to-speech that can be replaced if necessary. By simply changing the personnel record database, the system can easily be ported to other companies. The sophisticated dialog manager applies many strategies to allow natural interaction between user and system. Functions such as fuzzy request, user repairing, and extra information query, which are not provided by other systems, are integrated into our system. Experimental results and comparisons to other systems show that our approach provides a more user friendly and natural interaction for auto attendant system
[ "Auto Attendant System", "attendant system", "speech recognizer", "dialog manager", "fuzzy request", "clear request", "semantic frame", "spoken dialog systems", "telephone", "telephone-based system" ]
[ "P", "P", "P", "P", "P", "M", "U", "M", "U", "M" ]
921
Processing of complexly shaped multiply connected domains in finite element mesh generation
A large number of finite element models in modern materials science and engineering are defined on complexly shaped domains, quite often multiply connected. Generation of quality finite element meshes on such domains, especially in cases when the mesh must be 100% quadrilateral, is highly problematic. This paper describes mathematical fundamentals and practical implementation of a powerful method and algorithm allowing transformation of multiply connected domains of arbitrary geometrical complexity into a set of simple domains; the latter can then be processed by broadly available finite element mesh generators. The developed method was applied to a number of complex geometries, including those arising in analysis of parasitic inductances and capacitances in printed circuit boards. The quality of practical results produced by the method and its programming implementation provide evidence that the algorithm can be applied to other finite element models with various physical backgrounds
[ "complexly shaped multiply connected domains", "finite element mesh generation", "finite element models", "arbitrary geometrical complexity", "set of simple domains", "parasitic inductances", "printed circuit boards", "programming implementation", "quadrilateral mesh", "domains transformation", "parasitic capacitances", "metal forming processes", "structural engineering models", "iterative basis", "general domain subdivision algorithm", "artificial cut", "automatic step calculation" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "R", "R", "R", "M", "M", "U", "M", "U", "U" ]
964
Modeling group foraging: individual suboptimality, interference, and a kind of matching
A series of agent-based models support the hypothesis that behaviors adapted to a group situation may be suboptimal (or "irrational") when expressed by an isolated individual. These models focus on two areas of current concern in behavioral ecology and experimental psychology: the "interference function" (which relates the intake rate of a focal forager to the density of conspecifics) and the "matching law" (which formalizes the observation that many animals match the frequency of their response to different stimuli in proportion to the reward obtained from each stimulus type). Each model employs genetic algorithms to evolve foraging behaviors for multiple agents in spatially explicit environments, structured at the level of situated perception and action. A second concern of the article is to extend the understanding of both matching and interference per se by modeling at this level
[ "group foraging", "individual suboptimality", "agent-based models", "group situation", "isolated individual", "behavioral ecology", "experimental psychology", "interference function", "focal forager", "matching law", "genetic algorithms", "multiple agents", "spatially explicit environments", "situated perception", "suboptimal behavior", "situated action" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "R", "R" ]
127
Asymptotical stability in discrete-time neural networks
In this work, we present a proof of the existence of a fixed point and a generalized sufficient condition that guarantees its stability in discrete-time neural networks by using the Lyapunov function method. We also show that for both symmetric and asymmetric connections, the unique attractor is a fixed point when several conditions are satisfied. This is an extended result of Chen and Aihara (see Physica D, vol. 104, no. 3/4, p. 286-325, 1997). In particular, we further study the stability of equilibrium in discrete-time neural networks with the connection weight matrix in form of an interval matrix. Finally, several examples are shown to illustrate and reinforce our theory
[ "asymptotical stability", "stability", "discrete-time neural networks", "fixed point", "generalized sufficient condition", "Lyapunov function method", "asymmetric connections", "unique attractor", "connection weight matrix", "interval matrix", "symmetric connections", "equilibrium stability" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "R", "R" ]
1307
Law librarians' survey: are academic law librarians in decline?
The author reports on the results of one extra element in the BIALL/SPTL survey, designed to acquire further information about academic law librarians. The survey has fulfilled the aim of providing a snapshot of the academic law library profession and has examined the concerns that have been raised. Perhaps most importantly, it has shown that more long-term work needs to be done to monitor the situation effectively. We hope that BIALL will take on this challenge and help to maintain the status of academic law librarians and aid them in their work
[ "survey", "academic law library", "academic law librarians", "BIALL/SPTL" ]
[ "P", "P", "P", "P" ]